* [PATCH V2  00/22] Broadcom RoCE Driver (bnxt_re)
@ 2016-12-09  6:47 Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 01/22] bnxt_re: Add bnxt_re RoCE driver files Selvin Xavier
                   ` (22 more replies)
  0 siblings, 23 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:47 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier


This series introduces the RoCE driver for the Broadcom
NetXtreme-E 10/25/40/50 gigabit RoCE HCAs.
The driver depends on the bnxt_en NIC driver and is
based on the bnxt_re branch in Doug's repository. The bnxt_en changes
required for this patch series are already available in that branch.

I am preparing a git repository with these changes as per Jason's
comment and will share the details later today.

v1 -> v2:
  * Updated the license text in each file to reflect the dual license.
  * Moved the Makefile and Kconfig changes to the last patch.
  * Moved bnxt_re_uverbs_abi.h to the include/uapi/rdma folder.
  * Removed duplicate structure definitions from bnxt_re_hsi.h, as they
    are available in the corresponding bnxt_en header file (bnxt_hsi.h).
  * Removed some unused code reported during code review.
  * Fixed a few sparse warnings.

Doug,
Please review and consider applying this to the linux-rdma repository.

Thanks,
Selvin Xavier

Selvin Xavier (22):
  bnxt_re: Add bnxt_re RoCE driver files
  bnxt_re: Introducing autogenerated Host Software Interface (hsi) file
  bnxt_re: register with the NIC driver
  bnxt_re: Enabling RoCE control path
  bnxt_re: Adding Notification Queue support
  bnxt_re: Support for PD, ucontext and mmap verbs
  bnxt_re: Support for query and modify device verbs
  bnxt_re: Adding support for port related verbs
  bnxt_re: Support for GID related verbs
  bnxt_re: Support for CQ verbs
  bnxt_re: Support for AH verbs
  bnxt_re: Support memory registration verbs
  bnxt_re: Support QP verbs
  bnxt_re: Support post_send verb
  bnxt_re: Support post_recv
  bnxt_re: Support poll_cq verb
  bnxt_re: Handling dispatching of events to IB stack
  bnxt_re: Support for DCB
  bnxt_re: Support debugfs
  bnxt_re: Set uverbs command mask
  bnxt_re: Add QP event handling
  bnxt_re: Add bnxt_re driver build support

 drivers/infiniband/Kconfig                      |    2 +
 drivers/infiniband/hw/Makefile                  |    1 +
 drivers/infiniband/hw/bnxtre/Kconfig            |    9 +
 drivers/infiniband/hw/bnxtre/Makefile           |    6 +
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 2171 +++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |  416 +++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c  |  685 +++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h  |  218 ++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c   |  827 ++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h   |  223 ++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c    |  836 ++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h    |  160 ++
 drivers/infiniband/hw/bnxtre/bnxt_re.h          |  149 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c  |  159 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h  |   48 +
 drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h      | 2785 ++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 3215 +++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  196 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     | 1330 ++++++++++
 include/uapi/rdma/bnxt_re_uverbs_abi.h          |   86 +
 20 files changed, 13522 insertions(+)
 create mode 100644 drivers/infiniband/hw/bnxtre/Kconfig
 create mode 100644 drivers/infiniband/hw/bnxtre/Makefile
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_main.c
 create mode 100644 include/uapi/rdma/bnxt_re_uverbs_abi.h

-- 
2.5.5


* [PATCH V2  01/22] bnxt_re: Add bnxt_re RoCE driver files
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
@ 2016-12-09  6:47 ` Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 02/22] bnxt_re: Introducing autogenerated Host Software Interface (hsi) file Selvin Xavier
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:47 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch adds the required skeletal files for the Broadcom NetXtreme
RoCE driver, along with the load/unload routines for the bnxt_re driver.

v2: Modified the license text to include the dual license

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    |  37 ++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |  42 +++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c  |  37 ++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h  |  42 +++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c   |  37 ++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h   |  42 +++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c    |  37 ++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h    |  43 +++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re.h          |  46 ++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h      |  42 +++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c |  37 ++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  42 +++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     | 113 ++++++++++++++++++++++++
 13 files changed, 597 insertions(+)
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_main.c
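
The netdev notifier registered in bnxt_re_main.c below is only a stub
that returns NOTIFY_DONE for every event. As a rough sketch of where
this is headed (the dispatch logic and the bnxt_re_is_our_netdev()
filter are hypothetical illustrations, not part of this patch), such a
handler typically switches on the event type:

static int bnxt_re_netdev_event(struct notifier_block *notifier,
				unsigned long event, void *ptr)
{
	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);

	/* Hypothetical filter: ignore netdevs this driver does not own */
	if (!bnxt_re_is_our_netdev(netdev))
		return NOTIFY_DONE;

	switch (event) {
	case NETDEV_REGISTER:
		/* queue device-add work on bnxt_re_wq */
		break;
	case NETDEV_UNREGISTER:
		/* queue device-removal work */
		break;
	case NETDEV_UP:
	case NETDEV_DOWN:
		/* dispatch port state changes to the IB stack */
		break;
	}
	return NOTIFY_DONE;
}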

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
new file mode 100644
index 0000000..36c4b81
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -0,0 +1,37 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Fast Path Operators
+ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
new file mode 100644
index 0000000..0983465
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -0,0 +1,42 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Fast Path Operators (header)
+ */
+
+#ifndef __BNXT_QPLIB_FP_H__
+#define __BNXT_QPLIB_FP_H__
+
+#endif /* __BNXT_QPLIB_FP_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
new file mode 100644
index 0000000..4029935
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
@@ -0,0 +1,37 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: RDMA Controller HW interface
+ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
new file mode 100644
index 0000000..1f63956
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
@@ -0,0 +1,42 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: RDMA Controller HW interface (header)
+ */
+
+#ifndef __BNXT_QPLIB_RCFW_H__
+#define __BNXT_QPLIB_RCFW_H__
+
+#endif /* __BNXT_QPLIB_RCFW_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
new file mode 100644
index 0000000..178eebd
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
@@ -0,0 +1,37 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: QPLib resource manager
+ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
new file mode 100644
index 0000000..5fc4107
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
@@ -0,0 +1,42 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: QPLib resource manager (header)
+ */
+
+#ifndef __BNXT_QPLIB_RES_H__
+#define __BNXT_QPLIB_RES_H__
+
+#endif /* __BNXT_QPLIB_RES_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
new file mode 100644
index 0000000..667c8e1
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
@@ -0,0 +1,37 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Slow Path Operators
+ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
new file mode 100644
index 0000000..90901c1
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -0,0 +1,43 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Slow Path Operators (header)
+ *
+ */
+
+#ifndef __BNXT_QPLIB_SP_H__
+#define __BNXT_QPLIB_SP_H__
+
+#endif /* __BNXT_QPLIB_SP_H__*/
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
new file mode 100644
index 0000000..f9b8542
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -0,0 +1,46 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Main driver header
+ *
+ */
+
+#ifndef __BNXT_RE_H__
+#define __BNXT_RE_H__
+#define ROCE_DRV_MODULE_NAME		"bnxt_re"
+#define ROCE_DRV_MODULE_VERSION		"1.0.0"
+
+#define BNXT_RE_DESC	"Broadcom NetXtreme-C/E RoCE Driver"
+#endif
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h b/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
new file mode 100644
index 0000000..b7d35aa
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
@@ -0,0 +1,42 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: RoCE HSI File - Autogenerated
+ */
+
+#ifndef __BNXT_RE_HSI_H__
+#define __BNXT_RE_HSI_H__
+
+#endif /* __BNXT_RE_HSI_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
new file mode 100644
index 0000000..c01fa40
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -0,0 +1,37 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: IB Verbs interpreter
+ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
new file mode 100644
index 0000000..9162774
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -0,0 +1,42 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: IB Verbs interpreter (header)
+ */
+
+#ifndef __BNXT_RE_IB_VERBS_H__
+#define __BNXT_RE_IB_VERBS_H__
+
+#endif /* __BNXT_RE_IB_VERBS_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
new file mode 100644
index 0000000..ebe1c69
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -0,0 +1,113 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Main component of the bnxt_re driver
+ */
+
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/rculist.h>
+#include "bnxt_re.h"
+static char version[] =
+		BNXT_RE_DESC " v" ROCE_DRV_MODULE_VERSION "\n";
+
+
+MODULE_AUTHOR("Eddie Wai <eddie.wai@broadcom.com>");
+MODULE_DESCRIPTION(BNXT_RE_DESC " Driver");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION(ROCE_DRV_MODULE_VERSION);
+
+/* globals */
+static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
+static DEFINE_MUTEX(bnxt_re_dev_lock);
+static struct workqueue_struct *bnxt_re_wq;
+/*
+ * "Notifier chain callback can be invoked for the same chain from
+ * different CPUs at the same time".
+ *
+ * For cases when the netdev is already present, our call to the
+ * register_netdevice_notifier() will actually get the rtnl_lock()
+ * before sending NETDEV_REGISTER and (if up) NETDEV_UP
+ * events.
+ *
+ * But for cases when the netdev is not already present, the notifier
+ * chain is subjected to be invoked from different CPUs simultaneously.
+ *
+ * This is protected by the netdev_mutex.
+ */
+static int bnxt_re_netdev_event(struct notifier_block *notifier,
+				unsigned long event, void *ptr)
+{
+	return NOTIFY_DONE;
+}
+static struct notifier_block bnxt_re_netdev_notifier = {
+	.notifier_call = bnxt_re_netdev_event
+};
+static int __init bnxt_re_mod_init(void)
+{
+	int rc = 0;
+
+	pr_info("%s: %s", ROCE_DRV_MODULE_NAME, version);
+
+	bnxt_re_wq = create_singlethread_workqueue("bnxt_re");
+	if (!bnxt_re_wq)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&bnxt_re_dev_list);
+
+	rc = register_netdevice_notifier(&bnxt_re_netdev_notifier);
+	if (rc) {
+		pr_err("%s: Cannot register to netdevice_notifier",
+			ROCE_DRV_MODULE_NAME);
+		goto err_netdev;
+	}
+	return 0;
+
+err_netdev:
+	destroy_workqueue(bnxt_re_wq);
+
+	return rc;
+}
+static void __exit bnxt_re_mod_exit(void)
+{
+	unregister_netdevice_notifier(&bnxt_re_netdev_notifier);
+	if (bnxt_re_wq)
+		destroy_workqueue(bnxt_re_wq);
+}
+
+module_init(bnxt_re_mod_init);
+module_exit(bnxt_re_mod_exit);
-- 
2.5.5


* [PATCH V2  02/22] bnxt_re: Introducing autogenerated Host Software Interface (hsi) file
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 01/22] bnxt_re: Add bnxt_re RoCE driver files Selvin Xavier
@ 2016-12-09  6:47 ` Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 03/22] bnxt_re: register with the NIC driver Selvin Xavier
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:47 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch introduces all the structures used by the driver for
communicating with the hardware. This file is the equivalent of
bnxt_hsi.h used by the bnxt_en driver.

v2: Removed the structure definitions duplicated from the bnxt_en HSI file
    and included bnxt_hsi.h from the bnxt_en driver instead

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h | 2743 ++++++++++++++++++++++++++++
 1 file changed, 2743 insertions(+)
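
The structure definitions below follow the bnxt HSI convention: each
packed little-endian field has a _MASK/_SFT macro pair, and enumerated
values are pre-shifted into position. As a minimal sketch of how a
consumer encodes and decodes such fields (the helper names are
illustrative only, not part of this patch):

/* Compose a 64-bit CQ-arm doorbell from the dbr_dbr definitions below. */
static void bnxt_re_example_arm_cq(void __iomem *db, u32 cq_xid, u32 cons_idx)
{
	struct dbr_dbr dbval;

	dbval.index = cpu_to_le32(cons_idx & DBR_DBR_INDEX_MASK);
	dbval.type_xid = cpu_to_le32((cq_xid & DBR_DBR_XID_MASK) |
				     DBR_DBR_TYPE_CQ_ARMALL);
	/* 64-bit doorbells are written as a single store */
	writeq(*(u64 *)&dbval, db);
}

/* Decode the CQE type from the toggle byte of a base CQE. */
static u8 bnxt_re_example_cqe_type(struct cq_base *cqe)
{
	return (cqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK) >>
	       CQ_BASE_CQE_TYPE_SFT;
}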

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h b/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
index b7d35aa..f2fdc3e 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h
@@ -39,4 +39,2747 @@
 #ifndef __BNXT_RE_HSI_H__
 #define __BNXT_RE_HSI_H__
 
+/* include bnxt_hsi.h from bnxt_en driver */
+#include "bnxt_hsi.h"
+
+/* CMP Door Bell Format (4 bytes) */
+struct cmpl_doorbell {
+	__le32 key_mask_valid_idx;
+	#define CMPL_DOORBELL_IDX_MASK				    0xffffffUL
+	#define CMPL_DOORBELL_IDX_SFT				    0
+	#define CMPL_DOORBELL_RESERVED_MASK			    0x3000000UL
+	#define CMPL_DOORBELL_RESERVED_SFT			    24
+	#define CMPL_DOORBELL_IDX_VALID			    0x4000000UL
+	#define CMPL_DOORBELL_MASK				    0x8000000UL
+	#define CMPL_DOORBELL_KEY_MASK				    0xf0000000UL
+	#define CMPL_DOORBELL_KEY_SFT				    28
+	#define CMPL_DOORBELL_KEY_CMPL				   (0x2UL << 28)
+};
+
+/* Status Door Bell Format (4 bytes) */
+struct status_doorbell {
+	__le32 key_idx;
+	#define STATUS_DOORBELL_IDX_MASK			    0xffffffUL
+	#define STATUS_DOORBELL_IDX_SFT			    0
+	#define STATUS_DOORBELL_RESERVED_MASK			    0xf000000UL
+	#define STATUS_DOORBELL_RESERVED_SFT			    24
+	#define STATUS_DOORBELL_KEY_MASK			    0xf0000000UL
+	#define STATUS_DOORBELL_KEY_SFT			    28
+	#define STATUS_DOORBELL_KEY_STAT			   (0x3UL << 28)
+};
+
+
+/* RoCE Host Structures */
+
+/* Doorbell Structures */
+/* 64b Doorbell Format (8 bytes) */
+struct dbr_dbr {
+	__le32 index;
+	#define DBR_DBR_INDEX_MASK				    0xfffffUL
+	#define DBR_DBR_INDEX_SFT				    0
+	#define DBR_DBR_RESERVED12_MASK			    0xfff00000UL
+	#define DBR_DBR_RESERVED12_SFT				    20
+	__le32 type_xid;
+	#define DBR_DBR_XID_MASK				    0xfffffUL
+	#define DBR_DBR_XID_SFT				    0
+	#define DBR_DBR_RESERVED8_MASK				    0xff00000UL
+	#define DBR_DBR_RESERVED8_SFT				    20
+	#define DBR_DBR_TYPE_MASK				    0xf0000000UL
+	#define DBR_DBR_TYPE_SFT				    28
+	#define DBR_DBR_TYPE_SQ				   (0x0UL << 28)
+	#define DBR_DBR_TYPE_RQ				   (0x1UL << 28)
+	#define DBR_DBR_TYPE_SRQ				   (0x2UL << 28)
+	#define DBR_DBR_TYPE_SRQ_ARM				   (0x3UL << 28)
+	#define DBR_DBR_TYPE_CQ				   (0x4UL << 28)
+	#define DBR_DBR_TYPE_CQ_ARMSE				   (0x5UL << 28)
+	#define DBR_DBR_TYPE_CQ_ARMALL				   (0x6UL << 28)
+	#define DBR_DBR_TYPE_CQ_ARMENA				   (0x7UL << 28)
+	#define DBR_DBR_TYPE_SRQ_ARMENA			   (0x8UL << 28)
+	#define DBR_DBR_TYPE_CQ_CUTOFF_ACK			   (0x9UL << 28)
+	#define DBR_DBR_TYPE_NULL				   (0xfUL << 28)
+};
+
+/* 32b Doorbell Format (4 bytes) */
+struct dbr_dbr32 {
+	__le32 type_abs_incr_xid;
+	#define DBR_DBR32_XID_MASK				    0xfffffUL
+	#define DBR_DBR32_XID_SFT				    0
+	#define DBR_DBR32_RESERVED4_MASK			    0xf00000UL
+	#define DBR_DBR32_RESERVED4_SFT			    20
+	#define DBR_DBR32_INCR_MASK				    0xf000000UL
+	#define DBR_DBR32_INCR_SFT				    24
+	#define DBR_DBR32_ABS					    0x10000000UL
+	#define DBR_DBR32_TYPE_MASK				    0xe0000000UL
+	#define DBR_DBR32_TYPE_SFT				    29
+	#define DBR_DBR32_TYPE_SQ				   (0x0UL << 29)
+};
+
+/* SQ WQE Structures */
+/* Base SQ WQE (8 bytes) */
+struct sq_base {
+	u8 wqe_type;
+	#define SQ_BASE_WQE_TYPE_SEND				   0x0UL
+	#define SQ_BASE_WQE_TYPE_SEND_W_IMMEAD			   0x1UL
+	#define SQ_BASE_WQE_TYPE_SEND_W_INVALID		   0x2UL
+	#define SQ_BASE_WQE_TYPE_WRITE_WQE			   0x4UL
+	#define SQ_BASE_WQE_TYPE_WRITE_W_IMMEAD		   0x5UL
+	#define SQ_BASE_WQE_TYPE_READ_WQE			   0x6UL
+	#define SQ_BASE_WQE_TYPE_ATOMIC_CS			   0x8UL
+	#define SQ_BASE_WQE_TYPE_ATOMIC_FA			   0xbUL
+	#define SQ_BASE_WQE_TYPE_LOCAL_INVALID			   0xcUL
+	#define SQ_BASE_WQE_TYPE_FR_PMR			   0xdUL
+	#define SQ_BASE_WQE_TYPE_BIND				   0xeUL
+	u8 unused_0[7];
+};
+
+/* WQE SGE (16 bytes) */
+struct sq_sge {
+	__le64 va_or_pa;
+	__le32 l_key;
+	__le32 size;
+};
+
+/* PSN Search Structure (8 bytes) */
+struct sq_psn_search {
+	__le32 opcode_start_psn;
+	#define SQ_PSN_SEARCH_START_PSN_MASK			    0xffffffUL
+	#define SQ_PSN_SEARCH_START_PSN_SFT			    0
+	#define SQ_PSN_SEARCH_OPCODE_MASK			    0xff000000UL
+	#define SQ_PSN_SEARCH_OPCODE_SFT			    24
+	__le32 flags_next_psn;
+	#define SQ_PSN_SEARCH_NEXT_PSN_MASK			    0xffffffUL
+	#define SQ_PSN_SEARCH_NEXT_PSN_SFT			    0
+	#define SQ_PSN_SEARCH_FLAGS_MASK			    0xff000000UL
+	#define SQ_PSN_SEARCH_FLAGS_SFT			    24
+};
+
+/* Send SQ WQE (40 bytes) */
+struct sq_send {
+	u8 wqe_type;
+	#define SQ_SEND_WQE_TYPE_SEND				   0x0UL
+	#define SQ_SEND_WQE_TYPE_SEND_W_IMMEAD			   0x1UL
+	#define SQ_SEND_WQE_TYPE_SEND_W_INVALID		   0x2UL
+	u8 flags;
+	#define SQ_SEND_FLAGS_SIGNAL_COMP			    0x1UL
+	#define SQ_SEND_FLAGS_RD_OR_ATOMIC_FENCE		    0x2UL
+	#define SQ_SEND_FLAGS_UC_FENCE				    0x4UL
+	#define SQ_SEND_FLAGS_SE				    0x8UL
+	#define SQ_SEND_FLAGS_INLINE				    0x10UL
+	u8 wqe_size;
+	u8 reserved8_1;
+	__le32 inv_key_or_imm_data;
+	__le32 length;
+	__le32 q_key;
+	__le32 dst_qp;
+	#define SQ_SEND_DST_QP_MASK				    0xffffffUL
+	#define SQ_SEND_DST_QP_SFT				    0
+	#define SQ_SEND_RESERVED8_2_MASK			    0xff000000UL
+	#define SQ_SEND_RESERVED8_2_SFT			    24
+	__le32 avid;
+	#define SQ_SEND_AVID_MASK				    0xfffffUL
+	#define SQ_SEND_AVID_SFT				    0
+	#define SQ_SEND_RESERVED_AVID_MASK			    0xfff00000UL
+	#define SQ_SEND_RESERVED_AVID_SFT			    20
+	__le64 reserved64;
+	__le32 data[24];
+};
+
+/* Send Raw Ethernet and QP1 SQ WQE (40 bytes) */
+struct sq_send_raweth_qp1 {
+	u8 wqe_type;
+	#define SQ_SEND_RAWETH_QP1_WQE_TYPE_SEND		   0x0UL
+	u8 flags;
+	#define SQ_SEND_RAWETH_QP1_FLAGS_SIGNAL_COMP		    0x1UL
+	#define SQ_SEND_RAWETH_QP1_FLAGS_RD_OR_ATOMIC_FENCE	    0x2UL
+	#define SQ_SEND_RAWETH_QP1_FLAGS_UC_FENCE		    0x4UL
+	#define SQ_SEND_RAWETH_QP1_FLAGS_SE			    0x8UL
+	#define SQ_SEND_RAWETH_QP1_FLAGS_INLINE		    0x10UL
+	u8 wqe_size;
+	u8 reserved8;
+	__le16 lflags;
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_TCP_UDP_CHKSUM	    0x1UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_IP_CHKSUM		    0x2UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_NOCRC		    0x4UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_STAMP		    0x8UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_T_IP_CHKSUM		    0x10UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_RESERVED1_1		    0x20UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_RESERVED1_2		    0x40UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_RESERVED1_3		    0x80UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_ROCE_CRC		    0x100UL
+	#define SQ_SEND_RAWETH_QP1_LFLAGS_FCOE_CRC		    0x200UL
+	__le16 cfa_action;
+	__le32 length;
+	__le32 reserved32_1;
+	__le32 cfa_meta;
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_VID_MASK	    0xfffUL
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_VID_SFT	    0
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_DE		    0x1000UL
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_PRI_MASK	    0xe000UL
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_PRI_SFT	    13
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_MASK	    0x70000UL
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_SFT	    16
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPID88A8    (0x0UL << 16)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPID8100    (0x1UL << 16)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPID9100    (0x2UL << 16)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPID9200    (0x3UL << 16)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPID9300    (0x4UL << 16)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPIDCFG     (0x5UL << 16)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_LAST    SQ_SEND_RAWETH_QP1_CFA_META_VLAN_TPID_TPIDCFG
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_RESERVED_MASK     0xff80000UL
+	#define SQ_SEND_RAWETH_QP1_CFA_META_VLAN_RESERVED_SFT      19
+	#define SQ_SEND_RAWETH_QP1_CFA_META_KEY_MASK		    0xf0000000UL
+	#define SQ_SEND_RAWETH_QP1_CFA_META_KEY_SFT		    28
+	#define SQ_SEND_RAWETH_QP1_CFA_META_KEY_NONE		   (0x0UL << 28)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_KEY_VLAN_TAG	   (0x1UL << 28)
+	#define SQ_SEND_RAWETH_QP1_CFA_META_KEY_LAST    SQ_SEND_RAWETH_QP1_CFA_META_KEY_VLAN_TAG
+	__le32 reserved32_2;
+	__le64 reserved64;
+	__le32 data[24];
+};
+
+/* RDMA SQ WQE (40 bytes) */
+struct sq_rdma {
+	u8 wqe_type;
+	#define SQ_RDMA_WQE_TYPE_WRITE_WQE			   0x4UL
+	#define SQ_RDMA_WQE_TYPE_WRITE_W_IMMEAD		   0x5UL
+	#define SQ_RDMA_WQE_TYPE_READ_WQE			   0x6UL
+	u8 flags;
+	#define SQ_RDMA_FLAGS_SIGNAL_COMP			    0x1UL
+	#define SQ_RDMA_FLAGS_RD_OR_ATOMIC_FENCE		    0x2UL
+	#define SQ_RDMA_FLAGS_UC_FENCE				    0x4UL
+	#define SQ_RDMA_FLAGS_SE				    0x8UL
+	#define SQ_RDMA_FLAGS_INLINE				    0x10UL
+	u8 wqe_size;
+	u8 reserved8;
+	__le32 imm_data;
+	__le32 length;
+	__le32 reserved32_1;
+	__le64 remote_va;
+	__le32 remote_key;
+	__le32 reserved32_2;
+	__le32 data[24];
+};
+
+/* Atomic SQ WQE (40 bytes) */
+struct sq_atomic {
+	u8 wqe_type;
+	#define SQ_ATOMIC_WQE_TYPE_ATOMIC_CS			   0x8UL
+	#define SQ_ATOMIC_WQE_TYPE_ATOMIC_FA			   0xbUL
+	u8 flags;
+	#define SQ_ATOMIC_FLAGS_SIGNAL_COMP			    0x1UL
+	#define SQ_ATOMIC_FLAGS_RD_OR_ATOMIC_FENCE		    0x2UL
+	#define SQ_ATOMIC_FLAGS_UC_FENCE			    0x4UL
+	#define SQ_ATOMIC_FLAGS_SE				    0x8UL
+	#define SQ_ATOMIC_FLAGS_INLINE				    0x10UL
+	__le16 reserved16;
+	__le32 remote_key;
+	__le64 remote_va;
+	__le64 swap_data;
+	__le64 cmp_data;
+	__le32 data[24];
+};
+
+/* Local Invalidate SQ WQE (40 bytes) */
+struct sq_localinvalidate {
+	u8 wqe_type;
+	#define SQ_LOCALINVALIDATE_WQE_TYPE_LOCAL_INVALID	   0xcUL
+	u8 flags;
+	#define SQ_LOCALINVALIDATE_FLAGS_SIGNAL_COMP		    0x1UL
+	#define SQ_LOCALINVALIDATE_FLAGS_RD_OR_ATOMIC_FENCE	    0x2UL
+	#define SQ_LOCALINVALIDATE_FLAGS_UC_FENCE		    0x4UL
+	#define SQ_LOCALINVALIDATE_FLAGS_SE			    0x8UL
+	#define SQ_LOCALINVALIDATE_FLAGS_INLINE		    0x10UL
+	__le16 reserved16;
+	__le32 inv_l_key;
+	__le64 reserved64;
+	__le32 reserved128[4];
+	__le32 data[24];
+};
+
+/* FR-PMR SQ WQE (40 bytes) */
+struct sq_fr_pmr {
+	u8 wqe_type;
+	#define SQ_FR_PMR_WQE_TYPE_FR_PMR			   0xdUL
+	u8 flags;
+	#define SQ_FR_PMR_FLAGS_SIGNAL_COMP			    0x1UL
+	#define SQ_FR_PMR_FLAGS_RD_OR_ATOMIC_FENCE		    0x2UL
+	#define SQ_FR_PMR_FLAGS_UC_FENCE			    0x4UL
+	#define SQ_FR_PMR_FLAGS_SE				    0x8UL
+	#define SQ_FR_PMR_FLAGS_INLINE				    0x10UL
+	u8 access_cntl;
+	#define SQ_FR_PMR_ACCESS_CNTL_LOCAL_WRITE		    0x1UL
+	#define SQ_FR_PMR_ACCESS_CNTL_REMOTE_READ		    0x2UL
+	#define SQ_FR_PMR_ACCESS_CNTL_REMOTE_WRITE		    0x4UL
+	#define SQ_FR_PMR_ACCESS_CNTL_REMOTE_ATOMIC		    0x8UL
+	#define SQ_FR_PMR_ACCESS_CNTL_WINDOW_BIND		    0x10UL
+	u8 zero_based_page_size_log;
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_MASK			    0x1fUL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_SFT			    0
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_4K		   0x0UL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_8K		   0x1UL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_64K		   0x4UL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_256K		   0x6UL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_1M		   0x8UL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_2M		   0x9UL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_4M		   0xaUL
+	#define SQ_FR_PMR_PAGE_SIZE_LOG_PGSZ_1G		   0x12UL
+	#define SQ_FR_PMR_ZERO_BASED				    0x20UL
+	#define SQ_FR_PMR_RESERVED2_MASK			    0xc0UL
+	#define SQ_FR_PMR_RESERVED2_SFT			    6
+	__le32 l_key;
+	u8 length[5];
+	u8 reserved8_1;
+	u8 reserved8_2;
+	u8 numlevels_pbl_page_size_log;
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_MASK		    0x1fUL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_SFT		    0
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_4K		   0x0UL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_8K		   0x1UL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_64K		   0x4UL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_256K		   0x6UL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_1M		   0x8UL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_2M		   0x9UL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_4M		   0xaUL
+	#define SQ_FR_PMR_PBL_PAGE_SIZE_LOG_PGSZ_1G		   0x12UL
+	#define SQ_FR_PMR_RESERVED1				    0x20UL
+	#define SQ_FR_PMR_NUMLEVELS_MASK			    0xc0UL
+	#define SQ_FR_PMR_NUMLEVELS_SFT			    6
+	#define SQ_FR_PMR_NUMLEVELS_PHYSICAL			   (0x0UL << 6)
+	#define SQ_FR_PMR_NUMLEVELS_LAYER1			   (0x1UL << 6)
+	#define SQ_FR_PMR_NUMLEVELS_LAYER2			   (0x2UL << 6)
+	__le64 pblptr;
+	__le64 va;
+	__le32 data[24];
+};
+
+/* Bind SQ WQE (40 bytes) */
+struct sq_bind {
+	u8 wqe_type;
+	#define SQ_BIND_WQE_TYPE_BIND				   0xeUL
+	u8 flags;
+	#define SQ_BIND_FLAGS_SIGNAL_COMP			    0x1UL
+	#define SQ_BIND_FLAGS_RD_OR_ATOMIC_FENCE		    0x2UL
+	#define SQ_BIND_FLAGS_UC_FENCE				    0x4UL
+	#define SQ_BIND_FLAGS_SE				    0x8UL
+	#define SQ_BIND_FLAGS_INLINE				    0x10UL
+	u8 access_cntl;
+	#define SQ_BIND_ACCESS_CNTL_LOCAL_WRITE		    0x1UL
+	#define SQ_BIND_ACCESS_CNTL_REMOTE_READ		    0x2UL
+	#define SQ_BIND_ACCESS_CNTL_REMOTE_WRITE		    0x4UL
+	#define SQ_BIND_ACCESS_CNTL_REMOTE_ATOMIC		    0x8UL
+	#define SQ_BIND_ACCESS_CNTL_WINDOW_BIND		    0x10UL
+	u8 reserved8_1;
+	u8 mw_type_zero_based;
+	#define SQ_BIND_ZERO_BASED				    0x1UL
+	#define SQ_BIND_MW_TYPE				    0x2UL
+	#define SQ_BIND_MW_TYPE_TYPE1				   (0x0UL << 1)
+	#define SQ_BIND_MW_TYPE_TYPE2				   (0x1UL << 1)
+	#define SQ_BIND_RESERVED6_MASK				    0xfcUL
+	#define SQ_BIND_RESERVED6_SFT				    2
+	u8 reserved8_2;
+	__le16 reserved16;
+	__le32 parent_l_key;
+	__le32 l_key;
+	__le64 va;
+	u8 length[5];
+	u8 data_reserved24[99];
+	#define SQ_BIND_RESERVED24_MASK			    0xffffff00UL
+	#define SQ_BIND_RESERVED24_SFT				    8
+	#define SQ_BIND_DATA_MASK				    0xffffffffUL
+	#define SQ_BIND_DATA_SFT				    0
+};
+
+/* RQ/SRQ WQE Structures */
+/* RQ/SRQ WQE (40 bytes) */
+struct rq_wqe {
+	u8 wqe_type;
+	#define RQ_WQE_WQE_TYPE_RCV				   0x80UL
+	u8 flags;
+	u8 wqe_size;
+	u8 reserved8;
+	__le32 reserved32;
+	__le32 wr_id[2];
+	#define RQ_WQE_WR_ID_MASK				    0xfffffUL
+	#define RQ_WQE_WR_ID_SFT				    0
+	#define RQ_WQE_RESERVED44_MASK				    0xfff00000UL
+	#define RQ_WQE_RESERVED44_SFT				    20
+	__le32 reserved128[4];
+	__le32 data[24];
+};
+
+/* CQ CQE Structures */
+/* Base CQE (32 bytes) */
+struct cq_base {
+	__le64 reserved64_1;
+	__le64 reserved64_2;
+	__le64 reserved64_3;
+	u8 cqe_type_toggle;
+	#define CQ_BASE_TOGGLE					    0x1UL
+	#define CQ_BASE_CQE_TYPE_MASK				    0x1eUL
+	#define CQ_BASE_CQE_TYPE_SFT				    1
+	#define CQ_BASE_CQE_TYPE_REQ				   (0x0UL << 1)
+	#define CQ_BASE_CQE_TYPE_RES_RC			   (0x1UL << 1)
+	#define CQ_BASE_CQE_TYPE_RES_UD			   (0x2UL << 1)
+	#define CQ_BASE_CQE_TYPE_RES_RAWETH_QP1		   (0x3UL << 1)
+	#define CQ_BASE_CQE_TYPE_TERMINAL			   (0xeUL << 1)
+	#define CQ_BASE_CQE_TYPE_CUT_OFF			   (0xfUL << 1)
+	#define CQ_BASE_RESERVED3_MASK				    0xe0UL
+	#define CQ_BASE_RESERVED3_SFT				    5
+	u8 status;
+	__le16 reserved16;
+	__le32 reserved32;
+};
+
+/* Requester CQ CQE (32 bytes) */
+struct cq_req {
+	__le64 qp_handle;
+	__le16 sq_cons_idx;
+	__le16 reserved16_1;
+	__le32 reserved32_2;
+	__le64 reserved64;
+	u8 cqe_type_toggle;
+	#define CQ_REQ_TOGGLE					    0x1UL
+	#define CQ_REQ_CQE_TYPE_MASK				    0x1eUL
+	#define CQ_REQ_CQE_TYPE_SFT				    1
+	#define CQ_REQ_CQE_TYPE_REQ				   (0x0UL << 1)
+	#define CQ_REQ_RESERVED3_MASK				    0xe0UL
+	#define CQ_REQ_RESERVED3_SFT				    5
+	u8 status;
+	#define CQ_REQ_STATUS_OK				   0x0UL
+	#define CQ_REQ_STATUS_BAD_RESPONSE_ERR			   0x1UL
+	#define CQ_REQ_STATUS_LOCAL_LENGTH_ERR			   0x2UL
+	#define CQ_REQ_STATUS_LOCAL_QP_OPERATION_ERR		   0x3UL
+	#define CQ_REQ_STATUS_LOCAL_PROTECTION_ERR		   0x4UL
+	#define CQ_REQ_STATUS_MEMORY_MGT_OPERATION_ERR		   0x5UL
+	#define CQ_REQ_STATUS_REMOTE_INVALID_REQUEST_ERR	   0x6UL
+	#define CQ_REQ_STATUS_REMOTE_ACCESS_ERR		   0x7UL
+	#define CQ_REQ_STATUS_REMOTE_OPERATION_ERR		   0x8UL
+	#define CQ_REQ_STATUS_RNR_NAK_RETRY_CNT_ERR		   0x9UL
+	#define CQ_REQ_STATUS_TRANSPORT_RETRY_CNT_ERR		   0xaUL
+	#define CQ_REQ_STATUS_WORK_REQUEST_FLUSHED_ERR		   0xbUL
+	__le16 reserved16_2;
+	__le32 reserved32_1;
+};
+
+/* Responder RC CQE (32 bytes) */
+struct cq_res_rc {
+	__le32 length;
+	__le32 imm_data_or_inv_r_key;
+	__le64 qp_handle;
+	__le64 mr_handle;
+	u8 cqe_type_toggle;
+	#define CQ_RES_RC_TOGGLE				    0x1UL
+	#define CQ_RES_RC_CQE_TYPE_MASK			    0x1eUL
+	#define CQ_RES_RC_CQE_TYPE_SFT				    1
+	#define CQ_RES_RC_CQE_TYPE_RES_RC			   (0x1UL << 1)
+	#define CQ_RES_RC_RESERVED3_MASK			    0xe0UL
+	#define CQ_RES_RC_RESERVED3_SFT			    5
+	u8 status;
+	#define CQ_RES_RC_STATUS_OK				   0x0UL
+	#define CQ_RES_RC_STATUS_LOCAL_ACCESS_ERROR		   0x1UL
+	#define CQ_RES_RC_STATUS_LOCAL_LENGTH_ERR		   0x2UL
+	#define CQ_RES_RC_STATUS_LOCAL_PROTECTION_ERR		   0x3UL
+	#define CQ_RES_RC_STATUS_LOCAL_QP_OPERATION_ERR	   0x4UL
+	#define CQ_RES_RC_STATUS_MEMORY_MGT_OPERATION_ERR	   0x5UL
+	#define CQ_RES_RC_STATUS_REMOTE_INVALID_REQUEST_ERR       0x6UL
+	#define CQ_RES_RC_STATUS_WORK_REQUEST_FLUSHED_ERR	   0x7UL
+	#define CQ_RES_RC_STATUS_HW_FLUSH_ERR			   0x8UL
+	__le16 flags;
+	#define CQ_RES_RC_FLAGS_SRQ				    0x1UL
+	#define CQ_RES_RC_FLAGS_SRQ_RQ				   (0x0UL << 0)
+	#define CQ_RES_RC_FLAGS_SRQ_SRQ			   (0x1UL << 0)
+	#define CQ_RES_RC_FLAGS_SRQ_LAST    CQ_RES_RC_FLAGS_SRQ_SRQ
+	#define CQ_RES_RC_FLAGS_IMM				    0x2UL
+	#define CQ_RES_RC_FLAGS_INV				    0x4UL
+	#define CQ_RES_RC_FLAGS_RDMA				    0x8UL
+	#define CQ_RES_RC_FLAGS_RDMA_SEND			   (0x0UL << 3)
+	#define CQ_RES_RC_FLAGS_RDMA_RDMA_WRITE		   (0x1UL << 3)
+	#define CQ_RES_RC_FLAGS_RDMA_LAST    CQ_RES_RC_FLAGS_RDMA_RDMA_WRITE
+	__le32 srq_or_rq_wr_id;
+	#define CQ_RES_RC_SRQ_OR_RQ_WR_ID_MASK			    0xfffffUL
+	#define CQ_RES_RC_SRQ_OR_RQ_WR_ID_SFT			    0
+	#define CQ_RES_RC_RESERVED12_MASK			    0xfff00000UL
+	#define CQ_RES_RC_RESERVED12_SFT			    20
+};
+
+/* Responder UD CQE (32 bytes) */
+struct cq_res_ud {
+	__le32 length;
+	#define CQ_RES_UD_LENGTH_MASK				    0x3fffUL
+	#define CQ_RES_UD_LENGTH_SFT				    0
+	#define CQ_RES_UD_RESERVED18_MASK			    0xffffc000UL
+	#define CQ_RES_UD_RESERVED18_SFT			    14
+	__le32 imm_data;
+	__le64 qp_handle;
+	__le16 src_mac[3];
+	__le16 src_qp_low;
+	u8 cqe_type_toggle;
+	#define CQ_RES_UD_TOGGLE				    0x1UL
+	#define CQ_RES_UD_CQE_TYPE_MASK			    0x1eUL
+	#define CQ_RES_UD_CQE_TYPE_SFT				    1
+	#define CQ_RES_UD_CQE_TYPE_RES_UD			   (0x2UL << 1)
+	#define CQ_RES_UD_RESERVED3_MASK			    0xe0UL
+	#define CQ_RES_UD_RESERVED3_SFT			    5
+	u8 status;
+	#define CQ_RES_UD_STATUS_OK				   0x0UL
+	#define CQ_RES_UD_STATUS_LOCAL_ACCESS_ERROR		   0x1UL
+	#define CQ_RES_UD_STATUS_HW_LOCAL_LENGTH_ERR		   0x2UL
+	#define CQ_RES_UD_STATUS_LOCAL_PROTECTION_ERR		   0x3UL
+	#define CQ_RES_UD_STATUS_LOCAL_QP_OPERATION_ERR	   0x4UL
+	#define CQ_RES_UD_STATUS_MEMORY_MGT_OPERATION_ERR	   0x5UL
+	#define CQ_RES_UD_STATUS_WORK_REQUEST_FLUSHED_ERR	   0x7UL
+	#define CQ_RES_UD_STATUS_HW_FLUSH_ERR			   0x8UL
+	__le16 flags;
+	#define CQ_RES_UD_FLAGS_SRQ				    0x1UL
+	#define CQ_RES_UD_FLAGS_SRQ_RQ				   (0x0UL << 0)
+	#define CQ_RES_UD_FLAGS_SRQ_SRQ			   (0x1UL << 0)
+	#define CQ_RES_UD_FLAGS_SRQ_LAST    CQ_RES_UD_FLAGS_SRQ_SRQ
+	#define CQ_RES_UD_FLAGS_IMM				    0x2UL
+	#define CQ_RES_UD_FLAGS_ROCE_IP_VER_MASK		    0xcUL
+	#define CQ_RES_UD_FLAGS_ROCE_IP_VER_SFT		    2
+	#define CQ_RES_UD_FLAGS_ROCE_IP_VER_V1			   (0x0UL << 2)
+	#define CQ_RES_UD_FLAGS_ROCE_IP_VER_V2IPV4		   (0x2UL << 2)
+	#define CQ_RES_UD_FLAGS_ROCE_IP_VER_V2IPV6		   (0x3UL << 2)
+	#define CQ_RES_UD_FLAGS_ROCE_IP_VER_LAST    CQ_RES_UD_FLAGS_ROCE_IP_VER_V2IPV6
+	__le32 src_qp_high_srq_or_rq_wr_id;
+	#define CQ_RES_UD_SRQ_OR_RQ_WR_ID_MASK			    0xfffffUL
+	#define CQ_RES_UD_SRQ_OR_RQ_WR_ID_SFT			    0
+	#define CQ_RES_UD_RESERVED4_MASK			    0xf00000UL
+	#define CQ_RES_UD_RESERVED4_SFT			    20
+	#define CQ_RES_UD_SRC_QP_HIGH_MASK			    0xff000000UL
+	#define CQ_RES_UD_SRC_QP_HIGH_SFT			    24
+};
+
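+/*
+ * Illustrative sketch, an assumption from the field layout rather than
+ * part of the generated interface: the 24-bit source QP number of a UD
+ * CQE is split between src_qp_low and the top byte of
+ * src_qp_high_srq_or_rq_wr_id, so a consumer would reassemble it as:
+ *
+ *	u32 hi = (le32_to_cpu(cqe->src_qp_high_srq_or_rq_wr_id) &
+ *		  CQ_RES_UD_SRC_QP_HIGH_MASK) >> CQ_RES_UD_SRC_QP_HIGH_SFT;
+ *	u32 src_qp = (hi << 16) | le16_to_cpu(cqe->src_qp_low);
+ */
+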
+/* Responder RawEth and QP1 CQE (32 bytes) */
+struct cq_res_raweth_qp1 {
+	__le16 length;
+	#define CQ_RES_RAWETH_QP1_LENGTH_MASK			    0x3fffUL
+	#define CQ_RES_RAWETH_QP1_LENGTH_SFT			    0
+	#define CQ_RES_RAWETH_QP1_RESERVED2_MASK		    0xc000UL
+	#define CQ_RES_RAWETH_QP1_RESERVED2_SFT		    14
+	__le16 raweth_qp1_flags;
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ERROR	    0x1UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_RESERVED5_1_MASK 0x3eUL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_RESERVED5_1_SFT 1
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_MASK      0x3c0UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_SFT       6
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_NOT_KNOWN (0x0UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_IP       (0x1UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_TCP      (0x2UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_UDP      (0x3UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_FCOE     (0x4UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_ROCE     (0x5UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_ICMP     (0x7UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_PTP_WO_TIMESTAMP (0x8UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_PTP_W_TIMESTAMP (0x9UL << 6)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_LAST    CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_PTP_W_TIMESTAMP
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_MASK	    0x3ffUL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_SFT		    0
+	#define CQ_RES_RAWETH_QP1_RESERVED6_MASK		    0xfc00UL
+	#define CQ_RES_RAWETH_QP1_RESERVED6_SFT		    10
+	__le16 raweth_qp1_errors;
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_RESERVED4_MASK 0xfUL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_RESERVED4_SFT  0
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_IP_CS_ERROR    0x10UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_L4_CS_ERROR    0x20UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_IP_CS_ERROR  0x40UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_L4_CS_ERROR  0x80UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_CRC_ERROR      0x100UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_MASK 0xe00UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_SFT 9
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_NO_ERROR (0x0UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION (0x1UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN (0x2UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_TUNNEL_TOTAL_ERROR (0x3UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR (0x4UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR (0x5UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL (0x6UL << 9)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_LAST    CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_MASK 0xf000UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_SFT  12
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_NO_ERROR (0x0UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L3_BAD_VERSION (0x1UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN (0x2UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L3_BAD_TTL (0x3UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_IP_TOTAL_ERROR (0x4UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR (0x5UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN (0x6UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL (0x7UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN (0x8UL << 12)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_LAST    CQ_RES_RAWETH_QP1_RAWETH_QP1_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN
+	__le16 raweth_qp1_cfa_code;
+	__le64 qp_handle;
+	__le32 raweth_qp1_flags2;
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_IP_CS_CALC     0x1UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_L4_CS_CALC     0x2UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_T_IP_CS_CALC   0x4UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_T_L4_CS_CALC   0x8UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_META_FORMAT_MASK 0xf0UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_META_FORMAT_SFT 4
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_META_FORMAT_NONE (0x0UL << 4)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_META_FORMAT_VLAN (0x1UL << 4)
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_META_FORMAT_LAST    CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_META_FORMAT_VLAN
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_IP_TYPE	    0x100UL
+	__le32 raweth_qp1_metadata;
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_VID_MASK     0xfffUL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_VID_SFT      0
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_DE	    0x1000UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_PRI_MASK     0xe000UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_PRI_SFT      13
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_TPID_MASK    0xffff0000UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_METADATA_TPID_SFT     16
+	u8 cqe_type_toggle;
+	#define CQ_RES_RAWETH_QP1_TOGGLE			    0x1UL
+	#define CQ_RES_RAWETH_QP1_CQE_TYPE_MASK		    0x1eUL
+	#define CQ_RES_RAWETH_QP1_CQE_TYPE_SFT			    1
+	#define CQ_RES_RAWETH_QP1_CQE_TYPE_RES_RAWETH_QP1	   (0x3UL << 1)
+	#define CQ_RES_RAWETH_QP1_RESERVED3_MASK		    0xe0UL
+	#define CQ_RES_RAWETH_QP1_RESERVED3_SFT		    5
+	u8 status;
+	#define CQ_RES_RAWETH_QP1_STATUS_OK			   0x0UL
+	#define CQ_RES_RAWETH_QP1_STATUS_LOCAL_ACCESS_ERROR       0x1UL
+	#define CQ_RES_RAWETH_QP1_STATUS_HW_LOCAL_LENGTH_ERR      0x2UL
+	#define CQ_RES_RAWETH_QP1_STATUS_LOCAL_PROTECTION_ERR     0x3UL
+	#define CQ_RES_RAWETH_QP1_STATUS_LOCAL_QP_OPERATION_ERR   0x4UL
+	#define CQ_RES_RAWETH_QP1_STATUS_MEMORY_MGT_OPERATION_ERR 0x5UL
+	#define CQ_RES_RAWETH_QP1_STATUS_WORK_REQUEST_FLUSHED_ERR 0x7UL
+	#define CQ_RES_RAWETH_QP1_STATUS_HW_FLUSH_ERR		   0x8UL
+	__le16 flags;
+	#define CQ_RES_RAWETH_QP1_FLAGS_SRQ			    0x1UL
+	#define CQ_RES_RAWETH_QP1_FLAGS_SRQ_RQ			   0x0UL
+	#define CQ_RES_RAWETH_QP1_FLAGS_SRQ_SRQ		   0x1UL
+	#define CQ_RES_RAWETH_QP1_FLAGS_SRQ_LAST    CQ_RES_RAWETH_QP1_FLAGS_SRQ_SRQ
+	__le32 raweth_qp1_payload_offset_srq_or_rq_wr_id;
+	#define CQ_RES_RAWETH_QP1_SRQ_OR_RQ_WR_ID_MASK		    0xfffffUL
+	#define CQ_RES_RAWETH_QP1_SRQ_OR_RQ_WR_ID_SFT		    0
+	#define CQ_RES_RAWETH_QP1_RESERVED4_MASK		    0xf00000UL
+	#define CQ_RES_RAWETH_QP1_RESERVED4_SFT		    20
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_PAYLOAD_OFFSET_MASK   0xff000000UL
+	#define CQ_RES_RAWETH_QP1_RAWETH_QP1_PAYLOAD_OFFSET_SFT    24
+};
+
+/* Terminal CQE (32 bytes) */
+struct cq_terminal {
+	__le64 qp_handle;
+	__le16 sq_cons_idx;
+	__le16 rq_cons_idx;
+	__le32 reserved32_1;
+	__le64 reserved64_3;
+	u8 cqe_type_toggle;
+	#define CQ_TERMINAL_TOGGLE				    0x1UL
+	#define CQ_TERMINAL_CQE_TYPE_MASK			    0x1eUL
+	#define CQ_TERMINAL_CQE_TYPE_SFT			    1
+	#define CQ_TERMINAL_CQE_TYPE_TERMINAL			   (0xeUL << 1)
+	#define CQ_TERMINAL_RESERVED3_MASK			    0xe0UL
+	#define CQ_TERMINAL_RESERVED3_SFT			    5
+	u8 status;
+	#define CQ_TERMINAL_STATUS_OK				   0x0UL
+	__le16 reserved16;
+	__le32 reserved32_2;
+};
+
+/* Cutoff CQE (32 bytes) */
+struct cq_cutoff {
+	__le64 reserved64_1;
+	__le64 reserved64_2;
+	__le64 reserved64_3;
+	u8 cqe_type_toggle;
+	#define CQ_CUTOFF_TOGGLE				    0x1UL
+	#define CQ_CUTOFF_CQE_TYPE_MASK			    0x1eUL
+	#define CQ_CUTOFF_CQE_TYPE_SFT				    1
+	#define CQ_CUTOFF_CQE_TYPE_CUT_OFF			   (0xfUL << 1)
+	#define CQ_CUTOFF_RESERVED3_MASK			    0xe0UL
+	#define CQ_CUTOFF_RESERVED3_SFT			    5
+	u8 status;
+	#define CQ_CUTOFF_STATUS_OK				   0x0UL
+	__le16 reserved16;
+	__le32 reserved32;
+};
+
+/* Notification Queue (NQ) Structures */
+/* Base NQ Record (16 bytes) */
+struct nq_base {
+	__le16 info10_type;
+	#define NQ_BASE_TYPE_MASK				    0x3fUL
+	#define NQ_BASE_TYPE_SFT				    0
+	#define NQ_BASE_TYPE_CQ_NOTIFICATION			   0x30UL
+	#define NQ_BASE_TYPE_SRQ_EVENT				   0x32UL
+	#define NQ_BASE_TYPE_DBQ_EVENT				   0x34UL
+	#define NQ_BASE_TYPE_QP_EVENT				   0x38UL
+	#define NQ_BASE_TYPE_FUNC_EVENT			   0x3aUL
+	#define NQ_BASE_INFO10_MASK				    0xffc0UL
+	#define NQ_BASE_INFO10_SFT				    6
+	__le16 info16;
+	__le32 info32;
+	__le32 info63_v[2];
+	#define NQ_BASE_V					    0x1UL
+	#define NQ_BASE_INFO63_MASK				    0xfffffffeUL
+	#define NQ_BASE_INFO63_SFT				    1
+};
+
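+/*
+ * Illustrative sketch, not part of the generated interface: NQ entries
+ * share this base layout, so a hypothetical handler would switch on
+ * the type field before casting to the specific record:
+ *
+ *	switch (le16_to_cpu(nqe->info10_type) & NQ_BASE_TYPE_MASK) {
+ *	case NQ_BASE_TYPE_CQ_NOTIFICATION:
+ *		handle_cn((struct nq_cn *)nqe);
+ *		break;
+ *	case NQ_BASE_TYPE_SRQ_EVENT:
+ *		handle_srq_event((struct nq_srq_event *)nqe);
+ *		break;
+ *	}
+ */
+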
+/* Completion Queue Notification (16 bytes) */
+struct nq_cn {
+	__le16 type;
+	#define NQ_CN_TYPE_MASK				    0x3fUL
+	#define NQ_CN_TYPE_SFT					    0
+	#define NQ_CN_TYPE_CQ_NOTIFICATION			   0x30UL
+	#define NQ_CN_RESERVED9_MASK				    0xffc0UL
+	#define NQ_CN_RESERVED9_SFT				    6
+	__le16 reserved16;
+	__le32 cq_handle_low;
+	__le32 v;
+	#define NQ_CN_V					    0x1UL
+	#define NQ_CN_RESERVED31_MASK				    0xfffffffeUL
+	#define NQ_CN_RESERVED31_SFT				    1
+	__le32 cq_handle_high;
+};
+
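+/*
+ * Illustrative sketch, an assumption from the naming rather than part
+ * of the generated interface: the 64-bit CQ handle arrives in two
+ * 32-bit halves, valid only when NQ_CN_V is set:
+ *
+ *	u64 cq_handle = ((u64)le32_to_cpu(nqe->cq_handle_high) << 32) |
+ *			le32_to_cpu(nqe->cq_handle_low);
+ */
+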
+/* SRQ Event Notification (16 bytes) */
+struct nq_srq_event {
+	u8 type;
+	#define NQ_SRQ_EVENT_TYPE_MASK				    0x3fUL
+	#define NQ_SRQ_EVENT_TYPE_SFT				    0
+	#define NQ_SRQ_EVENT_TYPE_SRQ_EVENT			   0x32UL
+	#define NQ_SRQ_EVENT_RESERVED1_MASK			    0xc0UL
+	#define NQ_SRQ_EVENT_RESERVED1_SFT			    6
+	u8 event;
+	#define NQ_SRQ_EVENT_EVENT_SRQ_THRESHOLD_EVENT		   0x1UL
+	__le16 reserved16;
+	__le32 srq_handle_low;
+	__le32 v;
+	#define NQ_SRQ_EVENT_V					    0x1UL
+	#define NQ_SRQ_EVENT_RESERVED31_MASK			    0xfffffffeUL
+	#define NQ_SRQ_EVENT_RESERVED31_SFT			    1
+	__le32 srq_handle_high;
+};
+
+/* DBQ Async Event Notification (16 bytes) */
+struct nq_dbq_event {
+	u8 type;
+	#define NQ_DBQ_EVENT_TYPE_MASK				    0x3fUL
+	#define NQ_DBQ_EVENT_TYPE_SFT				    0
+	#define NQ_DBQ_EVENT_TYPE_DBQ_EVENT			   0x34UL
+	#define NQ_DBQ_EVENT_RESERVED1_MASK			    0xc0UL
+	#define NQ_DBQ_EVENT_RESERVED1_SFT			    6
+	u8 event;
+	#define NQ_DBQ_EVENT_EVENT_DBQ_THRESHOLD_EVENT		   0x1UL
+	__le16 db_pfid;
+	#define NQ_DBQ_EVENT_DB_PFID_MASK			    0xfUL
+	#define NQ_DBQ_EVENT_DB_PFID_SFT			    0
+	#define NQ_DBQ_EVENT_RESERVED12_MASK			    0xfff0UL
+	#define NQ_DBQ_EVENT_RESERVED12_SFT			    4
+	__le32 db_dpi;
+	#define NQ_DBQ_EVENT_DB_DPI_MASK			    0xfffffUL
+	#define NQ_DBQ_EVENT_DB_DPI_SFT			    0
+	#define NQ_DBQ_EVENT_RESERVED12_2_MASK			    0xfff00000UL
+	#define NQ_DBQ_EVENT_RESERVED12_2_SFT			    20
+	__le32 v;
+	#define NQ_DBQ_EVENT_V					    0x1UL
+	#define NQ_DBQ_EVENT_RESERVED32_MASK			    0xfffffffeUL
+	#define NQ_DBQ_EVENT_RESERVED32_SFT			    1
+	__le32 db_type_db_xid;
+	#define NQ_DBQ_EVENT_DB_XID_MASK			    0xfffffUL
+	#define NQ_DBQ_EVENT_DB_XID_SFT			    0
+	#define NQ_DBQ_EVENT_RESERVED8_MASK			    0xff00000UL
+	#define NQ_DBQ_EVENT_RESERVED8_SFT			    20
+	#define NQ_DBQ_EVENT_DB_TYPE_MASK			    0xf0000000UL
+	#define NQ_DBQ_EVENT_DB_TYPE_SFT			    28
+};
+
+/* Read Request/Response Queue Structures */
+/* Input Read Request Queue (IRRQ) Message (32 bytes) */
+struct xrrq_irrq {
+	__le16 credits_type;
+	#define XRRQ_IRRQ_TYPE					    0x1UL
+	#define XRRQ_IRRQ_TYPE_READ_REQ			   0x0UL
+	#define XRRQ_IRRQ_TYPE_ATOMIC_REQ			   0x1UL
+	#define XRRQ_IRRQ_RESERVED10_MASK			    0x7feUL
+	#define XRRQ_IRRQ_RESERVED10_SFT			    1
+	#define XRRQ_IRRQ_CREDITS_MASK				    0xf800UL
+	#define XRRQ_IRRQ_CREDITS_SFT				    11
+	__le16 reserved16;
+	__le32 reserved32;
+	__le32 psn;
+	#define XRRQ_IRRQ_PSN_MASK				    0xffffffUL
+	#define XRRQ_IRRQ_PSN_SFT				    0
+	#define XRRQ_IRRQ_RESERVED8_1_MASK			    0xff000000UL
+	#define XRRQ_IRRQ_RESERVED8_1_SFT			    24
+	__le32 msn;
+	#define XRRQ_IRRQ_MSN_MASK				    0xffffffUL
+	#define XRRQ_IRRQ_MSN_SFT				    0
+	#define XRRQ_IRRQ_RESERVED8_2_MASK			    0xff000000UL
+	#define XRRQ_IRRQ_RESERVED8_2_SFT			    24
+	__le64 va_or_atomic_result;
+	__le32 rdma_r_key;
+	__le32 length;
+};
+
+/* Output Read Request Queue (ORRQ) Message (32 bytes) */
+struct xrrq_orrq {
+	__le16 num_sges_type;
+	#define XRRQ_ORRQ_TYPE					    0x1UL
+	#define XRRQ_ORRQ_TYPE_READ_REQ			   0x0UL
+	#define XRRQ_ORRQ_TYPE_ATOMIC_REQ			   0x1UL
+	#define XRRQ_ORRQ_RESERVED10_MASK			    0x7feUL
+	#define XRRQ_ORRQ_RESERVED10_SFT			    1
+	#define XRRQ_ORRQ_NUM_SGES_MASK			    0xf800UL
+	#define XRRQ_ORRQ_NUM_SGES_SFT				    11
+	__le16 reserved16;
+	__le32 length;
+	__le32 psn;
+	#define XRRQ_ORRQ_PSN_MASK				    0xffffffUL
+	#define XRRQ_ORRQ_PSN_SFT				    0
+	#define XRRQ_ORRQ_RESERVED8_1_MASK			    0xff000000UL
+	#define XRRQ_ORRQ_RESERVED8_1_SFT			    24
+	__le32 end_psn;
+	#define XRRQ_ORRQ_END_PSN_MASK				    0xffffffUL
+	#define XRRQ_ORRQ_END_PSN_SFT				    0
+	#define XRRQ_ORRQ_RESERVED8_2_MASK			    0xff000000UL
+	#define XRRQ_ORRQ_RESERVED8_2_SFT			    24
+	__le64 first_sge_phy_or_sing_sge_va;
+	__le32 single_sge_l_key;
+	__le32 single_sge_size;
+};
+
+/* Page Buffer List Memory Structures (PBL) */
+/* Page Table Entry (PTE) (8 bytes) */
+struct ptu_pte {
+	__le32 page_next_to_last_last_valid[2];
+	#define PTU_PTE_VALID					    0x1UL
+	#define PTU_PTE_LAST					    0x2UL
+	#define PTU_PTE_NEXT_TO_LAST				    0x4UL
+	#define PTU_PTE_PAGE_MASK				    0xfffff000UL
+	#define PTU_PTE_PAGE_SFT				    12
+};
+
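+/*
+ * Illustrative sketch, not part of the generated interface: with a
+ * 4K-aligned page address the low PTU_PTE_PAGE_SFT bits are free, so a
+ * hypothetical PBL builder would encode the final PTE of a level as:
+ *
+ *	u64 pte = page_dma | PTU_PTE_VALID | PTU_PTE_LAST;
+ *
+ * stored little-endian across the two dwords of the entry.
+ */
+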
+/* Page Directory Entry (PDE) (8 bytes) */
+struct ptu_pde {
+	__le32 page_valid[2];
+	#define PTU_PDE_VALID					    0x1UL
+	#define PTU_PDE_PAGE_MASK				    0xfffff000UL
+	#define PTU_PDE_PAGE_SFT				    12
+};
+
+/* RoCE Fastpath Host Structures */
+/* Command Queue (CMDQ) Interface */
+/* Init CMDQ (16 bytes) */
+struct cmdq_init {
+	__le64 cmdq_pbl;
+	__le16 cmdq_size_cmdq_lvl;
+	#define CMDQ_INIT_CMDQ_LVL_MASK			    0x3UL
+	#define CMDQ_INIT_CMDQ_LVL_SFT				    0
+	#define CMDQ_INIT_CMDQ_SIZE_MASK			    0xfffcUL
+	#define CMDQ_INIT_CMDQ_SIZE_SFT			    2
+	__le16 creq_ring_id;
+	__le32 prod_idx;
+};
+
+/* Update CMDQ producer index (16 bytes) */
+struct cmdq_update {
+	__le64 reserved64;
+	__le32 reserved32;
+	__le32 prod_idx;
+};
+
+/* CMDQ common header structure (16 bytes) */
+struct cmdq_base {
+	u8 opcode;
+	#define CMDQ_BASE_OPCODE_CREATE_QP			   0x1UL
+	#define CMDQ_BASE_OPCODE_DESTROY_QP			   0x2UL
+	#define CMDQ_BASE_OPCODE_MODIFY_QP			   0x3UL
+	#define CMDQ_BASE_OPCODE_QUERY_QP			   0x4UL
+	#define CMDQ_BASE_OPCODE_CREATE_SRQ			   0x5UL
+	#define CMDQ_BASE_OPCODE_DESTROY_SRQ			   0x6UL
+	#define CMDQ_BASE_OPCODE_QUERY_SRQ			   0x8UL
+	#define CMDQ_BASE_OPCODE_CREATE_CQ			   0x9UL
+	#define CMDQ_BASE_OPCODE_DESTROY_CQ			   0xaUL
+	#define CMDQ_BASE_OPCODE_RESIZE_CQ			   0xcUL
+	#define CMDQ_BASE_OPCODE_ALLOCATE_MRW			   0xdUL
+	#define CMDQ_BASE_OPCODE_DEALLOCATE_KEY		   0xeUL
+	#define CMDQ_BASE_OPCODE_REGISTER_MR			   0xfUL
+	#define CMDQ_BASE_OPCODE_DEREGISTER_MR			   0x10UL
+	#define CMDQ_BASE_OPCODE_ADD_GID			   0x11UL
+	#define CMDQ_BASE_OPCODE_DELETE_GID			   0x12UL
+	#define CMDQ_BASE_OPCODE_MODIFY_GID			   0x17UL
+	#define CMDQ_BASE_OPCODE_QUERY_GID			   0x18UL
+	#define CMDQ_BASE_OPCODE_CREATE_QP1			   0x13UL
+	#define CMDQ_BASE_OPCODE_DESTROY_QP1			   0x14UL
+	#define CMDQ_BASE_OPCODE_CREATE_AH			   0x15UL
+	#define CMDQ_BASE_OPCODE_DESTROY_AH			   0x16UL
+	#define CMDQ_BASE_OPCODE_INITIALIZE_FW			   0x80UL
+	#define CMDQ_BASE_OPCODE_DEINITIALIZE_FW		   0x81UL
+	#define CMDQ_BASE_OPCODE_STOP_FUNC			   0x82UL
+	#define CMDQ_BASE_OPCODE_QUERY_FUNC			   0x83UL
+	#define CMDQ_BASE_OPCODE_SET_FUNC_RESOURCES		   0x84UL
+	#define CMDQ_BASE_OPCODE_READ_CONTEXT			   0x85UL
+	#define CMDQ_BASE_OPCODE_VF_BACKCHANNEL_REQUEST	   0x86UL
+	#define CMDQ_BASE_OPCODE_READ_VF_MEMORY		   0x87UL
+	#define CMDQ_BASE_OPCODE_COMPLETE_VF_REQUEST		   0x88UL
+	#define CMDQ_BASE_OPCODE_EXTEND_CONTEXT_ARRAY		   0x89UL
+	#define CMDQ_BASE_OPCODE_MAP_TC_TO_COS			   0x8aUL
+	#define CMDQ_BASE_OPCODE_QUERY_VERSION			   0x8bUL
+	#define CMDQ_BASE_OPCODE_MODIFY_CC			   0x8cUL
+	#define CMDQ_BASE_OPCODE_QUERY_CC			   0x8dUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+};
+
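+/*
+ * Illustrative sketch, not part of the generated interface: every CMDQ
+ * command begins with this header. The cookie chosen here is echoed
+ * back in the completion (struct creq_qp_event below), and resp_addr
+ * is assumed to carry the DMA address of a side buffer for commands
+ * that return one:
+ *
+ *	req.opcode = CMDQ_BASE_OPCODE_QUERY_QP;
+ *	req.cookie = cpu_to_le16(cookie);
+ *	req.resp_addr = cpu_to_le64(sb_dma);
+ */
+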
+/* Create QP command (96 bytes) */
+struct cmdq_create_qp {
+	u8 opcode;
+	#define CMDQ_CREATE_QP_OPCODE_CREATE_QP		   0x1UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le64 qp_handle;
+	__le32 qp_flags;
+	#define CMDQ_CREATE_QP_QP_FLAGS_SRQ_USED		   0x1UL
+	#define CMDQ_CREATE_QP_QP_FLAGS_FORCE_COMPLETION	   0x2UL
+	#define CMDQ_CREATE_QP_QP_FLAGS_RESERVED_LKEY_ENABLE      0x4UL
+	#define CMDQ_CREATE_QP_QP_FLAGS_FR_PMR_ENABLED		   0x8UL
+	u8 type;
+	#define CMDQ_CREATE_QP_TYPE_RC				   0x2UL
+	#define CMDQ_CREATE_QP_TYPE_UD				   0x4UL
+	#define CMDQ_CREATE_QP_TYPE_RAW_ETHERTYPE		   0x6UL
+	u8 sq_pg_size_sq_lvl;
+	#define CMDQ_CREATE_QP_SQ_LVL_MASK			    0xfUL
+	#define CMDQ_CREATE_QP_SQ_LVL_SFT			    0
+	#define CMDQ_CREATE_QP_SQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_CREATE_QP_SQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_CREATE_QP_SQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_MASK			    0xf0UL
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_SFT			    4
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_CREATE_QP_SQ_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 rq_pg_size_rq_lvl;
+	#define CMDQ_CREATE_QP_RQ_LVL_MASK			    0xfUL
+	#define CMDQ_CREATE_QP_RQ_LVL_SFT			    0
+	#define CMDQ_CREATE_QP_RQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_CREATE_QP_RQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_CREATE_QP_RQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_MASK			    0xf0UL
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_SFT			    4
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_CREATE_QP_RQ_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 unused_0;
+	__le32 dpi;
+	__le32 sq_size;
+	__le32 rq_size;
+	__le16 sq_fwo_sq_sge;
+	#define CMDQ_CREATE_QP_SQ_SGE_MASK			    0xfUL
+	#define CMDQ_CREATE_QP_SQ_SGE_SFT			    0
+	#define CMDQ_CREATE_QP_SQ_FWO_MASK			    0xfff0UL
+	#define CMDQ_CREATE_QP_SQ_FWO_SFT			    4
+	__le16 rq_fwo_rq_sge;
+	#define CMDQ_CREATE_QP_RQ_SGE_MASK			    0xfUL
+	#define CMDQ_CREATE_QP_RQ_SGE_SFT			    0
+	#define CMDQ_CREATE_QP_RQ_FWO_MASK			    0xfff0UL
+	#define CMDQ_CREATE_QP_RQ_FWO_SFT			    4
+	__le32 scq_cid;
+	__le32 rcq_cid;
+	__le32 srq_cid;
+	__le32 pd_id;
+	__le64 sq_pbl;
+	__le64 rq_pbl;
+	__le64 irrq_addr;
+	__le64 orrq_addr;
+};
+
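+/*
+ * Illustrative sketch, not part of the generated interface: the PBL
+ * indirection level and page size of each queue share one byte, built
+ * from the pre-shifted value macros:
+ *
+ *	req.sq_pg_size_sq_lvl = CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K |
+ *				CMDQ_CREATE_QP_SQ_LVL_LVL_1;
+ */
+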
+/* Destroy QP command (24 bytes) */
+struct cmdq_destroy_qp {
+	u8 opcode;
+	#define CMDQ_DESTROY_QP_OPCODE_DESTROY_QP		   0x2UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 qp_cid;
+	__le32 unused_0;
+};
+
+/* Modify QP command (112 bytes) */
+struct cmdq_modify_qp {
+	u8 opcode;
+	#define CMDQ_MODIFY_QP_OPCODE_MODIFY_QP		   0x3UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 modify_mask;
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_STATE		    0x1UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_EN_SQD_ASYNC_NOTIFY     0x2UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_ACCESS		    0x4UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_PKEY		    0x8UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_QKEY		    0x10UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_DGID		    0x20UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL		    0x40UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX		    0x80UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT		    0x100UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS	    0x200UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC		    0x400UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU		    0x1000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_TIMEOUT		    0x2000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_RETRY_CNT		    0x4000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_RNR_RETRY		    0x8000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN		    0x10000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_MAX_RD_ATOMIC	    0x20000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER	    0x40000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN		    0x80000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC      0x100000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_SQ_SIZE		    0x200000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_RQ_SIZE		    0x400000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_SQ_SGE		    0x800000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_RQ_SGE		    0x1000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_MAX_INLINE_DATA	    0x2000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID		    0x4000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_SRC_MAC		    0x8000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID		    0x10000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_ENABLE_CC		    0x20000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_TOS_ECN		    0x40000000UL
+	#define CMDQ_MODIFY_QP_MODIFY_MASK_TOS_DSCP		    0x80000000UL
+	__le32 qp_cid;
+	u8 network_type_en_sqd_async_notify_new_state;
+	#define CMDQ_MODIFY_QP_NEW_STATE_MASK			    0xfUL
+	#define CMDQ_MODIFY_QP_NEW_STATE_SFT			    0
+	#define CMDQ_MODIFY_QP_NEW_STATE_RESET			   0x0UL
+	#define CMDQ_MODIFY_QP_NEW_STATE_INIT			   0x1UL
+	#define CMDQ_MODIFY_QP_NEW_STATE_RTR			   0x2UL
+	#define CMDQ_MODIFY_QP_NEW_STATE_RTS			   0x3UL
+	#define CMDQ_MODIFY_QP_NEW_STATE_SQD			   0x4UL
+	#define CMDQ_MODIFY_QP_NEW_STATE_SQE			   0x5UL
+	#define CMDQ_MODIFY_QP_NEW_STATE_ERR			   0x6UL
+	#define CMDQ_MODIFY_QP_EN_SQD_ASYNC_NOTIFY		    0x10UL
+	#define CMDQ_MODIFY_QP_NETWORK_TYPE_MASK		    0xc0UL
+	#define CMDQ_MODIFY_QP_NETWORK_TYPE_SFT		    6
+	#define CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV1		   (0x0UL << 6)
+	#define CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV2_IPV4	   (0x2UL << 6)
+	#define CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV2_IPV6	   (0x3UL << 6)
+	u8 access;
+	#define CMDQ_MODIFY_QP_ACCESS_LOCAL_WRITE		    0x1UL
+	#define CMDQ_MODIFY_QP_ACCESS_REMOTE_WRITE		    0x2UL
+	#define CMDQ_MODIFY_QP_ACCESS_REMOTE_READ		    0x4UL
+	#define CMDQ_MODIFY_QP_ACCESS_REMOTE_ATOMIC		    0x8UL
+	__le16 pkey;
+	__le32 qkey;
+	__le32 dgid[4];
+	__le32 flow_label;
+	__le16 sgid_index;
+	u8 hop_limit;
+	u8 traffic_class;
+	__le16 dest_mac[3];
+	u8 tos_dscp_tos_ecn;
+	#define CMDQ_MODIFY_QP_TOS_ECN_MASK			    0x3UL
+	#define CMDQ_MODIFY_QP_TOS_ECN_SFT			    0
+	#define CMDQ_MODIFY_QP_TOS_DSCP_MASK			    0xfcUL
+	#define CMDQ_MODIFY_QP_TOS_DSCP_SFT			    2
+	u8 path_mtu;
+	#define CMDQ_MODIFY_QP_PATH_MTU_MASK			    0xf0UL
+	#define CMDQ_MODIFY_QP_PATH_MTU_SFT			    4
+	#define CMDQ_MODIFY_QP_PATH_MTU_MTU_256		   (0x0UL << 4)
+	#define CMDQ_MODIFY_QP_PATH_MTU_MTU_512		   (0x1UL << 4)
+	#define CMDQ_MODIFY_QP_PATH_MTU_MTU_1024		   (0x2UL << 4)
+	#define CMDQ_MODIFY_QP_PATH_MTU_MTU_2048		   (0x3UL << 4)
+	#define CMDQ_MODIFY_QP_PATH_MTU_MTU_4096		   (0x4UL << 4)
+	#define CMDQ_MODIFY_QP_PATH_MTU_MTU_8192		   (0x5UL << 4)
+	u8 timeout;
+	u8 retry_cnt;
+	u8 rnr_retry;
+	u8 min_rnr_timer;
+	__le32 rq_psn;
+	__le32 sq_psn;
+	u8 max_rd_atomic;
+	u8 max_dest_rd_atomic;
+	__le16 enable_cc;
+	#define CMDQ_MODIFY_QP_ENABLE_CC			    0x1UL
+	__le32 sq_size;
+	__le32 rq_size;
+	__le16 sq_sge;
+	__le16 rq_sge;
+	__le32 max_inline_data;
+	__le32 dest_qp_id;
+	__le32 unused_3;
+	__le16 src_mac[3];
+	__le16 vlan_pcp_vlan_dei_vlan_id;
+	#define CMDQ_MODIFY_QP_VLAN_ID_MASK			    0xfffUL
+	#define CMDQ_MODIFY_QP_VLAN_ID_SFT			    0
+	#define CMDQ_MODIFY_QP_VLAN_DEI			    0x1000UL
+	#define CMDQ_MODIFY_QP_VLAN_PCP_MASK			    0xe000UL
+	#define CMDQ_MODIFY_QP_VLAN_PCP_SFT			    13
+};
+
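+/*
+ * Illustrative sketch, not part of the generated interface: only the
+ * attributes whose bits are set in modify_mask are applied, mirroring
+ * the ib_qp_attr_mask convention:
+ *
+ *	req.modify_mask = cpu_to_le32(CMDQ_MODIFY_QP_MODIFY_MASK_STATE |
+ *				      CMDQ_MODIFY_QP_MODIFY_MASK_PKEY);
+ */
+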
+/* Query QP command (24 bytes) */
+struct cmdq_query_qp {
+	u8 opcode;
+	#define CMDQ_QUERY_QP_OPCODE_QUERY_QP			   0x4UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 qp_cid;
+	__le32 unused_0;
+};
+
+/* Create SRQ command (48 bytes) */
+struct cmdq_create_srq {
+	u8 opcode;
+	#define CMDQ_CREATE_SRQ_OPCODE_CREATE_SRQ		   0x5UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le64 srq_handle;
+	__le16 pg_size_lvl;
+	#define CMDQ_CREATE_SRQ_LVL_MASK			    0x3UL
+	#define CMDQ_CREATE_SRQ_LVL_SFT			    0
+	#define CMDQ_CREATE_SRQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_CREATE_SRQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_CREATE_SRQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_CREATE_SRQ_PG_SIZE_MASK			    0x1cUL
+	#define CMDQ_CREATE_SRQ_PG_SIZE_SFT			    2
+	#define CMDQ_CREATE_SRQ_PG_SIZE_PG_4K			   (0x0UL << 2)
+	#define CMDQ_CREATE_SRQ_PG_SIZE_PG_8K			   (0x1UL << 2)
+	#define CMDQ_CREATE_SRQ_PG_SIZE_PG_64K			   (0x2UL << 2)
+	#define CMDQ_CREATE_SRQ_PG_SIZE_PG_2M			   (0x3UL << 2)
+	#define CMDQ_CREATE_SRQ_PG_SIZE_PG_8M			   (0x4UL << 2)
+	#define CMDQ_CREATE_SRQ_PG_SIZE_PG_1G			   (0x5UL << 2)
+	__le16 eventq_id;
+	#define CMDQ_CREATE_SRQ_EVENTQ_ID_MASK			    0xfffUL
+	#define CMDQ_CREATE_SRQ_EVENTQ_ID_SFT			    0
+	__le16 srq_size;
+	__le16 srq_fwo;
+	__le32 dpi;
+	__le32 pd_id;
+	__le64 pbl;
+};
+
+/* Destroy SRQ command (24 bytes) */
+struct cmdq_destroy_srq {
+	u8 opcode;
+	#define CMDQ_DESTROY_SRQ_OPCODE_DESTROY_SRQ		   0x6UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 srq_cid;
+	__le32 unused_0;
+};
+
+/* Query SRQ command (24 bytes) */
+struct cmdq_query_srq {
+	u8 opcode;
+	#define CMDQ_QUERY_SRQ_OPCODE_QUERY_SRQ		   0x8UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 srq_cid;
+	__le32 unused_0;
+};
+
+/* Create CQ command (48 bytes) */
+struct cmdq_create_cq {
+	u8 opcode;
+	#define CMDQ_CREATE_CQ_OPCODE_CREATE_CQ		   0x9UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le64 cq_handle;
+	__le32 pg_size_lvl;
+	#define CMDQ_CREATE_CQ_LVL_MASK			    0x3UL
+	#define CMDQ_CREATE_CQ_LVL_SFT				    0
+	#define CMDQ_CREATE_CQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_CREATE_CQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_CREATE_CQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_CREATE_CQ_PG_SIZE_MASK			    0x1cUL
+	#define CMDQ_CREATE_CQ_PG_SIZE_SFT			    2
+	#define CMDQ_CREATE_CQ_PG_SIZE_PG_4K			   (0x0UL << 2)
+	#define CMDQ_CREATE_CQ_PG_SIZE_PG_8K			   (0x1UL << 2)
+	#define CMDQ_CREATE_CQ_PG_SIZE_PG_64K			   (0x2UL << 2)
+	#define CMDQ_CREATE_CQ_PG_SIZE_PG_2M			   (0x3UL << 2)
+	#define CMDQ_CREATE_CQ_PG_SIZE_PG_8M			   (0x4UL << 2)
+	#define CMDQ_CREATE_CQ_PG_SIZE_PG_1G			   (0x5UL << 2)
+	__le32 cq_fco_cnq_id;
+	#define CMDQ_CREATE_CQ_CNQ_ID_MASK			    0xfffUL
+	#define CMDQ_CREATE_CQ_CNQ_ID_SFT			    0
+	#define CMDQ_CREATE_CQ_CQ_FCO_MASK			    0xfffff000UL
+	#define CMDQ_CREATE_CQ_CQ_FCO_SFT			    12
+	__le32 dpi;
+	__le32 cq_size;
+	__le64 pbl;
+};
+
+/* Destroy CQ command (24 bytes) */
+struct cmdq_destroy_cq {
+	u8 opcode;
+	#define CMDQ_DESTROY_CQ_OPCODE_DESTROY_CQ		   0xaUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 cq_cid;
+	__le32 unused_0;
+};
+
+/* Resize CQ command (40 bytes) */
+struct cmdq_resize_cq {
+	u8 opcode;
+	#define CMDQ_RESIZE_CQ_OPCODE_RESIZE_CQ		   0xcUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 cq_cid;
+	__le32 new_cq_size_pg_size_lvl;
+	#define CMDQ_RESIZE_CQ_LVL_MASK			    0x3UL
+	#define CMDQ_RESIZE_CQ_LVL_SFT				    0
+	#define CMDQ_RESIZE_CQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_RESIZE_CQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_RESIZE_CQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_RESIZE_CQ_PG_SIZE_MASK			    0x1cUL
+	#define CMDQ_RESIZE_CQ_PG_SIZE_SFT			    2
+	#define CMDQ_RESIZE_CQ_PG_SIZE_PG_4K			   (0x0UL << 2)
+	#define CMDQ_RESIZE_CQ_PG_SIZE_PG_8K			   (0x1UL << 2)
+	#define CMDQ_RESIZE_CQ_PG_SIZE_PG_64K			   (0x2UL << 2)
+	#define CMDQ_RESIZE_CQ_PG_SIZE_PG_2M			   (0x3UL << 2)
+	#define CMDQ_RESIZE_CQ_PG_SIZE_PG_8M			   (0x4UL << 2)
+	#define CMDQ_RESIZE_CQ_PG_SIZE_PG_1G			   (0x5UL << 2)
+	#define CMDQ_RESIZE_CQ_NEW_CQ_SIZE_MASK		    0x1fffe0UL
+	#define CMDQ_RESIZE_CQ_NEW_CQ_SIZE_SFT			    5
+	__le64 new_pbl;
+	__le32 new_cq_fco;
+	__le32 unused_2;
+};
+
+/* Allocate MRW command (32 bytes) */
+struct cmdq_allocate_mrw {
+	u8 opcode;
+	#define CMDQ_ALLOCATE_MRW_OPCODE_ALLOCATE_MRW		   0xdUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le64 mrw_handle;
+	u8 mrw_flags;
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_MASK		    0xfUL
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_SFT		    0
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_MR			   0x0UL
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR		   0x1UL
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE1		   0x2UL
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2A		   0x3UL
+	#define CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2B		   0x4UL
+	u8 access;
+	#define CMDQ_ALLOCATE_MRW_ACCESS_RESERVED_MASK		    0x1fUL
+	#define CMDQ_ALLOCATE_MRW_ACCESS_RESERVED_SFT		    0
+	#define CMDQ_ALLOCATE_MRW_ACCESS_CONSUMER_OWNED_KEY	    0x20UL
+	__le16 unused_1;
+	__le32 pd_id;
+};
+
+/* De-allocate key command (24 bytes) */
+struct cmdq_deallocate_key {
+	u8 opcode;
+	#define CMDQ_DEALLOCATE_KEY_OPCODE_DEALLOCATE_KEY	   0xeUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	u8 mrw_flags;
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_MASK		    0xfUL
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_SFT		    0
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_MR		   0x0UL
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_PMR		   0x1UL
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_MW_TYPE1		   0x2UL
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_MW_TYPE2A	   0x3UL
+	#define CMDQ_DEALLOCATE_KEY_MRW_FLAGS_MW_TYPE2B	   0x4UL
+	u8 unused_1[3];
+	__le32 key;
+};
+
+/* Register MR command (48 bytes) */
+struct cmdq_register_mr {
+	u8 opcode;
+	#define CMDQ_REGISTER_MR_OPCODE_REGISTER_MR		   0xfUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	u8 log2_pg_size_lvl;
+	#define CMDQ_REGISTER_MR_LVL_MASK			    0x3UL
+	#define CMDQ_REGISTER_MR_LVL_SFT			    0
+	#define CMDQ_REGISTER_MR_LVL_LVL_0			   0x0UL
+	#define CMDQ_REGISTER_MR_LVL_LVL_1			   0x1UL
+	#define CMDQ_REGISTER_MR_LVL_LVL_2			   0x2UL
+	#define CMDQ_REGISTER_MR_LOG2_PG_SIZE_MASK		    0x7cUL
+	#define CMDQ_REGISTER_MR_LOG2_PG_SIZE_SFT		    2
+	u8 access;
+	#define CMDQ_REGISTER_MR_ACCESS_LOCAL_WRITE		    0x1UL
+	#define CMDQ_REGISTER_MR_ACCESS_REMOTE_READ		    0x2UL
+	#define CMDQ_REGISTER_MR_ACCESS_REMOTE_WRITE		    0x4UL
+	#define CMDQ_REGISTER_MR_ACCESS_REMOTE_ATOMIC		    0x8UL
+	#define CMDQ_REGISTER_MR_ACCESS_MW_BIND		    0x10UL
+	#define CMDQ_REGISTER_MR_ACCESS_ZERO_BASED		    0x20UL
+	__le16 unused_1;
+	__le32 key;
+	__le64 pbl;
+	__le64 va;
+	__le64 mr_size;
+};
+
+/* Deregister MR command (24 bytes) */
+struct cmdq_deregister_mr {
+	u8 opcode;
+	#define CMDQ_DEREGISTER_MR_OPCODE_DEREGISTER_MR	   0x10UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 lkey;
+	__le32 unused_0;
+};
+
+/* Add GID command (48 bytes) */
+struct cmdq_add_gid {
+	u8 opcode;
+	#define CMDQ_ADD_GID_OPCODE_ADD_GID			   0x11UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 gid[4];
+	__le16 src_mac[3];
+	__le16 vlan;
+	#define CMDQ_ADD_GID_VLAN_VLAN_ID_MASK			    0xfffUL
+	#define CMDQ_ADD_GID_VLAN_VLAN_ID_SFT			    0
+	#define CMDQ_ADD_GID_VLAN_TPID_MASK			    0x7000UL
+	#define CMDQ_ADD_GID_VLAN_TPID_SFT			    12
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_88A8		   (0x0UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_8100		   (0x1UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_9100		   (0x2UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_9200		   (0x3UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_9300		   (0x4UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_CFG1		   (0x5UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_CFG2		   (0x6UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_TPID_CFG3		   (0x7UL << 12)
+	#define CMDQ_ADD_GID_VLAN_TPID_LAST    CMDQ_ADD_GID_VLAN_TPID_TPID_CFG3
+	#define CMDQ_ADD_GID_VLAN_VLAN_EN			    0x8000UL
+	__le16 ipid;
+	__le16 stats_ctx;
+	#define CMDQ_ADD_GID_STATS_CTX_STATS_CTX_ID_MASK	    0x7fffUL
+	#define CMDQ_ADD_GID_STATS_CTX_STATS_CTX_ID_SFT	    0
+	#define CMDQ_ADD_GID_STATS_CTX_STATS_CTX_VALID		    0x8000UL
+	__le32 unused_0;
+};
+
+/* Delete GID command (24 bytes) */
+struct cmdq_delete_gid {
+	u8 opcode;
+	#define CMDQ_DELETE_GID_OPCODE_DELETE_GID		   0x12UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le16 gid_index;
+	__le16 unused_0;
+	__le32 unused_1;
+};
+
+/* Modify GID command (48 bytes) */
+struct cmdq_modify_gid {
+	u8 opcode;
+	#define CMDQ_MODIFY_GID_OPCODE_MODIFY_GID		   0x17UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 gid[4];
+	__le16 src_mac[3];
+	__le16 vlan;
+	#define CMDQ_MODIFY_GID_VLAN_VLAN_ID_MASK		    0xfffUL
+	#define CMDQ_MODIFY_GID_VLAN_VLAN_ID_SFT		    0
+	#define CMDQ_MODIFY_GID_VLAN_TPID_MASK			    0x7000UL
+	#define CMDQ_MODIFY_GID_VLAN_TPID_SFT			    12
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_88A8		   (0x0UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_8100		   (0x1UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_9100		   (0x2UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_9200		   (0x3UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_9300		   (0x4UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_CFG1		   (0x5UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_CFG2		   (0x6UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_TPID_CFG3		   (0x7UL << 12)
+	#define CMDQ_MODIFY_GID_VLAN_TPID_LAST    CMDQ_MODIFY_GID_VLAN_TPID_TPID_CFG3
+	#define CMDQ_MODIFY_GID_VLAN_VLAN_EN			    0x8000UL
+	__le16 ipid;
+	__le16 gid_index;
+	__le16 stats_ctx;
+	#define CMDQ_MODIFY_GID_STATS_CTX_STATS_CTX_ID_MASK	    0x7fffUL
+	#define CMDQ_MODIFY_GID_STATS_CTX_STATS_CTX_ID_SFT	    0
+	#define CMDQ_MODIFY_GID_STATS_CTX_STATS_CTX_VALID	    0x8000UL
+	__le16 unused_0;
+};
+
+/* Query GID command (24 bytes) */
+struct cmdq_query_gid {
+	u8 opcode;
+	#define CMDQ_QUERY_GID_OPCODE_QUERY_GID		   0x18UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le16 gid_index;
+	__le16 unused_0;
+	__le32 unused_1;
+};
+
+/* Create QP1 command (80 bytes) */
+struct cmdq_create_qp1 {
+	u8 opcode;
+	#define CMDQ_CREATE_QP1_OPCODE_CREATE_QP1		   0x13UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le64 qp_handle;
+	__le32 qp_flags;
+	#define CMDQ_CREATE_QP1_QP_FLAGS_SRQ_USED		   0x1UL
+	#define CMDQ_CREATE_QP1_QP_FLAGS_FORCE_COMPLETION	   0x2UL
+	#define CMDQ_CREATE_QP1_QP_FLAGS_RESERVED_LKEY_ENABLE     0x4UL
+	u8 type;
+	#define CMDQ_CREATE_QP1_TYPE_GSI			   0x1UL
+	u8 sq_pg_size_sq_lvl;
+	#define CMDQ_CREATE_QP1_SQ_LVL_MASK			    0xfUL
+	#define CMDQ_CREATE_QP1_SQ_LVL_SFT			    0
+	#define CMDQ_CREATE_QP1_SQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_CREATE_QP1_SQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_CREATE_QP1_SQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_SFT			    4
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 rq_pg_size_rq_lvl;
+	#define CMDQ_CREATE_QP1_RQ_LVL_MASK			    0xfUL
+	#define CMDQ_CREATE_QP1_RQ_LVL_SFT			    0
+	#define CMDQ_CREATE_QP1_RQ_LVL_LVL_0			   0x0UL
+	#define CMDQ_CREATE_QP1_RQ_LVL_LVL_1			   0x1UL
+	#define CMDQ_CREATE_QP1_RQ_LVL_LVL_2			   0x2UL
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_SFT			    4
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 unused_0;
+	__le32 dpi;
+	__le32 sq_size;
+	__le32 rq_size;
+	__le16 sq_fwo_sq_sge;
+	#define CMDQ_CREATE_QP1_SQ_SGE_MASK			    0xfUL
+	#define CMDQ_CREATE_QP1_SQ_SGE_SFT			    0
+	#define CMDQ_CREATE_QP1_SQ_FWO_MASK			    0xfff0UL
+	#define CMDQ_CREATE_QP1_SQ_FWO_SFT			    4
+	__le16 rq_fwo_rq_sge;
+	#define CMDQ_CREATE_QP1_RQ_SGE_MASK			    0xfUL
+	#define CMDQ_CREATE_QP1_RQ_SGE_SFT			    0
+	#define CMDQ_CREATE_QP1_RQ_FWO_MASK			    0xfff0UL
+	#define CMDQ_CREATE_QP1_RQ_FWO_SFT			    4
+	__le32 scq_cid;
+	__le32 rcq_cid;
+	__le32 srq_cid;
+	__le32 pd_id;
+	__le64 sq_pbl;
+	__le64 rq_pbl;
+};
+
+/* Destroy QP1 command (24 bytes) */
+struct cmdq_destroy_qp1 {
+	u8 opcode;
+	#define CMDQ_DESTROY_QP1_OPCODE_DESTROY_QP1		   0x14UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 qp1_cid;
+	__le32 unused_0;
+};
+
+/* Create AH command (64 bytes) */
+struct cmdq_create_ah {
+	u8 opcode;
+	#define CMDQ_CREATE_AH_OPCODE_CREATE_AH		   0x15UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le64 ah_handle;
+	__le32 dgid[4];
+	u8 type;
+	#define CMDQ_CREATE_AH_TYPE_V1				   0x0UL
+	#define CMDQ_CREATE_AH_TYPE_V2IPV4			   0x2UL
+	#define CMDQ_CREATE_AH_TYPE_V2IPV6			   0x3UL
+	u8 hop_limit;
+	__le16 sgid_index;
+	__le32 dest_vlan_id_flow_label;
+	#define CMDQ_CREATE_AH_FLOW_LABEL_MASK			    0xfffffUL
+	#define CMDQ_CREATE_AH_FLOW_LABEL_SFT			    0
+	#define CMDQ_CREATE_AH_DEST_VLAN_ID_MASK		    0xfff00000UL
+	#define CMDQ_CREATE_AH_DEST_VLAN_ID_SFT		    20
+	__le32 pd_id;
+	__le32 unused_0;
+	__le16 dest_mac[3];
+	u8 traffic_class;
+	u8 unused_1;
+};
+
+/* Destroy AH command (24 bytes) */
+struct cmdq_destroy_ah {
+	u8 opcode;
+	#define CMDQ_DESTROY_AH_OPCODE_DESTROY_AH		   0x16UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 ah_cid;
+	__le32 unused_0;
+};
+
+/* Initialize Firmware command (112 bytes) */
+struct cmdq_initialize_fw {
+	u8 opcode;
+	#define CMDQ_INITIALIZE_FW_OPCODE_INITIALIZE_FW	   0x80UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	u8 qpc_pg_size_qpc_lvl;
+	#define CMDQ_INITIALIZE_FW_QPC_LVL_MASK		    0xfUL
+	#define CMDQ_INITIALIZE_FW_QPC_LVL_SFT			    0
+	#define CMDQ_INITIALIZE_FW_QPC_LVL_LVL_0		   0x0UL
+	#define CMDQ_INITIALIZE_FW_QPC_LVL_LVL_1		   0x1UL
+	#define CMDQ_INITIALIZE_FW_QPC_LVL_LVL_2		   0x2UL
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_SFT		    4
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 mrw_pg_size_mrw_lvl;
+	#define CMDQ_INITIALIZE_FW_MRW_LVL_MASK		    0xfUL
+	#define CMDQ_INITIALIZE_FW_MRW_LVL_SFT			    0
+	#define CMDQ_INITIALIZE_FW_MRW_LVL_LVL_0		   0x0UL
+	#define CMDQ_INITIALIZE_FW_MRW_LVL_LVL_1		   0x1UL
+	#define CMDQ_INITIALIZE_FW_MRW_LVL_LVL_2		   0x2UL
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_SFT		    4
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_INITIALIZE_FW_MRW_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 srq_pg_size_srq_lvl;
+	#define CMDQ_INITIALIZE_FW_SRQ_LVL_MASK		    0xfUL
+	#define CMDQ_INITIALIZE_FW_SRQ_LVL_SFT			    0
+	#define CMDQ_INITIALIZE_FW_SRQ_LVL_LVL_0		   0x0UL
+	#define CMDQ_INITIALIZE_FW_SRQ_LVL_LVL_1		   0x1UL
+	#define CMDQ_INITIALIZE_FW_SRQ_LVL_LVL_2		   0x2UL
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_SFT		    4
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_INITIALIZE_FW_SRQ_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 cq_pg_size_cq_lvl;
+	#define CMDQ_INITIALIZE_FW_CQ_LVL_MASK			    0xfUL
+	#define CMDQ_INITIALIZE_FW_CQ_LVL_SFT			    0
+	#define CMDQ_INITIALIZE_FW_CQ_LVL_LVL_0		   0x0UL
+	#define CMDQ_INITIALIZE_FW_CQ_LVL_LVL_1		   0x1UL
+	#define CMDQ_INITIALIZE_FW_CQ_LVL_LVL_2		   0x2UL
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_SFT		    4
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_INITIALIZE_FW_CQ_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 tqm_pg_size_tqm_lvl;
+	#define CMDQ_INITIALIZE_FW_TQM_LVL_MASK		    0xfUL
+	#define CMDQ_INITIALIZE_FW_TQM_LVL_SFT			    0
+	#define CMDQ_INITIALIZE_FW_TQM_LVL_LVL_0		   0x0UL
+	#define CMDQ_INITIALIZE_FW_TQM_LVL_LVL_1		   0x1UL
+	#define CMDQ_INITIALIZE_FW_TQM_LVL_LVL_2		   0x2UL
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_SFT		    4
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_INITIALIZE_FW_TQM_PG_SIZE_PG_1G		   (0x5UL << 4)
+	u8 tim_pg_size_tim_lvl;
+	#define CMDQ_INITIALIZE_FW_TIM_LVL_MASK		    0xfUL
+	#define CMDQ_INITIALIZE_FW_TIM_LVL_SFT			    0
+	#define CMDQ_INITIALIZE_FW_TIM_LVL_LVL_0		   0x0UL
+	#define CMDQ_INITIALIZE_FW_TIM_LVL_LVL_1		   0x1UL
+	#define CMDQ_INITIALIZE_FW_TIM_LVL_LVL_2		   0x2UL
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_MASK		    0xf0UL
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_SFT		    4
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_PG_4K		   (0x0UL << 4)
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_PG_8K		   (0x1UL << 4)
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_PG_64K		   (0x2UL << 4)
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_PG_2M		   (0x3UL << 4)
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_PG_8M		   (0x4UL << 4)
+	#define CMDQ_INITIALIZE_FW_TIM_PG_SIZE_PG_1G		   (0x5UL << 4)
+	__le16 reserved16;
+	__le64 qpc_page_dir;
+	__le64 mrw_page_dir;
+	__le64 srq_page_dir;
+	__le64 cq_page_dir;
+	__le64 tqm_page_dir;
+	__le64 tim_page_dir;
+	__le32 number_of_qp;
+	__le32 number_of_mrw;
+	__le32 number_of_srq;
+	__le32 number_of_cq;
+	__le32 max_qp_per_vf;
+	__le32 max_mrw_per_vf;
+	__le32 max_srq_per_vf;
+	__le32 max_cq_per_vf;
+	__le32 max_gid_per_vf;
+	__le32 stat_ctx_id;
+};
+
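+/*
+ * Illustrative sketch, not part of the generated interface: each
+ * *_page_dir field is assumed to carry the DMA address of a context
+ * backing-store page directory whose depth and page size come from the
+ * matching *_lvl/*_pg_size byte:
+ *
+ *	req.qpc_pg_size_qpc_lvl = CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_4K |
+ *				  CMDQ_INITIALIZE_FW_QPC_LVL_LVL_1;
+ *	req.qpc_page_dir = cpu_to_le64(qpc_pbl_dma);
+ */
+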
+/* De-initialize Firmware command (16 bytes) */
+struct cmdq_deinitialize_fw {
+	u8 opcode;
+	#define CMDQ_DEINITIALIZE_FW_OPCODE_DEINITIALIZE_FW       0x81UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+};
+
+/* Stop function command (16 bytes) */
+struct cmdq_stop_func {
+	u8 opcode;
+	#define CMDQ_STOP_FUNC_OPCODE_STOP_FUNC		   0x82UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+};
+
+/* Query function command (16 bytes) */
+struct cmdq_query_func {
+	u8 opcode;
+	#define CMDQ_QUERY_FUNC_OPCODE_QUERY_FUNC		   0x83UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+};
+
+/* Set function resources command (16 bytes) */
+struct cmdq_set_func_resources {
+	u8 opcode;
+	#define CMDQ_SET_FUNC_RESOURCES_OPCODE_SET_FUNC_RESOURCES 0x84UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+};
+
+/* Read hardware resource context command (24 bytes) */
+struct cmdq_read_context {
+	u8 opcode;
+	#define CMDQ_READ_CONTEXT_OPCODE_READ_CONTEXT		   0x85UL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le32 type_xid;
+	#define CMDQ_READ_CONTEXT_XID_MASK			    0xffffffUL
+	#define CMDQ_READ_CONTEXT_XID_SFT			    0
+	#define CMDQ_READ_CONTEXT_TYPE_MASK			    0xff000000UL
+	#define CMDQ_READ_CONTEXT_TYPE_SFT			    24
+	#define CMDQ_READ_CONTEXT_TYPE_QPC			   (0x0UL << 24)
+	#define CMDQ_READ_CONTEXT_TYPE_CQ			   (0x1UL << 24)
+	#define CMDQ_READ_CONTEXT_TYPE_MRW			   (0x2UL << 24)
+	#define CMDQ_READ_CONTEXT_TYPE_SRQ			   (0x3UL << 24)
+	__le32 unused_0;
+};
+
+/* Map TC to COS. Can only be issued from a PF (24 bytes) */
+struct cmdq_map_tc_to_cos {
+	u8 opcode;
+	#define CMDQ_MAP_TC_TO_COS_OPCODE_MAP_TC_TO_COS	   0x8aUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+	__le16 cos0;
+	#define CMDQ_MAP_TC_TO_COS_COS0_NO_CHANGE		   0xffffUL
+	__le16 cos1;
+	#define CMDQ_MAP_TC_TO_COS_COS1_DISABLE		   0x8000UL
+	#define CMDQ_MAP_TC_TO_COS_COS1_NO_CHANGE		   0xffffUL
+	__le32 unused_0;
+};
+
+/* Query version command (16 bytes) */
+struct cmdq_query_version {
+	u8 opcode;
+	#define CMDQ_QUERY_VERSION_OPCODE_QUERY_VERSION	   0x8bUL
+	u8 cmd_size;
+	__le16 flags;
+	__le16 cookie;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 resp_addr;
+};
+
+/* Command-Response Event Queue (CREQ) Structures */
+/* Base CREQ Record (16 bytes) */
+struct creq_base {
+	u8 type;
+	#define CREQ_BASE_TYPE_MASK				    0x3fUL
+	#define CREQ_BASE_TYPE_SFT				    0
+	#define CREQ_BASE_TYPE_QP_EVENT			   0x38UL
+	#define CREQ_BASE_TYPE_FUNC_EVENT			   0x3aUL
+	#define CREQ_BASE_RESERVED2_MASK			    0xc0UL
+	#define CREQ_BASE_RESERVED2_SFT			    6
+	u8 reserved56[7];
+	u8 v;
+	#define CREQ_BASE_V					    0x1UL
+	#define CREQ_BASE_RESERVED7_MASK			    0xfeUL
+	#define CREQ_BASE_RESERVED7_SFT			    1
+	u8 event;
+	__le16 reserved48[3];
+};
+
+/* RoCE Function Async Event Notification (16 bytes) */
+struct creq_func_event {
+	u8 type;
+	#define CREQ_FUNC_EVENT_TYPE_MASK			    0x3fUL
+	#define CREQ_FUNC_EVENT_TYPE_SFT			    0
+	#define CREQ_FUNC_EVENT_TYPE_FUNC_EVENT		   0x3aUL
+	#define CREQ_FUNC_EVENT_RESERVED2_MASK			    0xc0UL
+	#define CREQ_FUNC_EVENT_RESERVED2_SFT			    6
+	u8 reserved56[7];
+	u8 v;
+	#define CREQ_FUNC_EVENT_V				    0x1UL
+	#define CREQ_FUNC_EVENT_RESERVED7_MASK			    0xfeUL
+	#define CREQ_FUNC_EVENT_RESERVED7_SFT			    1
+	u8 event;
+	#define CREQ_FUNC_EVENT_EVENT_TX_WQE_ERROR		   0x1UL
+	#define CREQ_FUNC_EVENT_EVENT_TX_DATA_ERROR		   0x2UL
+	#define CREQ_FUNC_EVENT_EVENT_RX_WQE_ERROR		   0x3UL
+	#define CREQ_FUNC_EVENT_EVENT_RX_DATA_ERROR		   0x4UL
+	#define CREQ_FUNC_EVENT_EVENT_CQ_ERROR			   0x5UL
+	#define CREQ_FUNC_EVENT_EVENT_TQM_ERROR		   0x6UL
+	#define CREQ_FUNC_EVENT_EVENT_CFCQ_ERROR		   0x7UL
+	#define CREQ_FUNC_EVENT_EVENT_CFCS_ERROR		   0x8UL
+	#define CREQ_FUNC_EVENT_EVENT_CFCC_ERROR		   0x9UL
+	#define CREQ_FUNC_EVENT_EVENT_CFCM_ERROR		   0xaUL
+	#define CREQ_FUNC_EVENT_EVENT_TIM_ERROR		   0xbUL
+	#define CREQ_FUNC_EVENT_EVENT_VF_COMM_REQUEST		   0x80UL
+	#define CREQ_FUNC_EVENT_EVENT_RESOURCE_EXHAUSTED	   0x81UL
+	__le16 reserved48[3];
+};
+
+/* RoCE Slowpath Command Completion (16 bytes) */
+struct creq_qp_event {
+	u8 type;
+	#define CREQ_QP_EVENT_TYPE_MASK			    0x3fUL
+	#define CREQ_QP_EVENT_TYPE_SFT				    0
+	#define CREQ_QP_EVENT_TYPE_QP_EVENT			   0x38UL
+	#define CREQ_QP_EVENT_RESERVED2_MASK			    0xc0UL
+	#define CREQ_QP_EVENT_RESERVED2_SFT			    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_QP_EVENT_V				    0x1UL
+	#define CREQ_QP_EVENT_RESERVED7_MASK			    0xfeUL
+	#define CREQ_QP_EVENT_RESERVED7_SFT			    1
+	u8 event;
+	#define CREQ_QP_EVENT_EVENT_CREATE_QP			   0x1UL
+	#define CREQ_QP_EVENT_EVENT_DESTROY_QP			   0x2UL
+	#define CREQ_QP_EVENT_EVENT_MODIFY_QP			   0x3UL
+	#define CREQ_QP_EVENT_EVENT_QUERY_QP			   0x4UL
+	#define CREQ_QP_EVENT_EVENT_CREATE_SRQ			   0x5UL
+	#define CREQ_QP_EVENT_EVENT_DESTROY_SRQ		   0x6UL
+	#define CREQ_QP_EVENT_EVENT_QUERY_SRQ			   0x8UL
+	#define CREQ_QP_EVENT_EVENT_CREATE_CQ			   0x9UL
+	#define CREQ_QP_EVENT_EVENT_DESTROY_CQ			   0xaUL
+	#define CREQ_QP_EVENT_EVENT_RESIZE_CQ			   0xcUL
+	#define CREQ_QP_EVENT_EVENT_ALLOCATE_MRW		   0xdUL
+	#define CREQ_QP_EVENT_EVENT_DEALLOCATE_KEY		   0xeUL
+	#define CREQ_QP_EVENT_EVENT_REGISTER_MR		   0xfUL
+	#define CREQ_QP_EVENT_EVENT_DEREGISTER_MR		   0x10UL
+	#define CREQ_QP_EVENT_EVENT_ADD_GID			   0x11UL
+	#define CREQ_QP_EVENT_EVENT_DELETE_GID			   0x12UL
+	#define CREQ_QP_EVENT_EVENT_MODIFY_GID			   0x17UL
+	#define CREQ_QP_EVENT_EVENT_QUERY_GID			   0x18UL
+	#define CREQ_QP_EVENT_EVENT_CREATE_QP1			   0x13UL
+	#define CREQ_QP_EVENT_EVENT_DESTROY_QP1		   0x14UL
+	#define CREQ_QP_EVENT_EVENT_CREATE_AH			   0x15UL
+	#define CREQ_QP_EVENT_EVENT_DESTROY_AH			   0x16UL
+	#define CREQ_QP_EVENT_EVENT_INITIALIZE_FW		   0x80UL
+	#define CREQ_QP_EVENT_EVENT_DEINITIALIZE_FW		   0x81UL
+	#define CREQ_QP_EVENT_EVENT_STOP_FUNC			   0x82UL
+	#define CREQ_QP_EVENT_EVENT_QUERY_FUNC			   0x83UL
+	#define CREQ_QP_EVENT_EVENT_SET_FUNC_RESOURCES		   0x84UL
+	#define CREQ_QP_EVENT_EVENT_MAP_TC_TO_COS		   0x8aUL
+	#define CREQ_QP_EVENT_EVENT_QUERY_VERSION		   0x8bUL
+	#define CREQ_QP_EVENT_EVENT_MODIFY_CC			   0x8cUL
+	#define CREQ_QP_EVENT_EVENT_QUERY_CC			   0x8dUL
+	#define CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION	   0xc0UL
+	__le16 reserved48[3];
+};
+
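+/*
+ * Illustrative sketch, not part of the generated interface: a
+ * hypothetical CREQ poller would test the valid bit (assumed to
+ * alternate on each ring wrap), then match the echoed cookie to the
+ * outstanding command:
+ *
+ *	if ((creq->v & CREQ_QP_EVENT_V) == expected_phase &&
+ *	    (creq->type & CREQ_QP_EVENT_TYPE_MASK) ==
+ *	    CREQ_QP_EVENT_TYPE_QP_EVENT)
+ *		complete_cmd(le16_to_cpu(creq->cookie), creq->status);
+ */
+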
+/* Create QP command response (16 bytes) */
+struct creq_create_qp_resp {
+	u8 type;
+	#define CREQ_CREATE_QP_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_CREATE_QP_RESP_TYPE_SFT			    0
+	#define CREQ_CREATE_QP_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_CREATE_QP_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_CREATE_QP_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_CREATE_QP_RESP_V				    0x1UL
+	#define CREQ_CREATE_QP_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_CREATE_QP_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_CREATE_QP_RESP_EVENT_CREATE_QP		   0x1UL
+	__le16 reserved48[3];
+};
+
+/* Destroy QP command response (16 bytes) */
+struct creq_destroy_qp_resp {
+	u8 type;
+	#define CREQ_DESTROY_QP_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_DESTROY_QP_RESP_TYPE_SFT			    0
+	#define CREQ_DESTROY_QP_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DESTROY_QP_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DESTROY_QP_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DESTROY_QP_RESP_V				    0x1UL
+	#define CREQ_DESTROY_QP_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DESTROY_QP_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DESTROY_QP_RESP_EVENT_DESTROY_QP		   0x2UL
+	__le16 reserved48[3];
+};
+
+/* Modify QP command response (16 bytes) */
+struct creq_modify_qp_resp {
+	u8 type;
+	#define CREQ_MODIFY_QP_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_MODIFY_QP_RESP_TYPE_SFT			    0
+	#define CREQ_MODIFY_QP_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_MODIFY_QP_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_MODIFY_QP_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_MODIFY_QP_RESP_V				    0x1UL
+	#define CREQ_MODIFY_QP_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_MODIFY_QP_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_MODIFY_QP_RESP_EVENT_MODIFY_QP		   0x3UL
+	__le16 reserved48[3];
+};
+
+/* Query QP command response (16 bytes) */
+struct creq_query_qp_resp {
+	u8 type;
+	#define CREQ_QUERY_QP_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_QUERY_QP_RESP_TYPE_SFT			    0
+	#define CREQ_QUERY_QP_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_QUERY_QP_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_QUERY_QP_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 size;
+	u8 v;
+	#define CREQ_QUERY_QP_RESP_V				    0x1UL
+	#define CREQ_QUERY_QP_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_QP_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_QUERY_QP_RESP_EVENT_QUERY_QP		   0x4UL
+	__le16 reserved48[3];
+};
+
+/* Query QP command response side buffer structure (104 bytes) */
+struct creq_query_qp_resp_sb {
+	u8 opcode;
+	#define CREQ_QUERY_QP_RESP_SB_OPCODE_QUERY_QP		   0x4UL
+	u8 status;
+	__le16 cookie;
+	__le16 flags;
+	u8 resp_size;
+	u8 reserved8;
+	__le32 xid;
+	u8 en_sqd_async_notify_state;
+	#define CREQ_QUERY_QP_RESP_SB_STATE_MASK		    0xfUL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_SFT		    0
+	#define CREQ_QUERY_QP_RESP_SB_STATE_RESET		   0x0UL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_INIT		   0x1UL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_RTR		   0x2UL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_RTS		   0x3UL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_SQD		   0x4UL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_SQE		   0x5UL
+	#define CREQ_QUERY_QP_RESP_SB_STATE_ERR		   0x6UL
+	#define CREQ_QUERY_QP_RESP_SB_EN_SQD_ASYNC_NOTIFY	    0x10UL
+	u8 access;
+	#define CREQ_QUERY_QP_RESP_SB_ACCESS_LOCAL_WRITE	    0x1UL
+	#define CREQ_QUERY_QP_RESP_SB_ACCESS_REMOTE_WRITE	    0x2UL
+	#define CREQ_QUERY_QP_RESP_SB_ACCESS_REMOTE_READ	    0x4UL
+	#define CREQ_QUERY_QP_RESP_SB_ACCESS_REMOTE_ATOMIC	    0x8UL
+	__le16 pkey;
+	__le32 qkey;
+	__le32 reserved32;
+	__le32 dgid[4];
+	__le32 flow_label;
+	__le16 sgid_index;
+	u8 hop_limit;
+	u8 traffic_class;
+	__le16 dest_mac[3];
+	__le16 path_mtu_dest_vlan_id;
+	#define CREQ_QUERY_QP_RESP_SB_DEST_VLAN_ID_MASK	    0xfffUL
+	#define CREQ_QUERY_QP_RESP_SB_DEST_VLAN_ID_SFT		    0
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MASK		    0xf000UL
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_SFT		    12
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MTU_256		   (0x0UL << 12)
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MTU_512		   (0x1UL << 12)
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MTU_1024	   (0x2UL << 12)
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MTU_2048	   (0x3UL << 12)
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MTU_4096	   (0x4UL << 12)
+	#define CREQ_QUERY_QP_RESP_SB_PATH_MTU_MTU_8192	   (0x5UL << 12)
+	u8 timeout;
+	u8 retry_cnt;
+	u8 rnr_retry;
+	u8 min_rnr_timer;
+	__le32 rq_psn;
+	__le32 sq_psn;
+	u8 max_rd_atomic;
+	u8 max_dest_rd_atomic;
+	u8 tos_dscp_tos_ecn;
+	#define CREQ_QUERY_QP_RESP_SB_TOS_ECN_MASK		    0x3UL
+	#define CREQ_QUERY_QP_RESP_SB_TOS_ECN_SFT		    0
+	#define CREQ_QUERY_QP_RESP_SB_TOS_DSCP_MASK		    0xfcUL
+	#define CREQ_QUERY_QP_RESP_SB_TOS_DSCP_SFT		    2
+	u8 enable_cc;
+	#define CREQ_QUERY_QP_RESP_SB_ENABLE_CC		    0x1UL
+	#define CREQ_QUERY_QP_RESP_SB_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_QP_RESP_SB_RESERVED7_SFT		    1
+	__le32 sq_size;
+	__le32 rq_size;
+	__le16 sq_sge;
+	__le16 rq_sge;
+	__le32 max_inline_data;
+	__le32 dest_qp_id;
+	__le32 unused_1;
+	__le16 src_mac[3];
+	__le16 vlan_pcp_vlan_dei_vlan_id;
+	#define CREQ_QUERY_QP_RESP_SB_VLAN_ID_MASK		    0xfffUL
+	#define CREQ_QUERY_QP_RESP_SB_VLAN_ID_SFT		    0
+	#define CREQ_QUERY_QP_RESP_SB_VLAN_DEI			    0x1000UL
+	#define CREQ_QUERY_QP_RESP_SB_VLAN_PCP_MASK		    0xe000UL
+	#define CREQ_QUERY_QP_RESP_SB_VLAN_PCP_SFT		    13
+};
+
+/* Create SRQ command response (16 bytes) */
+struct creq_create_srq_resp {
+	u8 type;
+	#define CREQ_CREATE_SRQ_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_CREATE_SRQ_RESP_TYPE_SFT			    0
+	#define CREQ_CREATE_SRQ_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_CREATE_SRQ_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_CREATE_SRQ_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_CREATE_SRQ_RESP_V				    0x1UL
+	#define CREQ_CREATE_SRQ_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_CREATE_SRQ_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_CREATE_SRQ_RESP_EVENT_CREATE_SRQ		   0x5UL
+	__le16 reserved48[3];
+};
+
+/* Destroy SRQ command response (16 bytes) */
+struct creq_destroy_srq_resp {
+	u8 type;
+	#define CREQ_DESTROY_SRQ_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_DESTROY_SRQ_RESP_TYPE_SFT			    0
+	#define CREQ_DESTROY_SRQ_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DESTROY_SRQ_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DESTROY_SRQ_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DESTROY_SRQ_RESP_V			    0x1UL
+	#define CREQ_DESTROY_SRQ_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DESTROY_SRQ_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DESTROY_SRQ_RESP_EVENT_DESTROY_SRQ	   0x6UL
+	__le16 enable_for_arm[3];
+	#define CREQ_DESTROY_SRQ_RESP_ENABLE_FOR_ARM_MASK	    0x30000UL
+	#define CREQ_DESTROY_SRQ_RESP_ENABLE_FOR_ARM_SFT	    16
+	#define CREQ_DESTROY_SRQ_RESP_RESERVED46_MASK		    0xfffc0000UL
+	#define CREQ_DESTROY_SRQ_RESP_RESERVED46_SFT		    18
+};
+
+/* Query SRQ command response (16 bytes) */
+struct creq_query_srq_resp {
+	u8 type;
+	#define CREQ_QUERY_SRQ_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_QUERY_SRQ_RESP_TYPE_SFT			    0
+	#define CREQ_QUERY_SRQ_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_QUERY_SRQ_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_QUERY_SRQ_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 size;
+	u8 v;
+	#define CREQ_QUERY_SRQ_RESP_V				    0x1UL
+	#define CREQ_QUERY_SRQ_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_SRQ_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_QUERY_SRQ_RESP_EVENT_QUERY_SRQ		   0x8UL
+	__le16 reserved48[3];
+};
+
+/* Query SRQ command response side buffer structure (24 bytes) */
+struct creq_query_srq_resp_sb {
+	u8 opcode;
+	#define CREQ_QUERY_SRQ_RESP_SB_OPCODE_QUERY_SRQ	   0x8UL
+	u8 status;
+	__le16 cookie;
+	__le16 flags;
+	u8 resp_size;
+	u8 reserved8;
+	__le32 xid;
+	__le16 srq_limit;
+	__le16 reserved16;
+	__le32 data[4];
+};
+
+/* Create CQ command Response (16 bytes) */
+struct creq_create_cq_resp {
+	u8 type;
+	#define CREQ_CREATE_CQ_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_CREATE_CQ_RESP_TYPE_SFT			    0
+	#define CREQ_CREATE_CQ_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_CREATE_CQ_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_CREATE_CQ_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_CREATE_CQ_RESP_V				    0x1UL
+	#define CREQ_CREATE_CQ_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_CREATE_CQ_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_CREATE_CQ_RESP_EVENT_CREATE_CQ		   0x9UL
+	__le16 reserved48[3];
+};
+
+/* Destroy CQ command response (16 bytes) */
+struct creq_destroy_cq_resp {
+	u8 type;
+	#define CREQ_DESTROY_CQ_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_DESTROY_CQ_RESP_TYPE_SFT			    0
+	#define CREQ_DESTROY_CQ_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DESTROY_CQ_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DESTROY_CQ_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DESTROY_CQ_RESP_V				    0x1UL
+	#define CREQ_DESTROY_CQ_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DESTROY_CQ_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DESTROY_CQ_RESP_EVENT_DESTROY_CQ		   0xaUL
+	__le16 cq_arm_lvl;
+	#define CREQ_DESTROY_CQ_RESP_CQ_ARM_LVL_MASK		    0x3UL
+	#define CREQ_DESTROY_CQ_RESP_CQ_ARM_LVL_SFT		    0
+	#define CREQ_DESTROY_CQ_RESP_RESERVED14_MASK		    0xfffcUL
+	#define CREQ_DESTROY_CQ_RESP_RESERVED14_SFT		    2
+	__le16 total_cnq_events;
+	__le16 reserved16;
+};
+
+/* Resize CQ command response (16 bytes) */
+struct creq_resize_cq_resp {
+	u8 type;
+	#define CREQ_RESIZE_CQ_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_RESIZE_CQ_RESP_TYPE_SFT			    0
+	#define CREQ_RESIZE_CQ_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_RESIZE_CQ_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_RESIZE_CQ_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_RESIZE_CQ_RESP_V				    0x1UL
+	#define CREQ_RESIZE_CQ_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_RESIZE_CQ_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_RESIZE_CQ_RESP_EVENT_RESIZE_CQ		   0xcUL
+	__le16 reserved48[3];
+};
+
+/* Allocate MRW command response (16 bytes) */
+struct creq_allocate_mrw_resp {
+	u8 type;
+	#define CREQ_ALLOCATE_MRW_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_ALLOCATE_MRW_RESP_TYPE_SFT		    0
+	#define CREQ_ALLOCATE_MRW_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_ALLOCATE_MRW_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_ALLOCATE_MRW_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_ALLOCATE_MRW_RESP_V			    0x1UL
+	#define CREQ_ALLOCATE_MRW_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_ALLOCATE_MRW_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_ALLOCATE_MRW_RESP_EVENT_ALLOCATE_MRW	   0xdUL
+	__le16 reserved48[3];
+};
+
+/* De-allocate key command response (16 bytes) */
+struct creq_deallocate_key_resp {
+	u8 type;
+	#define CREQ_DEALLOCATE_KEY_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_DEALLOCATE_KEY_RESP_TYPE_SFT		    0
+	#define CREQ_DEALLOCATE_KEY_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DEALLOCATE_KEY_RESP_RESERVED2_MASK	    0xc0UL
+	#define CREQ_DEALLOCATE_KEY_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DEALLOCATE_KEY_RESP_V			    0x1UL
+	#define CREQ_DEALLOCATE_KEY_RESP_RESERVED7_MASK	    0xfeUL
+	#define CREQ_DEALLOCATE_KEY_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DEALLOCATE_KEY_RESP_EVENT_DEALLOCATE_KEY     0xeUL
+	__le16 reserved16;
+	__le32 bound_window_info;
+};
+
+/* Register MR command response (16 bytes) */
+struct creq_register_mr_resp {
+	u8 type;
+	#define CREQ_REGISTER_MR_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_REGISTER_MR_RESP_TYPE_SFT			    0
+	#define CREQ_REGISTER_MR_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_REGISTER_MR_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_REGISTER_MR_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_REGISTER_MR_RESP_V			    0x1UL
+	#define CREQ_REGISTER_MR_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_REGISTER_MR_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_REGISTER_MR_RESP_EVENT_REGISTER_MR	   0xfUL
+	__le16 reserved48[3];
+};
+
+/* Deregister MR command response (16 bytes) */
+struct creq_deregister_mr_resp {
+	u8 type;
+	#define CREQ_DEREGISTER_MR_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_DEREGISTER_MR_RESP_TYPE_SFT		    0
+	#define CREQ_DEREGISTER_MR_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DEREGISTER_MR_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DEREGISTER_MR_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DEREGISTER_MR_RESP_V			    0x1UL
+	#define CREQ_DEREGISTER_MR_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DEREGISTER_MR_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DEREGISTER_MR_RESP_EVENT_DEREGISTER_MR       0x10UL
+	__le16 reserved16;
+	__le32 bound_windows;
+};
+
+/* Add GID command response (16 bytes) */
+struct creq_add_gid_resp {
+	u8 type;
+	#define CREQ_ADD_GID_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_ADD_GID_RESP_TYPE_SFT			    0
+	#define CREQ_ADD_GID_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_ADD_GID_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_ADD_GID_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_ADD_GID_RESP_V				    0x1UL
+	#define CREQ_ADD_GID_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_ADD_GID_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_ADD_GID_RESP_EVENT_ADD_GID		   0x11UL
+	__le16 reserved48[3];
+};
+
+/* Delete GID command response (16 bytes) */
+struct creq_delete_gid_resp {
+	u8 type;
+	#define CREQ_DELETE_GID_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_DELETE_GID_RESP_TYPE_SFT			    0
+	#define CREQ_DELETE_GID_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DELETE_GID_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DELETE_GID_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DELETE_GID_RESP_V				    0x1UL
+	#define CREQ_DELETE_GID_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DELETE_GID_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DELETE_GID_RESP_EVENT_DELETE_GID		   0x12UL
+	__le16 reserved48[3];
+};
+
+/* Modify GID command response (16 bytes) */
+struct creq_modify_gid_resp {
+	u8 type;
+	#define CREQ_MODIFY_GID_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_MODIFY_GID_RESP_TYPE_SFT			    0
+	#define CREQ_MODIFY_GID_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_MODIFY_GID_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_MODIFY_GID_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_MODIFY_GID_RESP_V				    0x1UL
+	#define CREQ_MODIFY_GID_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_MODIFY_GID_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_MODIFY_GID_RESP_EVENT_ADD_GID		   0x11UL
+	__le16 reserved48[3];
+};
+
+/* Query GID command response (16 bytes) */
+struct creq_query_gid_resp {
+	u8 type;
+	#define CREQ_QUERY_GID_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_QUERY_GID_RESP_TYPE_SFT			    0
+	#define CREQ_QUERY_GID_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_QUERY_GID_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_QUERY_GID_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 size;
+	u8 v;
+	#define CREQ_QUERY_GID_RESP_V				    0x1UL
+	#define CREQ_QUERY_GID_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_GID_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_QUERY_GID_RESP_EVENT_QUERY_GID		   0x18UL
+	__le16 reserved48[3];
+};
+
+/* Query GID command response side buffer structure (40 bytes) */
+struct creq_query_gid_resp_sb {
+	u8 opcode;
+	#define CREQ_QUERY_GID_RESP_SB_OPCODE_QUERY_GID	   0x18UL
+	u8 status;
+	__le16 cookie;
+	__le16 flags;
+	u8 resp_size;
+	u8 reserved8;
+	__le32 gid[4];
+	__le16 src_mac[3];
+	__le16 vlan;
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_VLAN_ID_MASK	    0xfffUL
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_VLAN_ID_SFT	    0
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_MASK		    0x7000UL
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_SFT		    12
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_88A8	   (0x0UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_8100	   (0x1UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_9100	   (0x2UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_9200	   (0x3UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_9300	   (0x4UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_CFG1	   (0x5UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_CFG2	   (0x6UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_CFG3	   (0x7UL << 12)
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_TPID_LAST    CREQ_QUERY_GID_RESP_SB_VLAN_TPID_TPID_CFG3
+	#define CREQ_QUERY_GID_RESP_SB_VLAN_VLAN_EN		    0x8000UL
+	__le16 ipid;
+	__le16 gid_index;
+	__le32 unused_0;
+};
+
+/* Create QP1 command response (16 bytes) */
+struct creq_create_qp1_resp {
+	u8 type;
+	#define CREQ_CREATE_QP1_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_CREATE_QP1_RESP_TYPE_SFT			    0
+	#define CREQ_CREATE_QP1_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_CREATE_QP1_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_CREATE_QP1_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_CREATE_QP1_RESP_V				    0x1UL
+	#define CREQ_CREATE_QP1_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_CREATE_QP1_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_CREATE_QP1_RESP_EVENT_CREATE_QP1		   0x13UL
+	__le16 reserved48[3];
+};
+
+/* Destroy QP1 command response (16 bytes) */
+struct creq_destroy_qp1_resp {
+	u8 type;
+	#define CREQ_DESTROY_QP1_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_DESTROY_QP1_RESP_TYPE_SFT			    0
+	#define CREQ_DESTROY_QP1_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DESTROY_QP1_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DESTROY_QP1_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DESTROY_QP1_RESP_V			    0x1UL
+	#define CREQ_DESTROY_QP1_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DESTROY_QP1_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DESTROY_QP1_RESP_EVENT_DESTROY_QP1	   0x14UL
+	__le16 reserved48[3];
+};
+
+/* Create AH command response (16 bytes) */
+struct creq_create_ah_resp {
+	u8 type;
+	#define CREQ_CREATE_AH_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_CREATE_AH_RESP_TYPE_SFT			    0
+	#define CREQ_CREATE_AH_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_CREATE_AH_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_CREATE_AH_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_CREATE_AH_RESP_V				    0x1UL
+	#define CREQ_CREATE_AH_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_CREATE_AH_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_CREATE_AH_RESP_EVENT_CREATE_AH		   0x15UL
+	__le16 reserved48[3];
+};
+
+/* Destroy AH command response (16 bytes) */
+struct creq_destroy_ah_resp {
+	u8 type;
+	#define CREQ_DESTROY_AH_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_DESTROY_AH_RESP_TYPE_SFT			    0
+	#define CREQ_DESTROY_AH_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_DESTROY_AH_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_DESTROY_AH_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 xid;
+	u8 v;
+	#define CREQ_DESTROY_AH_RESP_V				    0x1UL
+	#define CREQ_DESTROY_AH_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_DESTROY_AH_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_DESTROY_AH_RESP_EVENT_DESTROY_AH		   0x16UL
+	__le16 reserved48[3];
+};
+
+/* Initialize Firmware command response (16 bytes) */
+struct creq_initialize_fw_resp {
+	u8 type;
+	#define CREQ_INITIALIZE_FW_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_INITIALIZE_FW_RESP_TYPE_SFT		    0
+	#define CREQ_INITIALIZE_FW_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_INITIALIZE_FW_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_INITIALIZE_FW_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_INITIALIZE_FW_RESP_V			    0x1UL
+	#define CREQ_INITIALIZE_FW_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_INITIALIZE_FW_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_INITIALIZE_FW_RESP_EVENT_INITIALIZE_FW       0x80UL
+	__le16 reserved48[3];
+};
+
+/* De-initialize Firmware command response (16 bytes) */
+struct creq_deinitialize_fw_resp {
+	u8 type;
+	#define CREQ_DEINITIALIZE_FW_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_DEINITIALIZE_FW_RESP_TYPE_SFT		    0
+	#define CREQ_DEINITIALIZE_FW_RESP_TYPE_QP_EVENT	   0x38UL
+	#define CREQ_DEINITIALIZE_FW_RESP_RESERVED2_MASK	    0xc0UL
+	#define CREQ_DEINITIALIZE_FW_RESP_RESERVED2_SFT	    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_DEINITIALIZE_FW_RESP_V			    0x1UL
+	#define CREQ_DEINITIALIZE_FW_RESP_RESERVED7_MASK	    0xfeUL
+	#define CREQ_DEINITIALIZE_FW_RESP_RESERVED7_SFT	    1
+	u8 event;
+	#define CREQ_DEINITIALIZE_FW_RESP_EVENT_DEINITIALIZE_FW   0x81UL
+	__le16 reserved48[3];
+};
+
+/* Stop function command response (16 bytes) */
+struct creq_stop_func_resp {
+	u8 type;
+	#define CREQ_STOP_FUNC_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_STOP_FUNC_RESP_TYPE_SFT			    0
+	#define CREQ_STOP_FUNC_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_STOP_FUNC_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_STOP_FUNC_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_STOP_FUNC_RESP_V				    0x1UL
+	#define CREQ_STOP_FUNC_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_STOP_FUNC_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_STOP_FUNC_RESP_EVENT_STOP_FUNC		   0x82UL
+	__le16 reserved48[3];
+};
+
+/* Query function command response (16 bytes) */
+struct creq_query_func_resp {
+	u8 type;
+	#define CREQ_QUERY_FUNC_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_QUERY_FUNC_RESP_TYPE_SFT			    0
+	#define CREQ_QUERY_FUNC_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_QUERY_FUNC_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_QUERY_FUNC_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 size;
+	u8 v;
+	#define CREQ_QUERY_FUNC_RESP_V				    0x1UL
+	#define CREQ_QUERY_FUNC_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_FUNC_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_QUERY_FUNC_RESP_EVENT_QUERY_FUNC		   0x83UL
+	__le16 reserved48[3];
+};
+
+/* Query function command response side buffer structure (88 bytes) */
+struct creq_query_func_resp_sb {
+	u8 opcode;
+	#define CREQ_QUERY_FUNC_RESP_SB_OPCODE_QUERY_FUNC	   0x83UL
+	u8 status;
+	__le16 cookie;
+	__le16 flags;
+	u8 resp_size;
+	u8 reserved8;
+	__le64 max_mr_size;
+	__le32 max_qp;
+	__le16 max_qp_wr;
+	__le16 dev_cap_flags;
+	#define CREQ_QUERY_FUNC_RESP_SB_DEV_CAP_FLAGS_RESIZE_QP   0x1UL
+	__le32 max_cq;
+	__le32 max_cqe;
+	__le32 max_pd;
+	u8 max_sge;
+	u8 max_srq_sge;
+	u8 max_qp_rd_atom;
+	u8 max_qp_init_rd_atom;
+	__le32 max_mr;
+	__le32 max_mw;
+	__le32 max_raw_eth_qp;
+	__le32 max_ah;
+	__le32 max_fmr;
+	__le32 max_srq_wr;
+	__le32 max_pkeys;
+	__le32 max_inline_data;
+	u8 max_map_per_fmr;
+	u8 l2_db_space_size;
+	__le16 max_srq;
+	__le32 max_gid;
+	__le32 tqm_alloc_reqs[8];
+};
+
+/* Set resources command response (16 bytes) */
+struct creq_set_func_resources_resp {
+	u8 type;
+	#define CREQ_SET_FUNC_RESOURCES_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_SET_FUNC_RESOURCES_RESP_TYPE_SFT		    0
+	#define CREQ_SET_FUNC_RESOURCES_RESP_TYPE_QP_EVENT	   0x38UL
+	#define CREQ_SET_FUNC_RESOURCES_RESP_RESERVED2_MASK	    0xc0UL
+	#define CREQ_SET_FUNC_RESOURCES_RESP_RESERVED2_SFT	    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_SET_FUNC_RESOURCES_RESP_V			    0x1UL
+	#define CREQ_SET_FUNC_RESOURCES_RESP_RESERVED7_MASK	    0xfeUL
+	#define CREQ_SET_FUNC_RESOURCES_RESP_RESERVED7_SFT	    1
+	u8 event;
+	#define CREQ_SET_FUNC_RESOURCES_RESP_EVENT_SET_FUNC_RESOURCES 0x84UL
+	__le16 reserved48[3];
+};
+
+/* Map TC to COS response (16 bytes) */
+struct creq_map_tc_to_cos_resp {
+	u8 type;
+	#define CREQ_MAP_TC_TO_COS_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_MAP_TC_TO_COS_RESP_TYPE_SFT		    0
+	#define CREQ_MAP_TC_TO_COS_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_MAP_TC_TO_COS_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_MAP_TC_TO_COS_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_MAP_TC_TO_COS_RESP_V			    0x1UL
+	#define CREQ_MAP_TC_TO_COS_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_MAP_TC_TO_COS_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_MAP_TC_TO_COS_RESP_EVENT_MAP_TC_TO_COS       0x8aUL
+	__le16 reserved48[3];
+};
+
+/* Query version response (16 bytes) */
+struct creq_query_version_resp {
+	u8 type;
+	#define CREQ_QUERY_VERSION_RESP_TYPE_MASK		    0x3fUL
+	#define CREQ_QUERY_VERSION_RESP_TYPE_SFT		    0
+	#define CREQ_QUERY_VERSION_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_QUERY_VERSION_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_QUERY_VERSION_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	u8 fw_maj;
+	u8 fw_minor;
+	u8 fw_bld;
+	u8 fw_rsvd;
+	u8 v;
+	#define CREQ_QUERY_VERSION_RESP_V			    0x1UL
+	#define CREQ_QUERY_VERSION_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_VERSION_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_QUERY_VERSION_RESP_EVENT_QUERY_VERSION       0x8bUL
+	__le16 reserved16;
+	u8 intf_maj;
+	u8 intf_minor;
+	u8 intf_bld;
+	u8 intf_rsvd;
+};
+
+/* Modify congestion control command response (16 bytes) */
+struct creq_modify_cc_resp {
+	u8 type;
+	#define CREQ_MODIFY_CC_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_MODIFY_CC_RESP_TYPE_SFT			    0
+	#define CREQ_MODIFY_CC_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_MODIFY_CC_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_MODIFY_CC_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 reserved32;
+	u8 v;
+	#define CREQ_MODIFY_CC_RESP_V				    0x1UL
+	#define CREQ_MODIFY_CC_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_MODIFY_CC_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_MODIFY_CC_RESP_EVENT_MODIFY_CC		   0x8cUL
+	__le16 reserved48[3];
+};
+
+/* Query congestion control command response (16 bytes) */
+struct creq_query_cc_resp {
+	u8 type;
+	#define CREQ_QUERY_CC_RESP_TYPE_MASK			    0x3fUL
+	#define CREQ_QUERY_CC_RESP_TYPE_SFT			    0
+	#define CREQ_QUERY_CC_RESP_TYPE_QP_EVENT		   0x38UL
+	#define CREQ_QUERY_CC_RESP_RESERVED2_MASK		    0xc0UL
+	#define CREQ_QUERY_CC_RESP_RESERVED2_SFT		    6
+	u8 status;
+	__le16 cookie;
+	__le32 size;
+	u8 v;
+	#define CREQ_QUERY_CC_RESP_V				    0x1UL
+	#define CREQ_QUERY_CC_RESP_RESERVED7_MASK		    0xfeUL
+	#define CREQ_QUERY_CC_RESP_RESERVED7_SFT		    1
+	u8 event;
+	#define CREQ_QUERY_CC_RESP_EVENT_QUERY_CC		   0x8dUL
+	__le16 reserved48[3];
+};
+
+/* Query congestion control command response side buffer structure (32 bytes) */
+struct creq_query_cc_resp_sb {
+	u8 opcode;
+	#define CREQ_QUERY_CC_RESP_SB_OPCODE_QUERY_CC		   0x8dUL
+	u8 status;
+	__le16 cookie;
+	__le16 flags;
+	u8 resp_size;
+	u8 reserved8;
+	u8 enable_cc;
+	#define CREQ_QUERY_CC_RESP_SB_ENABLE_CC		    0x1UL
+	u8 g;
+	#define CREQ_QUERY_CC_RESP_SB_G_MASK			    0x7UL
+	#define CREQ_QUERY_CC_RESP_SB_G_SFT			    0
+	u8 num_phases_per_state;
+	__le16 init_cr;
+	u8 unused_2;
+	__le16 unused_3;
+	u8 unused_4;
+	__le16 init_tr;
+	u8 tos_dscp_tos_ecn;
+	#define CREQ_QUERY_CC_RESP_SB_TOS_ECN_MASK		    0x3UL
+	#define CREQ_QUERY_CC_RESP_SB_TOS_ECN_SFT		    0
+	#define CREQ_QUERY_CC_RESP_SB_TOS_DSCP_MASK		    0xfcUL
+	#define CREQ_QUERY_CC_RESP_SB_TOS_DSCP_SFT		    2
+	__le64 reserved64;
+	__le64 reserved64_1;
+};
+
+/* QP error notification event (16 bytes) */
+struct creq_qp_error_notification {
+	u8 type;
+	#define CREQ_QP_ERROR_NOTIFICATION_TYPE_MASK		    0x3fUL
+	#define CREQ_QP_ERROR_NOTIFICATION_TYPE_SFT		    0
+	#define CREQ_QP_ERROR_NOTIFICATION_TYPE_QP_EVENT	   0x38UL
+	#define CREQ_QP_ERROR_NOTIFICATION_RESERVED2_MASK	    0xc0UL
+	#define CREQ_QP_ERROR_NOTIFICATION_RESERVED2_SFT	    6
+	u8 status;
+	u8 req_slow_path_state;
+	u8 req_err_state_reason;
+	__le32 xid;
+	u8 v;
+	#define CREQ_QP_ERROR_NOTIFICATION_V			    0x1UL
+	#define CREQ_QP_ERROR_NOTIFICATION_RESERVED7_MASK	    0xfeUL
+	#define CREQ_QP_ERROR_NOTIFICATION_RESERVED7_SFT	    1
+	u8 event;
+	#define CREQ_QP_ERROR_NOTIFICATION_EVENT_QP_ERROR_NOTIFICATION 0xc0UL
+	u8 res_slow_path_state;
+	u8 res_err_state_reason;
+	__le16 sq_cons_idx;
+	__le16 rq_cons_idx;
+};
+
+/* RoCE Slowpath HSI Specification 1.6.0 */
+#define ROCE_SP_HSI_VERSION_MAJOR	1
+#define ROCE_SP_HSI_VERSION_MINOR	6
+#define ROCE_SP_HSI_VERSION_UPDATE	0
+
+#define ROCE_SP_HSI_VERSION_STR	"1.6.0"
+/*
+ * The following is the signature for a ROCE_SP_HSI message field that
+ * indicates not applicable (all F's). Cast it to the size of the field if
+ * needed.
+ */
+#define ROCE_SP_HSI_NA_SIGNATURE	((__le32)(-1))
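+/*
+ * Illustrative use (hypothetical field name; cast to match the field width):
+ *	req.remote_qpn = (__le32)ROCE_SP_HSI_NA_SIGNATURE;
+ */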
 #endif /* __BNXT_RE_HSI_H__ */
-- 
2.5.5


* [PATCH V2  03/22] bnxt_re: register with the NIC driver
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 01/22] bnxt_re: Add bnxt_re RoCE driver files Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 02/22] bnxt_re: Introducing autogenerated Host Software Interface(hsi) file Selvin Xavier
@ 2016-12-09  6:47 ` Selvin Xavier
  2016-12-10  0:03   ` Jonathan Toppins
  2016-12-09  6:47 ` [PATCH V2 04/22] bnxt_re: Enabling RoCE control path Selvin Xavier
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:47 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch handles the registration with the bnxt_en driver. The driver
registers with the netdev notifier chain. Upon receiving a NETDEV_REGISTER
event, the driver in turn registers with the bnxt_en driver:
	1. bnxt_en's ulp_probe function returns a structure that contains
	   information about the device and additional entry points.
	2. The bnxt_en driver returns a 'struct bnxt_en_dev' that contains the
	   set of operation vectors that the RoCE driver invokes later (see the
	   sketch below).
	3. bnxt_request_msix() allows the RoCE driver to specify the number of
	   MSI-X vectors that it needs.
	4. bnxt_send_fw_msg() can be used to send messages to the FW.
	5. bnxt_register_async_events() can be used to register for async event
	   callbacks.
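
A minimal sketch of the attach handshake, assuming only the entry points
named above (bnxt_ulp_ops and en_ops come from bnxt_en's bnxt_ulp.h; the
rest is illustrative, not the exact code in this patch):

	/* Sketch: how the RoCE driver hooks itself into bnxt_en */
	static struct bnxt_ulp_ops re_ops = {
		.ulp_stop	  = bnxt_re_stop,
		.ulp_start	  = bnxt_re_start,
		.ulp_sriov_config = bnxt_re_sriov_config,
	};

	en_dev = bp->ulp_probe(netdev);		/* struct bnxt_en_dev * */
	rtnl_lock();
	rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,
						  &re_ops, rdev);
	rtnl_unlock();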

v2: Remove some sparse warnings. Also, remove some unused code from the unreg
path.

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_re.h      |  48 +++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c | 436 ++++++++++++++++++++++++++++
 2 files changed, 484 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index f9b8542..1fe4cb3 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -42,5 +42,53 @@
 #define ROCE_DRV_MODULE_NAME		"bnxt_re"
 #define ROCE_DRV_MODULE_VERSION		"1.0.0"
 
+#define BNXT_RE_REF_WAIT_COUNT		10
 #define BNXT_RE_DESC	"Broadcom NetXtreme-C/E RoCE Driver"
+
+struct bnxt_re_work {
+	struct work_struct	work;
+	unsigned long		event;
+	struct bnxt_re_dev      *rdev;
+	struct net_device	*vlan_dev;
+};
+
+#define BNXT_RE_MIN_MSIX		2
+#define BNXT_RE_MAX_MSIX		16
+struct bnxt_re_dev {
+	struct ib_device		ibdev;
+	struct list_head		list;
+	atomic_t			ref_count;
+	unsigned long			flags;
+#define BNXT_RE_FLAG_NETDEV_REGISTERED	0
+#define BNXT_RE_FLAG_IBDEV_REGISTERED	1
+#define BNXT_RE_FLAG_GOT_MSIX		2
+#define BNXT_RE_FLAG_RCFW_CHANNEL_EN	8
+#define BNXT_RE_FLAG_QOS_WORK_REG	16
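+/* The BNXT_RE_FLAG_* values above are bit numbers used with
+ * set_bit()/test_bit() on rdev->flags, not bit masks.
+ */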
+	struct net_device		*netdev;
+	unsigned int			version, major, minor;
+	struct bnxt_en_dev		*en_dev;
+	struct bnxt_msix_entry		msix_entries[BNXT_RE_MAX_MSIX];
+	int				num_msix;
+
+	int				id;
+
+	atomic_t			qp_count;
+	struct mutex			qp_lock;	/* protect qp list */
+	struct list_head		qp_list;
+
+	atomic_t			cq_count;
+	atomic_t			srq_count;
+	atomic_t			mr_count;
+	atomic_t			mw_count;
+	/* Max of 2 lossless traffic class supported per port */
+	u16				cosq[2];
+};
+
+#define to_bnxt_re(ptr, type, member)	\
+	container_of(ptr, type, member)
+
+#define to_bnxt_re_dev(ptr, member)	\
+	container_of((ptr), struct bnxt_re_dev, member)
+#define	rdev_to_dev(rdev)	((rdev) ? (&(rdev)->ibdev.dev) : NULL)
+
 #endif
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index ebe1c69..029824a 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -38,10 +38,24 @@
 
 #include <linux/module.h>
 #include <linux/netdevice.h>
+#include <linux/ethtool.h>
 #include <linux/mutex.h>
 #include <linux/list.h>
 #include <linux/rculist.h>
+#include <linux/spinlock.h>
+#include <linux/pci.h>
+#include <net/ipv6.h>
+#include <net/addrconf.h>
+
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_user_verbs.h>
+#include <rdma/ib_umem.h>
+#include <rdma/ib_addr.h>
+
+#include "bnxt_ulp.h"
+#include "bnxt_re_hsi.h"
 #include "bnxt_re.h"
+#include "bnxt.h"
 static char version[] =
 		BNXT_RE_DESC " v" ROCE_DRV_MODULE_VERSION "\n";
 
@@ -55,6 +69,372 @@ MODULE_VERSION(ROCE_DRV_MODULE_VERSION);
 static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
 static DEFINE_MUTEX(bnxt_re_dev_lock);
 static struct workqueue_struct *bnxt_re_wq;
+
+/* for handling bnxt_en callbacks later */
+static void bnxt_re_stop(void *p)
+{
+}
+
+static void bnxt_re_start(void *p)
+{
+}
+
+static void bnxt_re_sriov_config(void *p, int num_vfs)
+{
+}
+
+static struct bnxt_ulp_ops bnxt_re_ulp_ops = {
+	.ulp_async_notifier = NULL,
+	.ulp_stop = bnxt_re_stop,
+	.ulp_start = bnxt_re_start,
+	.ulp_sriov_config = bnxt_re_sriov_config
+};
+
+/* The rdev ref_count is to protect immature removal of the device */
+static inline void bnxt_re_hold(struct bnxt_re_dev *rdev)
+{
+	atomic_inc(&rdev->ref_count);
+}
+
+static inline void bnxt_re_put(struct bnxt_re_dev *rdev)
+{
+	atomic_dec(&rdev->ref_count);
+}
+
+/* RoCE -> Net driver */
+
+/* Driver registration routines used to let the networking driver (bnxt_en)
+ * know that the RoCE driver is now installed
+ */
+static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
+{
+	struct bnxt_en_dev *en_dev;
+	int rc;
+
+	if (!rdev)
+		return -EINVAL;
+
+	en_dev = rdev->en_dev;
+
+	/* Acquire the rtnl lock if we are not invoked from a netdev event */
+	if (lock_wait)
+		rtnl_lock();
+
+	rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev,
+						    BNXT_ROCE_ULP);
+	if (lock_wait)
+		rtnl_unlock();
+	return rc;
+}
+
+static int bnxt_re_register_netdev(struct bnxt_re_dev *rdev)
+{
+	struct bnxt_en_dev *en_dev;
+	int rc = 0;
+
+	if (!rdev)
+		return -EINVAL;
+
+	en_dev = rdev->en_dev;
+
+	rtnl_lock();
+	rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,
+						  &bnxt_re_ulp_ops, rdev);
+	rtnl_unlock();
+	return rc;
+}
+
+static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
+{
+	struct bnxt_en_dev *en_dev;
+	int rc;
+
+	if (!rdev)
+		return -EINVAL;
+
+	en_dev = rdev->en_dev;
+
+	if (lock_wait)
+		rtnl_lock();
+
+	rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP);
+
+	if (lock_wait)
+		rtnl_unlock();
+	return rc;
+}
+
+static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+{
+	int rc = 0, num_msix_want = BNXT_RE_MIN_MSIX, num_msix_got;
+	struct bnxt_en_dev *en_dev;
+
+	if (!rdev)
+		return -EINVAL;
+
+	en_dev = rdev->en_dev;
+
+	rtnl_lock();
+	num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP,
+							 rdev->msix_entries,
+							 num_msix_want);
+	if (num_msix_got < BNXT_RE_MIN_MSIX) {
+		rc = -EINVAL;
+		goto done;
+	}
+	if (num_msix_got != num_msix_want) {
+		dev_warn(rdev_to_dev(rdev),
+			 "Requested %d MSI-X vectors, got %d\n",
+			 num_msix_want, num_msix_got);
+	}
+	rdev->num_msix = num_msix_got;
+done:
+	rtnl_unlock();
+	return rc;
+}
+
+/* Device */
+
+static bool is_bnxt_re_dev(struct net_device *netdev)
+{
+	struct ethtool_drvinfo drvinfo;
+
+	if (netdev->ethtool_ops && netdev->ethtool_ops->get_drvinfo) {
+		memset(&drvinfo, 0, sizeof(drvinfo));
+		netdev->ethtool_ops->get_drvinfo(netdev, &drvinfo);
+
+		if (strcmp(drvinfo.driver, "bnxt_en"))
+			return false;
+		return true;
+	}
+	return false;
+}
+
+static struct bnxt_re_dev *bnxt_re_from_netdev(struct net_device *netdev)
+{
+	struct bnxt_re_dev *rdev;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(rdev, &bnxt_re_dev_list, list) {
+		if (rdev->netdev == netdev) {
+			rcu_read_unlock();
+			return rdev;
+		}
+	}
+	rcu_read_unlock();
+	return NULL;
+}
+
+static void bnxt_re_dev_unprobe(struct net_device *netdev,
+				struct bnxt_en_dev *en_dev)
+{
+	dev_put(netdev);
+	module_put(en_dev->pdev->driver->driver.owner);
+}
+
+static struct bnxt_en_dev *bnxt_re_dev_probe(struct net_device *netdev)
+{
+	struct bnxt *bp = netdev_priv(netdev);
+	struct bnxt_en_dev *en_dev;
+	struct pci_dev *pdev;
+
+	/* Call bnxt_en's RoCE probe via indirect API */
+	if (!bp->ulp_probe)
+		return ERR_PTR(-EINVAL);
+
+	en_dev = bp->ulp_probe(netdev);
+	if (IS_ERR(en_dev))
+		return en_dev;
+
+	pdev = en_dev->pdev;
+	if (!pdev)
+		return ERR_PTR(-EINVAL);
+	/* Bump the owning module and net device reference counts */
+	try_module_get(pdev->driver->driver.owner);
+	dev_hold(netdev);
+
+	return en_dev;
+}
+
+static void bnxt_re_dev_remove(struct bnxt_re_dev *rdev)
+{
+	int i = BNXT_RE_REF_WAIT_COUNT;
+
+	/* Wait for rdev refcount to come down */
+	while ((atomic_read(&rdev->ref_count) > 1) && i--)
+		msleep(100);
+
+	if (atomic_read(&rdev->ref_count) > 1)
+		dev_err(rdev_to_dev(rdev),
+			"Failed waiting for ref count to deplete %d",
+			atomic_read(&rdev->ref_count));
+
+	atomic_set(&rdev->ref_count, 0);
+	dev_put(rdev->netdev);
+	rdev->netdev = NULL;
+
+	mutex_lock(&bnxt_re_dev_lock);
+	list_del_rcu(&rdev->list);
+	mutex_unlock(&bnxt_re_dev_lock);
+
+	synchronize_rcu();
+	flush_workqueue(bnxt_re_wq);
+
+	ib_dealloc_device(&rdev->ibdev);
+	/* rdev is gone */
+}
+
+static struct bnxt_re_dev *bnxt_re_dev_add(struct net_device *netdev,
+					   struct bnxt_en_dev *en_dev)
+{
+	struct bnxt_re_dev *rdev;
+
+	/* Allocate bnxt_re_dev instance here */
+	rdev = (struct bnxt_re_dev *)ib_alloc_device(sizeof(*rdev));
+	if (!rdev) {
+		dev_err(NULL, "%s: bnxt_re_dev allocation failure!",
+			ROCE_DRV_MODULE_NAME);
+		return NULL;
+	}
+	/* Default values */
+	atomic_set(&rdev->ref_count, 0);
+	rdev->netdev = netdev;
+	dev_hold(rdev->netdev);
+	rdev->en_dev = en_dev;
+	rdev->id = rdev->en_dev->pdev->devfn;
+	INIT_LIST_HEAD(&rdev->qp_list);
+	mutex_init(&rdev->qp_lock);
+	atomic_set(&rdev->qp_count, 0);
+	atomic_set(&rdev->cq_count, 0);
+	atomic_set(&rdev->srq_count, 0);
+	atomic_set(&rdev->mr_count, 0);
+	atomic_set(&rdev->mw_count, 0);
+	rdev->cosq[0] = 0xFFFF;
+	rdev->cosq[1] = 0xFFFF;
+
+	mutex_lock(&bnxt_re_dev_lock);
+	list_add_tail_rcu(&rdev->list, &bnxt_re_dev_list);
+	mutex_unlock(&bnxt_re_dev_lock);
+	return rdev;
+}
+
+static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
+{
+	int rc;
+
+	if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) {
+		rc = bnxt_re_free_msix(rdev, lock_wait);
+		if (rc)
+			dev_warn(rdev_to_dev(rdev),
+				 "Failed to free MSI-X vectors: %#x", rc);
+	}
+	if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) {
+		rc = bnxt_re_unregister_netdev(rdev, lock_wait);
+		if (rc)
+			dev_warn(rdev_to_dev(rdev),
+				 "Failed to unregister with netdev: %#x", rc);
+	}
+}
+
+static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+{
+	int rc;
+
+	/* Register a new RoCE device instance with the netdev */
+	rc = bnxt_re_register_netdev(rdev);
+	if (rc) {
+		pr_err("Failed to register with netedev: %#x\n", rc);
+		return -EINVAL;
+	}
+	set_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
+
+	rc = bnxt_re_request_msix(rdev);
+	if (rc) {
+		pr_err("Failed to get MSI-X vectors: %#x\n", rc);
+		rc = -EINVAL;
+		goto fail;
+	}
+	set_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags);
+
+	return 0;	/* success: keep the registrations we just made */
+fail:
+	bnxt_re_ib_unreg(rdev, true);
+	return rc;
+}
+
+static void bnxt_re_dev_unreg(struct bnxt_re_dev *rdev)
+{
+	struct bnxt_en_dev *en_dev = rdev->en_dev;
+	struct net_device *netdev = rdev->netdev;
+
+	bnxt_re_dev_remove(rdev);
+
+	if (netdev)
+		bnxt_re_dev_unprobe(netdev, en_dev);
+}
+
+static int bnxt_re_dev_reg(struct bnxt_re_dev **rdev, struct net_device *netdev)
+{
+	struct bnxt_en_dev *en_dev;
+	int rc = 0;
+
+	if (!is_bnxt_re_dev(netdev))
+		return -ENODEV;
+
+	en_dev = bnxt_re_dev_probe(netdev);
+	if (IS_ERR(en_dev)) {
+		pr_err("%s: Failed to probe\n", ROCE_DRV_MODULE_NAME);
+		goto exit;
+	}
+	*rdev = bnxt_re_dev_add(netdev, en_dev);
+	if (!*rdev) {
+		rc = -ENOMEM;
+		bnxt_re_dev_unprobe(netdev, en_dev);
+		goto exit;
+	}
+	bnxt_re_hold(*rdev);
+exit:
+	return rc;
+}
+
+static void bnxt_re_remove_one(struct bnxt_re_dev *rdev)
+{
+	pci_dev_put(rdev->en_dev->pdev);
+}
+
+/* Handle all deferred netevents tasks */
+static void bnxt_re_task(struct work_struct *work)
+{
+	struct bnxt_re_work *re_work;
+	struct bnxt_re_dev *rdev;
+	int rc = 0;
+
+	re_work = container_of(work, struct bnxt_re_work, work);
+	rdev = re_work->rdev;
+
+	if (re_work->event != NETDEV_REGISTER &&
+	    !test_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags)) {
+		kfree(re_work);		/* don't leak the deferred work item */
+		return;
+	}
+
+	switch (re_work->event) {
+	case NETDEV_REGISTER:
+		rc = bnxt_re_ib_reg(rdev);
+		if (rc)
+			dev_err(rdev_to_dev(rdev),
+				"Failed to register with IB: %#x", rc);
+		break;
+	case NETDEV_UP:
+
+		break;
+	case NETDEV_DOWN:
+
+		break;
+
+	case NETDEV_CHANGE:
+
+		break;
+	default:
+		break;
+	}
+	kfree(re_work);
+}
+
+static void bnxt_re_init_one(struct bnxt_re_dev *rdev)
+{
+	pci_dev_get(rdev->en_dev->pdev);
+}
+
 /*
  * "Notifier chain callback can be invoked for the same chain from
  * different CPUs at the same time".
@@ -72,6 +452,62 @@ static struct workqueue_struct *bnxt_re_wq;
 static int bnxt_re_netdev_event(struct notifier_block *notifier,
 				unsigned long event, void *ptr)
 {
+	struct net_device *real_dev, *netdev = netdev_notifier_info_to_dev(ptr);
+	struct bnxt_re_work *re_work;
+	struct bnxt_re_dev *rdev;
+	int rc = 0;
+
+	real_dev = rdma_vlan_dev_real_dev(netdev);
+	if (!real_dev)
+		real_dev = netdev;
+
+	rdev = bnxt_re_from_netdev(real_dev);
+	if (!rdev && event != NETDEV_REGISTER)
+		goto exit;
+	if (real_dev != netdev)
+		goto exit;
+
+	if (rdev)
+		bnxt_re_hold(rdev);
+
+	switch (event) {
+	case NETDEV_REGISTER:
+		if (rdev)
+			break;
+		rc = bnxt_re_dev_reg(&rdev, real_dev);
+		if (rc == -ENODEV)
+			break;
+		if (rc) {
+			pr_err("Failed to register with the device %s: %#x\n",
+			       real_dev->name, rc);
+			break;
+		}
+		bnxt_re_init_one(rdev);
+		goto sch_work;
+
+	case NETDEV_UNREGISTER:
+		bnxt_re_ib_unreg(rdev, false);
+		bnxt_re_remove_one(rdev);
+		/* Drop our reference before bnxt_re_dev_unreg() frees rdev */
+		bnxt_re_put(rdev);
+		bnxt_re_dev_unreg(rdev);
+		rdev = NULL;
+		break;
+
+	default:
+sch_work:
+		/* Allocate for the deferred task */
+		re_work = kzalloc(sizeof(*re_work), GFP_ATOMIC);
+		if (!re_work)
+			break;
+
+		re_work->rdev = rdev;
+		re_work->event = event;
+		re_work->vlan_dev = (real_dev == netdev ? NULL : netdev);
+		INIT_WORK(&re_work->work, bnxt_re_task);
+		queue_work(bnxt_re_wq, &re_work->work);
+		break;
+	}
+	if (rdev)
+		bnxt_re_put(rdev);
+exit:
 	return NOTIFY_DONE;
 }
 static struct notifier_block bnxt_re_netdev_notifier = {
-- 
2.5.5


* [PATCH V2  04/22] bnxt_re: Enabling RoCE control path
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (2 preceding siblings ...)
  2016-12-09  6:47 ` [PATCH V2 03/22] bnxt_re: register with the NIC driver Selvin Xavier
@ 2016-12-09  6:47 ` Selvin Xavier
  2016-12-09  6:47 ` [PATCH V2 05/22] bnxt_re: Adding Notification Queue support Selvin Xavier
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:47 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements the basic initialization of the RoCE HW interface.
Some of the slow path FW commands required for the HW initialization
are implemented. It also handles registration with the IB stack.
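
Every slow path command follows the same prep/send/wait pattern; a hedged
sketch using the helpers added in this patch (the cmdq_query_func request
structure is assumed from the HSI header, everything else mirrors
bnxt_qplib_deinit_rcfw() in the diff below):

	struct creq_query_func_resp *resp;
	struct cmdq_query_func req;
	u16 cmd_flags = 0;

	RCFW_CMD_PREP(req, QUERY_FUNC, cmd_flags);  /* zero req, set opcode */
	resp = (struct creq_query_func_resp *)
		bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, NULL, 0);
	if (!resp)
		return -EINVAL;		/* could not queue the command */
	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie)))
		return -ETIMEDOUT;	/* FW did not answer in time */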

v2: Fix some of the sparse warnings

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c | 599 ++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h | 176 ++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c  | 740 +++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h  | 165 ++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c   | 163 ++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h   |  42 ++
 drivers/infiniband/hw/bnxtre/bnxt_re.h         |  15 +
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c    | 433 ++++++++++++++-
 8 files changed, 2326 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
index 4029935..5b71acd 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
@@ -35,3 +35,602 @@
  *
  * Description: RDMA Controller HW interface
  */
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/pci.h>
+#include "bnxt_re_hsi.h"
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_rcfw.h"
+static void bnxt_qplib_service_creq(unsigned long data);
+
+/* Hardware communication channel */
+int bnxt_qplib_rcfw_wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)
+{
+	u16 cbit;
+	int rc;
+
+	cookie &= RCFW_MAX_COOKIE_VALUE;
+	cbit = cookie % RCFW_MAX_OUTSTANDING_CMD;
+	if (!test_bit(cbit, rcfw->cmdq_bitmap))
+		dev_warn(&rcfw->pdev->dev,
+			 "QPLIB: CMD bit %d for cookie 0x%x is not set?",
+			 cbit, cookie);
+
+	rc = wait_event_timeout(rcfw->waitq,
+				!test_bit(cbit, rcfw->cmdq_bitmap),
+				msecs_to_jiffies(RCFW_CMD_WAIT_TIME_MS));
+	if (!rc) {
+		dev_warn(&rcfw->pdev->dev,
+			 "QPLIB: Bono Error: timeout %d msec, msg {0x%x}\n",
+			 RCFW_CMD_WAIT_TIME_MS, cookie);
+	}
+
+	return rc;
+}
+
+int bnxt_qplib_rcfw_block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)
+{
+	u32 count = -1;
+	u16 cbit;
+
+	cookie &= RCFW_MAX_COOKIE_VALUE;
+	cbit = cookie % RCFW_MAX_OUTSTANDING_CMD;
+	if (!test_bit(cbit, rcfw->cmdq_bitmap))
+		goto done;
+	do {
+		bnxt_qplib_service_creq((unsigned long)rcfw);
+	} while (test_bit(cbit, rcfw->cmdq_bitmap) && --count);
+done:
+	return count;
+}
+
+void *bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+				   struct cmdq_base *req, void **crsbe,
+				   u8 is_block)
+{
+	struct bnxt_qplib_crsq *crsq = &rcfw->crsq;
+	struct bnxt_qplib_cmdqe *cmdqe, **cmdq_ptr;
+	struct bnxt_qplib_hwq *cmdq = &rcfw->cmdq;
+	struct bnxt_qplib_hwq *crsb = &rcfw->crsb;
+	struct bnxt_qplib_crsqe *crsqe = NULL;
+	struct bnxt_qplib_crsbe **crsb_ptr;
+	u32 sw_prod, cmdq_prod;
+	dma_addr_t dma_addr;
+	unsigned long flags;
+	u32 size, opcode;
+	u16 cookie, cbit;
+	int pg, idx;
+	u8 *preq;
+
+retry:
+	opcode = req->opcode;
+	if (!test_bit(FIRMWARE_INITIALIZED_FLAG, &rcfw->flags) &&
+	    (opcode != CMDQ_BASE_OPCODE_QUERY_FUNC &&
+	     opcode != CMDQ_BASE_OPCODE_INITIALIZE_FW)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: RCFW not initialized, reject opcode 0x%x",
+			opcode);
+		return NULL;
+	}
+
+	if (test_bit(FIRMWARE_INITIALIZED_FLAG, &rcfw->flags) &&
+	    opcode == CMDQ_BASE_OPCODE_INITIALIZE_FW) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: RCFW already initialized!");
+		return NULL;
+	}
+
+	/* Cmdq are in 16-byte units, each request can consume 1 or more
+	 * cmdqe
+	 */
+	spin_lock_irqsave(&cmdq->lock, flags);
+	if (req->cmd_size > cmdq->max_elements -
+	    ((HWQ_CMP(cmdq->prod, cmdq) - HWQ_CMP(cmdq->cons, cmdq)) &
+	     (cmdq->max_elements - 1))) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: RCFW: CMDQ is full!");
+		spin_unlock_irqrestore(&cmdq->lock, flags);
+		goto retry;
+	}
+
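+	/* Each request carries a rolling 16-bit cookie; the matching bit in
+	 * cmdq_bitmap marks the command as outstanding, and
+	 * bnxt_qplib_rcfw_wait_for_resp() sleeps until that bit is cleared.
+	 */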
+	cookie = atomic_inc_return(&rcfw->seq_num) & RCFW_MAX_COOKIE_VALUE;
+	cbit = cookie % RCFW_MAX_OUTSTANDING_CMD;
+	if (is_block)
+		cookie |= RCFW_CMD_IS_BLOCKING;
+	req->cookie = cpu_to_le16(cookie);
+	if (test_and_set_bit(cbit, rcfw->cmdq_bitmap)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: RCFW MAX outstanding cmd reached!");
+		atomic_dec(&rcfw->seq_num);
+		spin_unlock_irqrestore(&cmdq->lock, flags);
+		goto retry;
+	}
+	/* Reserve a resp buffer slot if requested */
+	if (req->resp_size && crsbe) {
+		spin_lock_irqsave(&crsb->lock, flags);
+		sw_prod = HWQ_CMP(crsb->prod, crsb);
+		crsb_ptr = (struct bnxt_qplib_crsbe **)crsb->pbl_ptr;
+		*crsbe = (void *)&crsb_ptr[CRSB_PG(sw_prod)][CRSB_IDX(sw_prod)];
+		BNXT_QPLIB_CRSB_DMA_NEXT(crsb->pbl_dma_ptr, sw_prod, dma_addr);
+		req->resp_addr = cpu_to_le64(dma_addr);
+		crsb->prod++;
+		spin_unlock_irqrestore(&crsb->lock, flags);
+
+		req->resp_size = (sizeof(struct bnxt_qplib_crsbe) +
+				  BNXT_QPLIB_CMDQE_UNITS - 1) /
+				 BNXT_QPLIB_CMDQE_UNITS;
+	}
+	cmdq_ptr = (struct bnxt_qplib_cmdqe **)cmdq->pbl_ptr;
+	preq = (u8 *)req;
+	size = req->cmd_size * BNXT_QPLIB_CMDQE_UNITS;
+	do {
+		pg = 0;
+		idx = 0;
+
+		/* Locate the next cmdq slot */
+		sw_prod = HWQ_CMP(cmdq->prod, cmdq);
+		cmdqe = &cmdq_ptr[CMDQ_PG(sw_prod)][CMDQ_IDX(sw_prod)];
+		if (!cmdqe) {
+			dev_err(&rcfw->pdev->dev,
+				"QPLIB: RCFW request failed with no cmdqe!");
+			goto done;
+		}
+		/* Copy a segment of the req cmd to the cmdq */
+		memset(cmdqe, 0, sizeof(*cmdqe));
+		memcpy(cmdqe, preq, min_t(u32, size, sizeof(*cmdqe)));
+		preq += min_t(u32, size, sizeof(*cmdqe));
+		size -= min_t(u32, size, sizeof(*cmdqe));
+		cmdq->prod++;
+	} while (size > 0);
+
+	cmdq_prod = cmdq->prod;
+	if (rcfw->flags & FIRMWARE_FIRST_FLAG) {
+		/* The very first doorbell write is required to set this flag
+		 * which prompts the FW to reset its internal pointers
+		 */
+		cmdq_prod |= FIRMWARE_FIRST_FLAG;
+		rcfw->flags &= ~FIRMWARE_FIRST_FLAG;
+	}
+	sw_prod = HWQ_CMP(crsq->prod, crsq);
+	crsqe = &crsq->crsq[sw_prod];
+	memset(crsqe, 0, sizeof(*crsqe));
+	crsq->prod++;
+	crsqe->req_size = req->cmd_size;
+
+	/* ring CMDQ DB */
+	writel(cmdq_prod, rcfw->cmdq_bar_reg_iomem +
+	       rcfw->cmdq_bar_reg_prod_off);
+	writel(RCFW_CMDQ_TRIG_VAL, rcfw->cmdq_bar_reg_iomem +
+	       rcfw->cmdq_bar_reg_trig_off);
+done:
+	spin_unlock_irqrestore(&cmdq->lock, flags);
+	/* Return the CREQ response pointer */
+	return crsqe ? &crsqe->qp_event : NULL;
+}
+
+/* Completions */
+static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
+					 struct creq_func_event *func_event)
+{
+	switch (func_event->event) {
+	case CREQ_FUNC_EVENT_EVENT_TX_WQE_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_TX_DATA_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_RX_WQE_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_RX_DATA_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CQ_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_TQM_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCQ_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCS_ERROR:
+		/* SRQ ctx error, call srq_handler??
+		 * But there's no SRQ handle!
+		 */
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCC_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCM_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_TIM_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_VF_COMM_REQUEST:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_RESOURCE_EXHAUSTED:
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/* SP - CREQ Completion handlers */
+static void bnxt_qplib_service_creq(unsigned long data)
+{
+	struct bnxt_qplib_rcfw *rcfw = (struct bnxt_qplib_rcfw *)data;
+	struct bnxt_qplib_hwq *creq = &rcfw->creq;
+	struct creq_base *creqe, **creq_ptr;
+	u32 sw_cons, raw_cons;
+	unsigned long flags;
+	u32 type;
+
+	/* Service the CREQ until empty */
+	spin_lock_irqsave(&creq->lock, flags);
+	raw_cons = creq->cons;
+	while (1) {
+		sw_cons = HWQ_CMP(raw_cons, creq);
+		creq_ptr = (struct creq_base **)creq->pbl_ptr;
+		creqe = &creq_ptr[CREQ_PG(sw_cons)][CREQ_IDX(sw_cons)];
+		if (!CREQ_CMP_VALID(creqe, raw_cons, creq->max_elements))
+			break;
+
+		type = creqe->type & CREQ_BASE_TYPE_MASK;
+		switch (type) {
+		case CREQ_BASE_TYPE_QP_EVENT:
+			break;
+		case CREQ_BASE_TYPE_FUNC_EVENT:
+			if (!bnxt_qplib_process_func_event
+			    (rcfw, (struct creq_func_event *)creqe)) {
+				rcfw->creq_func_event_processed++;
+			} else {
+				dev_warn(&rcfw->pdev->dev, "QPLIB: aeqe with");
+				dev_warn(&rcfw->pdev->dev,
+					 "QPLIB: type = 0x%x not handled",
+					 type);
+			}
+			break;
+		default:
+			dev_warn(&rcfw->pdev->dev, "QPLIB: creqe with ");
+			dev_warn(&rcfw->pdev->dev,
+				 "QPLIB: op_event = 0x%x not handled", type);
+			break;
+		}
+		raw_cons++;
+	}
+	if (creq->cons != raw_cons) {
+		creq->cons = raw_cons;
+		CREQ_DB_REARM(rcfw->creq_bar_reg_iomem, raw_cons,
+			      creq->max_elements);
+	}
+	spin_unlock_irqrestore(&creq->lock, flags);
+}
+
+static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
+{
+	struct bnxt_qplib_rcfw *rcfw = dev_instance;
+	struct bnxt_qplib_hwq *creq = &rcfw->creq;
+	struct creq_base **creq_ptr;
+	u32 sw_cons;
+
+	/* Prefetch the CREQ element */
+	sw_cons = HWQ_CMP(creq->cons, creq);
+	creq_ptr = (struct creq_base **)rcfw->creq.pbl_ptr;
+	prefetch(&creq_ptr[CREQ_PG(sw_cons)][CREQ_IDX(sw_cons)]);
+
+	tasklet_schedule(&rcfw->worker);
+
+	return IRQ_HANDLED;
+}
+
+/* RCFW */
+int bnxt_qplib_deinit_rcfw(struct bnxt_qplib_rcfw *rcfw)
+{
+	struct creq_deinitialize_fw_resp *resp;
+	struct cmdq_deinitialize_fw req;
+	u16 cmd_flags = 0;
+
+	RCFW_CMD_PREP(req, DEINITIALIZE_FW, cmd_flags);
+	resp = (struct creq_deinitialize_fw_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp)
+		return -EINVAL;
+
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie)))
+		return -ETIMEDOUT;
+
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req))
+		return -EFAULT;
+
+	clear_bit(FIRMWARE_INITIALIZED_FLAG, &rcfw->flags);
+	return 0;
+}
+
+static int __get_pbl_pg_idx(struct bnxt_qplib_pbl *pbl)
+{
+	return (pbl->pg_size == ROCE_PG_SIZE_4K ?
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_4K :
+		pbl->pg_size == ROCE_PG_SIZE_8K ?
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_8K :
+		pbl->pg_size == ROCE_PG_SIZE_64K ?
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_64K :
+		pbl->pg_size == ROCE_PG_SIZE_2M ?
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_2M :
+		pbl->pg_size == ROCE_PG_SIZE_8M ?
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_8M :
+		pbl->pg_size == ROCE_PG_SIZE_1G ?
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_1G :
+				      CMDQ_INITIALIZE_FW_QPC_PG_SIZE_PG_4K);
+}
+
+int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
+			 struct bnxt_qplib_ctx *ctx, int is_virtfn)
+{
+	struct creq_initialize_fw_resp *resp;
+	struct cmdq_initialize_fw req;
+	u16 cmd_flags = 0, level;
+
+	RCFW_CMD_PREP(req, INITIALIZE_FW, cmd_flags);
+
+	/*
+	 * VFs need not setup the HW context area, PF
+	 * shall setup this area for VF. Skipping the
+	 * HW programming
+	 */
+	if (is_virtfn)
+		goto skip_ctx_setup;
+
+	level = ctx->qpc_tbl.level;
+	req.qpc_pg_size_qpc_lvl = (level << CMDQ_INITIALIZE_FW_QPC_LVL_SFT) |
+				__get_pbl_pg_idx(&ctx->qpc_tbl.pbl[level]);
+	level = ctx->mrw_tbl.level;
+	req.mrw_pg_size_mrw_lvl = (level << CMDQ_INITIALIZE_FW_MRW_LVL_SFT) |
+				__get_pbl_pg_idx(&ctx->mrw_tbl.pbl[level]);
+	level = ctx->srqc_tbl.level;
+	req.srq_pg_size_srq_lvl = (level << CMDQ_INITIALIZE_FW_SRQ_LVL_SFT) |
+				__get_pbl_pg_idx(&ctx->srqc_tbl.pbl[level]);
+	level = ctx->cq_tbl.level;
+	req.cq_pg_size_cq_lvl = (level << CMDQ_INITIALIZE_FW_CQ_LVL_SFT) |
+				__get_pbl_pg_idx(&ctx->cq_tbl.pbl[level]);
+	level = ctx->tim_tbl.level;
+	req.tim_pg_size_tim_lvl = (level << CMDQ_INITIALIZE_FW_TIM_LVL_SFT) |
+				  __get_pbl_pg_idx(&ctx->tim_tbl.pbl[level]);
+	level = ctx->tqm_pde_level;
+	req.tqm_pg_size_tqm_lvl = (level << CMDQ_INITIALIZE_FW_TQM_LVL_SFT) |
+				  __get_pbl_pg_idx(&ctx->tqm_pde.pbl[level]);
+
+	req.qpc_page_dir =
+		cpu_to_le64(ctx->qpc_tbl.pbl[PBL_LVL_0].pg_map_arr[0]);
+	req.mrw_page_dir =
+		cpu_to_le64(ctx->mrw_tbl.pbl[PBL_LVL_0].pg_map_arr[0]);
+	req.srq_page_dir =
+		cpu_to_le64(ctx->srqc_tbl.pbl[PBL_LVL_0].pg_map_arr[0]);
+	req.cq_page_dir =
+		cpu_to_le64(ctx->cq_tbl.pbl[PBL_LVL_0].pg_map_arr[0]);
+	req.tim_page_dir =
+		cpu_to_le64(ctx->tim_tbl.pbl[PBL_LVL_0].pg_map_arr[0]);
+	req.tqm_page_dir =
+		cpu_to_le64(ctx->tqm_pde.pbl[PBL_LVL_0].pg_map_arr[0]);
+
+	req.number_of_qp = cpu_to_le32(ctx->qpc_tbl.max_elements);
+	req.number_of_mrw = cpu_to_le32(ctx->mrw_tbl.max_elements);
+	req.number_of_srq = cpu_to_le32(ctx->srqc_tbl.max_elements);
+	req.number_of_cq = cpu_to_le32(ctx->cq_tbl.max_elements);
+
+	req.max_qp_per_vf = cpu_to_le32(ctx->vf_res.max_qp_per_vf);
+	req.max_mrw_per_vf = cpu_to_le32(ctx->vf_res.max_mrw_per_vf);
+	req.max_srq_per_vf = cpu_to_le32(ctx->vf_res.max_srq_per_vf);
+	req.max_cq_per_vf = cpu_to_le32(ctx->vf_res.max_cq_per_vf);
+	req.max_gid_per_vf = cpu_to_le32(ctx->vf_res.max_gid_per_vf);
+
+skip_ctx_setup:
+	req.stat_ctx_id = cpu_to_le32(ctx->stats.fw_id);
+	resp = (struct creq_initialize_fw_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: RCFW: INITIALIZE_FW send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: RCFW: INITIALIZE_FW timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: RCFW: INITIALIZE_FW failed");
+		return -EINVAL;
+	}
+	set_bit(FIRMWARE_INITIALIZED_FLAG, &rcfw->flags);
+	return 0;
+}
+
+void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+{
+	bnxt_qplib_free_hwq(rcfw->pdev, &rcfw->crsb);
+	kfree(rcfw->crsq.crsq);
+	bnxt_qplib_free_hwq(rcfw->pdev, &rcfw->cmdq);
+	bnxt_qplib_free_hwq(rcfw->pdev, &rcfw->creq);
+
+	rcfw->pdev = NULL;
+}
+
+int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev,
+				  struct bnxt_qplib_rcfw *rcfw)
+{
+	rcfw->pdev = pdev;
+	rcfw->creq.max_elements = BNXT_QPLIB_CREQE_MAX_CNT;
+	if (bnxt_qplib_alloc_init_hwq(rcfw->pdev, &rcfw->creq, NULL, 0,
+				      &rcfw->creq.max_elements,
+				      BNXT_QPLIB_CREQE_UNITS, 0, PAGE_SIZE,
+				      HWQ_TYPE_L2_CMPL)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: HW channel CREQ allocation failed");
+		goto fail;
+	}
+	rcfw->cmdq.max_elements = BNXT_QPLIB_CMDQE_MAX_CNT;
+	if (bnxt_qplib_alloc_init_hwq(rcfw->pdev, &rcfw->cmdq, NULL, 0,
+				      &rcfw->cmdq.max_elements,
+				      BNXT_QPLIB_CMDQE_UNITS, 0, PAGE_SIZE,
+				      HWQ_TYPE_CTX)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: HW channel CMDQ allocation failed");
+		goto fail;
+	}
+
+	rcfw->crsq.max_elements = rcfw->cmdq.max_elements;
+	rcfw->crsq.crsq = kcalloc(rcfw->crsq.max_elements,
+				  sizeof(*rcfw->crsq.crsq), GFP_KERNEL);
+	if (!rcfw->crsq.crsq)
+		goto fail;
+
+	rcfw->crsb.max_elements = BNXT_QPLIB_CRSBE_MAX_CNT;
+	if (bnxt_qplib_alloc_init_hwq(rcfw->pdev, &rcfw->crsb, NULL, 0,
+				      &rcfw->crsb.max_elements,
+				      BNXT_QPLIB_CRSBE_UNITS, 0, PAGE_SIZE,
+				      HWQ_TYPE_CTX)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: HW channel CRSB allocation failed");
+		goto fail;
+	}
+	return 0;
+
+fail:
+	bnxt_qplib_free_rcfw_channel(rcfw);
+	return -ENOMEM;
+}
+
+void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+{
+	unsigned long indx;
+
+	/* Make sure the HW channel is stopped! */
+	synchronize_irq(rcfw->vector);
+	tasklet_disable(&rcfw->worker);
+	tasklet_kill(&rcfw->worker);
+
+	if (rcfw->requested) {
+		free_irq(rcfw->vector, rcfw);
+		rcfw->requested = false;
+	}
+	if (rcfw->cmdq_bar_reg_iomem)
+		iounmap(rcfw->cmdq_bar_reg_iomem);
+	rcfw->cmdq_bar_reg_iomem = NULL;
+
+	if (rcfw->creq_bar_reg_iomem)
+		iounmap(rcfw->creq_bar_reg_iomem);
+	rcfw->creq_bar_reg_iomem = NULL;
+
+	indx = find_first_bit(rcfw->cmdq_bitmap, rcfw->bmap_size);
+	if (indx != rcfw->bmap_size)
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: disabling RCFW with pending cmd-bit %lx", indx);
+	kfree(rcfw->cmdq_bitmap);
+	rcfw->bmap_size = 0;
+
+	rcfw->aeq_handler = NULL;
+	rcfw->vector = 0;
+}
+
+int bnxt_qplib_enable_rcfw_channel(struct pci_dev *pdev,
+				   struct bnxt_qplib_rcfw *rcfw,
+				   int msix_vector,
+				   int cp_bar_reg_off, int virt_fn,
+				   int (*aeq_handler)(struct bnxt_qplib_rcfw *,
+						      struct creq_func_event *))
+{
+	resource_size_t res_base;
+	struct cmdq_init init;
+	u16 bmap_size;
+	int rc;
+
+	/* General */
+	atomic_set(&rcfw->seq_num, 0);
+	rcfw->flags = FIRMWARE_FIRST_FLAG;
+	bmap_size = BITS_TO_LONGS(RCFW_MAX_OUTSTANDING_CMD) *
+		    sizeof(unsigned long);
+	rcfw->cmdq_bitmap = kzalloc(bmap_size, GFP_KERNEL);
+	if (!rcfw->cmdq_bitmap)
+		return -ENOMEM;
+	rcfw->bmap_size = bmap_size;
+
+	/* CMDQ */
+	rcfw->cmdq_bar_reg = RCFW_COMM_PCI_BAR_REGION;
+	res_base = pci_resource_start(pdev, rcfw->cmdq_bar_reg);
+	if (!res_base)
+		return -ENOMEM;
+
+	rcfw->cmdq_bar_reg_iomem = ioremap_nocache(res_base +
+					      RCFW_COMM_BASE_OFFSET,
+					      RCFW_COMM_SIZE);
+	if (!rcfw->cmdq_bar_reg_iomem) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: CMDQ BAR region %d mapping failed",
+			rcfw->cmdq_bar_reg);
+		return -ENOMEM;
+	}
+
+	rcfw->cmdq_bar_reg_prod_off = virt_fn ? RCFW_VF_COMM_PROD_OFFSET :
+					RCFW_PF_COMM_PROD_OFFSET;
+
+	rcfw->cmdq_bar_reg_trig_off = RCFW_COMM_TRIG_OFFSET;
+
+	/* CRSQ */
+	rcfw->crsq.prod = 0;
+	rcfw->crsq.cons = 0;
+
+	/* CREQ */
+	rcfw->creq_bar_reg = RCFW_COMM_CONS_PCI_BAR_REGION;
+	res_base = pci_resource_start(pdev, rcfw->creq_bar_reg);
+	if (!res_base)
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: CREQ BAR region %d resc start is 0!",
+			rcfw->creq_bar_reg);
+	rcfw->creq_bar_reg_iomem = ioremap_nocache(res_base + cp_bar_reg_off,
+						   4);
+	if (!rcfw->creq_bar_reg_iomem) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: CREQ BAR region %d mapping failed",
+			rcfw->creq_bar_reg);
+		return -ENOMEM;
+	}
+	rcfw->creq_qp_event_processed = 0;
+	rcfw->creq_func_event_processed = 0;
+
+	rcfw->vector = msix_vector;
+	if (aeq_handler)
+		rcfw->aeq_handler = aeq_handler;
+
+	tasklet_init(&rcfw->worker, bnxt_qplib_service_creq,
+		     (unsigned long)rcfw);
+
+	rcfw->requested = false;
+	rc = request_irq(rcfw->vector, bnxt_qplib_creq_irq, 0,
+			 "bnxt_qplib_creq", rcfw);
+	if (rc) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: Failed to request IRQ for CREQ rc = 0x%x", rc);
+		bnxt_qplib_disable_rcfw_channel(rcfw);
+		return rc;
+	}
+	rcfw->requested = true;
+
+	init_waitqueue_head(&rcfw->waitq);
+
+	CREQ_DB_REARM(rcfw->creq_bar_reg_iomem, 0, rcfw->creq.max_elements);
+
+	init.cmdq_pbl = cpu_to_le64(rcfw->cmdq.pbl[PBL_LVL_0].pg_map_arr[0]);
+	init.cmdq_size_cmdq_lvl = cpu_to_le16(
+		((BNXT_QPLIB_CMDQE_MAX_CNT << CMDQ_INIT_CMDQ_SIZE_SFT) &
+		 CMDQ_INIT_CMDQ_SIZE_MASK) |
+		((rcfw->cmdq.level << CMDQ_INIT_CMDQ_LVL_SFT) &
+		 CMDQ_INIT_CMDQ_LVL_MASK));
+	init.creq_ring_id = cpu_to_le16(rcfw->creq_ring_id);
+
+	/* Write to the Bono mailbox register */
+	__iowrite32_copy(rcfw->cmdq_bar_reg_iomem, &init, sizeof(init) / 4);
+	return 0;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
index 1f63956..942fa11 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.h
@@ -39,4 +39,180 @@
 #ifndef __BNXT_QPLIB_RCFW_H__
 #define __BNXT_QPLIB_RCFW_H__
 
+#define RCFW_CMDQ_TRIG_VAL		1
+#define RCFW_COMM_PCI_BAR_REGION	0
+#define RCFW_COMM_CONS_PCI_BAR_REGION	2
+#define RCFW_COMM_BASE_OFFSET		0x600
+#define RCFW_PF_COMM_PROD_OFFSET		0x7c
+#define RCFW_VF_COMM_PROD_OFFSET	0xc
+#define RCFW_COMM_TRIG_OFFSET		0x100
+#define RCFW_COMM_SIZE			0x104
+
+#define RCFW_DBR_PCI_BAR_REGION		2
+
+#define RCFW_CMD_PREP(req, CMD, cmd_flags)				\
+	do {								\
+		memset(&(req), 0, sizeof((req)));			\
+		(req).opcode = CMDQ_BASE_OPCODE_##CMD;			\
+		(req).cmd_size = (sizeof((req)) +			\
+				BNXT_QPLIB_CMDQE_UNITS - 1) /		\
+				BNXT_QPLIB_CMDQE_UNITS;			\
+		(req).flags = cpu_to_le16(cmd_flags);			\
+	} while (0)
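+
+/*
+ * Illustrative use (bnxt_qplib_get_dev_attr() is a real caller):
+ *
+ *	struct cmdq_query_func req;
+ *	u16 cmd_flags = 0;
+ *
+ *	RCFW_CMD_PREP(req, QUERY_FUNC, cmd_flags);
+ *
+ * cmd_size ends up expressed in 16-byte CMDQE units, rounded up.
+ */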
+
+#define RCFW_RESP_STATUS(resp)	((resp)->status)
+
+#define RCFW_CMDQ_COOKIE(req)	le16_to_cpu(req.cookie)
+
+#define RCFW_RESP_COOKIE(resp)	le16_to_cpu((resp)->cookie)
+
+#define RCFW_CMD_WAIT_TIME_MS		20000 /* 20 Seconds timeout */
+
+/* CMDQ elements */
+#define BNXT_QPLIB_CMDQE_MAX_CNT	256
+#define BNXT_QPLIB_CMDQE_UNITS		sizeof(struct bnxt_qplib_cmdqe)
+#define BNXT_QPLIB_CMDQE_CNT_PER_PG	(PAGE_SIZE / BNXT_QPLIB_CMDQE_UNITS)
+
+#define MAX_CMDQ_IDX			(BNXT_QPLIB_CMDQE_MAX_CNT - 1)
+#define MAX_CMDQ_IDX_PER_PG		(BNXT_QPLIB_CMDQE_CNT_PER_PG - 1)
+
+#define CMDQ_PG(x)			\
+	(((x) & ~MAX_CMDQ_IDX_PER_PG) / BNXT_QPLIB_CMDQE_CNT_PER_PG)
+#define CMDQ_IDX(x)			((x) & MAX_CMDQ_IDX_PER_PG)
+
+#define RCFW_MAX_OUTSTANDING_CMD	BNXT_QPLIB_CMDQE_MAX_CNT
+#define RCFW_MAX_COOKIE_VALUE		0x7FFF
+#define RCFW_CMD_IS_BLOCKING		0x8000
+
+/* The CMDQ contains a fixed number of 16-byte slots */
+struct bnxt_qplib_cmdqe {
+	u8		data[16];
+};
+
+/* CRSQ SB */
+#define BNXT_QPLIB_CRSBE_MAX_CNT	4
+#define BNXT_QPLIB_CRSBE_UNITS		sizeof(struct bnxt_qplib_crsbe)
+#define BNXT_QPLIB_CRSBE_CNT_PER_PG	(PAGE_SIZE / BNXT_QPLIB_CRSBE_UNITS)
+
+#define MAX_CRSB_IDX			(BNXT_QPLIB_CRSBE_MAX_CNT - 1)
+#define MAX_CRSB_IDX_PER_PG		(BNXT_QPLIB_CRSBE_CNT_PER_PG - 1)
+
+#define CRSB_PG(x)			\
+	(((x) & ~MAX_CRSB_IDX_PER_PG) / BNXT_QPLIB_CRSBE_CNT_PER_PG)
+#define CRSB_IDX(x)			((x) & MAX_CRSB_IDX_PER_PG)
+
+/* Each CRSB entry is 1024 bytes */
+struct bnxt_qplib_crsbe {
+	u8			data[1024];
+};
+
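+/*
+ * Computes the DMA address of CRSB slot 'prod': select the page holding
+ * the slot, then offset to the slot within that page.
+ */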
+#define BNXT_QPLIB_CRSB_DMA_NEXT(pg_map_arr, prod, dma_addr)		\
+	do {								\
+		dma_addr = pg_map_arr[(prod) / BNXT_QPLIB_CRSBE_CNT_PER_PG];\
+		dma_addr += ((prod) % BNXT_QPLIB_CRSBE_CNT_PER_PG) *	\
+			    BNXT_QPLIB_CRSBE_UNITS;			\
+	} while (0)
+
+/* CREQ */
+/* Allocate 1 per QP for async error notification for now */
+#define BNXT_QPLIB_CREQE_MAX_CNT	(64 * 1024)
+#define BNXT_QPLIB_CREQE_UNITS		16	/* 16-Bytes per prod unit */
+#define BNXT_QPLIB_CREQE_CNT_PER_PG	(PAGE_SIZE / BNXT_QPLIB_CREQE_UNITS)
+
+#define MAX_CREQ_IDX			(BNXT_QPLIB_CREQE_MAX_CNT - 1)
+#define MAX_CREQ_IDX_PER_PG		(BNXT_QPLIB_CREQE_CNT_PER_PG - 1)
+
+#define CREQ_PG(x)			\
+	(((x) & ~MAX_CREQ_IDX_PER_PG) / BNXT_QPLIB_CREQE_CNT_PER_PG)
+#define CREQ_IDX(x)			((x) & MAX_CREQ_IDX_PER_PG)
+
+#define BNXT_QPLIB_CREQE_PER_PG	(PAGE_SIZE / sizeof(struct creq_base))
+
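+/*
+ * An element is valid when its V bit matches the phase of the current
+ * pass over the ring: cp_bit is the ring size, so the (raw_cons &
+ * cp_bit) bit flips each time the consumer wraps, tracking the value
+ * the producer writes into V.
+ */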
+#define CREQ_CMP_VALID(hdr, raw_cons, cp_bit)			\
+	(!!((hdr)->v & CREQ_BASE_V) ==				\
+	   !((raw_cons) & (cp_bit)))
+
+#define CREQ_DB_KEY_CP			(0x2 << CMPL_DOORBELL_KEY_SFT)
+#define CREQ_DB_IDX_VALID		CMPL_DOORBELL_IDX_VALID
+#define CREQ_DB_IRQ_DIS			CMPL_DOORBELL_MASK
+#define CREQ_DB_CP_FLAGS_REARM		(CREQ_DB_KEY_CP |	\
+					 CREQ_DB_IDX_VALID)
+#define CREQ_DB_CP_FLAGS		(CREQ_DB_KEY_CP |	\
+					 CREQ_DB_IDX_VALID |	\
+					 CREQ_DB_IRQ_DIS)
+#define CREQ_DB_REARM(db, raw_cons, cp_bit)			\
+	writel(CREQ_DB_CP_FLAGS_REARM | ((raw_cons) & ((cp_bit) - 1)), db)
+#define CREQ_DB(db, raw_cons, cp_bit)				\
+	writel(CREQ_DB_CP_FLAGS | ((raw_cons) & ((cp_bit) - 1)), db)
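+
+/*
+ * Both doorbells advance the consumer index; CREQ_DB also masks the
+ * interrupt (CREQ_DB_IRQ_DIS), while CREQ_DB_REARM leaves it enabled
+ * so the next event raises an IRQ.
+ */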
+
+/* HWQ */
+struct bnxt_qplib_crsqe {
+	struct creq_qp_event	qp_event;
+	u32			req_size;
+};
+
+struct bnxt_qplib_crsq {
+	struct bnxt_qplib_crsqe	*crsq;
+	u32			prod;
+	u32			cons;
+	u32			max_elements;
+};
+
+/* RCFW Communication Channels */
+struct bnxt_qplib_rcfw {
+	struct pci_dev		*pdev;
+	int			vector;
+	struct tasklet_struct	worker;
+	bool			requested;
+	unsigned long		*cmdq_bitmap;
+	u32			bmap_size;
+	unsigned long		flags;
+#define FIRMWARE_INITIALIZED_FLAG	BIT(0)
+#define FIRMWARE_FIRST_FLAG		BIT(31)
+	wait_queue_head_t	waitq;
+	int			(*aeq_handler)(struct bnxt_qplib_rcfw *,
+					       struct creq_func_event *);
+	atomic_t		seq_num;
+
+	/* Bar region info */
+	void __iomem		*cmdq_bar_reg_iomem;
+	u16			cmdq_bar_reg;
+	u16			cmdq_bar_reg_prod_off;
+	u16			cmdq_bar_reg_trig_off;
+	u16			creq_ring_id;
+	u16			creq_bar_reg;
+	void __iomem		*creq_bar_reg_iomem;
+
+	/* Cmd-Resp and Async Event notification queue */
+	struct bnxt_qplib_hwq	creq;
+	u64			creq_qp_event_processed;
+	u64			creq_func_event_processed;
+
+	/* Actual Cmd and Resp Queues */
+	struct bnxt_qplib_hwq	cmdq;
+	struct bnxt_qplib_crsq	crsq;
+	struct bnxt_qplib_hwq	crsb;
+};
+
+void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw);
+int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev,
+				  struct bnxt_qplib_rcfw *rcfw);
+void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw);
+int bnxt_qplib_enable_rcfw_channel(struct pci_dev *pdev,
+				   struct bnxt_qplib_rcfw *rcfw,
+				   int msix_vector,
+				   int cp_bar_reg_off, int virt_fn,
+				   int (*aeq_handler)
+					(struct bnxt_qplib_rcfw *,
+					 struct creq_func_event *));
+
+int bnxt_qplib_rcfw_block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie);
+int bnxt_qplib_rcfw_wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie);
+void *bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+				   struct cmdq_base *req, void **crsbe,
+				   u8 is_block);
+
+int bnxt_qplib_deinit_rcfw(struct bnxt_qplib_rcfw *rcfw);
+int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
+			 struct bnxt_qplib_ctx *ctx, int is_virtfn);
 #endif /* __BNXT_QPLIB_RCFW_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
index 178eebd..58db3a2 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
@@ -35,3 +35,743 @@
  *
  * Description: QPLib resource manager
  */
+
+#include <linux/spinlock.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/inetdevice.h>
+#include <linux/dma-mapping.h>
+#include <linux/if_vlan.h>
+#include "bnxt_re_hsi.h"
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_sp.h"
+#include "bnxt_qplib_rcfw.h"
+
+static void bnxt_qplib_free_stats_ctx(struct pci_dev *pdev,
+				      struct bnxt_qplib_stats *stats);
+static int bnxt_qplib_alloc_stats_ctx(struct pci_dev *pdev,
+				      struct bnxt_qplib_stats *stats);
+
+/* PBL */
+static void __free_pbl(struct pci_dev *pdev, struct bnxt_qplib_pbl *pbl,
+		       bool is_umem)
+{
+	int i;
+
+	if (!is_umem) {
+		for (i = 0; i < pbl->pg_count; i++) {
+			if (pbl->pg_arr[i])
+				dma_free_coherent(&pdev->dev, pbl->pg_size,
+						  (void *)((unsigned long)
+						  pbl->pg_arr[i] & PAGE_MASK),
+						  pbl->pg_map_arr[i]);
+			else
+				dev_warn(&pdev->dev,
+					 "QPLIB: PBL free pg_arr[%d] empty?!",
+					 i);
+			pbl->pg_arr[i] = NULL;
+		}
+	}
+	kfree(pbl->pg_arr);
+	pbl->pg_arr = NULL;
+	kfree(pbl->pg_map_arr);
+	pbl->pg_map_arr = NULL;
+	pbl->pg_count = 0;
+	pbl->pg_size = 0;
+}
+
+static int __alloc_pbl(struct pci_dev *pdev, struct bnxt_qplib_pbl *pbl,
+		       struct scatterlist *sghead, u32 pages, u32 pg_size)
+{
+	struct scatterlist *sg;
+	bool is_umem = false;
+	int i;
+
+	/* page ptr arrays */
+	pbl->pg_arr = kcalloc(pages, sizeof(void *), GFP_KERNEL);
+	if (!pbl->pg_arr)
+		return -ENOMEM;
+
+	pbl->pg_map_arr = kcalloc(pages, sizeof(dma_addr_t), GFP_KERNEL);
+	if (!pbl->pg_map_arr) {
+		kfree(pbl->pg_arr);
+		pbl->pg_arr = NULL;
+		return -ENOMEM;
+	}
+	pbl->pg_count = 0;
+	pbl->pg_size = pg_size;
+
+	if (!sghead) {
+		for (i = 0; i < pages; i++) {
+			pbl->pg_arr[i] = dma_alloc_coherent(&pdev->dev,
+							    pbl->pg_size,
+							    &pbl->pg_map_arr[i],
+							    GFP_KERNEL);
+			if (!pbl->pg_arr[i])
+				goto fail;
+			memset(pbl->pg_arr[i], 0, pbl->pg_size);
+			pbl->pg_count++;
+		}
+	} else {
+		i = 0;
+		is_umem = true;
+		for_each_sg(sghead, sg, pages, i) {
+			pbl->pg_map_arr[i] = sg_dma_address(sg);
+			pbl->pg_arr[i] = sg_virt(sg);
+			if (!pbl->pg_arr[i])
+				goto fail;
+
+			pbl->pg_count++;
+		}
+	}
+
+	return 0;
+
+fail:
+	__free_pbl(pdev, pbl, is_umem);
+	return -ENOMEM;
+}
+
+/* HWQ */
+void bnxt_qplib_free_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq)
+{
+	int i;
+
+	if (!hwq->max_elements)
+		return;
+	if (hwq->level >= PBL_LVL_MAX)
+		return;
+
+	for (i = 0; i < hwq->level + 1; i++) {
+		if (i == hwq->level)
+			__free_pbl(pdev, &hwq->pbl[i], hwq->is_user);
+		else
+			__free_pbl(pdev, &hwq->pbl[i], false);
+	}
+
+	hwq->level = PBL_LVL_MAX;
+	hwq->max_elements = 0;
+	hwq->element_size = 0;
+	hwq->prod = 0;
+	hwq->cons = 0;
+	hwq->cp_bit = 0;
+}
+
+/* All HWQs are sized as a power of 2 */
+int bnxt_qplib_alloc_init_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq,
+			      struct scatterlist *sghead, int nmap,
+			      u32 *elements, u32 element_size, u32 aux,
+			      u32 pg_size, enum bnxt_qplib_hwq_type hwq_type)
+{
+	u32 pages, slots, size, aux_pages = 0, aux_size = 0;
+	dma_addr_t *src_phys_ptr, **dst_virt_ptr;
+	int i, rc;
+
+	hwq->level = PBL_LVL_MAX;
+
+	if (!*elements)
+		return -EINVAL;
+
+	slots = roundup_pow_of_two(*elements);
+	if (aux) {
+		aux_size = roundup_pow_of_two(aux);
+		aux_pages = (slots * aux_size) / pg_size;
+		if ((slots * aux_size) % pg_size)
+			aux_pages++;
+	}
+	size = roundup_pow_of_two(element_size);
+
+	if (!sghead) {
+		hwq->is_user = false;
+		pages = (slots * size) / pg_size + aux_pages;
+		if ((slots * size) % pg_size)
+			pages++;
+		if (!pages)
+			return -EINVAL;
+	} else {
+		hwq->is_user = true;
+		pages = nmap;
+	}
+
+	/* Alloc the 1st memory block; can be a PDL/PTL/PBL */
+	if (sghead && (pages == MAX_PBL_LVL_0_PGS))
+		rc = __alloc_pbl(pdev, &hwq->pbl[PBL_LVL_0], sghead,
+				 pages, pg_size);
+	else
+		rc = __alloc_pbl(pdev, &hwq->pbl[PBL_LVL_0], NULL, 1, pg_size);
+	if (rc)
+		goto fail;
+
+	hwq->level = PBL_LVL_0;
+
+	if (pages > MAX_PBL_LVL_0_PGS) {
+		if (pages > MAX_PBL_LVL_1_PGS) {
+			/* 2 levels of indirection */
+			rc = __alloc_pbl(pdev, &hwq->pbl[PBL_LVL_1], NULL,
+					 MAX_PBL_LVL_1_PGS_FOR_LVL_2, pg_size);
+			if (rc)
+				goto fail;
+			/* Fill in lvl0 PBL */
+			dst_virt_ptr =
+				(dma_addr_t **)hwq->pbl[PBL_LVL_0].pg_arr;
+			src_phys_ptr = hwq->pbl[PBL_LVL_1].pg_map_arr;
+			for (i = 0; i < hwq->pbl[PBL_LVL_1].pg_count; i++)
+				dst_virt_ptr[PTR_PG(i)][PTR_IDX(i)] =
+					src_phys_ptr[i] | PTU_PDE_VALID;
+			hwq->level = PBL_LVL_1;
+
+			rc = __alloc_pbl(pdev, &hwq->pbl[PBL_LVL_2], sghead,
+					 pages, pg_size);
+			if (rc)
+				goto fail;
+
+			/* Fill in lvl1 PBL */
+			dst_virt_ptr =
+				(dma_addr_t **)hwq->pbl[PBL_LVL_1].pg_arr;
+			src_phys_ptr = hwq->pbl[PBL_LVL_2].pg_map_arr;
+			for (i = 0; i < hwq->pbl[PBL_LVL_2].pg_count; i++) {
+				dst_virt_ptr[PTR_PG(i)][PTR_IDX(i)] =
+					src_phys_ptr[i] | PTU_PTE_VALID;
+			}
+			if (hwq_type == HWQ_TYPE_QUEUE) {
+				/* Find the last pg of the size */
+				i = hwq->pbl[PBL_LVL_2].pg_count;
+				dst_virt_ptr[PTR_PG(i - 1)][PTR_IDX(i - 1)] |=
+								  PTU_PTE_LAST;
+				if (i > 1)
+					dst_virt_ptr[PTR_PG(i - 2)]
+						    [PTR_IDX(i - 2)] |=
+						    PTU_PTE_NEXT_TO_LAST;
+			}
+			hwq->level = PBL_LVL_2;
+		} else {
+			u32 flag = hwq_type == HWQ_TYPE_L2_CMPL ? 0 :
+						PTU_PTE_VALID;
+
+			/* 1 level of indirection */
+			rc = __alloc_pbl(pdev, &hwq->pbl[PBL_LVL_1], sghead,
+					 pages, pg_size);
+			if (rc)
+				goto fail;
+			/* Fill in lvl0 PBL */
+			dst_virt_ptr =
+				(dma_addr_t **)hwq->pbl[PBL_LVL_0].pg_arr;
+			src_phys_ptr = hwq->pbl[PBL_LVL_1].pg_map_arr;
+			for (i = 0; i < hwq->pbl[PBL_LVL_1].pg_count; i++) {
+				dst_virt_ptr[PTR_PG(i)][PTR_IDX(i)] =
+					src_phys_ptr[i] | flag;
+			}
+			if (hwq_type == HWQ_TYPE_QUEUE) {
+				/* Find the last pg of the size */
+				i = hwq->pbl[PBL_LVL_1].pg_count;
+				dst_virt_ptr[PTR_PG(i - 1)][PTR_IDX(i - 1)] |=
+								  PTU_PTE_LAST;
+				if (i > 1)
+					dst_virt_ptr[PTR_PG(i - 2)]
+						    [PTR_IDX(i - 2)] |=
+						    PTU_PTE_NEXT_TO_LAST;
+			}
+			hwq->level = PBL_LVL_1;
+		}
+	}
+	hwq->pdev = pdev;
+	spin_lock_init(&hwq->lock);
+	hwq->prod = 0;
+	hwq->cons = 0;
+	*elements = hwq->max_elements = slots;
+	hwq->element_size = size;
+
+	/* For direct access to the elements */
+	hwq->pbl_ptr = hwq->pbl[hwq->level].pg_arr;
+	hwq->pbl_dma_ptr = hwq->pbl[hwq->level].pg_map_arr;
+
+	return 0;
+
+fail:
+	bnxt_qplib_free_hwq(pdev, hwq);
+	return -ENOMEM;
+}
+
+/* Context Tables */
+void bnxt_qplib_free_ctx(struct pci_dev *pdev,
+			 struct bnxt_qplib_ctx *ctx)
+{
+	int i;
+
+	bnxt_qplib_free_hwq(pdev, &ctx->qpc_tbl);
+	bnxt_qplib_free_hwq(pdev, &ctx->mrw_tbl);
+	bnxt_qplib_free_hwq(pdev, &ctx->srqc_tbl);
+	bnxt_qplib_free_hwq(pdev, &ctx->cq_tbl);
+	bnxt_qplib_free_hwq(pdev, &ctx->tim_tbl);
+	for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
+		bnxt_qplib_free_hwq(pdev, &ctx->tqm_tbl[i]);
+	bnxt_qplib_free_hwq(pdev, &ctx->tqm_pde);
+	bnxt_qplib_free_stats_ctx(pdev, &ctx->stats);
+}
+
+/*
+ * Routine: bnxt_qplib_alloc_ctx
+ * Description:
+ *     Context tables are memories which are used by the chip fw.
+ *     The 6 tables defined are:
+ *             QPC ctx - holds QP states
+ *             MRW ctx - holds memory region and window
+ *             SRQ ctx - holds shared RQ states
+ *             CQ ctx - holds completion queue states
+ *             TQM ctx - holds Tx Queue Manager context
+ *             TIM ctx - holds timer context
+ *     Depending on the size of the table requested, either a single-page
+ *     Buffer List or a 1-to-2-stage indirection Page Directory List plus
+ *     one PBL is used.
+ *     The table is employed as follows:
+ *             For 0      < ctx size <= 1 page,    no indirection is used
+ *             For 1 page < ctx size <= 512 pages, 1 level of indirection
+ *             For 512    < ctx size <= MAX,       2 levels of indirection
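+ *     Example: with 4K pages, 64K QPC entries of 448 bytes round up to
+ *     512-byte slots, i.e. 8192 context pages; that exceeds the 512-page
+ *     level-1 limit, so 2 levels of indirection are used.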
+ * Returns:
+ *     0 if success, else -ERRORS
+ */
+int bnxt_qplib_alloc_ctx(struct pci_dev *pdev,
+			 struct bnxt_qplib_ctx *ctx,
+			 bool virt_fn)
+{
+	int i, j, k, rc = 0;
+	int fnz_idx = -1;
+	u64 **pbl_ptr;
+
+	if (virt_fn)
+		goto stats_alloc;
+
+	/* QPC Tables */
+	ctx->qpc_tbl.max_elements = ctx->qpc_count;
+	rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->qpc_tbl, NULL, 0,
+				       &ctx->qpc_tbl.max_elements,
+				       BNXT_QPLIB_MAX_QP_CTX_ENTRY_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_CTX);
+	if (rc)
+		goto fail;
+
+	/* MRW Tables */
+	ctx->mrw_tbl.max_elements = ctx->mrw_count;
+	rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->mrw_tbl, NULL, 0,
+				       &ctx->mrw_tbl.max_elements,
+				       BNXT_QPLIB_MAX_MRW_CTX_ENTRY_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_CTX);
+	if (rc)
+		goto fail;
+
+	/* SRQ Tables */
+	ctx->srqc_tbl.max_elements = ctx->srqc_count;
+	rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->srqc_tbl, NULL, 0,
+				       &ctx->srqc_tbl.max_elements,
+				       BNXT_QPLIB_MAX_SRQ_CTX_ENTRY_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_CTX);
+	if (rc)
+		goto fail;
+
+	/* CQ Tables */
+	ctx->cq_tbl.max_elements = ctx->cq_count;
+	rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->cq_tbl, NULL, 0,
+				       &ctx->cq_tbl.max_elements,
+				       BNXT_QPLIB_MAX_CQ_CTX_ENTRY_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_CTX);
+	if (rc)
+		goto fail;
+
+	/* TQM Buffer */
+	ctx->tqm_pde.max_elements = 512;
+	rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->tqm_pde, NULL, 0,
+				       &ctx->tqm_pde.max_elements, sizeof(u64),
+				       0, PAGE_SIZE, HWQ_TYPE_CTX);
+	if (rc)
+		goto fail;
+
+	for (i = 0; i < MAX_TQM_ALLOC_REQ; i++) {
+		if (!ctx->tqm_count[i])
+			continue;
+		ctx->tqm_tbl[i].max_elements = ctx->qpc_count *
+					       ctx->tqm_count[i];
+		rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->tqm_tbl[i], NULL, 0,
+					       &ctx->tqm_tbl[i].max_elements, 1,
+					       0, PAGE_SIZE, HWQ_TYPE_CTX);
+		if (rc)
+			goto fail;
+	}
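+	/*
+	 * Point the TQM PDE entries at each ring's backing pages; every
+	 * ring owns a block of MAX_TQM_ALLOC_BLK_SIZE PDE slots (the 'j'
+	 * stride below).
+	 */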
+	pbl_ptr = (u64 **)ctx->tqm_pde.pbl_ptr;
+	for (i = 0, j = 0; i < MAX_TQM_ALLOC_REQ;
+	     i++, j += MAX_TQM_ALLOC_BLK_SIZE) {
+		if (!ctx->tqm_tbl[i].max_elements)
+			continue;
+		if (fnz_idx == -1)
+			fnz_idx = i;
+		switch (ctx->tqm_tbl[i].level) {
+		case PBL_LVL_2:
+			for (k = 0; k < ctx->tqm_tbl[i].pbl[PBL_LVL_1].pg_count;
+			     k++)
+				pbl_ptr[PTR_PG(j + k)][PTR_IDX(j + k)] =
+				  cpu_to_le64(
+				    ctx->tqm_tbl[i].pbl[PBL_LVL_1].pg_map_arr[k]
+				    | PTU_PTE_VALID);
+			break;
+		case PBL_LVL_1:
+		case PBL_LVL_0:
+		default:
+			pbl_ptr[PTR_PG(j)][PTR_IDX(j)] = cpu_to_le64(
+				ctx->tqm_tbl[i].pbl[PBL_LVL_0].pg_map_arr[0] |
+				PTU_PTE_VALID);
+			break;
+		}
+	}
+	if (fnz_idx == -1)
+		fnz_idx = 0;
+	ctx->tqm_pde_level = ctx->tqm_tbl[fnz_idx].level == PBL_LVL_2 ?
+			     PBL_LVL_2 : ctx->tqm_tbl[fnz_idx].level + 1;
+
+	/* TIM Buffer */
+	ctx->tim_tbl.max_elements = ctx->qpc_count * 16;
+	rc = bnxt_qplib_alloc_init_hwq(pdev, &ctx->tim_tbl, NULL, 0,
+				       &ctx->tim_tbl.max_elements, 1,
+				       0, PAGE_SIZE, HWQ_TYPE_CTX);
+	if (rc)
+		goto fail;
+
+stats_alloc:
+	/* Stats */
+	rc = bnxt_qplib_alloc_stats_ctx(pdev, &ctx->stats);
+	if (rc)
+		goto fail;
+
+	return 0;
+
+fail:
+	bnxt_qplib_free_ctx(pdev, ctx);
+	return rc;
+}
+
+static void bnxt_qplib_free_sgid_tbl(struct bnxt_qplib_res *res,
+				     struct bnxt_qplib_sgid_tbl *sgid_tbl)
+{
+	kfree(sgid_tbl->tbl);
+	kfree(sgid_tbl->hw_id);
+	kfree(sgid_tbl->ctx);
+	sgid_tbl->tbl = NULL;
+	sgid_tbl->hw_id = NULL;
+	sgid_tbl->ctx = NULL;
+	sgid_tbl->max = 0;
+	sgid_tbl->active = 0;
+}
+
+static int bnxt_qplib_alloc_sgid_tbl(struct bnxt_qplib_res *res,
+				     struct bnxt_qplib_sgid_tbl *sgid_tbl,
+				     u16 max)
+{
+	sgid_tbl->tbl = kcalloc(max, sizeof(struct bnxt_qplib_gid), GFP_KERNEL);
+	if (!sgid_tbl->tbl)
+		return -ENOMEM;
+
+	sgid_tbl->hw_id = kcalloc(max, sizeof(*sgid_tbl->hw_id), GFP_KERNEL);
+	if (!sgid_tbl->hw_id)
+		goto out_free1;
+
+	sgid_tbl->ctx = kcalloc(max, sizeof(void *), GFP_KERNEL);
+	if (!sgid_tbl->ctx)
+		goto out_free2;
+
+	sgid_tbl->max = max;
+	return 0;
+out_free2:
+	kfree(sgid_tbl->hw_id);
+	sgid_tbl->hw_id = NULL;
+out_free1:
+	kfree(sgid_tbl->tbl);
+	sgid_tbl->tbl = NULL;
+	return -ENOMEM;
+};
+
+static void bnxt_qplib_cleanup_sgid_tbl(struct bnxt_qplib_res *res,
+					struct bnxt_qplib_sgid_tbl *sgid_tbl)
+{
+	int i;
+
+	memset(sgid_tbl->tbl, 0, sizeof(struct bnxt_qplib_gid) * sgid_tbl->max);
+	memset(sgid_tbl->hw_id, -1, sizeof(u16) * sgid_tbl->max);
+	sgid_tbl->active = 0;
+}
+
+static void bnxt_qplib_init_sgid_tbl(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+				     struct net_device *netdev)
+{
+	memset(sgid_tbl->tbl, 0, sizeof(struct bnxt_qplib_gid) * sgid_tbl->max);
+	memset(sgid_tbl->hw_id, -1, sizeof(u16) * sgid_tbl->max);
+}
+
+static void bnxt_qplib_free_pkey_tbl(struct bnxt_qplib_res *res,
+				     struct bnxt_qplib_pkey_tbl *pkey_tbl)
+{
+	if (!pkey_tbl->tbl)
+		dev_dbg(&res->pdev->dev, "QPLIB: PKEY tbl not present");
+	else
+		kfree(pkey_tbl->tbl);
+
+	pkey_tbl->tbl = NULL;
+	pkey_tbl->max = 0;
+	pkey_tbl->active = 0;
+}
+
+static int bnxt_qplib_alloc_pkey_tbl(struct bnxt_qplib_res *res,
+				     struct bnxt_qplib_pkey_tbl *pkey_tbl,
+				     u16 max)
+{
+	pkey_tbl->tbl = kcalloc(max, sizeof(u16), GFP_KERNEL);
+	if (!pkey_tbl->tbl)
+		return -ENOMEM;
+
+	pkey_tbl->max = max;
+	return 0;
+};
+
+static void bnxt_qplib_free_pd_tbl(struct bnxt_qplib_pd_tbl *pdt)
+{
+	kfree(pdt->tbl);
+	pdt->tbl = NULL;
+	pdt->max = 0;
+}
+
+static int bnxt_qplib_alloc_pd_tbl(struct bnxt_qplib_res *res,
+				   struct bnxt_qplib_pd_tbl *pdt,
+				   u32 max)
+{
+	u32 bytes;
+
+	bytes = max >> 3;
+	if (!bytes)
+		bytes = 1;
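+	/* One bit per PD id; as in the DPI table, a set bit means free */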
+	pdt->tbl = kmalloc(bytes, GFP_KERNEL);
+	if (!pdt->tbl)
+		return -ENOMEM;
+
+	pdt->max = max;
+	memset((u8 *)pdt->tbl, 0xFF, bytes);
+
+	return 0;
+}
+
+/* DPIs */
+int bnxt_qplib_alloc_dpi(struct bnxt_qplib_dpi_tbl *dpit,
+			 struct bnxt_qplib_dpi     *dpi,
+			 void                      *app)
+{
+	u32 bit_num;
+
+	bit_num = find_first_bit(dpit->tbl, dpit->max);
+	if (bit_num == dpit->max)
+		return -ENOMEM;
+
+	/* Found unused DPI */
+	clear_bit(bit_num, dpit->tbl);
+	dpit->app_tbl[bit_num] = app;
+
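+	/* Each DPI owns one PAGE_SIZE doorbell page within the DBR BAR */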
+	dpi->dpi = bit_num;
+	dpi->dbr = dpit->dbr_bar_reg_iomem + (bit_num * PAGE_SIZE);
+	dpi->umdbr = dpit->unmapped_dbr + (bit_num * PAGE_SIZE);
+
+	return 0;
+}
+
+int bnxt_qplib_dealloc_dpi(struct bnxt_qplib_res *res,
+			   struct bnxt_qplib_dpi_tbl *dpit,
+			   struct bnxt_qplib_dpi     *dpi)
+{
+	if (dpi->dpi >= dpit->max) {
+		dev_warn(&res->pdev->dev, "Invalid DPI? dpi = %d", dpi->dpi);
+		return -EINVAL;
+	}
+	if (test_and_set_bit(dpi->dpi, dpit->tbl)) {
+		dev_warn(&res->pdev->dev, "Freeing an unused DPI? dpi = %d",
+			 dpi->dpi);
+		return -EINVAL;
+	}
+	if (dpit->app_tbl)
+		dpit->app_tbl[dpi->dpi] = NULL;
+	memset(dpi, 0, sizeof(*dpi));
+
+	return 0;
+}
+
+static void bnxt_qplib_free_dpi_tbl(struct bnxt_qplib_res     *res,
+				    struct bnxt_qplib_dpi_tbl *dpit)
+{
+	kfree(dpit->tbl);
+	kfree(dpit->app_tbl);
+	if (dpit->dbr_bar_reg_iomem)
+		pci_iounmap(res->pdev, dpit->dbr_bar_reg_iomem);
+	memset(dpit, 0, sizeof(*dpit));
+}
+
+static int bnxt_qplib_alloc_dpi_tbl(struct bnxt_qplib_res     *res,
+				    struct bnxt_qplib_dpi_tbl *dpit,
+				    u32                       dbr_offset)
+{
+	u32 dbr_bar_reg = RCFW_DBR_PCI_BAR_REGION;
+	resource_size_t bar_reg_base;
+	u32 dbr_len, bytes;
+
+	if (dpit->dbr_bar_reg_iomem) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: DBR BAR region %d already mapped", dbr_bar_reg);
+		return -EALREADY;
+	}
+
+	bar_reg_base = pci_resource_start(res->pdev, dbr_bar_reg);
+	if (!bar_reg_base) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: BAR region %d resc start failed", dbr_bar_reg);
+		return -ENOMEM;
+	}
+
+	dbr_len = pci_resource_len(res->pdev, dbr_bar_reg) - dbr_offset;
+	if (!dbr_len || ((dbr_len & (PAGE_SIZE - 1)) != 0)) {
+		dev_err(&res->pdev->dev, "QPLIB: Invalid DBR length %d",
+			dbr_len);
+		return -ENOMEM;
+	}
+
+	dpit->dbr_bar_reg_iomem = ioremap_nocache(bar_reg_base + dbr_offset,
+						  dbr_len);
+	if (!dpit->dbr_bar_reg_iomem) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: FP: DBR BAR region %d mapping failed",
+			dbr_bar_reg);
+		return -ENOMEM;
+	}
+
+	dpit->unmapped_dbr = bar_reg_base + dbr_offset;
+	dpit->max = dbr_len / PAGE_SIZE;
+
+	dpit->app_tbl = kcalloc(dpit->max, sizeof(void *), GFP_KERNEL);
+	if (!dpit->app_tbl) {
+		pci_iounmap(res->pdev, dpit->dbr_bar_reg_iomem);
+		dev_err(&res->pdev->dev,
+			"QPLIB: DPI app tbl allocation failed");
+		return -ENOMEM;
+	}
+
+	bytes = dpit->max >> 3;
+	if (!bytes)
+		bytes = 1;
+
+	dpit->tbl = kmalloc(bytes, GFP_KERNEL);
+	if (!dpit->tbl) {
+		pci_iounmap(res->pdev, dpit->dbr_bar_reg_iomem);
+		kfree(dpit->app_tbl);
+		dpit->app_tbl = NULL;
+		dev_err(&res->pdev->dev,
+			"QPLIB: DPI tbl allocation failed for size = %d",
+			bytes);
+		return -ENOMEM;
+	}
+
+	memset((u8 *)dpit->tbl, 0xFF, bytes);
+
+	return 0;
+}
+
+/* PKEYs */
+void bnxt_qplib_cleanup_pkey_tbl(struct bnxt_qplib_pkey_tbl *pkey_tbl)
+{
+	memset(pkey_tbl->tbl, 0, sizeof(u16) * pkey_tbl->max);
+	pkey_tbl->active = 0;
+}
+
+static void bnxt_qplib_init_pkey_tbl(struct bnxt_qplib_res *res,
+				     struct bnxt_qplib_pkey_tbl *pkey_tbl)
+{
+	u16 pkey = 0xFFFF;
+
+	memset(pkey_tbl->tbl, 0, sizeof(u16) * pkey_tbl->max);
+
+	/* pkey default = 0xFFFF */
+	bnxt_qplib_add_pkey(res, pkey_tbl, &pkey, false);
+}
+
+/* Stats */
+static void bnxt_qplib_free_stats_ctx(struct pci_dev *pdev,
+				      struct bnxt_qplib_stats *stats)
+{
+	if (stats->dma) {
+		dma_free_coherent(&pdev->dev, stats->size,
+				  stats->dma, stats->dma_map);
+	}
+	memset(stats, 0, sizeof(*stats));
+	stats->fw_id = -1;
+}
+
+static int bnxt_qplib_alloc_stats_ctx(struct pci_dev *pdev,
+				      struct bnxt_qplib_stats *stats)
+{
+	memset(stats, 0, sizeof(*stats));
+	stats->fw_id = -1;
+	stats->size = sizeof(struct ctx_hw_stats);
+	stats->dma = dma_alloc_coherent(&pdev->dev, stats->size,
+					&stats->dma_map, GFP_KERNEL);
+	if (!stats->dma) {
+		dev_err(&pdev->dev, "QPLIB: Stats DMA allocation failed");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+void bnxt_qplib_cleanup_res(struct bnxt_qplib_res *res)
+{
+	bnxt_qplib_cleanup_pkey_tbl(&res->pkey_tbl);
+	bnxt_qplib_cleanup_sgid_tbl(res, &res->sgid_tbl);
+}
+
+int bnxt_qplib_init_res(struct bnxt_qplib_res *res)
+{
+	bnxt_qplib_init_sgid_tbl(&res->sgid_tbl, res->netdev);
+	bnxt_qplib_init_pkey_tbl(res, &res->pkey_tbl);
+
+	return 0;
+}
+
+void bnxt_qplib_free_res(struct bnxt_qplib_res *res)
+{
+	bnxt_qplib_free_pkey_tbl(res, &res->pkey_tbl);
+	bnxt_qplib_free_sgid_tbl(res, &res->sgid_tbl);
+	bnxt_qplib_free_pd_tbl(&res->pd_tbl);
+	bnxt_qplib_free_dpi_tbl(res, &res->dpi_tbl);
+
+	res->netdev = NULL;
+	res->pdev = NULL;
+}
+
+int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
+			 struct net_device *netdev,
+			 struct bnxt_qplib_dev_attr *dev_attr)
+{
+	int rc = 0;
+
+	res->pdev = pdev;
+	res->netdev = netdev;
+
+	rc = bnxt_qplib_alloc_sgid_tbl(res, &res->sgid_tbl, dev_attr->max_sgid);
+	if (rc)
+		goto fail;
+
+	rc = bnxt_qplib_alloc_pkey_tbl(res, &res->pkey_tbl, dev_attr->max_pkey);
+	if (rc)
+		goto fail;
+
+	rc = bnxt_qplib_alloc_pd_tbl(res, &res->pd_tbl, dev_attr->max_pd);
+	if (rc)
+		goto fail;
+
+	rc = bnxt_qplib_alloc_dpi_tbl(res, &res->dpi_tbl, dev_attr->l2_db_size);
+	if (rc)
+		goto fail;
+
+	return 0;
+fail:
+	bnxt_qplib_free_res(res);
+	return rc;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
index 5fc4107..ce122cf 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
@@ -39,4 +39,169 @@
 #ifndef __BNXT_QPLIB_RES_H__
 #define __BNXT_QPLIB_RES_H__
 
+extern const struct bnxt_qplib_gid bnxt_qplib_gid_zero;
+
+#define PTR_CNT_PER_PG		(PAGE_SIZE / sizeof(void *))
+#define PTR_MAX_IDX_PER_PG	(PTR_CNT_PER_PG - 1)
+#define PTR_PG(x)		(((x) & ~PTR_MAX_IDX_PER_PG) / PTR_CNT_PER_PG)
+#define PTR_IDX(x)		((x) & PTR_MAX_IDX_PER_PG)
+
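+/* Ring sizes are a power of 2, so masking maps a free-running index
+ * onto a ring slot without a divide.
+ */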
+#define HWQ_CMP(idx, hwq)	((idx) & ((hwq)->max_elements - 1))
+
+enum bnxt_qplib_hwq_type {
+	HWQ_TYPE_CTX,
+	HWQ_TYPE_QUEUE,
+	HWQ_TYPE_L2_CMPL
+};
+
+#define MAX_PBL_LVL_0_PGS		1
+#define MAX_PBL_LVL_1_PGS		512
+#define MAX_PBL_LVL_1_PGS_SHIFT		9
+#define MAX_PBL_LVL_1_PGS_FOR_LVL_2	256
+#define MAX_PBL_LVL_2_PGS		(256 * 512)
+
+enum bnxt_qplib_pbl_lvl {
+	PBL_LVL_0,
+	PBL_LVL_1,
+	PBL_LVL_2,
+	PBL_LVL_MAX
+};
+
+#define ROCE_PG_SIZE_4K		(4 * 1024)
+#define ROCE_PG_SIZE_8K		(8 * 1024)
+#define ROCE_PG_SIZE_64K	(64 * 1024)
+#define ROCE_PG_SIZE_2M		(2 * 1024 * 1024)
+#define ROCE_PG_SIZE_8M		(8 * 1024 * 1024)
+#define ROCE_PG_SIZE_1G		(1024 * 1024 * 1024)
+
+struct bnxt_qplib_pbl {
+	u32				pg_count;
+	u32				pg_size;
+	void				**pg_arr;
+	dma_addr_t			*pg_map_arr;
+};
+
+struct bnxt_qplib_hwq {
+	struct pci_dev			*pdev;
+	/* lock to protect qplib_hwq */
+	spinlock_t			lock;
+	struct bnxt_qplib_pbl		pbl[PBL_LVL_MAX];
+	enum bnxt_qplib_pbl_lvl		level;		/* 0, 1, or 2 */
+	/* ptr for easy access to the PBL entries */
+	void				**pbl_ptr;
+	/* ptr for easy access to the dma_addr */
+	dma_addr_t			*pbl_dma_ptr;
+	u32				max_elements;
+	u16				element_size;	/* Size of each entry */
+
+	u32				prod;		/* raw */
+	u32				cons;		/* raw */
+	u8				cp_bit;
+	u8				is_user;
+};
+
+/* Tables */
+struct bnxt_qplib_pd_tbl {
+	unsigned long			*tbl;
+	u32				max;
+};
+
+struct bnxt_qplib_sgid_tbl {
+	struct bnxt_qplib_gid		*tbl;
+	u16				*hw_id;
+	u16				max;
+	u16				active;
+	void				*ctx;
+};
+
+struct bnxt_qplib_pkey_tbl {
+	u16				*tbl;
+	u16				max;
+	u16				active;
+};
+
+struct bnxt_qplib_dpi {
+	u32				dpi;
+	void __iomem			*dbr;
+	u64				umdbr;
+};
+
+struct bnxt_qplib_dpi_tbl {
+	void				**app_tbl;
+	unsigned long			*tbl;
+	u16				max;
+	void __iomem			*dbr_bar_reg_iomem;
+	u64				unmapped_dbr;
+};
+
+struct bnxt_qplib_stats {
+	dma_addr_t			dma_map;
+	void				*dma;
+	u32				size;
+	u32				fw_id;
+};
+
+struct bnxt_qplib_vf_res {
+	u32 max_qp_per_vf;
+	u32 max_mrw_per_vf;
+	u32 max_srq_per_vf;
+	u32 max_cq_per_vf;
+	u32 max_gid_per_vf;
+};
+
+#define BNXT_QPLIB_MAX_QP_CTX_ENTRY_SIZE	448
+#define BNXT_QPLIB_MAX_SRQ_CTX_ENTRY_SIZE	64
+#define BNXT_QPLIB_MAX_CQ_CTX_ENTRY_SIZE	64
+#define BNXT_QPLIB_MAX_MRW_CTX_ENTRY_SIZE	128
+
+struct bnxt_qplib_ctx {
+	u32				qpc_count;
+	struct bnxt_qplib_hwq		qpc_tbl;
+	u32				mrw_count;
+	struct bnxt_qplib_hwq		mrw_tbl;
+	u32				srqc_count;
+	struct bnxt_qplib_hwq		srqc_tbl;
+	u32				cq_count;
+	struct bnxt_qplib_hwq		cq_tbl;
+	struct bnxt_qplib_hwq		tim_tbl;
+#define MAX_TQM_ALLOC_REQ		32
+#define MAX_TQM_ALLOC_BLK_SIZE		8
+	u8				tqm_count[MAX_TQM_ALLOC_REQ];
+	struct bnxt_qplib_hwq		tqm_pde;
+	u32				tqm_pde_level;
+	struct bnxt_qplib_hwq		tqm_tbl[MAX_TQM_ALLOC_REQ];
+	struct bnxt_qplib_stats		stats;
+	struct bnxt_qplib_vf_res	vf_res;
+};
+
+struct bnxt_qplib_res {
+	struct pci_dev			*pdev;
+	struct net_device		*netdev;
+
+	struct bnxt_qplib_rcfw		*rcfw;
+
+	struct bnxt_qplib_pd_tbl	pd_tbl;
+	struct bnxt_qplib_sgid_tbl	sgid_tbl;
+	struct bnxt_qplib_pkey_tbl	pkey_tbl;
+	struct bnxt_qplib_dpi_tbl	dpi_tbl;
+};
+
+struct bnxt_qplib_dev_attr;
+
+void bnxt_qplib_free_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq);
+int bnxt_qplib_alloc_init_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq,
+			      struct scatterlist *sl, int nmap, u32 *elements,
+			      u32 elements_per_page, u32 aux, u32 pg_size,
+			      enum bnxt_qplib_hwq_type hwq_type);
+void bnxt_qplib_cleanup_res(struct bnxt_qplib_res *res);
+int bnxt_qplib_init_res(struct bnxt_qplib_res *res);
+void bnxt_qplib_free_res(struct bnxt_qplib_res *res);
+int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
+			 struct net_device *netdev,
+			 struct bnxt_qplib_dev_attr *dev_attr);
+void bnxt_qplib_free_ctx(struct pci_dev *pdev,
+			 struct bnxt_qplib_ctx *ctx);
+int bnxt_qplib_alloc_ctx(struct pci_dev *pdev,
+			 struct bnxt_qplib_ctx *ctx,
+			 bool virt_fn);
 #endif /* __BNXT_QPLIB_RES_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
index 667c8e1..b3d6b46 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
@@ -35,3 +35,166 @@
  *
  * Description: Slow Path Operators
  */
+
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+
+#include "bnxt_re_hsi.h"
+
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_rcfw.h"
+#include "bnxt_qplib_sp.h"
+/* Device */
+int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+			    struct bnxt_qplib_dev_attr *attr)
+{
+	struct cmdq_query_func req;
+	struct creq_query_func_resp *resp;
+	struct creq_query_func_resp_sb *sb;
+	u16 cmd_flags = 0;
+	u32 temp;
+	u8 *tqm_alloc;
+	int i;
+
+	RCFW_CMD_PREP(req, QUERY_FUNC, cmd_flags);
+
+	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
+	resp = (struct creq_query_func_resp *)
+		bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, (void **)&sb,
+					     0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: QUERY_FUNC send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: QUERY_FUNC timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: QUERY_FUNC failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	/* Extract the context from the side buffer */
+	attr->max_qp = le32_to_cpu(sb->max_qp);
+	attr->max_qp_rd_atom =
+		sb->max_qp_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ?
+		BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_rd_atom;
+	attr->max_qp_init_rd_atom =
+		sb->max_qp_init_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ?
+		BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_init_rd_atom;
+	attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr);
+	attr->max_qp_sges = sb->max_sge;
+	attr->max_cq = le32_to_cpu(sb->max_cq);
+	attr->max_cq_wqes = le32_to_cpu(sb->max_cqe);
+	attr->max_cq_sges = attr->max_qp_sges;
+	attr->max_mr = le32_to_cpu(sb->max_mr);
+	attr->max_mw = le32_to_cpu(sb->max_mw);
+
+	attr->max_mr_size = le64_to_cpu(sb->max_mr_size);
+	attr->max_pd = 64 * 1024;
+	attr->max_raw_ethy_qp = le32_to_cpu(sb->max_raw_eth_qp);
+	attr->max_ah = le32_to_cpu(sb->max_ah);
+
+	attr->max_fmr = le32_to_cpu(sb->max_fmr);
+	attr->max_map_per_fmr = le32_to_cpu(sb->max_map_per_fmr);
+
+	attr->max_srq = le16_to_cpu(sb->max_srq);
+	attr->max_srq_wqes = le32_to_cpu(sb->max_srq_wr) - 1;
+	attr->max_srq_sges = sb->max_srq_sge;
+	/* Bono only reports 1 PKEY for now, but it can support > 1 */
+	attr->max_pkey = le32_to_cpu(sb->max_pkeys);
+
+	attr->max_inline_data = le32_to_cpu(sb->max_inline_data);
+	attr->l2_db_size = (sb->l2_db_space_size + 1) * PAGE_SIZE;
+	attr->max_sgid = le32_to_cpu(sb->max_gid);
+
+	strlcpy(attr->fw_ver, "20.6.28.0", sizeof(attr->fw_ver));
+
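+	/* Each 32-bit word packs four per-ring TQM allocation requests */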
+	for (i = 0; i < MAX_TQM_ALLOC_REQ / 4; i++) {
+		temp = le32_to_cpu(sb->tqm_alloc_reqs[i]);
+		tqm_alloc = (u8 *)&temp;
+		attr->tqm_alloc_reqs[i * 4] = *tqm_alloc;
+		attr->tqm_alloc_reqs[i * 4 + 1] = *(++tqm_alloc);
+		attr->tqm_alloc_reqs[i * 4 + 2] = *(++tqm_alloc);
+		attr->tqm_alloc_reqs[i * 4 + 3] = *(++tqm_alloc);
+	}
+	return 0;
+}
+
+int bnxt_qplib_del_pkey(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 *pkey,
+			bool update)
+{
+	int i, rc = 0;
+
+	if (!pkey_tbl) {
+		dev_err(&res->pdev->dev, "QPLIB: PKEY table not allocated");
+		return -EINVAL;
+	}
+
+	/* Do we need a pkey_lock here? */
+	if (!pkey_tbl->active) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: PKEY table has no active entries");
+		return -ENOMEM;
+	}
+	for (i = 0; i < pkey_tbl->max; i++) {
+		if (!memcmp(&pkey_tbl->tbl[i], pkey, sizeof(*pkey)))
+			break;
+	}
+	if (i == pkey_tbl->max) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: PKEY 0x%04x not found in the pkey table",
+			*pkey);
+		return -ENOMEM;
+	}
+	memset(&pkey_tbl->tbl[i], 0, sizeof(*pkey));
+	pkey_tbl->active--;
+
+	/* unlock */
+	return rc;
+}
+
+int bnxt_qplib_add_pkey(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 *pkey,
+			bool update)
+{
+	int i, free_idx, rc = 0;
+
+	if (!pkey_tbl) {
+		dev_err(&res->pdev->dev, "QPLIB: PKEY table not allocated");
+		return -EINVAL;
+	}
+
+	/* Do we need a pkey_lock here? */
+	if (pkey_tbl->active == pkey_tbl->max) {
+		dev_err(&res->pdev->dev, "QPLIB: PKEY table is full");
+		return -ENOMEM;
+	}
+	free_idx = pkey_tbl->max;
+	for (i = 0; i < pkey_tbl->max; i++) {
+		if (!memcmp(&pkey_tbl->tbl[i], pkey, sizeof(*pkey)))
+			return -EALREADY;
+		else if (!pkey_tbl->tbl[i] && free_idx == pkey_tbl->max)
+			free_idx = i;
+	}
+	if (free_idx == pkey_tbl->max) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: PKEY table is FULL but count is not MAX??");
+		return -ENOMEM;
+	}
+	/* Add PKEY to the pkey_tbl */
+	memcpy(&pkey_tbl->tbl[free_idx], pkey, sizeof(*pkey));
+	pkey_tbl->active++;
+
+	/* unlock */
+	return rc;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
index 90901c1..0b5adda 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -40,4 +40,46 @@
 #ifndef __BNXT_QPLIB_SP_H__
 #define __BNXT_QPLIB_SP_H__
 
+struct bnxt_qplib_dev_attr {
+	char				fw_ver[32];
+	u16				max_sgid;
+	u16				max_mrw;
+	u32				max_qp;
+#define BNXT_QPLIB_MAX_OUT_RD_ATOM	126
+	u32				max_qp_rd_atom;
+	u32				max_qp_init_rd_atom;
+	u32				max_qp_wqes;
+	u32				max_qp_sges;
+	u32				max_cq;
+	u32				max_cq_wqes;
+	u32				max_cq_sges;
+	u32				max_mr;
+	u64				max_mr_size;
+	u32				max_pd;
+	u32				max_mw;
+	u32				max_raw_ethy_qp;
+	u32				max_ah;
+	u32				max_fmr;
+	u32				max_map_per_fmr;
+	u32				max_srq;
+	u32				max_srq_wqes;
+	u32				max_srq_sges;
+	u32				max_pkey;
+	u32				max_inline_data;
+	u32				l2_db_size;
+	u8				tqm_alloc_reqs[MAX_TQM_ALLOC_REQ];
+};
+
+struct bnxt_qplib_gid {
+	u8				data[16];
+};
+
+int bnxt_qplib_del_pkey(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 *pkey,
+			bool update);
+int bnxt_qplib_add_pkey(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 *pkey,
+			bool update);
+int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+			    struct bnxt_qplib_dev_attr *attr);
 #endif /* __BNXT_QPLIB_SP_H__*/
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index 1fe4cb3..20abc70 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -45,6 +45,11 @@
 #define BNXT_RE_REF_WAIT_COUNT		10
 #define BNXT_RE_DESC	"Broadcom NetXtreme-C/E RoCE Driver"
 
+#define BNXT_RE_MAX_QPC_COUNT		(64 * 1024)
+#define BNXT_RE_MAX_MRW_COUNT		(64 * 1024)
+#define BNXT_RE_MAX_SRQC_COUNT		(64 * 1024)
+#define BNXT_RE_MAX_CQ_COUNT		(64 * 1024)
+
 struct bnxt_re_work {
 	struct work_struct	work;
 	unsigned long		event;
@@ -54,6 +59,7 @@ struct bnxt_re_work {
 
 #define BNXT_RE_MIN_MSIX		2
 #define BNXT_RE_MAX_MSIX		16
+#define BNXT_RE_AEQ_IDX			0
 struct bnxt_re_dev {
 	struct ib_device		ibdev;
 	struct list_head		list;
@@ -72,6 +78,15 @@ struct bnxt_re_dev {
 
 	int				id;
 
+	/* RCFW Channel */
+	struct bnxt_qplib_rcfw		rcfw;
+
+	/* Device Resources */
+	struct bnxt_qplib_dev_attr	dev_attr;
+	struct bnxt_qplib_ctx		qplib_ctx;
+	struct bnxt_qplib_res		qplib_res;
+	struct bnxt_qplib_dpi		dpi_privileged;
+
 	atomic_t			qp_count;
 	struct mutex			qp_lock;	/* protect qp list */
 	struct list_head		qp_list;
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 029824a..add74ba 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -54,6 +54,10 @@
 
 #include "bnxt_ulp.h"
 #include "bnxt_re_hsi.h"
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_sp.h"
+#include "bnxt_qplib_fp.h"
+#include "bnxt_qplib_rcfw.h"
 #include "bnxt_re.h"
 #include "bnxt.h"
 static char version[] =
@@ -100,7 +104,6 @@ static inline void bnxt_re_put(struct bnxt_re_dev *rdev)
 {
 	atomic_dec(&rdev->ref_count);
 }
-
 /* RoCE -> Net driver */
 
 /* Driver registration routines used to let the networking driver (bnxt_en)
@@ -185,6 +188,160 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
 	return rc;
 }
 
+static void bnxt_re_init_hwrm_hdr(struct bnxt_re_dev *rdev, struct input *hdr,
+				  u16 opcd, u16 crid, u16 trid)
+{
+	hdr->req_type = cpu_to_le16(opcd);
+	hdr->cmpl_ring = cpu_to_le16(crid);
+	hdr->target_id = cpu_to_le16(trid);
+}
+
+static void bnxt_re_fill_fw_msg(struct bnxt_fw_msg *fw_msg, void *msg,
+				int msg_len, void *resp, int resp_max_len,
+				int timeout)
+{
+	fw_msg->msg = msg;
+	fw_msg->msg_len = msg_len;
+	fw_msg->resp = resp;
+	fw_msg->resp_max_len = resp_max_len;
+	fw_msg->timeout = timeout;
+}
+
+static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+				 bool lock_wait)
+{
+	struct bnxt_en_dev *en_dev = rdev->en_dev;
+	struct hwrm_ring_free_input req = {0};
+	struct hwrm_ring_free_output resp;
+	struct bnxt_fw_msg fw_msg;
+	bool do_unlock = false;
+	int rc = -EINVAL;
+
+	if (!en_dev)
+		return rc;
+
+	memset(&fw_msg, 0, sizeof(fw_msg));
+	if (lock_wait) {
+		rtnl_lock();
+		do_unlock = true;
+	}
+
+	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1);
+	req.ring_type = RING_ALLOC_REQ_RING_TYPE_CMPL;
+	req.ring_id = cpu_to_le16(fw_ring_id);
+	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	if (rc)
+		dev_err(rdev_to_dev(rdev),
+			"Failed to free HW ring:%d :%#x", req.ring_id, rc);
+	if (do_unlock)
+		rtnl_unlock();
+	return rc;
+}
+
+static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+				  int pages, int type, u32 ring_mask,
+				  u32 map_index, u16 *fw_ring_id)
+{
+	struct bnxt_en_dev *en_dev = rdev->en_dev;
+	struct hwrm_ring_alloc_input req = {0};
+	struct hwrm_ring_alloc_output resp;
+	struct bnxt_fw_msg fw_msg;
+	int rc = -EINVAL;
+
+	if (!en_dev)
+		return rc;
+
+	memset(&fw_msg, 0, sizeof(fw_msg));
+	rtnl_lock();
+	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1);
+	req.enables = 0;
+	req.page_tbl_addr =  cpu_to_le64(dma_arr[0]);
+	if (pages > 1) {
+		/* Page size is in log2 units */
+		req.page_size = BNXT_PAGE_SHIFT;
+		req.page_tbl_depth = 1;
+	}
+	req.fbo = 0;
+	/* Association of ring index with doorbell index and MSIX number */
+	req.logical_id = cpu_to_le16(map_index);
+	req.length = cpu_to_le32(ring_mask + 1);
+	req.ring_type = RING_ALLOC_REQ_RING_TYPE_CMPL;
+	req.int_mode = RING_ALLOC_REQ_INT_MODE_MSIX;
+	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	if (!rc)
+		*fw_ring_id = le16_to_cpu(resp.ring_id);
+
+	rtnl_unlock();
+	return rc;
+}
+
+static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+				      u32 fw_stats_ctx_id, bool lock_wait)
+{
+	struct bnxt_en_dev *en_dev = rdev->en_dev;
+	struct hwrm_stat_ctx_free_input req = {0};
+	struct bnxt_fw_msg fw_msg;
+	bool do_unlock = false;
+	int rc = -EINVAL;
+
+	if (!en_dev)
+		return rc;
+
+	memset(&fw_msg, 0, sizeof(fw_msg));
+	if (lock_wait) {
+		rtnl_lock();
+		do_unlock = true;
+	}
+
+	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1);
+	req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id);
+	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&req,
+			    sizeof(req), DFLT_HWRM_CMD_TIMEOUT);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	if (rc)
+		dev_err(rdev_to_dev(rdev),
+			"Failed to free HW stats context %#x", rc);
+
+	if (do_unlock)
+		rtnl_unlock();
+	return rc;
+}
+
+static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+				       dma_addr_t dma_map,
+				       u32 *fw_stats_ctx_id)
+{
+	struct hwrm_stat_ctx_alloc_output resp = {0};
+	struct hwrm_stat_ctx_alloc_input req = {0};
+	struct bnxt_en_dev *en_dev = rdev->en_dev;
+	struct bnxt_fw_msg fw_msg;
+	int rc = -EINVAL;
+
+	*fw_stats_ctx_id = INVALID_STATS_CTX_ID;
+
+	if (!en_dev)
+		return rc;
+
+	memset(&fw_msg, 0, sizeof(fw_msg));
+	rtnl_lock();
+
+	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
+	req.update_period_ms = cpu_to_le32(1000);
+	req.stats_dma_addr = cpu_to_le64(dma_map);
+	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	if (!rc)
+		*fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id);
+
+	rtnl_unlock();
+	return rc;
+}
+
 /* Device */
 
 static bool is_bnxt_re_dev(struct net_device *netdev)
@@ -248,6 +405,64 @@ static struct bnxt_en_dev *bnxt_re_dev_probe(struct net_device *netdev)
 	return en_dev;
 }
 
+static void bnxt_re_unregister_ib(struct bnxt_re_dev *rdev)
+{
+	ib_unregister_device(&rdev->ibdev);
+}
+
+static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
+{
+	struct ib_device *ibdev = &rdev->ibdev;
+
+	/* ib device init */
+	ibdev->owner = THIS_MODULE;
+	ibdev->node_type = RDMA_NODE_IB_CA;
+	strlcpy(ibdev->name, "bnxt_re%d", IB_DEVICE_NAME_MAX);
+	strlcpy(ibdev->node_desc, BNXT_RE_DESC " HCA",
+		strlen(BNXT_RE_DESC) + 5);
+	ibdev->phys_port_cnt = 1;
+
+	ibdev->num_comp_vectors	= 1;
+	ibdev->dma_device = &rdev->en_dev->pdev->dev;
+	return ib_register_device(ibdev, NULL);
+}
+
+static ssize_t show_rev(struct device *device, struct device_attribute *attr,
+			char *buf)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(device, ibdev.dev);
+
+	return scnprintf(buf, PAGE_SIZE, "0x%x\n", rdev->en_dev->pdev->vendor);
+}
+
+static ssize_t show_fw_ver(struct device *device, struct device_attribute *attr,
+			   char *buf)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(device, ibdev.dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%s\n", rdev->dev_attr.fw_ver);
+}
+
+static ssize_t show_hca(struct device *device, struct device_attribute *attr,
+			char *buf)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(device, ibdev.dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%s\n", rdev->ibdev.node_desc);
+}
+
+static DEVICE_ATTR(hw_rev, 0444, show_rev, NULL);
+static DEVICE_ATTR(fw_rev, 0444, show_fw_ver, NULL);
+static DEVICE_ATTR(hca_type, 0444, show_hca, NULL);
+
+static struct device_attribute *bnxt_re_attributes[] = {
+	&dev_attr_hw_rev,
+	&dev_attr_fw_rev,
+	&dev_attr_hca_type
+};
+
 static void bnxt_re_dev_remove(struct bnxt_re_dev *rdev)
 {
 	int i = BNXT_RE_REF_WAIT_COUNT;
@@ -310,10 +525,109 @@ static struct bnxt_re_dev *bnxt_re_dev_add(struct net_device *netdev,
 	return rdev;
 }
 
+static int bnxt_re_aeq_handler(struct bnxt_qplib_rcfw *rcfw,
+			       struct creq_func_event *aeqe)
+{
+	switch (aeqe->event) {
+	case CREQ_FUNC_EVENT_EVENT_TX_WQE_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_TX_DATA_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_RX_WQE_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_RX_DATA_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CQ_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_TQM_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCQ_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCS_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCC_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_CFCM_ERROR:
+		break;
+	case CREQ_FUNC_EVENT_EVENT_TIM_ERROR:
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
+{
+	if (rdev->qplib_res.rcfw)
+		bnxt_qplib_cleanup_res(&rdev->qplib_res);
+}
+
+static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
+{
+	return bnxt_qplib_init_res(&rdev->qplib_res);
+}
+
+static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait)
+{
+	if (rdev->qplib_res.rcfw) {
+		bnxt_qplib_free_res(&rdev->qplib_res);
+		rdev->qplib_res.rcfw = NULL;
+	}
+}
+
+static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+{
+	int rc = 0;
+
+	/* Configure and allocate resources for qplib */
+	rdev->qplib_res.rcfw = &rdev->rcfw;
+	rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
+	if (rc)
+		goto fail;
+
+	rc = bnxt_qplib_alloc_res(&rdev->qplib_res, rdev->en_dev->pdev,
+				  rdev->netdev, &rdev->dev_attr);
+	if (rc)
+		goto fail;
+
+	return 0;
+
+fail:
+	rdev->qplib_res.rcfw = NULL;
+	return rc;
+}
+
 static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
 {
-	int rc;
+	int i, rc;
+
+	if (test_and_clear_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags)) {
+		for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++)
+			device_remove_file(&rdev->ibdev.dev,
+					   bnxt_re_attributes[i]);
+		/* Cleanup ib dev */
+		bnxt_re_unregister_ib(rdev);
+	}
+	bnxt_re_cleanup_res(rdev);
+	bnxt_re_free_res(rdev, lock_wait);
 
+	if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) {
+		rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw);
+		if (rc)
+			dev_warn(rdev_to_dev(rdev),
+				 "Failed to deinitialize RCFW: %#x", rc);
+		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id,
+					   lock_wait);
+		bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+		bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait);
+		bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+	}
 	if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) {
 		rc = bnxt_re_free_msix(rdev, lock_wait);
 		if (rc)
@@ -328,6 +642,19 @@ static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
 	}
 }
 
+static void bnxt_re_set_resource_limits(struct bnxt_re_dev *rdev)
+{
+	u32 i;
+
+	rdev->qplib_ctx.qpc_count = BNXT_RE_MAX_QPC_COUNT;
+	rdev->qplib_ctx.mrw_count = BNXT_RE_MAX_MRW_COUNT;
+	rdev->qplib_ctx.srqc_count = BNXT_RE_MAX_SRQC_COUNT;
+	rdev->qplib_ctx.cq_count = BNXT_RE_MAX_CQ_COUNT;
+	for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
+		rdev->qplib_ctx.tqm_count[i] =
+		rdev->dev_attr.tqm_alloc_reqs[i];
+}
+
 static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 {
 	int i, j, rc;
@@ -348,6 +675,103 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 	}
 	set_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags);
 
+	/* Establish RCFW Communication Channel to initialize the context
+	 * memory for the function and all child VFs
+	 */
+	rc = bnxt_qplib_alloc_rcfw_channel(rdev->en_dev->pdev, &rdev->rcfw);
+	if (rc)
+		goto fail;
+
+	rc = bnxt_re_net_ring_alloc
+			(rdev, rdev->rcfw.creq.pbl[PBL_LVL_0].pg_map_arr,
+			 rdev->rcfw.creq.pbl[rdev->rcfw.creq.level].pg_count,
+			 HWRM_RING_ALLOC_CMPL, BNXT_QPLIB_CREQE_MAX_CNT - 1,
+			 rdev->msix_entries[BNXT_RE_AEQ_IDX].ring_idx,
+			 &rdev->rcfw.creq_ring_id);
+	if (rc) {
+		pr_err("Failed to allocate CREQ: %#x\n", rc);
+		goto free_rcfw;
+	}
+	rc = bnxt_qplib_enable_rcfw_channel
+				(rdev->en_dev->pdev, &rdev->rcfw,
+				 rdev->msix_entries[BNXT_RE_AEQ_IDX].vector,
+				 rdev->msix_entries[BNXT_RE_AEQ_IDX].db_offset,
+				 0, &bnxt_re_aeq_handler);
+	if (rc) {
+		pr_err("Failed to enable RCFW channel: %#x\n", rc);
+		goto free_ring;
+	}
+
+	rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
+	if (rc)
+		goto disable_rcfw;
+	bnxt_re_set_resource_limits(rdev);
+
+	rc = bnxt_qplib_alloc_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx, 0);
+	if (rc) {
+		pr_err("Failed to allocate QPLIB context: %#x\n", rc);
+		goto disable_rcfw;
+	}
+	rc = bnxt_re_net_stats_ctx_alloc(rdev,
+					 rdev->qplib_ctx.stats.dma_map,
+					 &rdev->qplib_ctx.stats.fw_id);
+	if (rc) {
+		pr_err("Failed to allocate stats context: %#x\n", rc);
+		goto free_ctx;
+	}
+
+	rc = bnxt_qplib_init_rcfw(&rdev->rcfw, &rdev->qplib_ctx, 0);
+	if (rc) {
+		pr_err("Failed to initialize RCFW: %#x\n", rc);
+		goto free_sctx;
+	}
+	set_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags);
+
+	/* Resources based on the 'new' device caps */
+	rc = bnxt_re_alloc_res(rdev);
+	if (rc) {
+		pr_err("Failed to allocate resources: %#x\n", rc);
+		goto fail;
+	}
+	rc = bnxt_re_init_res(rdev);
+	if (rc) {
+		pr_err("Failed to initialize resources: %#x\n", rc);
+		goto fail;
+	}
+
+	/* Register ib dev */
+	rc = bnxt_re_register_ib(rdev);
+	if (rc) {
+		pr_err("Failed to register with IB: %#x\n", rc);
+		goto fail;
+	}
+	for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) {
+		rc = device_create_file(&rdev->ibdev.dev,
+					bnxt_re_attributes[i]);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev),
+				"Failed to create IB sysfs: %#x", rc);
+			/* Must clean up all created device files */
+			for (j = 0; j < i; j++)
+				device_remove_file(&rdev->ibdev.dev,
+						   bnxt_re_attributes[j]);
+			bnxt_re_unregister_ib(rdev);
+			goto fail;
+		}
+	}
+	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+
+	return 0;
+free_sctx:
+	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true);
+free_ctx:
+	bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+disable_rcfw:
+	bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+free_ring:
+	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true);
+free_rcfw:
+	bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
 fail:
 	bnxt_re_ib_unreg(rdev, true);
 	return rc;
@@ -392,7 +816,6 @@ static void bnxt_re_remove_one(struct bnxt_re_dev *rdev)
 {
 	pci_dev_put(rdev->en_dev->pdev);
 }
-
 /* Handle all deferred netevents tasks */
 static void bnxt_re_task(struct work_struct *work)
 {
@@ -415,14 +838,10 @@ static void bnxt_re_task(struct work_struct *work)
 				"Failed to register with IB: %#x", rc);
 			break;
 	case NETDEV_UP:
-
 		break;
 	case NETDEV_DOWN:
-
 		break;
-
 	case NETDEV_CHANGE:
-
 		break;
 	default:
 		break;
-- 
2.5.5


* [PATCH V2  05/22] bnxt_re: Adding Notification Queue support
From: Selvin Xavier @ 2016-12-09  6:47 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

Completion notifications are handled by the Notification Queue (NQ). This
patch configures the NQs and sets up the doorbell page mapping.

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c  | 160 ++++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h  |  60 ++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h |   6 +
 drivers/infiniband/hw/bnxtre/bnxt_re.h        |   8 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c   |  52 ++++++++-
 5 files changed, 285 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
index 36c4b81..9e63032 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -35,3 +35,163 @@
  *
  * Description: Fast Path Operators
  */
+
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/pci.h>
+
+#include "bnxt_re_hsi.h"
+
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_rcfw.h"
+#include "bnxt_qplib_sp.h"
+#include "bnxt_qplib_fp.h"
+
+static void bnxt_qplib_service_nq(unsigned long data)
+{
+	struct bnxt_qplib_nq *nq = (struct bnxt_qplib_nq *)data;
+	struct bnxt_qplib_hwq *hwq = &nq->hwq;
+	struct nq_base *nqe, **nq_ptr;
+	u32 sw_cons, raw_cons;
+	u32 type;
+	int budget = nq->budget;
+
+	/* Service the NQ until empty */
+	raw_cons = hwq->cons;
+	while (budget--) {
+		sw_cons = HWQ_CMP(raw_cons, hwq);
+		nq_ptr = (struct nq_base **)hwq->pbl_ptr;
+		nqe = &nq_ptr[NQE_PG(sw_cons)][NQE_IDX(sw_cons)];
+		if (!NQE_CMP_VALID(nqe, raw_cons, hwq->max_elements))
+			break;
+
+		type = le16_to_cpu(nqe->info10_type) & NQ_BASE_TYPE_MASK;
+		switch (type) {
+		case NQ_BASE_TYPE_CQ_NOTIFICATION:
+			break;
+		case NQ_BASE_TYPE_DBQ_EVENT:
+			break;
+		default:
+			dev_warn(&nq->pdev->dev,
+				 "QPLIB: nqe with type = 0x%x not handled",
+				 type);
+			break;
+		}
+		raw_cons++;
+	}
+	if (hwq->cons != raw_cons) {
+		hwq->cons = raw_cons;
+		NQ_DB_REARM(nq->bar_reg_iomem, hwq->cons, hwq->max_elements);
+	}
+}
+
+static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
+{
+	struct bnxt_qplib_nq *nq = dev_instance;
+	struct bnxt_qplib_hwq *hwq = &nq->hwq;
+	struct nq_base **nq_ptr;
+	u32 sw_cons;
+
+	/* Prefetch the NQ element */
+	sw_cons = HWQ_CMP(hwq->cons, hwq);
+	nq_ptr = (struct nq_base **)nq->hwq.pbl_ptr;
+	prefetch(&nq_ptr[NQE_PG(sw_cons)][NQE_IDX(sw_cons)]);
+
+	/* Fan out to CPU affinitized kthreads? */
+	tasklet_schedule(&nq->worker);
+
+	return IRQ_HANDLED;
+}
+
+void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+{
+	/* Make sure the HW is stopped! */
+	synchronize_irq(nq->vector);
+	tasklet_disable(&nq->worker);
+	tasklet_kill(&nq->worker);
+
+	if (nq->requested) {
+		free_irq(nq->vector, nq);
+		nq->requested = false;
+	}
+	if (nq->bar_reg_iomem)
+		iounmap(nq->bar_reg_iomem);
+	nq->bar_reg_iomem = NULL;
+
+	nq->cqn_handler = NULL;
+	nq->srqn_handler = NULL;
+	nq->vector = 0;
+}
+
+int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
+			 int msix_vector, int bar_reg_offset,
+			 int (*cqn_handler)(struct bnxt_qplib_nq *nq,
+					    void *),
+			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
+					     void *, u8 event))
+{
+	resource_size_t nq_base;
+	int rc;
+
+	nq->pdev = pdev;
+	nq->vector = msix_vector;
+
+	nq->cqn_handler = cqn_handler;
+
+	nq->srqn_handler = srqn_handler;
+
+	tasklet_init(&nq->worker, bnxt_qplib_service_nq, (unsigned long)nq);
+
+	nq->requested = false;
+	rc = request_irq(nq->vector, bnxt_qplib_nq_irq, 0, "bnxt_qplib_nq", nq);
+	if (rc) {
+		dev_err(&nq->pdev->dev,
+			"Failed to request IRQ for NQ: %#x", rc);
+		goto fail;
+	}
+	nq->requested = true;
+	nq->bar_reg = NQ_CONS_PCI_BAR_REGION;
+	nq->bar_reg_off = bar_reg_offset;
+	nq_base = pci_resource_start(pdev, nq->bar_reg);
+	if (!nq_base) {
+		rc = -ENOMEM;
+		goto fail;
+	}
+	nq->bar_reg_iomem = ioremap_nocache(nq_base + nq->bar_reg_off, 4);
+	if (!nq->bar_reg_iomem) {
+		rc = -ENOMEM;
+		goto fail;
+	}
+	NQ_DB_REARM(nq->bar_reg_iomem, nq->hwq.cons, nq->hwq.max_elements);
+
+	return 0;
+fail:
+	bnxt_qplib_disable_nq(nq);
+	return rc;
+}
+
+void bnxt_qplib_free_nq(struct bnxt_qplib_nq *nq)
+{
+	if (nq->hwq.max_elements)
+		bnxt_qplib_free_hwq(nq->pdev, &nq->hwq);
+}
+
+int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq)
+{
+	nq->pdev = pdev;
+	if (!nq->hwq.max_elements ||
+	    nq->hwq.max_elements > BNXT_QPLIB_NQE_MAX_CNT)
+		nq->hwq.max_elements = BNXT_QPLIB_NQE_MAX_CNT;
+
+	if (bnxt_qplib_alloc_init_hwq(nq->pdev, &nq->hwq, NULL, 0,
+				      &nq->hwq.max_elements,
+				      BNXT_QPLIB_MAX_NQE_ENTRY_SIZE, 0,
+				      PAGE_SIZE, HWQ_TYPE_L2_CMPL))
+		return -ENOMEM;
+
+	nq->budget = 8;
+	return 0;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
index 0983465..d5cd39e 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -39,4 +39,64 @@
 #ifndef __BNXT_QPLIB_FP_H__
 #define __BNXT_QPLIB_FP_H__
 
+#define BNXT_QPLIB_MAX_NQE_ENTRY_SIZE	sizeof(struct nq_base)
+
+#define NQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_NQE_ENTRY_SIZE)
+#define NQE_MAX_IDX_PER_PG	(NQE_CNT_PER_PG - 1)
+#define NQE_PG(x)		(((x) & ~NQE_MAX_IDX_PER_PG) / NQE_CNT_PER_PG)
+#define NQE_IDX(x)		((x) & NQE_MAX_IDX_PER_PG)
+
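+/* An NQE is valid when its V bit matches the expected phase of the
+ * current pass over the ring: the producer toggles the bit on every
+ * wrap, and (raw_cons & cp_bit) tracks the phase the consumer is in.
+ */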
+#define NQE_CMP_VALID(hdr, raw_cons, cp_bit)			\
+	(!!((hdr)->info63_v[0] & NQ_BASE_V) ==			\
+	   !((raw_cons) & (cp_bit)))
+
+#define BNXT_QPLIB_NQE_MAX_CNT		(128 * 1024)
+
+#define NQ_CONS_PCI_BAR_REGION		2
+#define NQ_DB_KEY_CP			(0x2 << CMPL_DOORBELL_KEY_SFT)
+#define NQ_DB_IDX_VALID			CMPL_DOORBELL_IDX_VALID
+#define NQ_DB_IRQ_DIS			CMPL_DOORBELL_MASK
+#define NQ_DB_CP_FLAGS_REARM		(NQ_DB_KEY_CP |		\
+					 NQ_DB_IDX_VALID)
+#define NQ_DB_CP_FLAGS			(NQ_DB_KEY_CP    |	\
+					 NQ_DB_IDX_VALID |	\
+					 NQ_DB_IRQ_DIS)
+#define NQ_DB_REARM(db, raw_cons, cp_bit)			\
+	writel(NQ_DB_CP_FLAGS_REARM | ((raw_cons) & ((cp_bit) - 1)), db)
+#define NQ_DB(db, raw_cons, cp_bit)				\
+	writel(NQ_DB_CP_FLAGS | ((raw_cons) & ((cp_bit) - 1)), db)
+
+struct bnxt_qplib_nq {
+	struct pci_dev			*pdev;
+
+	int				vector;
+	int				budget;
+	bool				requested;
+	struct tasklet_struct		worker;
+	struct bnxt_qplib_hwq		hwq;
+
+	u16				bar_reg;
+	u16				bar_reg_off;
+	u16				ring_id;
+	void __iomem			*bar_reg_iomem;
+
+	int				(*cqn_handler)
+						(struct bnxt_qplib_nq *nq,
+						 void *cq);
+	int				(*srqn_handler)
+						(struct bnxt_qplib_nq *nq,
+						 void *srq,
+						 u8 event);
+};
+
+void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq);
+int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
+			 int msix_vector, int bar_reg_offset,
+			 int (*cqn_handler)(struct bnxt_qplib_nq *nq,
+					    void *cq),
+			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
+					     void *srq,
+					     u8 event));
+void bnxt_qplib_free_nq(struct bnxt_qplib_nq *nq);
+int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq);
 #endif /* __BNXT_QPLIB_FP_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
index ce122cf..571feda 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
@@ -193,6 +193,12 @@ int bnxt_qplib_alloc_init_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq,
 			      struct scatterlist *sl, int nmap, u32 *elements,
 			      u32 elements_per_page, u32 aux, u32 pg_size,
 			      enum bnxt_qplib_hwq_type hwq_type);
+int bnxt_qplib_alloc_dpi(struct bnxt_qplib_dpi_tbl *dpit,
+			 struct bnxt_qplib_dpi     *dpi,
+			 void                      *app);
+int bnxt_qplib_dealloc_dpi(struct bnxt_qplib_res *res,
+			   struct bnxt_qplib_dpi_tbl *dpi_tbl,
+			   struct bnxt_qplib_dpi *dpi);
 void bnxt_qplib_cleanup_res(struct bnxt_qplib_res *res);
 int bnxt_qplib_init_res(struct bnxt_qplib_res *res);
 void bnxt_qplib_free_res(struct bnxt_qplib_res *res);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index 20abc70..3137a26 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -60,6 +60,8 @@ struct bnxt_re_work {
 #define BNXT_RE_MIN_MSIX		2
 #define BNXT_RE_MAX_MSIX		16
 #define BNXT_RE_AEQ_IDX			0
+#define BNXT_RE_NQ_IDX			1
+
 struct bnxt_re_dev {
 	struct ib_device		ibdev;
 	struct list_head		list;
@@ -78,9 +80,15 @@ struct bnxt_re_dev {
 
 	int				id;
 
+	/* FP Notification Queue (CQ & SRQ) */
+	struct tasklet_struct		nq_task;
+
 	/* RCFW Channel */
 	struct bnxt_qplib_rcfw		rcfw;
 
+	/* NQ */
+	struct bnxt_qplib_nq		nq;
+
 	/* Device Resources */
 	struct bnxt_qplib_dev_attr	dev_attr;
 	struct bnxt_qplib_ctx		qplib_ctx;
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index add74ba..3d02488 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -559,6 +559,9 @@ static int bnxt_re_aeq_handler(struct bnxt_qplib_rcfw *rcfw,
 
 static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
 {
+	if (rdev->nq.hwq.max_elements)
+		bnxt_qplib_disable_nq(&rdev->nq);
+
 	if (rdev->qplib_res.rcfw)
 		bnxt_qplib_cleanup_res(&rdev->qplib_res);
 }
@@ -569,11 +572,32 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
 
 	bnxt_qplib_init_res(&rdev->qplib_res);
 
+	if (rdev->msix_entries[BNXT_RE_NQ_IDX].vector <= 0)
+		return -EINVAL;
+
+	rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nq,
+				  rdev->msix_entries[BNXT_RE_NQ_IDX].vector,
+				  rdev->msix_entries[BNXT_RE_NQ_IDX].db_offset,
+				  NULL,
+				  NULL);
+
+	if (rc)
+		dev_err(rdev_to_dev(rdev), "Failed to enable NQ: %#x", rc);
+
 	return rc;
 }
 
 static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait)
 {
+	if (rdev->nq.hwq.max_elements) {
+		bnxt_re_net_ring_free(rdev, rdev->nq.ring_id, lock_wait);
+		bnxt_qplib_free_nq(&rdev->nq);
+	}
+	if (rdev->qplib_res.dpi_tbl.max) {
+		bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+				       &rdev->qplib_res.dpi_tbl,
+				       &rdev->dpi_privileged);
+	}
 	if (rdev->qplib_res.rcfw) {
 		bnxt_qplib_free_res(&rdev->qplib_res);
 		rdev->qplib_res.rcfw = NULL;
@@ -595,8 +619,34 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
 	if (rc)
 		goto fail;
 
-	return 0;
+	rc = bnxt_qplib_alloc_dpi(&rdev->qplib_res.dpi_tbl,
+				  &rdev->dpi_privileged,
+				  rdev);
+	if (rc)
+		goto fail;
 
+	rdev->nq.hwq.max_elements = BNXT_RE_MAX_CQ_COUNT +
+				    BNXT_RE_MAX_SRQC_COUNT + 2;
+	rc = bnxt_qplib_alloc_nq(rdev->en_dev->pdev, &rdev->nq);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev),
+			"Failed to allocate NQ memory: %#x", rc);
+		goto fail;
+	}
+	rc = bnxt_re_net_ring_alloc
+			(rdev, rdev->nq.hwq.pbl[PBL_LVL_0].pg_map_arr,
+			 rdev->nq.hwq.pbl[rdev->nq.hwq.level].pg_count,
+			 HWRM_RING_ALLOC_CMPL, BNXT_QPLIB_NQE_MAX_CNT - 1,
+			 rdev->msix_entries[BNXT_RE_NQ_IDX].ring_idx,
+			 &rdev->nq.ring_id);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev),
+			"Failed to allocate NQ ring: %#x", rc);
+		goto free_nq;
+	}
+	return 0;
+free_nq:
+	bnxt_qplib_free_nq(&rdev->nq);
 fail:
 	rdev->qplib_res.rcfw = NULL;
 	return rc;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  06/22] bnxt_re: Support for PD, ucontext and mmap verbs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (4 preceding siblings ...)
  2016-12-09  6:47 ` [PATCH V2 05/22] bnxt_re: Adding Notification Queue support Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 08/22] bnxt_re: Adding support for port related verbs Selvin Xavier
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch adds the uverbs ABI header to enable user verbs and
implements the Protection Domain, User Context and mmap verbs.

v2: Moved the bnxt_re_uverbs_abi.h file to the include/uapi/rdma folder.
    Also fixed one sparse warning.
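
As an illustration only (not part of this series), a user-space
library might consume the bnxt_re_pd_resp returned by alloc_pd and
map the doorbell page through the mmap verb added here, since
bnxt_re_mmap() treats a non-zero mmap offset as the DPI page frame:

	/* Hypothetical user-space sketch (assumes <sys/mman.h>):
	 * 'cmd_fd' is the uverbs command fd and 'resp' was filled in
	 * by the alloc_pd command; resp.dbr is page aligned.
	 */
	void *dbr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, cmd_fd, (off_t)resp.dbr);
	if (dbr == MAP_FAILED)
		perror("doorbell mmap");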

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c   |  28 +++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h   |   6 +
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h    |   4 +
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 217 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  23 +++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   7 +
 include/uapi/rdma/bnxt_re_uverbs_abi.h          |  58 +++++++
 7 files changed, 343 insertions(+)
 create mode 100644 include/uapi/rdma/bnxt_re_uverbs_abi.h

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
index 58db3a2..9ba1d20 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
@@ -529,6 +529,34 @@ static int bnxt_qplib_alloc_pkey_tbl(struct bnxt_qplib_res *res,
 	return 0;
 };
 
+/* PDs */
+int bnxt_qplib_alloc_pd(struct bnxt_qplib_pd_tbl *pdt, struct bnxt_qplib_pd *pd)
+{
+	u32 bit_num;
+
+	bit_num = find_first_bit(pdt->tbl, pdt->max);
+	if (bit_num == pdt->max)
+		return -ENOMEM;
+
+	/* Found unused PD */
+	clear_bit(bit_num, pdt->tbl);
+	pd->id = bit_num;
+	return 0;
+}
+
+int bnxt_qplib_dealloc_pd(struct bnxt_qplib_res *res,
+			  struct bnxt_qplib_pd_tbl *pdt,
+			  struct bnxt_qplib_pd *pd)
+{
+	if (test_and_set_bit(pd->id, pdt->tbl)) {
+		dev_warn(&res->pdev->dev, "Freeing an unused PD? pdn = %d",
+			 pd->id);
+		return -EINVAL;
+	}
+	pd->id = 0;
+	return 0;
+}
+
 static void bnxt_qplib_free_pd_tbl(struct bnxt_qplib_pd_tbl *pdt)
 {
 	kfree(pdt->tbl);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
index 571feda..c9e376a 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
@@ -186,6 +186,7 @@ struct bnxt_qplib_res {
 	struct bnxt_qplib_dpi_tbl	dpi_tbl;
 };
 
+struct bnxt_qplib_pd;
 struct bnxt_qplib_dev_attr;
 
 void bnxt_qplib_free_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq);
@@ -193,6 +194,11 @@ int bnxt_qplib_alloc_init_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq,
 			      struct scatterlist *sl, int nmap, u32 *elements,
 			      u32 elements_per_page, u32 aux, u32 pg_size,
 			      enum bnxt_qplib_hwq_type hwq_type);
+int bnxt_qplib_alloc_pd(struct bnxt_qplib_pd_tbl *pd_tbl,
+			struct bnxt_qplib_pd *pd);
+int bnxt_qplib_dealloc_pd(struct bnxt_qplib_res *res,
+			  struct bnxt_qplib_pd_tbl *pd_tbl,
+			  struct bnxt_qplib_pd *pd);
 int bnxt_qplib_alloc_dpi(struct bnxt_qplib_dpi_tbl *dpit,
 			 struct bnxt_qplib_dpi     *dpi,
 			 void                      *app);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
index 0b5adda..7de6600 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -70,6 +70,10 @@ struct bnxt_qplib_dev_attr {
 	u8				tqm_alloc_reqs[MAX_TQM_ALLOC_REQ];
 };
 
+struct bnxt_qplib_pd {
+	u32				id;
+};
+
 struct bnxt_qplib_gid {
 	u8				data[16];
 };
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index c01fa40..71eb1f3 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -35,3 +35,220 @@
  *
  * Description: IB Verbs interpreter
  */
+
+#include <linux/interrupt.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_user_verbs.h>
+#include <rdma/ib_umem.h>
+#include <rdma/ib_addr.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_cache.h>
+
+#include "bnxt_ulp.h"
+
+#include "bnxt_re_hsi.h"
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_sp.h"
+#include "bnxt_qplib_fp.h"
+#include "bnxt_qplib_rcfw.h"
+
+#include "bnxt_re.h"
+#include "bnxt_re_ib_verbs.h"
+#include <rdma/bnxt_re_uverbs_abi.h>
+
+/* Protection Domains */
+int bnxt_re_dealloc_pd(struct ib_pd *ib_pd)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	int rc;
+
+	if (ib_pd->uobject && pd->dpi.dbr) {
+		struct ib_ucontext *ib_uctx = ib_pd->uobject->context;
+		struct bnxt_re_ucontext *ucntx;
+
+		/* Free DPI only if this is the first PD allocated by the
+		 * application and mark the context dpi as NULL
+		 */
+		ucntx = to_bnxt_re(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+
+		rc = bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+					    &rdev->qplib_res.dpi_tbl,
+					    &pd->dpi);
+		if (rc)
+			dev_err(rdev_to_dev(rdev),
+				"Failed to deallocate HW DPI");
+		/* Don't fail, continue */
+		ucntx->dpi = NULL;
+	}
+
+	rc = bnxt_qplib_dealloc_pd(&rdev->qplib_res,
+				   &rdev->qplib_res.pd_tbl,
+				   &pd->qplib_pd);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to deallocate HW PD");
+		return rc;
+	}
+
+	kfree(pd);
+	return 0;
+}
+
+struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
+			       struct ib_ucontext *ucontext,
+			       struct ib_udata *udata)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_re_ucontext *ucntx = to_bnxt_re(ucontext,
+						    struct bnxt_re_ucontext,
+						    ib_uctx);
+	struct bnxt_re_pd *pd;
+	int rc;
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		return ERR_PTR(-ENOMEM);
+
+	pd->rdev = rdev;
+	if (bnxt_qplib_alloc_pd(&rdev->qplib_res.pd_tbl, &pd->qplib_pd)) {
+		dev_err(rdev_to_dev(rdev), "Failed to allocate HW PD");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	if (udata) {
+		struct bnxt_re_pd_resp resp;
+
+		if (!ucntx->dpi) {
+			/* Allocate DPI in alloc_pd to avoid failing of
+			 * ibv_devinfo and family of application when DPIs
+			 * are depleted.
+			 */
+			if (bnxt_qplib_alloc_dpi(&rdev->qplib_res.dpi_tbl,
+						 &pd->dpi, ucntx)) {
+				rc = -ENOMEM;
+				goto dbfail;
+			}
+			ucntx->dpi = &pd->dpi;
+		}
+
+		resp.pdid = pd->qplib_pd.id;
+		/* Still allow mapping this DBR to the new user PD. */
+		resp.dpi = ucntx->dpi->dpi;
+		resp.dbr = (u64)ucntx->dpi->umdbr;
+
+		rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+		if (rc) {
+			dev_err(rdev_to_dev(rdev),
+				"Failed to copy user response\n");
+			goto dbfail;
+		}
+	}
+
+	return &pd->ib_pd;
+dbfail:
+	(void)bnxt_qplib_dealloc_pd(&rdev->qplib_res, &rdev->qplib_res.pd_tbl,
+				    &pd->qplib_pd);
+fail:
+	kfree(pd);
+	return ERR_PTR(rc);
+}
+
+struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev,
+					   struct ib_udata *udata)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_re_uctx_resp resp;
+	struct bnxt_re_ucontext *uctx;
+	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
+	int rc;
+
+	dev_dbg(rdev_to_dev(rdev), "ABI version requested %d",
+		ibdev->uverbs_abi_ver);
+
+	if (ibdev->uverbs_abi_ver != BNXT_RE_ABI_VERSION) {
+		dev_dbg(rdev_to_dev(rdev),
+			"ABI version %d is not supported by the device (%d)",
+			ibdev->uverbs_abi_ver, BNXT_RE_ABI_VERSION);
+		return ERR_PTR(-EPERM);
+	}
+
+	uctx = kzalloc(sizeof(*uctx), GFP_KERNEL);
+	if (!uctx)
+		return ERR_PTR(-ENOMEM);
+
+	uctx->rdev = rdev;
+
+	uctx->shpg = (void *)__get_free_page(GFP_KERNEL);
+	if (!uctx->shpg) {
+		rc = -ENOMEM;
+		goto fail;
+	}
+	spin_lock_init(&uctx->sh_lock);
+
+	resp.dev_id = rdev->en_dev->pdev->devfn; /* Temp: use idr_alloc instead */
+	resp.max_qp = rdev->qplib_ctx.qpc_count;
+	resp.pg_size = PAGE_SIZE;
+	resp.cqe_sz = sizeof(struct cq_base);
+	resp.max_cqd = dev_attr->max_cq_wqes;
+
+	rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to copy user context");
+		rc = -EFAULT;
+		goto cfail;
+	}
+
+	return &uctx->ib_uctx;
+cfail:
+	free_page((unsigned long)uctx->shpg);
+	uctx->shpg = NULL;
+fail:
+	kfree(uctx);
+	return ERR_PTR(rc);
+}
+
+int bnxt_re_dealloc_ucontext(struct ib_ucontext *ib_uctx)
+{
+	struct bnxt_re_ucontext *uctx = to_bnxt_re(ib_uctx,
+						   struct bnxt_re_ucontext,
+						   ib_uctx);
+	if (uctx->shpg)
+		free_page((unsigned long)uctx->shpg);
+	kfree(uctx);
+	return 0;
+}
+
+/* Helper function to mmap the virtual memory from user app */
+int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct vm_area_struct *vma)
+{
+	struct bnxt_re_ucontext *uctx = to_bnxt_re(ib_uctx,
+						   struct bnxt_re_ucontext,
+						   ib_uctx);
+	struct bnxt_re_dev *rdev = uctx->rdev;
+	u64 pfn;
+
+	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
+		return -EINVAL;
+
+	if (vma->vm_pgoff) {
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+		if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+				       PAGE_SIZE, vma->vm_page_prot)) {
+			dev_err(rdev_to_dev(rdev), "Failed to map DPI");
+			return -EAGAIN;
+		}
+	} else {
+		pfn = virt_to_phys(uctx->shpg) >> PAGE_SHIFT;
+		if (remap_pfn_range(vma, vma->vm_start,
+				    pfn, PAGE_SIZE, vma->vm_page_prot)) {
+			dev_err(rdev_to_dev(rdev),
+				"Failed to map shared page");
+			return -EAGAIN;
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 9162774..a133d81 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -39,4 +39,27 @@
 #ifndef __BNXT_RE_IB_VERBS_H__
 #define __BNXT_RE_IB_VERBS_H__
 
+struct bnxt_re_pd {
+	struct bnxt_re_dev	*rdev;
+	struct ib_pd		ib_pd;
+	struct bnxt_qplib_pd	qplib_pd;
+	struct bnxt_qplib_dpi	dpi;
+};
+
+struct bnxt_re_ucontext {
+	struct bnxt_re_dev	*rdev;
+	struct ib_ucontext	ib_uctx;
+	struct bnxt_qplib_dpi	*dpi;
+	void			*shpg;
+	spinlock_t		sh_lock;	/* protect shpg */
+};
+
+struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
+			       struct ib_ucontext *context,
+			       struct ib_udata *udata);
+int bnxt_re_dealloc_pd(struct ib_pd *pd);
+struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev,
+					   struct ib_udata *udata);
+int bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
+int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
 #endif /* __BNXT_RE_IB_VERBS_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 3d02488..3549d3a 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -59,6 +59,7 @@
 #include "bnxt_qplib_fp.h"
 #include "bnxt_qplib_rcfw.h"
 #include "bnxt_re.h"
+#include "bnxt_re_ib_verbs.h"
 #include "bnxt.h"
 static char version[] =
 		BNXT_RE_DESC " v" ROCE_DRV_MODULE_VERSION "\n";
@@ -424,6 +425,12 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 
 	ibdev->num_comp_vectors	= 1;
 	ibdev->dma_device = &rdev->en_dev->pdev->dev;
+	ibdev->alloc_pd			= bnxt_re_alloc_pd;
+	ibdev->dealloc_pd		= bnxt_re_dealloc_pd;
+	ibdev->alloc_ucontext		= bnxt_re_alloc_ucontext;
+	ibdev->dealloc_ucontext		= bnxt_re_dealloc_ucontext;
+	ibdev->mmap			= bnxt_re_mmap;
+
 	return ib_register_device(ibdev, NULL);
 }
 
diff --git a/include/uapi/rdma/bnxt_re_uverbs_abi.h b/include/uapi/rdma/bnxt_re_uverbs_abi.h
new file mode 100644
index 0000000..3815184
--- /dev/null
+++ b/include/uapi/rdma/bnxt_re_uverbs_abi.h
@@ -0,0 +1,58 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Uverbs ABI header file
+ */
+
+#ifndef __BNXT_RE_UVERBS_ABI_H__
+#define __BNXT_RE_UVERBS_ABI_H__
+
+#define BNXT_RE_ABI_VERSION	1
+
+struct bnxt_re_uctx_resp {
+	__u32 dev_id;
+	__u32 max_qp;
+	__u32 pg_size;
+	__u32 cqe_sz;
+	__u32 max_cqd;
+} __packed;
+
+struct bnxt_re_pd_resp {
+	__u32 pdid;
+	__u32 dpi;
+	__u64 dbr;
+} __packed;
+
+#endif /* __BNXT_RE_UVERBS_ABI_H__*/
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  07/22] bnxt_re: Support for query and modify device verbs
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
@ 2016-12-09  6:48   ` Selvin Xavier
  2016-12-09  6:48   ` [PATCH V2 10/22] bnxt_re: Support for CQ verbs Selvin Xavier
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, Selvin Xavier, Eddie Wai,
	Devesh Sharma, Somnath Kotur, Sriharsha Basavapatna

Add implementation for the query_device and modify_device verbs
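
For reference, the attributes populated here surface to applications
through the standard libibverbs query; a short illustrative user-space
check (not part of this patch) could be:

	struct ibv_device_attr attr;

	if (!ibv_query_device(ctx, &attr))
		printf("max_qp=%d max_cqe=%d max_mr=%d\n",
		       attr.max_qp, attr.max_cqe, attr.max_mr);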

Signed-off-by: Eddie Wai <eddie.wai-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Selvin Xavier <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c   | 17 +++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h   |  1 +
 drivers/infiniband/hw/bnxtre/bnxt_re.h          |  7 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 90 +++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  6 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |  2 +
 6 files changed, 123 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
index 9ba1d20..9cb1b7a 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
@@ -447,6 +447,23 @@ int bnxt_qplib_alloc_ctx(struct pci_dev *pdev,
 	return rc;
 }
 
+/* GUID */
+void bnxt_qplib_get_guid(u8 *dev_addr, u8 *guid)
+{
+	u8 mac[ETH_ALEN];
+
+	/* MAC-48 to EUI-64 mapping */
+	memcpy(mac, dev_addr, ETH_ALEN);
+	guid[0] = mac[0] ^ 2;
+	guid[1] = mac[1];
+	guid[2] = mac[2];
+	guid[3] = 0xff;
+	guid[4] = 0xfe;
+	guid[5] = mac[3];
+	guid[6] = mac[4];
+	guid[7] = mac[5];
+}
+
 static void bnxt_qplib_free_sgid_tbl(struct bnxt_qplib_res *res,
 				     struct bnxt_qplib_sgid_tbl *sgid_tbl)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
index c9e376a..fc9c38c 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
@@ -194,6 +194,7 @@ int bnxt_qplib_alloc_init_hwq(struct pci_dev *pdev, struct bnxt_qplib_hwq *hwq,
 			      struct scatterlist *sl, int nmap, u32 *elements,
 			      u32 elements_per_page, u32 aux, u32 pg_size,
 			      enum bnxt_qplib_hwq_type hwq_type);
+void bnxt_qplib_get_guid(u8 *dev_addr, u8 *guid);
 int bnxt_qplib_alloc_pd(struct bnxt_qplib_pd_tbl *pd_tbl,
 			struct bnxt_qplib_pd *pd);
 int bnxt_qplib_dealloc_pd(struct bnxt_qplib_res *res,
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index 3137a26..3a93a88 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -45,6 +45,13 @@
 #define BNXT_RE_REF_WAIT_COUNT		10
 #define BNXT_RE_DESC	"Broadcom NetXtreme-C/E RoCE Driver"
 
+#define BNXT_RE_PAGE_SIZE_4K		BIT(12)
+#define BNXT_RE_PAGE_SIZE_8K		BIT(13)
+#define BNXT_RE_PAGE_SIZE_64K		BIT(16)
+#define BNXT_RE_PAGE_SIZE_2M		BIT(21)
+#define BNXT_RE_PAGE_SIZE_8M		BIT(23)
+#define BNXT_RE_PAGE_SIZE_1G		BIT(30)
+
 #define BNXT_RE_MAX_QPC_COUNT		(64 * 1024)
 #define BNXT_RE_MAX_MRW_COUNT		(64 * 1024)
 #define BNXT_RE_MAX_SRQC_COUNT		(64 * 1024)
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 71eb1f3..a0fbc24 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -60,6 +60,96 @@
 #include "bnxt_re_ib_verbs.h"
 #include <rdma/bnxt_re_uverbs_abi.h>
 
+int bnxt_re_query_device(struct ib_device *ibdev,
+			 struct ib_device_attr *ib_attr,
+			 struct ib_udata *udata)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
+
+	memset(ib_attr, 0, sizeof(*ib_attr));
+
+	ib_attr->fw_ver = (u64)dev_attr->fw_ver;
+	bnxt_qplib_get_guid(rdev->netdev->dev_addr,
+			    (u8 *)&ib_attr->sys_image_guid);
+	ib_attr->max_mr_size = ~0ull;
+	ib_attr->page_size_cap = BNXT_RE_PAGE_SIZE_4K | BNXT_RE_PAGE_SIZE_8K |
+				 BNXT_RE_PAGE_SIZE_64K | BNXT_RE_PAGE_SIZE_2M |
+				 BNXT_RE_PAGE_SIZE_8M | BNXT_RE_PAGE_SIZE_1G;
+
+	ib_attr->vendor_id = rdev->en_dev->pdev->vendor;
+	ib_attr->vendor_part_id = rdev->en_dev->pdev->device;
+	ib_attr->hw_ver = rdev->en_dev->pdev->subsystem_device;
+	ib_attr->max_qp = dev_attr->max_qp;
+	ib_attr->max_qp_wr = dev_attr->max_qp_wqes;
+	ib_attr->device_cap_flags =
+				    IB_DEVICE_CURR_QP_STATE_MOD
+				    | IB_DEVICE_RC_RNR_NAK_GEN
+				    | IB_DEVICE_SHUTDOWN_PORT
+				    | IB_DEVICE_SYS_IMAGE_GUID
+				    | IB_DEVICE_LOCAL_DMA_LKEY
+				    | IB_DEVICE_RESIZE_MAX_WR
+				    | IB_DEVICE_PORT_ACTIVE_EVENT
+				    | IB_DEVICE_N_NOTIFY_CQ
+				    | IB_DEVICE_MEM_WINDOW
+				    | IB_DEVICE_MEM_WINDOW_TYPE_2B
+				    | IB_DEVICE_MEM_MGT_EXTENSIONS;
+	ib_attr->max_sge = dev_attr->max_qp_sges;
+	ib_attr->max_sge_rd = dev_attr->max_qp_sges;
+	ib_attr->max_cq = dev_attr->max_cq;
+	ib_attr->max_cqe = dev_attr->max_cq_wqes;
+	ib_attr->max_mr = dev_attr->max_mr;
+	ib_attr->max_pd = dev_attr->max_pd;
+	ib_attr->max_qp_rd_atom = dev_attr->max_qp_rd_atom;
+	ib_attr->max_qp_init_rd_atom = dev_attr->max_qp_rd_atom;
+	ib_attr->atomic_cap = IB_ATOMIC_HCA;
+	ib_attr->masked_atomic_cap = IB_ATOMIC_HCA;
+
+	ib_attr->max_ee_rd_atom = 0;
+	ib_attr->max_res_rd_atom = 0;
+	ib_attr->max_ee_init_rd_atom = 0;
+	ib_attr->max_ee = 0;
+	ib_attr->max_rdd = 0;
+	ib_attr->max_mw = dev_attr->max_mw;
+	ib_attr->max_raw_ipv6_qp = 0;
+	ib_attr->max_raw_ethy_qp = dev_attr->max_raw_ethy_qp;
+	ib_attr->max_mcast_grp = 0;
+	ib_attr->max_mcast_qp_attach = 0;
+	ib_attr->max_total_mcast_qp_attach = 0;
+	ib_attr->max_ah = dev_attr->max_ah;
+
+	ib_attr->max_fmr = dev_attr->max_fmr;
+	ib_attr->max_map_per_fmr = 1;	/* ? */
+
+	ib_attr->max_srq = dev_attr->max_srq;
+	ib_attr->max_srq_wr = dev_attr->max_srq_wqes;
+	ib_attr->max_srq_sge = dev_attr->max_srq_sges;
+
+	ib_attr->max_fast_reg_page_list_len = MAX_PBL_LVL_1_PGS;
+
+	ib_attr->max_pkeys = 1;
+	ib_attr->local_ca_ack_delay = 0;
+	return 0;
+}
+
+int bnxt_re_modify_device(struct ib_device *ibdev,
+			  int device_modify_mask,
+			  struct ib_device_modify *device_modify)
+{
+	switch (device_modify_mask) {
+	case IB_DEVICE_MODIFY_SYS_IMAGE_GUID:
+		/* Modify the GUID requires the modification of the GID table */
+		/* GUID should be made as READ-ONLY */
+		break;
+	case IB_DEVICE_MODIFY_NODE_DESC:
+		/* Node Desc should be made as READ-ONLY */
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
 /* Protection Domains */
 int bnxt_re_dealloc_pd(struct ib_pd *ib_pd)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index a133d81..45f9253 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -54,6 +54,12 @@ struct bnxt_re_ucontext {
 	spinlock_t		sh_lock;	/* protect shpg */
 };
 
+int bnxt_re_query_device(struct ib_device *ibdev,
+			 struct ib_device_attr *ib_attr,
+			 struct ib_udata *udata);
+int bnxt_re_modify_device(struct ib_device *ibdev,
+			  int device_modify_mask,
+			  struct ib_device_modify *device_modify);
 struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 			       struct ib_ucontext *context,
 			       struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 3549d3a..7dfdaef 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -425,6 +425,8 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 
 	ibdev->num_comp_vectors	= 1;
 	ibdev->dma_device = &rdev->en_dev->pdev->dev;
+	ibdev->query_device		= bnxt_re_query_device;
+	ibdev->modify_device		= bnxt_re_modify_device;
 	ibdev->alloc_pd			= bnxt_re_alloc_pd;
 	ibdev->dealloc_pd		= bnxt_re_dealloc_pd;
 	ibdev->alloc_ucontext		= bnxt_re_alloc_ucontext;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  08/22] bnxt_re: Adding support for port related verbs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (5 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 06/22] bnxt_re: Support for PD, ucontext and mmap verbs Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 09/22] bnxt_re: Support for GID " Selvin Xavier
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

Implementation of the query_port, modify_port and get_port_immutable verbs
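
The ethtool-to-IB mapping in __to_ib_speed_width() below follows
"link rate equals per-lane rate times width"; an illustrative decode
of the per-lane rates (rounded, per the IB spec) shows how the pairs
are chosen:

	static int ib_lane_gbps(u8 speed)
	{
		switch (speed) {
		case IB_SPEED_SDR: return 2;	/* 2.5 Gb/s */
		case IB_SPEED_DDR: return 5;
		case IB_SPEED_QDR: return 10;
		case IB_SPEED_EDR: return 25;
		default: return 0;
		}
	}

	/* e.g. SPEED_40000 maps to QDR x 4X: 10 Gb/s x 4 lanes = 40G */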

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 122 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |   7 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   4 +
 3 files changed, 133 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index a0fbc24..c94573b 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -150,6 +150,128 @@ int bnxt_re_modify_device(struct ib_device *ibdev,
 	return 0;
 }
 
+static void __to_ib_speed_width(struct net_device *netdev, u8 *speed, u8 *width)
+{
+	struct ethtool_link_ksettings lksettings;
+	u32 espeed;
+
+	if (netdev->ethtool_ops && netdev->ethtool_ops->get_link_ksettings) {
+		memset(&lksettings, 0, sizeof(lksettings));
+		rtnl_lock();
+		netdev->ethtool_ops->get_link_ksettings(netdev, &lksettings);
+		rtnl_unlock();
+		espeed = lksettings.base.speed;
+	} else {
+		espeed = SPEED_UNKNOWN;
+	}
+	switch (espeed) {
+	case SPEED_1000:
+		*speed = IB_SPEED_SDR;
+		*width = IB_WIDTH_1X;
+		break;
+	case SPEED_10000:
+		*speed = IB_SPEED_QDR;
+		*width = IB_WIDTH_1X;
+		break;
+	case SPEED_20000:
+		*speed = IB_SPEED_DDR;
+		*width = IB_WIDTH_4X;
+		break;
+	case SPEED_25000:
+		*speed = IB_SPEED_EDR;
+		*width = IB_WIDTH_1X;
+		break;
+	case SPEED_40000:
+		*speed = IB_SPEED_QDR;
+		*width = IB_WIDTH_4X;
+		break;
+	case SPEED_50000:
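+		/* 50G has no matching IB speed/width pair here; the
+		 * caller's zeroed defaults are left in place.
+		 */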
+		break;
+	default:
+		*speed = IB_SPEED_SDR;
+		*width = IB_WIDTH_1X;
+		break;
+	}
+}
+
+/* Port */
+int bnxt_re_query_port(struct ib_device *ibdev, u8 port_num,
+		       struct ib_port_attr *port_attr)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
+
+	memset(port_attr, 0, sizeof(*port_attr));
+
+	if (netif_running(rdev->netdev) && netif_carrier_ok(rdev->netdev)) {
+		port_attr->state = IB_PORT_ACTIVE;
+		port_attr->phys_state = 5;
+	} else {
+		port_attr->state = IB_PORT_DOWN;
+		port_attr->phys_state = 3;
+	}
+	port_attr->max_mtu = IB_MTU_4096;
+	port_attr->active_mtu = iboe_get_mtu(rdev->netdev->mtu);
+	port_attr->gid_tbl_len = dev_attr->max_sgid;
+	port_attr->port_cap_flags = IB_PORT_CM_SUP | IB_PORT_REINIT_SUP |
+				    IB_PORT_DEVICE_MGMT_SUP |
+				    IB_PORT_VENDOR_CLASS_SUP |
+				    IB_PORT_IP_BASED_GIDS;
+
+	/* Max MSG size set to 2G for now */
+	port_attr->max_msg_sz = 0x80000000;
+	port_attr->bad_pkey_cntr = 0;
+	port_attr->qkey_viol_cntr = 0;
+	port_attr->pkey_tbl_len = dev_attr->max_pkey;
+	port_attr->lid = 0;
+	port_attr->sm_lid = 0;
+	port_attr->lmc = 0;
+	port_attr->max_vl_num = 4;
+	port_attr->sm_sl = 0;
+	port_attr->subnet_timeout = 0;
+	port_attr->init_type_reply = 0;
+	/* call the underlying netdev's ethtool hooks to query speed settings
+	 * for which we acquire rtnl_lock _only_ if it's registered with
+	 * IB stack to avoid race in the NETDEV_UNREG path
+	 */
+	if (test_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags))
+		__to_ib_speed_width(rdev->netdev, &port_attr->active_speed,
+				    &port_attr->active_width);
+	return 0;
+}
+
+int bnxt_re_modify_port(struct ib_device *ibdev, u8 port_num,
+			int port_modify_mask,
+			struct ib_port_modify *port_modify)
+{
+	switch (port_modify_mask) {
+	case IB_PORT_SHUTDOWN:
+		break;
+	case IB_PORT_INIT_TYPE:
+		break;
+	case IB_PORT_RESET_QKEY_CNTR:
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+int bnxt_re_get_port_immutable(struct ib_device *ibdev, u8 port_num,
+			       struct ib_port_immutable *immutable)
+{
+	struct ib_port_attr port_attr;
+
+	if (bnxt_re_query_port(ibdev, port_num, &port_attr))
+		return -EINVAL;
+
+	immutable->pkey_tbl_len = port_attr.pkey_tbl_len;
+	immutable->gid_tbl_len = port_attr.gid_tbl_len;
+	immutable->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE;
+	immutable->core_cap_flags |= RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP;
+	immutable->max_mad_size = IB_MGMT_MAD_SIZE;
+	return 0;
+}
+
 /* Protection Domains */
 int bnxt_re_dealloc_pd(struct ib_pd *ib_pd)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 45f9253..255a0ad 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -60,6 +60,13 @@ int bnxt_re_query_device(struct ib_device *ibdev,
 int bnxt_re_modify_device(struct ib_device *ibdev,
 			  int device_modify_mask,
 			  struct ib_device_modify *device_modify);
+int bnxt_re_query_port(struct ib_device *ibdev, u8 port_num,
+		       struct ib_port_attr *port_attr);
+int bnxt_re_modify_port(struct ib_device *ibdev, u8 port_num,
+			int port_modify_mask,
+			struct ib_port_modify *port_modify);
+int bnxt_re_get_port_immutable(struct ib_device *ibdev, u8 port_num,
+			       struct ib_port_immutable *immutable);
 struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 			       struct ib_ucontext *context,
 			       struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 7dfdaef..b1b5ccee 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -427,6 +427,10 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->dma_device = &rdev->en_dev->pdev->dev;
 	ibdev->query_device		= bnxt_re_query_device;
 	ibdev->modify_device		= bnxt_re_modify_device;
+
+	ibdev->query_port		= bnxt_re_query_port;
+	ibdev->modify_port		= bnxt_re_modify_port;
+	ibdev->get_port_immutable	= bnxt_re_get_port_immutable;
 	ibdev->alloc_pd			= bnxt_re_alloc_pd;
 	ibdev->dealloc_pd		= bnxt_re_dealloc_pd;
 	ibdev->alloc_ucontext		= bnxt_re_alloc_ucontext;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  09/22] bnxt_re: Support for GID related verbs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (6 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 08/22] bnxt_re: Adding support for port related verbs Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 11/22] bnxt_re: Support for AH verbs Selvin Xavier
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements add/del GID, get_netdev and pkey related verbs.
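
The GID context introduced below is reference counted; a sketch of the
lifecycle as implemented by bnxt_re_add_gid()/bnxt_re_del_gid():

	/* first add:     HW entry programmed, ctx->refcnt = 1
	 * duplicate add: qplib returns -EALREADY, ctx->refcnt++
	 * del:           ctx->refcnt--; the HW entry and the context
	 *                are freed only when refcnt reaches zero
	 */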

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c   |   5 +
 drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h   |   3 +
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c    | 218 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h    |  11 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 123 +++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  18 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   7 +
 7 files changed, 385 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
index 9cb1b7a..deb3b15 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
@@ -509,6 +509,11 @@ static void bnxt_qplib_cleanup_sgid_tbl(struct bnxt_qplib_res *res,
 {
 	int i;
 
+	for (i = 0; i < sgid_tbl->max; i++) {
+		if (memcmp(&sgid_tbl->tbl[i], &bnxt_qplib_gid_zero,
+			   sizeof(bnxt_qplib_gid_zero)))
+			bnxt_qplib_del_sgid(sgid_tbl, &sgid_tbl->tbl[i], true);
+	}
 	memset(sgid_tbl->tbl, 0, sizeof(struct bnxt_qplib_gid) * sgid_tbl->max);
 	memset(sgid_tbl->hw_id, -1, sizeof(u16) * sgid_tbl->max);
 	sgid_tbl->active = 0;
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
index fc9c38c..6596ee2 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_res.h
@@ -186,6 +186,9 @@ struct bnxt_qplib_res {
 	struct bnxt_qplib_dpi_tbl	dpi_tbl;
 };
 
+#define to_bnxt_qplib(ptr, type, member)	\
+	container_of(ptr, type, member)
+
 struct bnxt_qplib_pd;
 struct bnxt_qplib_dev_attr;
 
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
index b3d6b46..cf9dd40 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
@@ -46,6 +46,10 @@
 #include "bnxt_qplib_res.h"
 #include "bnxt_qplib_rcfw.h"
 #include "bnxt_qplib_sp.h"
+
+const struct bnxt_qplib_gid bnxt_qplib_gid_zero = {{ 0, 0, 0, 0, 0, 0, 0, 0,
+						     0, 0, 0, 0, 0, 0, 0, 0 } };
+
 /* Device */
 int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 			    struct bnxt_qplib_dev_attr *attr)
@@ -129,6 +133,220 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 	return 0;
 }
 
+/* SGID */
+int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
+			struct bnxt_qplib_gid *gid)
+{
+	if (index >= sgid_tbl->max) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: Index %d exceeded SGID table max (%d)",
+			index, sgid_tbl->max);
+		return -EINVAL;
+	}
+	memcpy(gid, &sgid_tbl->tbl[index], sizeof(*gid));
+	return 0;
+}
+
+int bnxt_qplib_del_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+			struct bnxt_qplib_gid *gid, bool update)
+{
+	struct bnxt_qplib_res *res = to_bnxt_qplib(sgid_tbl,
+						   struct bnxt_qplib_res,
+						   sgid_tbl);
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	int index;
+
+	if (!sgid_tbl) {
+		dev_err(&res->pdev->dev, "QPLIB: SGID table not allocated");
+		return -EINVAL;
+	}
+	/* Do we need a sgid_lock here? */
+	if (!sgid_tbl->active) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: SGID table has no active entries");
+		return -ENOMEM;
+	}
+	for (index = 0; index < sgid_tbl->max; index++) {
+		if (!memcmp(&sgid_tbl->tbl[index], gid, sizeof(*gid)))
+			break;
+	}
+	if (index == sgid_tbl->max) {
+		dev_warn(&res->pdev->dev, "GID not found in the SGID table");
+		return 0;
+	}
+	/* Remove GID from the SGID table */
+	if (update) {
+		struct cmdq_delete_gid req;
+		struct creq_delete_gid_resp *resp;
+		u16 cmd_flags = 0;
+
+		RCFW_CMD_PREP(req, DELETE_GID, cmd_flags);
+		if (sgid_tbl->hw_id[index] == 0xFFFF) {
+			dev_err(&res->pdev->dev,
+				"QPLIB: GID entry contains an invalid HW id");
+			return -EINVAL;
+		}
+		req.gid_index = cpu_to_le16(sgid_tbl->hw_id[index]);
+		resp = (struct creq_delete_gid_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, NULL,
+						     0);
+		if (!resp) {
+			dev_err(&res->pdev->dev,
+				"QPLIB: SP: DELETE_GID send failed");
+			return -EINVAL;
+		}
+		if (!bnxt_qplib_rcfw_wait_for_resp(rcfw,
+						   le16_to_cpu(req.cookie))) {
+			/* Cmd timed out */
+			dev_err(&res->pdev->dev,
+				"QPLIB: SP: DELETE_GID timed out");
+			return -ETIMEDOUT;
+		}
+		if (RCFW_RESP_STATUS(resp) ||
+		    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+			dev_err(&res->pdev->dev,
+				"QPLIB: SP: DELETE_GID failed ");
+			dev_err(&res->pdev->dev,
+				"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+				RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+				RCFW_RESP_COOKIE(resp));
+			return -EINVAL;
+		}
+	}
+	memcpy(&sgid_tbl->tbl[index], &bnxt_qplib_gid_zero,
+	       sizeof(bnxt_qplib_gid_zero));
+	sgid_tbl->active--;
+	dev_dbg(&res->pdev->dev,
+		"QPLIB: SGID deleted hw_id[0x%x] = 0x%x active = 0x%x",
+		 index, sgid_tbl->hw_id[index], sgid_tbl->active);
+	sgid_tbl->hw_id[index] = (u16)-1;
+
+	/* unlock */
+	return 0;
+}
+
+int bnxt_qplib_add_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+			struct bnxt_qplib_gid *gid, u8 *smac, u16 vlan_id,
+			bool update, u32 *index)
+{
+	struct bnxt_qplib_res *res = to_bnxt_qplib(sgid_tbl,
+						   struct bnxt_qplib_res,
+						   sgid_tbl);
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	int i, free_idx, rc = 0;
+
+	if (!sgid_tbl) {
+		dev_err(&res->pdev->dev, "QPLIB: SGID table not allocated");
+		return -EINVAL;
+	}
+	/* Do we need a sgid_lock here? */
+	if (sgid_tbl->active == sgid_tbl->max) {
+		dev_err(&res->pdev->dev, "QPLIB: SGID table is full");
+		return -ENOMEM;
+	}
+	free_idx = sgid_tbl->max;
+	for (i = 0; i < sgid_tbl->max; i++) {
+		if (!memcmp(&sgid_tbl->tbl[i], gid, sizeof(*gid))) {
+			dev_dbg(&res->pdev->dev,
+				"QPLIB: SGID entry already exists at index %d!",
+				i);
+			*index = i;
+			return -EALREADY;
+		} else if (!memcmp(&sgid_tbl->tbl[i], &bnxt_qplib_gid_zero,
+				   sizeof(bnxt_qplib_gid_zero)) &&
+			   free_idx == sgid_tbl->max) {
+			free_idx = i;
+		}
+	}
+	if (free_idx == sgid_tbl->max) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: SGID table is FULL but count is not MAX??");
+		return -ENOMEM;
+	}
+	if (update) {
+		struct cmdq_add_gid req;
+		struct creq_add_gid_resp *resp;
+		u16 cmd_flags = 0;
+		u32 temp32[4];
+		u16 temp16[3];
+
+		RCFW_CMD_PREP(req, ADD_GID, cmd_flags);
+
+		memcpy(temp32, gid->data, sizeof(struct bnxt_qplib_gid));
+		req.gid[0] = cpu_to_be32(temp32[3]);
+		req.gid[1] = cpu_to_be32(temp32[2]);
+		req.gid[2] = cpu_to_be32(temp32[1]);
+		req.gid[3] = cpu_to_be32(temp32[0]);
+		if (vlan_id != 0xFFFF)
+			req.vlan = cpu_to_le32((vlan_id &
+					CMDQ_ADD_GID_VLAN_VLAN_ID_MASK) |
+					CMDQ_ADD_GID_VLAN_TPID_TPID_8100 |
+					CMDQ_ADD_GID_VLAN_VLAN_EN);
+
+		/* MAC in network format */
+		memcpy(temp16, smac, 6);
+		req.src_mac[0] = cpu_to_be16(temp16[0]);
+		req.src_mac[1] = cpu_to_be16(temp16[1]);
+		req.src_mac[2] = cpu_to_be16(temp16[2]);
+
+		resp = (struct creq_add_gid_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+		if (!resp) {
+			dev_err(&res->pdev->dev,
+				"QPLIB: SP: ADD_GID send failed");
+			return -EINVAL;
+		}
+		if (!bnxt_qplib_rcfw_wait_for_resp(rcfw,
+						   le16_to_cpu(req.cookie))) {
+			/* Cmd timed out */
+			dev_err(&res->pdev->dev,
+				"QPLIB: SP: ADD_GID timed out");
+			return -ETIMEDOUT;
+		}
+		if (RCFW_RESP_STATUS(resp) ||
+		    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+			dev_err(&res->pdev->dev, "QPLIB: SP: ADD_GID failed ");
+			dev_err(&res->pdev->dev,
+				"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+				RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+				RCFW_RESP_COOKIE(resp));
+			return -EINVAL;
+		}
+		sgid_tbl->hw_id[free_idx] = le32_to_cpu(resp->xid);
+	}
+	/* Add GID to the sgid_tbl */
+	memcpy(&sgid_tbl->tbl[free_idx], gid, sizeof(*gid));
+	sgid_tbl->active++;
+	dev_dbg(&res->pdev->dev,
+		"QPLIB: SGID added hw_id[0x%x] = 0x%x active = 0x%x",
+		 free_idx, sgid_tbl->hw_id[free_idx], sgid_tbl->active);
+
+	*index = free_idx;
+	/* unlock */
+	return rc;
+}
+
+/* pkeys */
+int bnxt_qplib_get_pkey(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 index,
+			u16 *pkey)
+{
+	if (index == 0xFFFF) {
+		*pkey = 0xFFFF;
+		return 0;
+	}
+	if (index >= pkey_tbl->max) {
+		dev_err(&res->pdev->dev,
+			"QPLIB: Index %d exceeded PKEY table max (%d)",
+			index, pkey_tbl->max);
+		return -EINVAL;
+	}
+	memcpy(pkey, &pkey_tbl->tbl[index], sizeof(*pkey));
+	return 0;
+}
+
 int bnxt_qplib_del_pkey(struct bnxt_qplib_res *res,
 			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 *pkey,
 			bool update)
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
index 7de6600..8e20924 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -78,6 +78,17 @@ struct bnxt_qplib_gid {
 	u8				data[16];
 };
 
+int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
+			struct bnxt_qplib_gid *gid);
+int bnxt_qplib_del_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+			struct bnxt_qplib_gid *gid, bool update);
+int bnxt_qplib_add_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+			struct bnxt_qplib_gid *gid, u8 *mac, u16 vlan_id,
+			bool update, u32 *index);
+int bnxt_qplib_get_pkey(struct bnxt_qplib_res *res,
+			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 index,
+			u16 *pkey);
 int bnxt_qplib_del_pkey(struct bnxt_qplib_res *res,
 			struct bnxt_qplib_pkey_tbl *pkey_tbl, u16 *pkey,
 			bool update);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index c94573b..3417829 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -60,6 +60,22 @@
 #include "bnxt_re_ib_verbs.h"
 #include <rdma/bnxt_re_uverbs_abi.h>
 
+/* Device */
+struct net_device *bnxt_re_get_netdev(struct ib_device *ibdev, u8 port_num)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct net_device *netdev = NULL;
+
+	rcu_read_lock();
+	if (rdev)
+		netdev = rdev->netdev;
+	if (netdev)
+		dev_hold(netdev);
+
+	rcu_read_unlock();
+	return netdev;
+}
+
 int bnxt_re_query_device(struct ib_device *ibdev,
 			 struct ib_device_attr *ib_attr,
 			 struct ib_udata *udata)
@@ -272,6 +288,113 @@ int bnxt_re_get_port_immutable(struct ib_device *ibdev, u8 port_num,
 	immutable->max_mad_size = IB_MGMT_MAD_SIZE;
 	return 0;
 }
+
+int bnxt_re_query_pkey(struct ib_device *ibdev, u8 port_num,
+		       u16 index, u16 *pkey)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+
+	/* Ignore port_num */
+
+	memset(pkey, 0, sizeof(*pkey));
+	return bnxt_qplib_get_pkey(&rdev->qplib_res,
+				   &rdev->qplib_res.pkey_tbl, index, pkey);
+}
+
+int bnxt_re_query_gid(struct ib_device *ibdev, u8 port_num,
+		      int index, union ib_gid *gid)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	int rc = 0;
+
+	/* Ignore port_num */
+	memset(gid, 0, sizeof(*gid));
+	rc = bnxt_qplib_get_sgid(&rdev->qplib_res,
+				 &rdev->qplib_res.sgid_tbl, index,
+				 (struct bnxt_qplib_gid *)gid);
+	return rc;
+}
+
+int bnxt_re_del_gid(struct ib_device *ibdev, u8 port_num,
+		    unsigned int index, void **context)
+{
+	int rc = 0;
+	struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
+
+	/* Delete the entry from the hardware */
+	ctx = *context;
+	if (!ctx)
+		return -EINVAL;
+
+	if (sgid_tbl && sgid_tbl->active) {
+		if (ctx->idx >= sgid_tbl->max)
+			return -EINVAL;
+		ctx->refcnt--;
+		if (!ctx->refcnt) {
+			rc = bnxt_qplib_del_sgid
+					(sgid_tbl,
+					 &sgid_tbl->tbl[ctx->idx], true);
+			if (rc)
+				dev_err(rdev_to_dev(rdev),
+					"Failed to remove GID: %#x", rc);
+			ctx_tbl = sgid_tbl->ctx;
+			ctx_tbl[ctx->idx] = NULL;
+			kfree(ctx);
+		}
+	} else {
+		return -EINVAL;
+	}
+	return rc;
+}
+
+int bnxt_re_add_gid(struct ib_device *ibdev, u8 port_num,
+		    unsigned int index, const union ib_gid *gid,
+		    const struct ib_gid_attr *attr, void **context)
+{
+	int rc;
+	u32 tbl_idx = 0;
+	u16 vlan_id = 0xFFFF;
+	struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
+
+	if ((attr->ndev) && is_vlan_dev(attr->ndev))
+		vlan_id = vlan_dev_vlan_id(attr->ndev);
+
+	rc = bnxt_qplib_add_sgid(sgid_tbl, (struct bnxt_qplib_gid *)gid,
+				 rdev->qplib_res.netdev->dev_addr,
+				 vlan_id, true, &tbl_idx);
+	if (rc == -EALREADY) {
+		ctx_tbl = sgid_tbl->ctx;
+		ctx_tbl[tbl_idx]->refcnt++;
+		*context = ctx_tbl[tbl_idx];
+		return 0;
+	}
+
+	if (rc < 0) {
+		dev_err(rdev_to_dev(rdev), "Failed to add GID: %#x", rc);
+		return rc;
+	}
+
+	ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+	ctx_tbl = sgid_tbl->ctx;
+	ctx->idx = tbl_idx;
+	ctx->refcnt = 1;
+	ctx_tbl[tbl_idx] = ctx;
+
+	return rc;
+}
+
+enum rdma_link_layer bnxt_re_get_link_layer(struct ib_device *ibdev,
+					    u8 port_num)
+{
+	return IB_LINK_LAYER_ETHERNET;
+}
+
 /* Protection Domains */
 int bnxt_re_dealloc_pd(struct ib_pd *ib_pd)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 255a0ad..c10fd63 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -39,6 +39,11 @@
 #ifndef __BNXT_RE_IB_VERBS_H__
 #define __BNXT_RE_IB_VERBS_H__
 
+struct bnxt_re_gid_ctx {
+	u32			idx;
+	u32			refcnt;
+};
+
 struct bnxt_re_pd {
 	struct bnxt_re_dev	*rdev;
 	struct ib_pd		ib_pd;
@@ -54,6 +59,8 @@ struct bnxt_re_ucontext {
 	spinlock_t		sh_lock;	/* protect shpg */
 };
 
+struct net_device *bnxt_re_get_netdev(struct ib_device *ibdev, u8 port_num);
+
 int bnxt_re_query_device(struct ib_device *ibdev,
 			 struct ib_device_attr *ib_attr,
 			 struct ib_udata *udata);
@@ -67,6 +74,17 @@ int bnxt_re_modify_port(struct ib_device *ibdev, u8 port_num,
 			struct ib_port_modify *port_modify);
 int bnxt_re_get_port_immutable(struct ib_device *ibdev, u8 port_num,
 			       struct ib_port_immutable *immutable);
+int bnxt_re_query_pkey(struct ib_device *ibdev, u8 port_num,
+		       u16 index, u16 *pkey);
+int bnxt_re_del_gid(struct ib_device *ibdev, u8 port_num,
+		    unsigned int index, void **context);
+int bnxt_re_add_gid(struct ib_device *ibdev, u8 port_num,
+		    unsigned int index, const union ib_gid *gid,
+		    const struct ib_gid_attr *attr, void **context);
+int bnxt_re_query_gid(struct ib_device *ibdev, u8 port_num,
+		      int index, union ib_gid *gid);
+enum rdma_link_layer bnxt_re_get_link_layer(struct ib_device *ibdev,
+					    u8 port_num);
 struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 			       struct ib_ucontext *context,
 			       struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index b1b5ccee..313798f 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -431,6 +431,13 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->query_port		= bnxt_re_query_port;
 	ibdev->modify_port		= bnxt_re_modify_port;
 	ibdev->get_port_immutable	= bnxt_re_get_port_immutable;
+	ibdev->query_pkey		= bnxt_re_query_pkey;
+	ibdev->query_gid		= bnxt_re_query_gid;
+	ibdev->get_netdev		= bnxt_re_get_netdev;
+	ibdev->add_gid			= bnxt_re_add_gid;
+	ibdev->del_gid			= bnxt_re_del_gid;
+	ibdev->get_link_layer		= bnxt_re_get_link_layer;
+
 	ibdev->alloc_pd			= bnxt_re_alloc_pd;
 	ibdev->dealloc_pd		= bnxt_re_dealloc_pd;
 	ibdev->alloc_ucontext		= bnxt_re_alloc_ucontext;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  10/22] bnxt_re: Support for CQ verbs
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
  2016-12-09  6:48   ` [PATCH V2 07/22] bnxt_re: Support for query and modify device verbs Selvin Xavier
@ 2016-12-09  6:48   ` Selvin Xavier
       [not found]     ` <1481266096-23331-11-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
  2016-12-09  6:48   ` [PATCH V2 22/22] bnxt_re: Add bnxt_re driver build support Selvin Xavier
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, Selvin Xavier, Eddie Wai,
	Devesh Sharma, Somnath Kotur, Sriharsha Basavapatna

This patch implements support for create_cq, destroy_cq and req_notify_cq
verbs.
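
For reference, a minimal sketch (not part of this patch) of how a
kernel consumer would exercise these verbs through the IB core;
demo_create_cq() and its parameters are hypothetical:

	#include <rdma/ib_verbs.h>

	static struct ib_cq *demo_create_cq(struct ib_device *ibdev,
					    ib_comp_handler handler,
					    void *priv)
	{
		struct ib_cq_init_attr attr = { .cqe = 256, .comp_vector = 0 };
		struct ib_cq *cq;

		cq = ib_create_cq(ibdev, handler, NULL, priv, &attr);
		if (IS_ERR(cq))
			return cq;
		/* Lands in bnxt_re_req_notify_cq() with IB_CQ_NEXT_COMP,
		 * i.e. the CQ is armed with DBR_DBR_TYPE_CQ_ARMALL.
		 */
		ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
		return cq;
	}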

Signed-off-by: Eddie Wai <eddie.wai-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Selvin Xavier <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 183 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |  47 ++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 154 ++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  19 +++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   4 +
 include/uapi/rdma/bnxt_re_uverbs_abi.h          |  11 ++
 6 files changed, 418 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
index 9e63032..636306f 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -49,14 +49,17 @@
 #include "bnxt_qplib_sp.h"
 #include "bnxt_qplib_fp.h"
 
+static void bnxt_qplib_arm_cq_enable(struct bnxt_qplib_cq *cq);
 static void bnxt_qplib_service_nq(unsigned long data)
 {
 	struct bnxt_qplib_nq *nq = (struct bnxt_qplib_nq *)data;
 	struct bnxt_qplib_hwq *hwq = &nq->hwq;
 	struct nq_base *nqe, **nq_ptr;
+	int num_cqne_processed = 0;
 	u32 sw_cons, raw_cons;
 	u32 type;
 	int budget = nq->budget;
+	u64 q_handle;
 
 	/* Service the NQ until empty */
 	raw_cons = hwq->cons;
@@ -70,7 +73,23 @@ static void bnxt_qplib_service_nq(unsigned long data)
 		type = le16_to_cpu(nqe->info10_type & NQ_BASE_TYPE_MASK);
 		switch (type) {
 		case NQ_BASE_TYPE_CQ_NOTIFICATION:
+		{
+			struct nq_cn *nqcne = (struct nq_cn *)nqe;
+
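+			/* The driver stored the bnxt_qplib_cq pointer as
+			 * the 64-bit cq_handle at CREATE_CQ time; rebuild
+			 * it from the two little-endian words of the NQE.
+			 */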
+			q_handle = le32_to_cpu(nqcne->cq_handle_low);
+			q_handle |= (u64)le32_to_cpu(nqcne->cq_handle_high)
+						     << 32;
+			bnxt_qplib_arm_cq_enable((struct bnxt_qplib_cq *)
+						 q_handle);
+			if (!nq->cqn_handler(nq, (struct bnxt_qplib_cq *)
+						 q_handle))
+				num_cqne_processed++;
+			else
+				dev_warn(&nq->pdev->dev,
+					 "QPLIB: cqn - type 0x%x not handled",
+					 type);
 			break;
+		}
 		case NQ_BASE_TYPE_DBQ_EVENT:
 			break;
 		default:
@@ -195,3 +214,167 @@ int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq)
 	nq->budget = 8;
 	return 0;
 }
+
+/* CQ */
+
+/* Spinlock must be held */
+static void bnxt_qplib_arm_cq_enable(struct bnxt_qplib_cq *cq)
+{
+	struct dbr_dbr db_msg = { 0 };
+
+	db_msg.type_xid =
+		cpu_to_le32(((cq->id << DBR_DBR_XID_SFT) & DBR_DBR_XID_MASK) |
+			    DBR_DBR_TYPE_CQ_ARMENA);
+	/* Flush memory writes before enabling the CQ */
+	wmb();
+	__iowrite64_copy(cq->dbr_base, &db_msg, sizeof(db_msg) / sizeof(u64));
+}
+
+static void bnxt_qplib_arm_cq(struct bnxt_qplib_cq *cq, u32 arm_type)
+{
+	struct bnxt_qplib_hwq *cq_hwq = &cq->hwq;
+	struct dbr_dbr db_msg = { 0 };
+	u32 sw_cons;
+
+	/* Ring DB */
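+	/* Publish the latest software consumer index along with the arm
+	 * type so HW knows how far the driver has polled before it
+	 * generates the next notification.
+	 */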
+	sw_cons = HWQ_CMP(cq_hwq->cons, cq_hwq);
+	db_msg.index = cpu_to_le32((sw_cons << DBR_DBR_INDEX_SFT) &
+				    DBR_DBR_INDEX_MASK);
+	db_msg.type_xid =
+		cpu_to_le32(((cq->id << DBR_DBR_XID_SFT) & DBR_DBR_XID_MASK) |
+			    arm_type);
+	/* flush memory writes before arming the CQ */
+	wmb();
+	__iowrite64_copy(cq->dpi->dbr, &db_msg, sizeof(db_msg) / sizeof(u64));
+}
+
+int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_create_cq req;
+	struct creq_create_cq_resp *resp;
+	struct bnxt_qplib_pbl *pbl;
+	u16 cmd_flags = 0;
+	int rc;
+
+	cq->hwq.max_elements = cq->max_wqe;
+	rc = bnxt_qplib_alloc_init_hwq(res->pdev, &cq->hwq, cq->sghead,
+				       cq->nmap, &cq->hwq.max_elements,
+				       BNXT_QPLIB_MAX_CQE_ENTRY_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_QUEUE);
+	if (rc)
+		goto exit;
+
+	RCFW_CMD_PREP(req, CREATE_CQ, cmd_flags);
+
+	if (!cq->dpi) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: FP: CREATE_CQ failed due to NULL DPI");
+		rc = -EINVAL;
+		goto fail;
+	}
+	req.dpi = cpu_to_le32(cq->dpi->dpi);
+	req.cq_handle = cpu_to_le64(cq->cq_handle);
+
+	req.cq_size = cpu_to_le32(cq->hwq.max_elements);
+	pbl = &cq->hwq.pbl[PBL_LVL_0];
+	req.pg_size_lvl = cpu_to_le32(
+	    ((cq->hwq.level & CMDQ_CREATE_CQ_LVL_MASK) <<
+						CMDQ_CREATE_CQ_LVL_SFT) |
+	    (pbl->pg_size == ROCE_PG_SIZE_4K ? CMDQ_CREATE_CQ_PG_SIZE_PG_4K :
+	     pbl->pg_size == ROCE_PG_SIZE_8K ? CMDQ_CREATE_CQ_PG_SIZE_PG_8K :
+	     pbl->pg_size == ROCE_PG_SIZE_64K ? CMDQ_CREATE_CQ_PG_SIZE_PG_64K :
+	     pbl->pg_size == ROCE_PG_SIZE_2M ? CMDQ_CREATE_CQ_PG_SIZE_PG_2M :
+	     pbl->pg_size == ROCE_PG_SIZE_8M ? CMDQ_CREATE_CQ_PG_SIZE_PG_8M :
+	     pbl->pg_size == ROCE_PG_SIZE_1G ? CMDQ_CREATE_CQ_PG_SIZE_PG_1G :
+	     CMDQ_CREATE_CQ_PG_SIZE_PG_4K));
+
+	req.pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+
+	req.cq_fco_cnq_id = cpu_to_le16(
+			((cq->cnq_hw_ring_id & CMDQ_CREATE_CQ_CNQ_ID_MASK) <<
+			 CMDQ_CREATE_CQ_CNQ_ID_SFT) | 0);
+
+	resp = (struct creq_create_cq_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_CQ send failed");
+		rc = -EINVAL;
+		goto fail;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_CQ timed out");
+		rc = -ETIMEDOUT;
+		goto fail;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_CQ failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		rc = -EINVAL;
+		goto fail;
+	}
+	cq->id = le32_to_cpu(resp->xid);
+	cq->dbr_base = res->dpi_tbl.dbr_bar_reg_iomem;
+	cq->period = BNXT_QPLIB_QUEUE_START_PERIOD;
+	init_waitqueue_head(&cq->waitq);
+
+	bnxt_qplib_arm_cq_enable(cq);
+	return 0;
+
+fail:
+	bnxt_qplib_free_hwq(res->pdev, &cq->hwq);
+exit:
+	return rc;
+}
+
+int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_destroy_cq req;
+	struct creq_destroy_cq_resp *resp;
+	u16 cmd_flags = 0;
+
+	RCFW_CMD_PREP(req, DESTROY_CQ, cmd_flags);
+
+	req.cq_cid = cpu_to_le32(cq->id);
+	resp = (struct creq_destroy_cq_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_CQ send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_CQ timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_CQ failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	bnxt_qplib_free_hwq(res->pdev, &cq->hwq);
+	return 0;
+}
+
+void bnxt_qplib_req_notify_cq(struct bnxt_qplib_cq *cq, u32 arm_type)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cq->hwq.lock, flags);
+	if (arm_type)
+		bnxt_qplib_arm_cq(cq, arm_type);
+
+	spin_unlock_irqrestore(&cq->hwq.lock, flags);
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
index d5cd39e..1991eaa 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -38,6 +38,49 @@
 
 #ifndef __BNXT_QPLIB_FP_H__
 #define __BNXT_QPLIB_FP_H__
+#define BNXT_QPLIB_MAX_CQE_ENTRY_SIZE	sizeof(struct cq_base)
+
+struct bnxt_qplib_cqe {
+	u8				status;
+	u8				type;
+	u8				opcode;
+	u32				length;
+	u64				wr_id;
+	u32				immdata_or_invrkey;
+	u64				qp_handle;
+	u64				mr_handle;
+	u16				flags;
+	u8				smac[6];
+	u32				src_qp;
+	u16				raweth_qp1_flags;
+	u16				raweth_qp1_errors;
+	u16				raweth_qp1_cfa_code;
+	u32				raweth_qp1_flags2;
+	u32				raweth_qp1_metadata;
+	u8				raweth_qp1_payload_offset;
+	u16				pkey_index;
+};
+
+#define BNXT_QPLIB_QUEUE_START_PERIOD		0x01
+struct bnxt_qplib_cq {
+	struct bnxt_qplib_dpi		*dpi;
+	void __iomem			*dbr_base;
+	u32				max_wqe;
+	u32				id;
+	u16				count;
+	u16				period;
+	struct bnxt_qplib_hwq		hwq;
+	u32				cnq_hw_ring_id;
+	bool				resize_in_progress;
+	struct scatterlist		*sghead;
+	u32				nmap;
+	u64				cq_handle;
+
+#define CQ_RESIZE_WAIT_TIME_MS		500
+	unsigned long			flags;
+#define CQ_FLAGS_RESIZE_IN_PROG		1
+	wait_queue_head_t		waitq;
+};
 
 #define BNXT_QPLIB_MAX_NQE_ENTRY_SIZE	sizeof(struct nq_base)
 
@@ -97,6 +140,10 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
 			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
 					     void *srq,
 					     u8 event));
+int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
+int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
+
+void bnxt_qplib_req_notify_cq(struct bnxt_qplib_cq *cq, u32 arm_type);
 void bnxt_qplib_free_nq(struct bnxt_qplib_nq *nq);
 int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq);
 #endif /* __BNXT_QPLIB_FP_H__ */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 3417829..f316598 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -60,6 +60,16 @@
 #include "bnxt_re_ib_verbs.h"
 #include <rdma/bnxt_re_uverbs_abi.h>
 
+static int bnxt_re_copy_to_udata(struct bnxt_re_dev *rdev, void *data, int len,
+				 struct ib_udata *udata)
+{
+	return ib_copy_to_udata(udata, data, len) ? -EFAULT : 0;
+}
+
 /* Device */
 struct net_device *bnxt_re_get_netdev(struct ib_device *ibdev, u8 port_num)
 {
@@ -492,6 +502,150 @@ struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 	return ERR_PTR(rc);
 }
 
+/* Completion Queues */
+int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
+{
+	struct bnxt_re_cq *cq = to_bnxt_re(ib_cq, struct bnxt_re_cq, ib_cq);
+	struct bnxt_re_dev *rdev = cq->rdev;
+	int rc;
+
+	rc = bnxt_qplib_destroy_cq(&rdev->qplib_res, &cq->qplib_cq);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to destroy HW CQ");
+		return rc;
+	}
+	if (!IS_ERR_OR_NULL(cq->umem))
+		ib_umem_release(cq->umem);
+
+	kfree(cq->cql);
+	kfree(cq);
+	atomic_dec(&rdev->cq_count);
+	rdev->nq.budget--;
+	return 0;
+}
+
+struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
+				const struct ib_cq_init_attr *attr,
+				struct ib_ucontext *context,
+				struct ib_udata *udata)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
+	struct bnxt_re_cq *cq = NULL;
+	int rc, entries;
+	int cqe = attr->cqe;
+
+	/* Validate CQ fields */
+	if (cqe < 1 || cqe > dev_attr->max_cq_wqes) {
+		dev_err(rdev_to_dev(rdev), "Failed to create CQ -max exceeded");
+		return ERR_PTR(-EINVAL);
+	}
+	cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+	if (!cq)
+		return ERR_PTR(-ENOMEM);
+
+	cq->rdev = rdev;
+	cq->qplib_cq.cq_handle = (u64)&cq->qplib_cq;
+
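+	/* One extra CQE is allocated before rounding up so that a
+	 * completely full ring remains distinguishable from an empty
+	 * one (the usual producer/consumer ring convention).
+	 */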
+	entries = roundup_pow_of_two(cqe + 1);
+	if (entries > dev_attr->max_cq_wqes + 1)
+		entries = dev_attr->max_cq_wqes + 1;
+
+	if (context) {
+		struct bnxt_re_cq_req req;
+		struct bnxt_re_ucontext *uctx = to_bnxt_re(context,
+						   struct bnxt_re_ucontext,
+						   ib_uctx);
+		if (ib_copy_from_udata(&req, udata, sizeof(req))) {
+			rc = -EFAULT;
+			goto fail;
+		}
+
+		cq->umem = ib_umem_get(context, req.cq_va,
+				       entries * sizeof(struct cq_base),
+				       IB_ACCESS_LOCAL_WRITE, 1);
+		if (IS_ERR(cq->umem)) {
+			rc = PTR_ERR(cq->umem);
+			goto fail;
+		}
+		cq->qplib_cq.sghead = cq->umem->sg_head.sgl;
+		cq->qplib_cq.nmap = cq->umem->nmap;
+		cq->qplib_cq.dpi = uctx->dpi;
+	} else {
+		cq->max_cql = min_t(u32, entries, MAX_CQL_PER_POLL);
+		cq->cql = kcalloc(cq->max_cql, sizeof(struct bnxt_qplib_cqe),
+				  GFP_KERNEL);
+		if (!cq->cql) {
+			rc = -ENOMEM;
+			goto fail;
+		}
+
+		cq->qplib_cq.dpi = &rdev->dpi_privileged;
+		cq->qplib_cq.sghead = NULL;
+		cq->qplib_cq.nmap = 0;
+	}
+	cq->qplib_cq.max_wqe = entries;
+	cq->qplib_cq.cnq_hw_ring_id = rdev->nq.ring_id;
+
+	rc = bnxt_qplib_create_cq(&rdev->qplib_res, &cq->qplib_cq);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to create HW CQ");
+		goto fail;
+	}
+
+	cq->ib_cq.cqe = entries;
+	cq->cq_period = cq->qplib_cq.period;
+	rdev->nq.budget++;
+
+	atomic_inc(&rdev->cq_count);
+
+	if (context) {
+		struct bnxt_re_cq_resp resp;
+
+		resp.cqid = cq->qplib_cq.id;
+		resp.tail = cq->qplib_cq.hwq.cons;
+		resp.phase = cq->qplib_cq.period;
+		rc = bnxt_re_copy_to_udata(rdev, &resp, sizeof(resp), udata);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev), "Failed to copy CQ udata");
+			bnxt_qplib_destroy_cq(&rdev->qplib_res, &cq->qplib_cq);
+			goto c2fail;
+		}
+	}
+
+	return &cq->ib_cq;
+
+c2fail:
+	if (context && cq->umem && !IS_ERR(cq->umem))
+		ib_umem_release(cq->umem);
+fail:
+	kfree(cq->cql);
+	kfree(cq);
+	return ERR_PTR(rc);
+}
+
+int bnxt_re_req_notify_cq(struct ib_cq *ib_cq,
+			  enum ib_cq_notify_flags ib_cqn_flags)
+{
+	struct bnxt_re_cq *cq = to_bnxt_re(ib_cq, struct bnxt_re_cq, ib_cq);
+	int type = 0;
+
+	/* Trigger on the very next completion */
+	if (ib_cqn_flags & IB_CQ_NEXT_COMP)
+		type = DBR_DBR_TYPE_CQ_ARMALL;
+	/* Trigger on the next solicited completion */
+	else if (ib_cqn_flags & IB_CQ_SOLICITED)
+		type = DBR_DBR_TYPE_CQ_ARMSE;
+
+	bnxt_qplib_req_notify_cq(&cq->qplib_cq, type);
+
+	return 0;
+}
+
 struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev,
 					   struct ib_udata *udata)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index c10fd63..1efb3c8 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -51,6 +51,19 @@ struct bnxt_re_pd {
 	struct bnxt_qplib_dpi	dpi;
 };
 
+struct bnxt_re_cq {
+	struct bnxt_re_dev	*rdev;
+	spinlock_t              cq_lock;	/* protect cq */
+	u16			cq_count;
+	u16			cq_period;
+	struct ib_cq		ib_cq;
+	struct bnxt_qplib_cq	qplib_cq;
+	struct bnxt_qplib_cqe	*cql;
+#define MAX_CQL_PER_POLL	1024
+	u32			max_cql;
+	struct ib_umem		*umem;
+};
+
 struct bnxt_re_ucontext {
 	struct bnxt_re_dev	*rdev;
 	struct ib_ucontext	ib_uctx;
@@ -89,6 +102,12 @@ struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 			       struct ib_ucontext *context,
 			       struct ib_udata *udata);
 int bnxt_re_dealloc_pd(struct ib_pd *pd);
+struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
+				const struct ib_cq_init_attr *attr,
+				struct ib_ucontext *context,
+				struct ib_udata *udata);
+int bnxt_re_destroy_cq(struct ib_cq *cq);
+int bnxt_re_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags);
 struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev,
 					   struct ib_udata *udata);
 int bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 313798f..2e20e2c 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -440,6 +440,10 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 
 	ibdev->alloc_pd			= bnxt_re_alloc_pd;
 	ibdev->dealloc_pd		= bnxt_re_dealloc_pd;
+
+	ibdev->create_cq		= bnxt_re_create_cq;
+	ibdev->destroy_cq		= bnxt_re_destroy_cq;
+	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
 	ibdev->alloc_ucontext		= bnxt_re_alloc_ucontext;
 	ibdev->dealloc_ucontext		= bnxt_re_dealloc_ucontext;
 	ibdev->mmap			= bnxt_re_mmap;
diff --git a/include/uapi/rdma/bnxt_re_uverbs_abi.h b/include/uapi/rdma/bnxt_re_uverbs_abi.h
index 3815184..9080cf8 100644
--- a/include/uapi/rdma/bnxt_re_uverbs_abi.h
+++ b/include/uapi/rdma/bnxt_re_uverbs_abi.h
@@ -55,4 +55,15 @@ struct bnxt_re_pd_resp {
 	__u64 dbr;
 } __packed;
 
+struct bnxt_re_cq_req {
+	__u64 cq_va;
+	__u64 cq_handle;
+} __packed;
+
+struct bnxt_re_cq_resp {
+	__u32 cqid;
+	__u32 tail;
+	__u32 phase;
+} __packed;
+
 #endif /* __BNXT_RE_UVERBS_ABI_H__*/
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  11/22] bnxt_re: Support for AH verbs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (7 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 09/22] bnxt_re: Support for GID " Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 12/22] bnxt_re: Support memory registration verbs Selvin Xavier
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements support for create_ah, destroy_ah, query_ah
and modify_ah verbs.
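
For reference, a minimal sketch (not part of this patch) of a kernel
consumer creating an AH; demo_create_ah() and its parameters are
hypothetical, and GRH is mandatory here to satisfy the IB_AH_GRH
check in bnxt_re_create_ah():

	#include <rdma/ib_verbs.h>

	static struct ib_ah *demo_create_ah(struct ib_pd *pd,
					    const union ib_gid *dgid)
	{
		struct ib_ah_attr attr = {
			.ah_flags	= IB_AH_GRH,	/* required for RoCE */
			.port_num	= 1,		/* bnxt_re is single-port */
			.grh = {
				.dgid		= *dgid,
				.sgid_index	= 0,
				.hop_limit	= 64,
			},
		};

		return ib_create_ah(pd, &attr);
	}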

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c    |  94 +++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h    |  18 +++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 147 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  11 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   4 +
 include/uapi/rdma/bnxt_re_uverbs_abi.h          |   7 ++
 6 files changed, 281 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
index cf9dd40..3246e573 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
@@ -416,3 +416,97 @@ int bnxt_qplib_add_pkey(struct bnxt_qplib_res *res,
 	/* unlock */
 	return rc;
 }
+
+/* AH */
+int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_create_ah req;
+	struct creq_create_ah_resp *resp;
+	u16 cmd_flags = 0;
+	u32 temp32[4];
+	u16 temp16[3];
+
+	RCFW_CMD_PREP(req, CREATE_AH, cmd_flags);
+
+	memcpy(temp32, ah->dgid.data, sizeof(struct bnxt_qplib_gid));
+	req.dgid[0] = cpu_to_le32(temp32[0]);
+	req.dgid[1] = cpu_to_le32(temp32[1]);
+	req.dgid[2] = cpu_to_le32(temp32[2]);
+	req.dgid[3] = cpu_to_le32(temp32[3]);
+
+	req.type = ah->nw_type;
+	req.hop_limit = ah->hop_limit;
+	req.sgid_index = cpu_to_le16(res->sgid_tbl.hw_id[ah->sgid_index]);
+	req.dest_vlan_id_flow_label = cpu_to_le32((ah->flow_label &
+					CMDQ_CREATE_AH_FLOW_LABEL_MASK) |
+					CMDQ_CREATE_AH_DEST_VLAN_ID_MASK);
+	req.pd_id = cpu_to_le32(ah->pd->id);
+	req.traffic_class = ah->traffic_class;
+
+	/* MAC in network format */
+	memcpy(temp16, ah->dmac, 6);
+	req.dest_mac[0] = cpu_to_le16(temp16[0]);
+	req.dest_mac[1] = cpu_to_le16(temp16[1]);
+	req.dest_mac[2] = cpu_to_le16(temp16[2]);
+
+	resp = (struct creq_create_ah_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 1);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: CREATE_AH send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_block_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: CREATE_AH timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: CREATE_AH failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	ah->id = le32_to_cpu(resp->xid);
+	return 0;
+}
+
+int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_destroy_ah req;
+	struct creq_destroy_ah_resp *resp;
+	u16 cmd_flags = 0;
+
+	/* Clean up the AH table in the device */
+	RCFW_CMD_PREP(req, DESTROY_AH, cmd_flags);
+
+	req.ah_cid = cpu_to_le32(ah->id);
+
+	resp = (struct creq_destroy_ah_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 1);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: DESTROY_AH send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_block_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: DESTROY_AH timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: DESTROY_AH failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
index 8e20924..26eac17 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -78,6 +78,22 @@ struct bnxt_qplib_gid {
 	u8				data[16];
 };
 
+struct bnxt_qplib_ah {
+	struct bnxt_qplib_gid		dgid;
+	struct bnxt_qplib_pd		*pd;
+	u32				id;
+	u8				sgid_index;
+	/* For Query AH if the hw table and SW table are different */
+	u8				host_sgid_index;
+	u8				traffic_class;
+	u32				flow_label;
+	u8				hop_limit;
+	u8				sl;
+	u8				dmac[6];
+	u16				vlan_id;
+	u8				nw_type;
+};
+
 int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
 			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
 			struct bnxt_qplib_gid *gid);
@@ -97,4 +113,6 @@ int bnxt_qplib_add_pkey(struct bnxt_qplib_res *res,
 			bool update);
 int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 			    struct bnxt_qplib_dev_attr *attr);
+int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah);
+int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah);
 #endif /* __BNXT_QPLIB_SP_H__*/
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index f316598..78824bc 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -502,6 +502,153 @@ struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 	return ERR_PTR(rc);
 }
 
+/* Address Handles */
+int bnxt_re_destroy_ah(struct ib_ah *ib_ah)
+{
+	struct bnxt_re_ah *ah = to_bnxt_re(ib_ah, struct bnxt_re_ah, ib_ah);
+	struct bnxt_re_dev *rdev = ah->rdev;
+	int rc;
+
+	rc = bnxt_qplib_destroy_ah(&rdev->qplib_res, &ah->qplib_ah);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to destroy HW AH");
+		return rc;
+	}
+	kfree(ah);
+	return 0;
+}
+
+struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
+				struct ib_ah_attr *ah_attr)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_ah *ah;
+	int rc;
+	int ifindex = 0;
+	u16 vlan_tag = 0xffff;
+	u8 nw_type;
+	struct ib_gid_attr sgid_attr;
+
+	if (!(ah_attr->ah_flags & IB_AH_GRH)) {
+		dev_err(rdev_to_dev(rdev), "Failed to alloc AH: GRH not set");
+		return ERR_PTR(-EINVAL);
+	}
+	ah = kzalloc(sizeof(*ah), GFP_ATOMIC);
+	if (!ah)
+		return ERR_PTR(-ENOMEM);
+
+	ah->rdev = rdev;
+	ah->qplib_ah.pd = &pd->qplib_pd;
+
+	/* Supply the configuration for the HW */
+	memcpy(ah->qplib_ah.dgid.data, ah_attr->grh.dgid.raw,
+	       sizeof(union ib_gid));
+	/*
+	 * If RoCE V2 is enabled, the stack holds two entries for each
+	 * GID (one per network type). Avoid the duplicate entry in HW
+	 * by dividing the GID index by 2 for RoCE V2.
+	 */
+	ah->qplib_ah.sgid_index = ah_attr->grh.sgid_index / 2;
+	ah->qplib_ah.host_sgid_index = ah_attr->grh.sgid_index;
+	ah->qplib_ah.traffic_class = ah_attr->grh.traffic_class;
+	ah->qplib_ah.flow_label = ah_attr->grh.flow_label;
+	ah->qplib_ah.hop_limit = ah_attr->grh.hop_limit;
+	ah->qplib_ah.sl = ah_attr->sl;
+	if (ib_pd->uobject &&
+	    !rdma_is_multicast_addr((struct in6_addr *)
+				    ah_attr->grh.dgid.raw) &&
+	    !rdma_link_local_addr((struct in6_addr *)
+				  ah_attr->grh.dgid.raw)) {
+		union ib_gid sgid;
+
+		rc = ib_get_cached_gid(&rdev->ibdev, 1,
+				       ah_attr->grh.sgid_index, &sgid,
+				       &sgid_attr);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev),
+				"Failed to query gid at index %d",
+				ah_attr->grh.sgid_index);
+			goto fail;
+		}
+		if (sgid_attr.ndev) {
+			if (is_vlan_dev(sgid_attr.ndev))
+				vlan_tag = vlan_dev_vlan_id(sgid_attr.ndev);
+			/* Cache ifindex before dropping the ndev reference */
+			ifindex = sgid_attr.ndev->ifindex;
+			dev_put(sgid_attr.ndev);
+		}
+		/* Get network header type for this GID */
+		nw_type = ib_gid_to_network_type(sgid_attr.gid_type, &sgid);
+		switch (nw_type) {
+		case RDMA_NETWORK_IPV4:
+			ah->qplib_ah.nw_type = CMDQ_CREATE_AH_TYPE_V2IPV4;
+			break;
+		case RDMA_NETWORK_IPV6:
+			ah->qplib_ah.nw_type = CMDQ_CREATE_AH_TYPE_V2IPV6;
+			break;
+		default:
+			ah->qplib_ah.nw_type = CMDQ_CREATE_AH_TYPE_V1;
+			break;
+		}
+		rc = rdma_addr_find_l2_eth_by_grh(&sgid, &ah_attr->grh.dgid,
+						  ah_attr->dmac, &vlan_tag,
+						  &ifindex, NULL);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev), "Failed to get dmac\n");
+			goto fail;
+		}
+	}
+
+	memcpy(ah->qplib_ah.dmac, ah_attr->dmac, ETH_ALEN);
+	rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to allocate HW AH");
+		goto fail;
+	}
+
+	/* Write AVID to shared page. */
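+	/* create_ah carries no ib_udata in this kernel, so the AH id
+	 * (AVID) is handed to the user library through the shared page
+	 * that was mapped at ucontext allocation time.
+	 */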
+	if (ib_pd->uobject) {
+		struct ib_ucontext *ib_uctx = ib_pd->uobject->context;
+		struct bnxt_re_ucontext *uctx;
+		unsigned long flag;
+		u32 *wrptr;
+
+		uctx = to_bnxt_re(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+		spin_lock_irqsave(&uctx->sh_lock, flag);
+		wrptr = (u32 *)(uctx->shpg + BNXT_RE_AVID_OFFT);
+		*wrptr = ah->qplib_ah.id;
+		wmb(); /* make sure cache is updated. */
+		spin_unlock_irqrestore(&uctx->sh_lock, flag);
+	}
+
+	return &ah->ib_ah;
+
+fail:
+	kfree(ah);
+	return ERR_PTR(rc);
+}
+
+int bnxt_re_modify_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
+{
+	return 0;
+}
+
+int bnxt_re_query_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
+{
+	struct bnxt_re_ah *ah = to_bnxt_re(ib_ah, struct bnxt_re_ah, ib_ah);
+
+	memcpy(ah_attr->grh.dgid.raw, ah->qplib_ah.dgid.data,
+	       sizeof(union ib_gid));
+	ah_attr->grh.sgid_index = ah->qplib_ah.host_sgid_index;
+	ah_attr->grh.traffic_class = ah->qplib_ah.traffic_class;
+	ah_attr->sl = ah->qplib_ah.sl;
+	memcpy(ah_attr->dmac, ah->qplib_ah.dmac, ETH_ALEN);
+	ah_attr->ah_flags = IB_AH_GRH;
+	ah_attr->port_num = 1;
+	ah_attr->static_rate = 0;
+	return 0;
+}
+
 /* Completion Queues */
 int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 1efb3c8..14c9e02 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -51,6 +51,12 @@ struct bnxt_re_pd {
 	struct bnxt_qplib_dpi	dpi;
 };
 
+struct bnxt_re_ah {
+	struct bnxt_re_dev	*rdev;
+	struct ib_ah		ib_ah;
+	struct bnxt_qplib_ah	qplib_ah;
+};
+
 struct bnxt_re_cq {
 	struct bnxt_re_dev	*rdev;
 	spinlock_t              cq_lock;	/* protect cq */
@@ -102,6 +108,11 @@ struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
 			       struct ib_ucontext *context,
 			       struct ib_udata *udata);
 int bnxt_re_dealloc_pd(struct ib_pd *pd);
+struct ib_ah *bnxt_re_create_ah(struct ib_pd *pd,
+				struct ib_ah_attr *ah_attr);
+int bnxt_re_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
+int bnxt_re_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
+int bnxt_re_destroy_ah(struct ib_ah *ah);
 struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 				const struct ib_cq_init_attr *attr,
 				struct ib_ucontext *context,
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 2e20e2c..e998850 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -441,6 +441,10 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->alloc_pd			= bnxt_re_alloc_pd;
 	ibdev->dealloc_pd		= bnxt_re_dealloc_pd;
 
+	ibdev->create_ah		= bnxt_re_create_ah;
+	ibdev->modify_ah		= bnxt_re_modify_ah;
+	ibdev->query_ah			= bnxt_re_query_ah;
+	ibdev->destroy_ah		= bnxt_re_destroy_ah;
 	ibdev->create_cq		= bnxt_re_create_cq;
 	ibdev->destroy_cq		= bnxt_re_destroy_cq;
 	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
diff --git a/include/uapi/rdma/bnxt_re_uverbs_abi.h b/include/uapi/rdma/bnxt_re_uverbs_abi.h
index 9080cf8..5444eff 100644
--- a/include/uapi/rdma/bnxt_re_uverbs_abi.h
+++ b/include/uapi/rdma/bnxt_re_uverbs_abi.h
@@ -66,4 +66,11 @@ struct bnxt_re_cq_resp {
 	__u32 phase;
 } __packed;
 
+enum bnxt_re_shpg_offt {
+	BNXT_RE_BEG_RESV_OFFT	= 0x00,
+	BNXT_RE_AVID_OFFT	= 0x10,
+	BNXT_RE_AVID_SIZE	= 0x04,
+	BNXT_RE_END_RESV_OFFT	= 0xFF0
+};
+
 #endif /* __BNXT_RE_UVERBS_ABI_H__*/
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  12/22] bnxt_re: Support memory registration verbs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (8 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 11/22] bnxt_re: Support for AH verbs Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 13/22] bnxt_re: Support QP verbs Selvin Xavier
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements the kernel and user memory region registration
supported by the bnxt_re driver, including user MR, FRMR, FMR and
DMA MR support.
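
For reference, a minimal sketch (not part of this patch) of a user
space registration that reaches bnxt_re_reg_user_mr() via uverbs;
demo_reg_mr() and DEMO_BUF_SIZE are hypothetical:

	#include <stdlib.h>
	#include <infiniband/verbs.h>

	#define DEMO_BUF_SIZE	(64 * 1024)

	static struct ibv_mr *demo_reg_mr(struct ibv_pd *pd)
	{
		void *buf = malloc(DEMO_BUF_SIZE);

		if (!buf)
			return NULL;
		return ibv_reg_mr(pd, buf, DEMO_BUF_SIZE,
				  IBV_ACCESS_LOCAL_WRITE |
				  IBV_ACCESS_REMOTE_READ |
				  IBV_ACCESS_REMOTE_WRITE);
	}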

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c    | 324 ++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h    |  41 +++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 375 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  44 +++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |  11 +
 5 files changed, 795 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
index 3246e573..ce752d3 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
@@ -510,3 +510,327 @@ int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah)
 	}
 	return 0;
 }
+
+/* MRW */
+int bnxt_qplib_free_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_deallocate_key req;
+	struct creq_deallocate_key_resp *resp;
+	u16 cmd_flags = 0;
+
+	if (mrw->lkey == BNXT_QPLIB_RSVD_LKEY) {
+		dev_info(&res->pdev->dev,
+			 "QPLIB: SP: Free a reserved lkey MRW");
+		return 0;
+	}
+
+	RCFW_CMD_PREP(req, DEALLOCATE_KEY, cmd_flags);
+
+	req.mrw_flags = mrw->type;
+
+	if ((mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE1)  ||
+	    (mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2A) ||
+	    (mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2B))
+		req.key = cpu_to_le32(mrw->rkey);
+	else
+		req.key = cpu_to_le32(mrw->lkey);
+
+	resp = (struct creq_deallocate_key_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&res->pdev->dev, "QPLIB: SP: FREE_MR send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&res->pdev->dev, "QPLIB: SP: FREE_MR timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&res->pdev->dev, "QPLIB: SP: FREE_MR failed ");
+		dev_err(&res->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	/* Free the qplib's MRW memory */
+	if (mrw->hwq.max_elements)
+		bnxt_qplib_free_hwq(res->pdev, &mrw->hwq);
+
+	return 0;
+}
+
+int bnxt_qplib_alloc_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_allocate_mrw req;
+	struct creq_allocate_mrw_resp *resp;
+	u16 cmd_flags = 0;
+
+	RCFW_CMD_PREP(req, ALLOCATE_MRW, cmd_flags);
+
+	req.pd_id = cpu_to_le32(mrw->pd->id);
+	req.mrw_flags = mrw->type;
+	if ((mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR &&
+	     mrw->flags & BNXT_QPLIB_FR_PMR) ||
+	    mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2A ||
+	    mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2B)
+		req.access = CMDQ_ALLOCATE_MRW_ACCESS_CONSUMER_OWNED_KEY;
+	req.mrw_handle = cpu_to_le64((u64)mrw);
+
+	resp = (struct creq_allocate_mrw_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: ALLOC_MRW send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: ALLOC_MRW timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: ALLOC_MRW failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	if ((mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE1)  ||
+	    (mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2A) ||
+	    (mrw->type == CMDQ_ALLOCATE_MRW_MRW_FLAGS_MW_TYPE2B))
+		mrw->rkey = le32_to_cpu(resp->xid);
+	else
+		mrw->lkey = le32_to_cpu(resp->xid);
+	return 0;
+}
+
+int bnxt_qplib_dereg_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw,
+			 bool block)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_deregister_mr req;
+	struct creq_deregister_mr_resp *resp;
+	u16 cmd_flags = 0;
+	int rc;
+
+	RCFW_CMD_PREP(req, DEREGISTER_MR, cmd_flags);
+
+	req.lkey = cpu_to_le32(mrw->lkey);
+	resp = (struct creq_deregister_mr_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, block);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: DEREG_MR send failed");
+		return -EINVAL;
+	}
+	if (block)
+		rc = bnxt_qplib_rcfw_block_for_resp(rcfw,
+						    le16_to_cpu(req.cookie));
+	else
+		rc = bnxt_qplib_rcfw_wait_for_resp(rcfw,
+						   le16_to_cpu(req.cookie));
+	if (!rc) {
+		/* Cmd timed out */
+		dev_err(&res->pdev->dev, "QPLIB: SP: DEREG_MR timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: SP: DEREG_MR failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+
+	/* Free the qplib's MR memory */
+	if (mrw->hwq.max_elements) {
+		mrw->va = 0;
+		mrw->total_size = 0;
+		bnxt_qplib_free_hwq(res->pdev, &mrw->hwq);
+	}
+
+	return 0;
+}
+
+int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+		      u64 *pbl_tbl, int num_pbls, bool block)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_register_mr req;
+	struct creq_register_mr_resp *resp;
+	u16 cmd_flags = 0, level;
+	int pg_ptrs, pages, i, rc;
+	dma_addr_t **pbl_ptr;
+	u32 pg_size;
+
+	if (num_pbls) {
+		pg_ptrs = roundup_pow_of_two(num_pbls);
+		pages = pg_ptrs >> MAX_PBL_LVL_1_PGS_SHIFT;
+		if (!pages)
+			pages++;
+
+		if (pages > MAX_PBL_LVL_1_PGS) {
+			dev_err(&res->pdev->dev, "QPLIB: SP: Reg MR pages ");
+			dev_err(&res->pdev->dev,
+				"requested (0x%x) exceeded max (0x%x)",
+				pages, MAX_PBL_LVL_1_PGS);
+			return -ENOMEM;
+		}
+		/* Free the hwq if it already exists; this must be a re-reg */
+		if (mr->hwq.max_elements)
+			bnxt_qplib_free_hwq(res->pdev, &mr->hwq);
+
+		mr->hwq.max_elements = pages;
+		rc = bnxt_qplib_alloc_init_hwq(res->pdev, &mr->hwq, NULL, 0,
+					       &mr->hwq.max_elements,
+					       PAGE_SIZE, 0, PAGE_SIZE,
+					       HWQ_TYPE_CTX);
+		if (rc) {
+			dev_err(&res->pdev->dev,
+				"SP: Reg MR memory allocation failed");
+			return -ENOMEM;
+		}
+		/* Write to the hwq */
+		pbl_ptr = (u64 **)mr->hwq.pbl_ptr;
+		for (i = 0; i < num_pbls; i++)
+			pbl_ptr[PTR_PG(i)][PTR_IDX(i)] =
+				(pbl_tbl[i] & PAGE_MASK) | PTU_PTE_VALID;
+	}
+
+	RCFW_CMD_PREP(req, REGISTER_MR, cmd_flags);
+
+	/* Configure the request */
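+	/* hwq.level == PBL_LVL_MAX marks an MR without a page table
+	 * (the DMA MR case); tell HW level 0 with a NULL PBL.
+	 */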
+	if (mr->hwq.level == PBL_LVL_MAX) {
+		level = 0;
+		req.pbl = 0;
+		pg_size = PAGE_SIZE;
+	} else {
+		level = mr->hwq.level + 1;
+		req.pbl = cpu_to_le64(mr->hwq.pbl[PBL_LVL_0].pg_map_arr[0]);
+		pg_size = mr->hwq.pbl[PBL_LVL_0].pg_size;
+	}
+	req.log2_pg_size_lvl = (level << CMDQ_REGISTER_MR_LVL_SFT) |
+			       ((ilog2(pg_size) <<
+				 CMDQ_REGISTER_MR_LOG2_PG_SIZE_SFT) &
+				CMDQ_REGISTER_MR_LOG2_PG_SIZE_MASK);
+	req.access = (mr->flags & 0xFFFF);
+	req.va = cpu_to_le64(mr->va);
+	req.key = cpu_to_le32(mr->lkey);
+	req.mr_size = cpu_to_le64(mr->total_size);
+
+	resp = (struct creq_register_mr_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, block);
+	if (!resp) {
+		dev_err(&res->pdev->dev, "SP: REG_MR send failed");
+		rc = -EINVAL;
+		goto fail;
+	}
+	if (block)
+		rc = bnxt_qplib_rcfw_block_for_resp(rcfw,
+						    le16_to_cpu(req.cookie));
+	else
+		rc = bnxt_qplib_rcfw_wait_for_resp(rcfw,
+						   le16_to_cpu(req.cookie));
+	if (!rc) {
+		/* Cmd timed out */
+		dev_err(&res->pdev->dev, "SP: REG_MR timed out");
+		rc = -ETIMEDOUT;
+		goto fail;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&res->pdev->dev, "QPLIB: SP: REG_MR failed ");
+		dev_err(&res->pdev->dev,
+			"QPLIB: SP: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		rc = -EINVAL;
+		goto fail;
+	}
+	return 0;
+
+fail:
+	if (mr->hwq.max_elements)
+		bnxt_qplib_free_hwq(res->pdev, &mr->hwq);
+	return rc;
+}
+
+int bnxt_qplib_alloc_fast_reg_page_list(struct bnxt_qplib_res *res,
+					struct bnxt_qplib_frpl *frpl,
+					int max_pg_ptrs)
+{
+	int pg_ptrs, pages, rc;
+
+	/* Re-calculate the max to fit the HWQ allocation model */
+	pg_ptrs = roundup_pow_of_two(max_pg_ptrs);
+	pages = pg_ptrs >> MAX_PBL_LVL_1_PGS_SHIFT;
+	if (!pages)
+		pages++;
+
+	if (pages > MAX_PBL_LVL_1_PGS)
+		return -ENOMEM;
+
+	frpl->hwq.max_elements = pages;
+	rc = bnxt_qplib_alloc_init_hwq(res->pdev, &frpl->hwq, NULL, 0,
+				       &frpl->hwq.max_elements, PAGE_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_CTX);
+	if (!rc)
+		frpl->max_pg_ptrs = pg_ptrs;
+
+	return rc;
+}
+
+int bnxt_qplib_free_fast_reg_page_list(struct bnxt_qplib_res *res,
+				       struct bnxt_qplib_frpl *frpl)
+{
+	bnxt_qplib_free_hwq(res->pdev, &frpl->hwq);
+	return 0;
+}
+
+int bnxt_qplib_map_tc2cos(struct bnxt_qplib_res *res, u16 *cids)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_map_tc_to_cos req;
+	struct creq_map_tc_to_cos_resp *resp;
+	u16 cmd_flags = 0;
+	int tleft;
+
+	RCFW_CMD_PREP(req, MAP_TC_TO_COS, cmd_flags);
+	req.cos0 = cids[0];
+	req.cos1 = cids[1];
+
+	resp = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, NULL, 0);
+	if (!resp) {
+		dev_err(&res->pdev->dev, "QPLIB: SP: MAP_TC2COS send failed");
+		return -EINVAL;
+	}
+
+	tleft = bnxt_qplib_rcfw_block_for_resp(rcfw, le16_to_cpu(req.cookie));
+	if (!tleft) {
+		dev_err(&res->pdev->dev, "QPLIB: SP: MAP_TC2COS timed out");
+		return -ETIMEDOUT;
+	}
+
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&res->pdev->dev, "QPLIB: SP: MAP_TC2COS failed ");
+		dev_err(&res->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
index 26eac17..3358f6d 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -94,6 +94,34 @@ struct bnxt_qplib_ah {
 	u8				nw_type;
 };
 
+struct bnxt_qplib_mrw {
+	struct bnxt_qplib_pd		*pd;
+	int				type;
+	u32				flags;
+#define BNXT_QPLIB_FR_PMR		0x80000000
+	u32				lkey;
+	u32				rkey;
+#define BNXT_QPLIB_RSVD_LKEY		0xFFFFFFFF
+	u64				va;
+	u64				total_size;
+	u32				npages;
+	u64				mr_handle;
+	struct bnxt_qplib_hwq		hwq;
+};
+
+struct bnxt_qplib_frpl {
+	int				max_pg_ptrs;
+	struct bnxt_qplib_hwq		hwq;
+};
+
+#define BNXT_QPLIB_ACCESS_LOCAL_WRITE	BIT(0)
+#define BNXT_QPLIB_ACCESS_REMOTE_READ	BIT(1)
+#define BNXT_QPLIB_ACCESS_REMOTE_WRITE	BIT(2)
+#define BNXT_QPLIB_ACCESS_REMOTE_ATOMIC	BIT(3)
+#define BNXT_QPLIB_ACCESS_MW_BIND	BIT(4)
+#define BNXT_QPLIB_ACCESS_ZERO_BASED	BIT(5)
+#define BNXT_QPLIB_ACCESS_ON_DEMAND	BIT(6)
+
 int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
 			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
 			struct bnxt_qplib_gid *gid);
@@ -115,4 +143,17 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 			    struct bnxt_qplib_dev_attr *attr);
 int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah);
 int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah);
+int bnxt_qplib_alloc_mrw(struct bnxt_qplib_res *res,
+			 struct bnxt_qplib_mrw *mrw);
+int bnxt_qplib_dereg_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw,
+			 bool block);
+int bnxt_qplib_reg_mr(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr,
+		      u64 *pbl_tbl, int num_pbls, bool block);
+int bnxt_qplib_free_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mr);
+int bnxt_qplib_alloc_fast_reg_mr(struct bnxt_qplib_res *res,
+				 struct bnxt_qplib_mrw *mr, int max);
+int bnxt_qplib_alloc_fast_reg_page_list(struct bnxt_qplib_res *res,
+					struct bnxt_qplib_frpl *frpl, int max);
+int bnxt_qplib_free_fast_reg_page_list(struct bnxt_qplib_res *res,
+				       struct bnxt_qplib_frpl *frpl);
 #endif /* __BNXT_QPLIB_SP_H__*/
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 78824bc..5e41317 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -649,6 +649,47 @@ int bnxt_re_query_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
 	return 0;
 }
 
+static int __from_ib_access_flags(int iflags)
+{
+	int qflags = 0;
+
+	if (iflags & IB_ACCESS_LOCAL_WRITE)
+		qflags |= BNXT_QPLIB_ACCESS_LOCAL_WRITE;
+	if (iflags & IB_ACCESS_REMOTE_READ)
+		qflags |= BNXT_QPLIB_ACCESS_REMOTE_READ;
+	if (iflags & IB_ACCESS_REMOTE_WRITE)
+		qflags |= BNXT_QPLIB_ACCESS_REMOTE_WRITE;
+	if (iflags & IB_ACCESS_REMOTE_ATOMIC)
+		qflags |= BNXT_QPLIB_ACCESS_REMOTE_ATOMIC;
+	if (iflags & IB_ACCESS_MW_BIND)
+		qflags |= BNXT_QPLIB_ACCESS_MW_BIND;
+	if (iflags & IB_ZERO_BASED)
+		qflags |= BNXT_QPLIB_ACCESS_ZERO_BASED;
+	if (iflags & IB_ACCESS_ON_DEMAND)
+		qflags |= BNXT_QPLIB_ACCESS_ON_DEMAND;
+	return qflags;
+}
+
+static enum ib_access_flags __to_ib_access_flags(int qflags)
+{
+	enum ib_access_flags iflags = 0;
+
+	if (qflags & BNXT_QPLIB_ACCESS_LOCAL_WRITE)
+		iflags |= IB_ACCESS_LOCAL_WRITE;
+	if (qflags & BNXT_QPLIB_ACCESS_REMOTE_WRITE)
+		iflags |= IB_ACCESS_REMOTE_WRITE;
+	if (qflags & BNXT_QPLIB_ACCESS_REMOTE_READ)
+		iflags |= IB_ACCESS_REMOTE_READ;
+	if (qflags & BNXT_QPLIB_ACCESS_REMOTE_ATOMIC)
+		iflags |= IB_ACCESS_REMOTE_ATOMIC;
+	if (qflags & BNXT_QPLIB_ACCESS_MW_BIND)
+		iflags |= IB_ACCESS_MW_BIND;
+	if (qflags & BNXT_QPLIB_ACCESS_ZERO_BASED)
+		iflags |= IB_ZERO_BASED;
+	if (qflags & BNXT_QPLIB_ACCESS_ON_DEMAND)
+		iflags |= IB_ACCESS_ON_DEMAND;
+	return iflags;
+}
+
 /* Completion Queues */
 int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
 {
@@ -793,6 +834,340 @@ int bnxt_re_req_notify_cq(struct ib_cq *ib_cq,
 	return 0;
 }
 
+/* Memory Regions */
+struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *ib_pd, int mr_access_flags)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_mr *mr;
+	u64 pbl = 0;
+	int rc;
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	mr->rdev = rdev;
+	mr->qplib_mr.pd = &pd->qplib_pd;
+	mr->qplib_mr.flags = __from_ib_access_flags(mr_access_flags);
+	mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR;
+
+	/* Allocate and register 0 as the address */
+	rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+	if (rc)
+		goto fail;
+
+	mr->qplib_mr.hwq.level = PBL_LVL_MAX;
+	mr->qplib_mr.total_size = -1; /* Infinite length */
+	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, &pbl, 0, false);
+	if (rc)
+		goto fail_mr;
+
+	mr->ib_mr.lkey = mr->qplib_mr.lkey;
+	if (mr_access_flags & (IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_READ |
+			       IB_ACCESS_REMOTE_ATOMIC))
+		mr->ib_mr.rkey = mr->ib_mr.lkey;
+	atomic_inc(&rdev->mr_count);
+
+	return &mr->ib_mr;
+
+fail_mr:
+	bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
+fail:
+	kfree(mr);
+	return ERR_PTR(rc);
+}
+
+int bnxt_re_dereg_mr(struct ib_mr *ib_mr)
+{
+	struct bnxt_re_mr *mr = to_bnxt_re(ib_mr, struct bnxt_re_mr, ib_mr);
+	struct bnxt_re_dev *rdev = mr->rdev;
+	int rc = 0;
+
+	if (mr->npages && mr->pages) {
+		rc = bnxt_qplib_free_fast_reg_page_list(&rdev->qplib_res,
+							&mr->qplib_frpl);
+		kfree(mr->pages);
+		mr->npages = 0;
+		mr->pages = NULL;
+	}
+	rc = bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
+
+	if (!IS_ERR_OR_NULL(mr->ib_umem))
+		ib_umem_release(mr->ib_umem);
+
+	kfree(mr);
+	atomic_dec(&rdev->mr_count);
+	return rc;
+}
+
+static int bnxt_re_set_page(struct ib_mr *ib_mr, u64 addr)
+{
+	struct bnxt_re_mr *mr = to_bnxt_re(ib_mr, struct bnxt_re_mr, ib_mr);
+
+	if (unlikely(mr->npages == mr->qplib_frpl.max_pg_ptrs))
+		return -ENOMEM;
+
+	mr->pages[mr->npages++] = addr;
+	return 0;
+}
+
+int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct scatterlist *sg, int sg_nents,
+		      unsigned int *sg_offset)
+{
+	struct bnxt_re_mr *mr = to_bnxt_re(ib_mr, struct bnxt_re_mr, ib_mr);
+
+	mr->npages = 0;
+	return ib_sg_to_pages(ib_mr, sg, sg_nents, sg_offset, bnxt_re_set_page);
+}
+
+struct ib_mr *bnxt_re_alloc_mr(struct ib_pd *ib_pd, enum ib_mr_type type,
+			       u32 max_num_sg)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_mr *mr = NULL;
+	int rc;
+
+	if (type != IB_MR_TYPE_MEM_REG) {
+		dev_dbg(rdev_to_dev(rdev), "MR type 0x%x not supported", type);
+		return ERR_PTR(-EINVAL);
+	}
+	if (max_num_sg > MAX_PBL_LVL_1_PGS)
+		return ERR_PTR(-EINVAL);
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	mr->rdev = rdev;
+	mr->qplib_mr.pd = &pd->qplib_pd;
+	mr->qplib_mr.flags = BNXT_QPLIB_FR_PMR;
+	mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR;
+
+	rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+	if (rc)
+		goto fail;
+
+	mr->ib_mr.lkey = mr->qplib_mr.lkey;
+	mr->ib_mr.rkey = mr->ib_mr.lkey;
+
+	mr->pages = kcalloc(max_num_sg, sizeof(u64), GFP_KERNEL);
+	if (!mr->pages) {
+		rc = -ENOMEM;
+		goto fail;
+	}
+	rc = bnxt_qplib_alloc_fast_reg_page_list(&rdev->qplib_res,
+						 &mr->qplib_frpl, max_num_sg);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev),
+			"Failed to allocate HW FR page list");
+		goto fail_mr;
+	}
+
+	atomic_inc(&rdev->mr_count);
+	return &mr->ib_mr;
+
+fail_mr:
+	bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
+fail:
+	kfree(mr->pages);
+	kfree(mr);
+	return ERR_PTR(rc);
+}
+
+/* Fast Memory Regions */
+struct ib_fmr *bnxt_re_alloc_fmr(struct ib_pd *ib_pd, int mr_access_flags,
+				 struct ib_fmr_attr *fmr_attr)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_fmr *fmr;
+	int rc;
+
+	if (fmr_attr->max_pages > MAX_PBL_LVL_2_PGS ||
+	    fmr_attr->max_maps > rdev->dev_attr.max_map_per_fmr) {
+		dev_err(rdev_to_dev(rdev), "Allocate FMR exceeded Max limit");
+		return ERR_PTR(-ENOMEM);
+	}
+	fmr = kzalloc(sizeof(*fmr), GFP_KERNEL);
+	if (!fmr)
+		return ERR_PTR(-ENOMEM);
+
+	fmr->rdev = rdev;
+	fmr->qplib_fmr.pd = &pd->qplib_pd;
+	fmr->qplib_fmr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR;
+
+	rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &fmr->qplib_fmr);
+	if (rc)
+		goto fail;
+
+	fmr->qplib_fmr.flags = __from_ib_access_flags(mr_access_flags);
+	fmr->ib_fmr.lkey = fmr->qplib_fmr.lkey;
+	fmr->ib_fmr.rkey = fmr->ib_fmr.lkey;
+
+	atomic_inc(&rdev->mr_count);
+	return &fmr->ib_fmr;
+fail:
+	kfree(fmr);
+	return ERR_PTR(rc);
+}
+
+int bnxt_re_map_phys_fmr(struct ib_fmr *ib_fmr, u64 *page_list, int list_len,
+			 u64 iova)
+{
+	struct bnxt_re_fmr *fmr = to_bnxt_re(ib_fmr, struct bnxt_re_fmr,
+					     ib_fmr);
+	struct bnxt_re_dev *rdev = fmr->rdev;
+	int rc;
+
+	fmr->qplib_fmr.va = iova;
+	fmr->qplib_fmr.total_size = list_len * PAGE_SIZE;
+
+	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &fmr->qplib_fmr, page_list,
+			       list_len, true);
+	if (rc)
+		dev_err(rdev_to_dev(rdev), "Failed to map FMR for lkey = 0x%x!",
+			fmr->ib_fmr.lkey);
+	return rc;
+}
+
+int bnxt_re_unmap_fmr(struct list_head *fmr_list)
+{
+	struct bnxt_re_dev *rdev;
+	struct bnxt_re_fmr *fmr;
+	struct ib_fmr *ib_fmr;
+	int rc = 0;
+
+	/* Validate each FMR in the fmr_list */
+	list_for_each_entry(ib_fmr, fmr_list, list) {
+		fmr = to_bnxt_re(ib_fmr, struct bnxt_re_fmr, ib_fmr);
+		rdev = fmr->rdev;
+
+		if (rdev) {
+			rc = bnxt_qplib_dereg_mrw(&rdev->qplib_res,
+						  &fmr->qplib_fmr, true);
+			if (rc)
+				break;
+		}
+	}
+	return rc;
+}
+
+int bnxt_re_dealloc_fmr(struct ib_fmr *ib_fmr)
+{
+	struct bnxt_re_fmr *fmr = to_bnxt_re(ib_fmr, struct bnxt_re_fmr,
+					     ib_fmr);
+	struct bnxt_re_dev *rdev = fmr->rdev;
+	int rc;
+
+	rc = bnxt_qplib_free_mrw(&rdev->qplib_res, &fmr->qplib_fmr);
+	if (rc)
+		dev_err(rdev_to_dev(rdev), "Failed to free FMR");
+
+	kfree(fmr);
+	atomic_dec(&rdev->mr_count);
+	return rc;
+}
+
+/* uverbs */
+struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
+				  u64 virt_addr, int mr_access_flags,
+				  struct ib_udata *udata)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_mr *mr;
+	struct ib_umem *umem;
+	u64 *pbl_tbl, *pbl_tbl_orig;
+	int i, umem_pgs, pages, page_shift, rc;
+	struct scatterlist *sg;
+	int entry;
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	mr->rdev = rdev;
+	mr->qplib_mr.pd = &pd->qplib_pd;
+	mr->qplib_mr.flags = __from_ib_access_flags(mr_access_flags);
+	mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_MR;
+
+	umem = ib_umem_get(ib_pd->uobject->context, start, length,
+			   mr_access_flags, 0);
+	if (IS_ERR(umem)) {
+		dev_err(rdev_to_dev(rdev), "Failed to get umem");
+		rc = -EFAULT;
+		goto free_mr;
+	}
+	mr->ib_umem = umem;
+
+	rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to allocate MR");
+		goto release_umem;
+	}
+	/* The fixed portion of the rkey is the same as the lkey */
+	mr->ib_mr.rkey = mr->qplib_mr.rkey;
+
+	mr->qplib_mr.va = virt_addr;
+	umem_pgs = ib_umem_page_count(umem);
+	if (!umem_pgs) {
+		dev_err(rdev_to_dev(rdev), "umem is invalid!");
+		rc = -EINVAL;
+		goto free_mrw;
+	}
+	mr->qplib_mr.total_size = length;
+
+	pbl_tbl = kcalloc(umem_pgs, sizeof(u64), GFP_KERNEL);
+	if (!pbl_tbl) {
+		rc = -ENOMEM;
+		goto free_mrw;
+	}
+	pbl_tbl_orig = pbl_tbl;
+
+	page_shift = ilog2(umem->page_size);
+	if (umem->hugetlb) {
+		dev_err(rdev_to_dev(rdev), "umem hugetlb not supported!");
+		rc = -EFAULT;
+		goto fail;
+	}
+	if (umem->page_size != PAGE_SIZE) {
+		dev_err(rdev_to_dev(rdev), "umem page size unsupported!");
+		rc = -EFAULT;
+		goto fail;
+	}
+	/* Map umem buf ptrs to the PBL */
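+	/* Each scatterlist entry may span several pages; emit one PBL
+	 * entry per PAGE_SIZE chunk of the DMA-mapped region.
+	 */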
+	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
+		pages = sg_dma_len(sg) >> page_shift;
+		for (i = 0; i < pages; i++, pbl_tbl++)
+			*pbl_tbl = sg_dma_address(sg) + (i << page_shift);
+	}
+	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, pbl_tbl_orig,
+			       umem_pgs, false);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to register user MR");
+		goto fail;
+	}
+
+	kfree(pbl_tbl_orig);
+
+	mr->ib_mr.lkey = mr->qplib_mr.lkey;
+	mr->ib_mr.rkey = mr->qplib_mr.lkey;
+	atomic_inc(&rdev->mr_count);
+
+	return &mr->ib_mr;
+fail:
+	kfree(pbl_tbl_orig);
+free_mrw:
+	bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
+release_umem:
+	ib_umem_release(umem);
+free_mr:
+	kfree(mr);
+	return ERR_PTR(rc);
+}
+
 struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev,
 					   struct ib_udata *udata)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 14c9e02..ba9a4c9 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -70,6 +70,34 @@ struct bnxt_re_cq {
 	struct ib_umem		*umem;
 };
 
+struct bnxt_re_mr {
+	struct bnxt_re_dev	*rdev;
+	struct ib_mr		ib_mr;
+	struct ib_umem		*ib_umem;
+	struct bnxt_qplib_mrw	qplib_mr;
+	u32			npages;
+	u64			*pages;
+	struct bnxt_qplib_frpl	qplib_frpl;
+};
+
+struct bnxt_re_frpl {
+	struct bnxt_re_dev		*rdev;
+	struct bnxt_qplib_frpl		qplib_frpl;
+	u64				*page_list;
+};
+
+struct bnxt_re_fmr {
+	struct bnxt_re_dev	*rdev;
+	struct ib_fmr		ib_fmr;
+	struct bnxt_qplib_mrw	qplib_fmr;
+};
+
+struct bnxt_re_mw {
+	struct bnxt_re_dev	*rdev;
+	struct ib_mw		ib_mw;
+	struct bnxt_qplib_mrw	qplib_mw;
+};
+
 struct bnxt_re_ucontext {
 	struct bnxt_re_dev	*rdev;
 	struct ib_ucontext	ib_uctx;
@@ -119,6 +147,22 @@ struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 				struct ib_udata *udata);
 int bnxt_re_destroy_cq(struct ib_cq *cq);
 int bnxt_re_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags);
+struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
+
+int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct scatterlist *sg, int sg_nents,
+		      unsigned int *sg_offset);
+struct ib_mr *bnxt_re_alloc_mr(struct ib_pd *ib_pd, enum ib_mr_type mr_type,
+			       u32 max_num_sg);
+int bnxt_re_dereg_mr(struct ib_mr *mr);
+struct ib_fmr *bnxt_re_alloc_fmr(struct ib_pd *pd, int mr_access_flags,
+				 struct ib_fmr_attr *fmr_attr);
+int bnxt_re_map_phys_fmr(struct ib_fmr *fmr, u64 *page_list, int list_len,
+			 u64 iova);
+int bnxt_re_unmap_fmr(struct list_head *fmr_list);
+int bnxt_re_dealloc_fmr(struct ib_fmr *fmr);
+struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+				  u64 virt_addr, int mr_access_flags,
+				  struct ib_udata *udata);
 struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev,
 					   struct ib_udata *udata);
 int bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index e998850..3d1504e 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -448,6 +448,17 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->create_cq		= bnxt_re_create_cq;
 	ibdev->destroy_cq		= bnxt_re_destroy_cq;
 	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
+
+	ibdev->get_dma_mr		= bnxt_re_get_dma_mr;
+	ibdev->dereg_mr			= bnxt_re_dereg_mr;
+	ibdev->alloc_mr			= bnxt_re_alloc_mr;
+	ibdev->map_mr_sg		= bnxt_re_map_mr_sg;
+	ibdev->alloc_fmr		= bnxt_re_alloc_fmr;
+	ibdev->map_phys_fmr		= bnxt_re_map_phys_fmr;
+	ibdev->unmap_fmr		= bnxt_re_unmap_fmr;
+	ibdev->dealloc_fmr		= bnxt_re_dealloc_fmr;
+
+	ibdev->reg_user_mr		= bnxt_re_reg_user_mr;
 	ibdev->alloc_ucontext		= bnxt_re_alloc_ucontext;
 	ibdev->dealloc_ucontext		= bnxt_re_dealloc_ucontext;
 	ibdev->mmap			= bnxt_re_mmap;
-- 
2.5.5


* [PATCH V2  13/22] bnxt_re: Support QP verbs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (9 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 12/22] bnxt_re: Support memory registration verbs Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
       [not found]   ` <1481266096-23331-14-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
  2016-12-09  6:48 ` [PATCH V2 14/22] bnxt_re: Support post_send verb Selvin Xavier
                   ` (11 subsequent siblings)
  22 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements create_qp, destroy_qp, query_qp and modify_qp verbs.

v2: Fixed sparse warnings
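
For context, a minimal sketch of how a kernel ULP would exercise these
verbs through the core ib_verbs API (<rdma/ib_verbs.h>); the function
name, queue depths and port number below are illustrative only and are
not part of this patch:

	static int example_rc_qp(struct ib_pd *pd, struct ib_cq *cq)
	{
		struct ib_qp_init_attr init_attr = {
			.send_cq	= cq,
			.recv_cq	= cq,
			.qp_type	= IB_QPT_RC,
			.sq_sig_type	= IB_SIGNAL_ALL_WR,
			.cap = { .max_send_wr = 64, .max_recv_wr = 64,
				 .max_send_sge = 1, .max_recv_sge = 1 },
		};
		struct ib_qp_attr attr = {
			.qp_state	 = IB_QPS_INIT,
			.port_num	 = 1,
			.qp_access_flags = IB_ACCESS_LOCAL_WRITE,
		};
		struct ib_qp *qp;
		int rc;

		qp = ib_create_qp(pd, &init_attr);  /* -> bnxt_re_create_qp() */
		if (IS_ERR(qp))
			return PTR_ERR(qp);

		rc = ib_modify_qp(qp, &attr,        /* -> bnxt_re_modify_qp() */
				  IB_QP_STATE | IB_QP_PKEY_INDEX |
				  IB_QP_PORT | IB_QP_ACCESS_FLAGS);

		/* ... RTR/RTS transitions and ib_query_qp() would follow ... */

		ib_destroy_qp(qp);                  /* -> bnxt_re_destroy_qp() */
		return rc;
	}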

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 873 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    | 250 +++++++
 drivers/infiniband/hw/bnxtre/bnxt_re.h          |  14 +
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 762 +++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  21 +
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   6 +
 include/uapi/rdma/bnxt_re_uverbs_abi.h          |  10 +
 7 files changed, 1936 insertions(+)
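
All of the verbs below drive the device through the same RCFW
firmware-command idiom, condensed here from the CREATE_QP path in this
patch (request field setup elided):

	RCFW_CMD_PREP(req, CREATE_QP, cmd_flags);	/* opcode + cookie */
	/* ... fill req; multi-byte fields are little-endian ... */
	resp = (struct creq_create_qp_resp *)
			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
						     NULL, 0);
	if (!resp)			/* command could not be queued */
		return -EINVAL;
	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie)))
		return -ETIMEDOUT;	/* firmware did not answer in time */
	if (RCFW_RESP_STATUS(resp) ||	/* firmware rejected the command */
	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req))
		return -EINVAL;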

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
index 636306f..edc9411 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -50,6 +50,69 @@
 #include "bnxt_qplib_fp.h"
 
 static void bnxt_qplib_arm_cq_enable(struct bnxt_qplib_cq *cq);
+
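+/* QP1 (GSI) WQEs carry software-built ETH/GRH/UDP/BTH/DETH headers in a
+ * separate DMA-coherent area; one slot per queue element, sized by the
+ * caller via {sq,rq}_hdr_buf_size.
+ */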
+static void bnxt_qplib_free_qp_hdr_buf(struct bnxt_qplib_res *res,
+				       struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_q *rq = &qp->rq;
+	struct bnxt_qplib_q *sq = &qp->sq;
+
+	if (qp->rq_hdr_buf)
+		dma_free_coherent(&res->pdev->dev,
+				  rq->hwq.max_elements * qp->rq_hdr_buf_size,
+				  qp->rq_hdr_buf, qp->rq_hdr_buf_map);
+	if (qp->sq_hdr_buf)
+		dma_free_coherent(&res->pdev->dev,
+				  sq->hwq.max_elements * qp->sq_hdr_buf_size,
+				  qp->sq_hdr_buf, qp->sq_hdr_buf_map);
+	qp->rq_hdr_buf = NULL;
+	qp->sq_hdr_buf = NULL;
+	qp->rq_hdr_buf_map = 0;
+	qp->sq_hdr_buf_map = 0;
+	qp->sq_hdr_buf_size = 0;
+	qp->rq_hdr_buf_size = 0;
+}
+
+static int bnxt_qplib_alloc_qp_hdr_buf(struct bnxt_qplib_res *res,
+				       struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_q *rq = &qp->rq;
+	struct bnxt_qplib_q *sq = &qp->sq;
+	int rc = 0;
+
+	if (qp->sq_hdr_buf_size && sq->hwq.max_elements) {
+		qp->sq_hdr_buf = dma_alloc_coherent(&res->pdev->dev,
+					sq->hwq.max_elements *
+					qp->sq_hdr_buf_size,
+					&qp->sq_hdr_buf_map, GFP_KERNEL);
+		if (!qp->sq_hdr_buf) {
+			rc = -ENOMEM;
+			dev_err(&res->pdev->dev,
+				"QPLIB: Failed to create sq_hdr_buf");
+			goto fail;
+		}
+	}
+
+	if (qp->rq_hdr_buf_size && rq->hwq.max_elements) {
+		qp->rq_hdr_buf = dma_alloc_coherent(&res->pdev->dev,
+						    rq->hwq.max_elements *
+						    qp->rq_hdr_buf_size,
+						    &qp->rq_hdr_buf_map,
+						    GFP_KERNEL);
+		if (!qp->rq_hdr_buf) {
+			rc = -ENOMEM;
+			dev_err(&res->pdev->dev,
+				"QPLIB: Failed to create rq_hdr_buf");
+			goto fail;
+		}
+	}
+	return 0;
+
+fail:
+	bnxt_qplib_free_qp_hdr_buf(res, qp);
+	return rc;
+}
+
 static void bnxt_qplib_service_nq(unsigned long data)
 {
 	struct bnxt_qplib_nq *nq = (struct bnxt_qplib_nq *)data;
@@ -215,6 +278,816 @@ int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq)
 	return 0;
 }
 
+/* QP */
+int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_create_qp1 req;
+	struct creq_create_qp1_resp *resp;
+	struct bnxt_qplib_pbl *pbl;
+	struct bnxt_qplib_q *sq = &qp->sq;
+	struct bnxt_qplib_q *rq = &qp->rq;
+	int rc;
+	u16 cmd_flags = 0;
+	u32 qp_flags = 0;
+
+	RCFW_CMD_PREP(req, CREATE_QP1, cmd_flags);
+
+	/* General */
+	req.type = qp->type;
+	req.dpi = cpu_to_le32(qp->dpi->dpi);
+	req.qp_handle = cpu_to_le64(qp->qp_handle);
+
+	/* SQ */
+	sq->hwq.max_elements = sq->max_wqe;
+	rc = bnxt_qplib_alloc_init_hwq(res->pdev, &sq->hwq, NULL, 0,
+				       &sq->hwq.max_elements,
+				       BNXT_QPLIB_MAX_SQE_ENTRY_SIZE, 0,
+				       PAGE_SIZE, HWQ_TYPE_QUEUE);
+	if (rc)
+		goto exit;
+
+	sq->swq = kcalloc(sq->hwq.max_elements, sizeof(*sq->swq), GFP_KERNEL);
+	if (!sq->swq) {
+		rc = -ENOMEM;
+		goto fail_sq;
+	}
+	pbl = &sq->hwq.pbl[PBL_LVL_0];
+	req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+	req.sq_pg_size_sq_lvl =
+		((sq->hwq.level & CMDQ_CREATE_QP1_SQ_LVL_MASK)
+				<<  CMDQ_CREATE_QP1_SQ_LVL_SFT) |
+		(pbl->pg_size == ROCE_PG_SIZE_4K ?
+				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_4K :
+		 pbl->pg_size == ROCE_PG_SIZE_8K ?
+				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_8K :
+		 pbl->pg_size == ROCE_PG_SIZE_64K ?
+				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_64K :
+		 pbl->pg_size == ROCE_PG_SIZE_2M ?
+				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_2M :
+		 pbl->pg_size == ROCE_PG_SIZE_8M ?
+				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_8M :
+		 pbl->pg_size == ROCE_PG_SIZE_1G ?
+				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_1G :
+		 CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_4K);
+
+	if (qp->scq)
+		req.scq_cid = cpu_to_le32(qp->scq->id);
+
+	qp_flags |= CMDQ_CREATE_QP1_QP_FLAGS_RESERVED_LKEY_ENABLE;
+
+	/* RQ */
+	if (rq->max_wqe) {
+		rq->hwq.max_elements = qp->rq.max_wqe;
+		rc = bnxt_qplib_alloc_init_hwq(res->pdev, &rq->hwq, NULL, 0,
+					       &rq->hwq.max_elements,
+					       BNXT_QPLIB_MAX_RQE_ENTRY_SIZE, 0,
+					       PAGE_SIZE, HWQ_TYPE_QUEUE);
+		if (rc)
+			goto fail_sq;
+
+		rq->swq = kcalloc(rq->hwq.max_elements, sizeof(*rq->swq),
+				  GFP_KERNEL);
+		if (!rq->swq) {
+			rc = -ENOMEM;
+			goto fail_rq;
+		}
+		pbl = &rq->hwq.pbl[PBL_LVL_0];
+		req.rq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+		req.rq_pg_size_rq_lvl =
+			((rq->hwq.level & CMDQ_CREATE_QP1_RQ_LVL_MASK) <<
+			 CMDQ_CREATE_QP1_RQ_LVL_SFT) |
+				(pbl->pg_size == ROCE_PG_SIZE_4K ?
+					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_4K :
+				 pbl->pg_size == ROCE_PG_SIZE_8K ?
+					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_8K :
+				 pbl->pg_size == ROCE_PG_SIZE_64K ?
+					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_64K :
+				 pbl->pg_size == ROCE_PG_SIZE_2M ?
+					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_2M :
+				 pbl->pg_size == ROCE_PG_SIZE_8M ?
+					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_8M :
+				 pbl->pg_size == ROCE_PG_SIZE_1G ?
+					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_1G :
+				 CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_4K);
+		if (qp->rcq)
+			req.rcq_cid = cpu_to_le32(qp->rcq->id);
+	}
+
+	/* Header buffer - allow hdr_buf pass in */
+	rc = bnxt_qplib_alloc_qp_hdr_buf(res, qp);
+	if (rc)
+		goto fail;
+	req.qp_flags = cpu_to_le32(qp_flags);
+	req.sq_size = cpu_to_le32(sq->hwq.max_elements);
+	req.rq_size = cpu_to_le32(rq->hwq.max_elements);
+
+	req.sq_fwo_sq_sge =
+		cpu_to_le16((sq->max_sge & CMDQ_CREATE_QP1_SQ_SGE_MASK) <<
+			    CMDQ_CREATE_QP1_SQ_SGE_SFT);
+	req.rq_fwo_rq_sge =
+		cpu_to_le16((rq->max_sge & CMDQ_CREATE_QP1_RQ_SGE_MASK) <<
+			    CMDQ_CREATE_QP1_RQ_SGE_SFT);
+
+	req.pd_id = cpu_to_le32(qp->pd->id);
+
+	resp = (struct creq_create_qp1_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&res->pdev->dev, "QPLIB: FP: CREATE_QP1 send failed");
+		rc = -EINVAL;
+		goto fail;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP1 timed out");
+		rc = -ETIMEDOUT;
+		goto fail;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP1 failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		rc = -EINVAL;
+		goto fail;
+	}
+	qp->id = le32_to_cpu(resp->xid);
+	qp->cur_qp_state = CMDQ_MODIFY_QP_NEW_STATE_RESET;
+	sq->flush_in_progress = false;
+	rq->flush_in_progress = false;
+
+	return 0;
+
+fail:
+	bnxt_qplib_free_qp_hdr_buf(res, qp);
+fail_rq:
+	bnxt_qplib_free_hwq(res->pdev, &rq->hwq);
+	kfree(rq->swq);
+fail_sq:
+	bnxt_qplib_free_hwq(res->pdev, &sq->hwq);
+	kfree(sq->swq);
+exit:
+	return rc;
+}
+
+int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct sq_send *hw_sq_send_hdr, **hw_sq_send_ptr;
+	struct cmdq_create_qp req;
+	struct creq_create_qp_resp *resp;
+	struct bnxt_qplib_pbl *pbl;
+	struct sq_psn_search **psn_search_ptr;
+	unsigned long long int psn_search, poff = 0;
+	struct bnxt_qplib_q *sq = &qp->sq;
+	struct bnxt_qplib_q *rq = &qp->rq;
+	struct bnxt_qplib_hwq *xrrq;
+	int i, rc, req_size, psn_sz;
+	u16 cmd_flags = 0, max_ssge;
+	u32 sw_prod, qp_flags = 0;
+
+	RCFW_CMD_PREP(req, CREATE_QP, cmd_flags);
+
+	/* General */
+	req.type = qp->type;
+	req.dpi = cpu_to_le32(qp->dpi->dpi);
+	req.qp_handle = cpu_to_le64(qp->qp_handle);
+
+	/* SQ */
+	psn_sz = (qp->type == CMDQ_CREATE_QP_TYPE_RC) ?
+		 sizeof(struct sq_psn_search) : 0;
+	sq->hwq.max_elements = sq->max_wqe;
+	rc = bnxt_qplib_alloc_init_hwq(res->pdev, &sq->hwq, sq->sglist,
+				       sq->nmap, &sq->hwq.max_elements,
+				       BNXT_QPLIB_MAX_SQE_ENTRY_SIZE,
+				       psn_sz,
+				       PAGE_SIZE, HWQ_TYPE_QUEUE);
+	if (rc)
+		goto exit;
+
+	sq->swq = kcalloc(sq->hwq.max_elements, sizeof(*sq->swq), GFP_KERNEL);
+	if (!sq->swq) {
+		rc = -ENOMEM;
+		goto fail_sq;
+	}
+	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
+	if (psn_sz) {
+		psn_search_ptr = (struct sq_psn_search **)
+				  &hw_sq_send_ptr[SQE_PG(sq->hwq.max_elements)];
+		psn_search = (unsigned long long int)
+			      &hw_sq_send_ptr[SQE_PG(sq->hwq.max_elements)]
+			      [SQE_IDX(sq->hwq.max_elements)];
+		if (psn_search & ~PAGE_MASK) {
+			/* If the psn_search does not start on a page boundary,
+			 * then calculate the offset
+			 */
+			poff = (psn_search & ~PAGE_MASK) /
+				BNXT_QPLIB_MAX_PSNE_ENTRY_SIZE;
+		}
+		for (i = 0; i < sq->hwq.max_elements; i++)
+			sq->swq[i].psn_search =
+				&psn_search_ptr[PSNE_PG(i + poff)]
+					       [PSNE_IDX(i + poff)];
+	}
+	pbl = &sq->hwq.pbl[PBL_LVL_0];
+	req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+	req.sq_pg_size_sq_lvl =
+		((sq->hwq.level & CMDQ_CREATE_QP_SQ_LVL_MASK)
+				 <<  CMDQ_CREATE_QP_SQ_LVL_SFT) |
+		(pbl->pg_size == ROCE_PG_SIZE_4K ?
+				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K :
+		 pbl->pg_size == ROCE_PG_SIZE_8K ?
+				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_8K :
+		 pbl->pg_size == ROCE_PG_SIZE_64K ?
+				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_64K :
+		 pbl->pg_size == ROCE_PG_SIZE_2M ?
+				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_2M :
+		 pbl->pg_size == ROCE_PG_SIZE_8M ?
+				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_8M :
+		 pbl->pg_size == ROCE_PG_SIZE_1G ?
+				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_1G :
+		 CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K);
+
+	/* initialize all SQ WQEs to LOCAL_INVALID (sq prep for hw fetch) */
+	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
+	for (sw_prod = 0; sw_prod < sq->hwq.max_elements; sw_prod++) {
+		hw_sq_send_hdr = &hw_sq_send_ptr[SQE_PG(sw_prod)]
+						[SQE_IDX(sw_prod)];
+		hw_sq_send_hdr->wqe_type = SQ_BASE_WQE_TYPE_LOCAL_INVALID;
+	}
+
+	if (qp->scq)
+		req.scq_cid = cpu_to_le32(qp->scq->id);
+
+	qp_flags |= CMDQ_CREATE_QP_QP_FLAGS_RESERVED_LKEY_ENABLE;
+	qp_flags |= CMDQ_CREATE_QP_QP_FLAGS_FR_PMR_ENABLED;
+	if (qp->sig_type)
+		qp_flags |= CMDQ_CREATE_QP_QP_FLAGS_FORCE_COMPLETION;
+
+	/* RQ */
+	if (rq->max_wqe) {
+		rq->hwq.max_elements = rq->max_wqe;
+		rc = bnxt_qplib_alloc_init_hwq(res->pdev, &rq->hwq, rq->sglist,
+					       rq->nmap, &rq->hwq.max_elements,
+					       BNXT_QPLIB_MAX_RQE_ENTRY_SIZE, 0,
+					       PAGE_SIZE, HWQ_TYPE_QUEUE);
+		if (rc)
+			goto fail_sq;
+
+		rq->swq = kcalloc(rq->hwq.max_elements, sizeof(*rq->swq),
+				  GFP_KERNEL);
+		if (!rq->swq) {
+			rc = -ENOMEM;
+			goto fail_rq;
+		}
+		pbl = &rq->hwq.pbl[PBL_LVL_0];
+		req.rq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+		req.rq_pg_size_rq_lvl =
+			((rq->hwq.level & CMDQ_CREATE_QP_RQ_LVL_MASK) <<
+			 CMDQ_CREATE_QP_RQ_LVL_SFT) |
+				(pbl->pg_size == ROCE_PG_SIZE_4K ?
+					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_4K :
+				 pbl->pg_size == ROCE_PG_SIZE_8K ?
+					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_8K :
+				 pbl->pg_size == ROCE_PG_SIZE_64K ?
+					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_64K :
+				 pbl->pg_size == ROCE_PG_SIZE_2M ?
+					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_2M :
+				 pbl->pg_size == ROCE_PG_SIZE_8M ?
+					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_8M :
+				 pbl->pg_size == ROCE_PG_SIZE_1G ?
+					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_1G :
+				 CMDQ_CREATE_QP_RQ_PG_SIZE_PG_4K);
+	}
+
+	if (qp->rcq)
+		req.rcq_cid = cpu_to_le32(qp->rcq->id);
+	req.qp_flags = cpu_to_le32(qp_flags);
+	req.sq_size = cpu_to_le32(sq->hwq.max_elements);
+	req.rq_size = cpu_to_le32(rq->hwq.max_elements);
+	qp->sq_hdr_buf = NULL;
+	qp->rq_hdr_buf = NULL;
+
+	rc = bnxt_qplib_alloc_qp_hdr_buf(res, qp);
+	if (rc)
+		goto fail_rq;
+
+	/* CTRL-22434: Irrespective of the requested SGE count on the SQ
+	 * always create the QP with max send sges possible if the requested
+	 * inline size is greater than 0.
+	 */
+	max_ssge = qp->max_inline_data ? BNXT_QPLIB_QP_MAX_SGL : sq->max_sge;
+	req.sq_fwo_sq_sge = cpu_to_le16(
+				((max_ssge & CMDQ_CREATE_QP_SQ_SGE_MASK)
+				 << CMDQ_CREATE_QP_SQ_SGE_SFT) | 0);
+	req.rq_fwo_rq_sge = cpu_to_le16(
+				((rq->max_sge & CMDQ_CREATE_QP_RQ_SGE_MASK)
+				 << CMDQ_CREATE_QP_RQ_SGE_SFT) | 0);
+	/* ORRQ and IRRQ */
+	if (psn_sz) {
+		xrrq = &qp->orrq;
+		xrrq->max_elements =
+			ORD_LIMIT_TO_ORRQ_SLOTS(qp->max_rd_atomic);
+		req_size = xrrq->max_elements *
+			   BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE + PAGE_SIZE - 1;
+		req_size &= ~(PAGE_SIZE - 1);
+		rc = bnxt_qplib_alloc_init_hwq(res->pdev, xrrq, NULL, 0,
+					       &xrrq->max_elements,
+					       BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE,
+					       0, req_size, HWQ_TYPE_CTX);
+		if (rc)
+			goto fail_buf_free;
+		pbl = &xrrq->pbl[PBL_LVL_0];
+		req.orrq_addr = cpu_to_le64(pbl->pg_map_arr[0]);
+
+		xrrq = &qp->irrq;
+		xrrq->max_elements = IRD_LIMIT_TO_IRRQ_SLOTS(
+						qp->max_dest_rd_atomic);
+		req_size = xrrq->max_elements *
+			   BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE + PAGE_SIZE - 1;
+		req_size &= ~(PAGE_SIZE - 1);
+
+		rc = bnxt_qplib_alloc_init_hwq(res->pdev, xrrq, NULL, 0,
+					       &xrrq->max_elements,
+					       BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE,
+					       0, req_size, HWQ_TYPE_CTX);
+		if (rc)
+			goto fail_orrq;
+
+		pbl = &xrrq->pbl[PBL_LVL_0];
+		req.irrq_addr = cpu_to_le64(pbl->pg_map_arr[0]);
+	}
+	req.pd_id = cpu_to_le32(qp->pd->id);
+
+	resp = (struct creq_create_qp_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP send failed");
+		rc = -EINVAL;
+		goto fail;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP timed out");
+		rc = -ETIMEDOUT;
+		goto fail;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		rc = -EINVAL;
+		goto fail;
+	}
+	qp->id = le32_to_cpu(resp->xid);
+	qp->cur_qp_state = CMDQ_MODIFY_QP_NEW_STATE_RESET;
+	sq->flush_in_progress = false;
+	rq->flush_in_progress = false;
+
+	return 0;
+
+fail:
+	if (qp->irrq.max_elements)
+		bnxt_qplib_free_hwq(res->pdev, &qp->irrq);
+fail_orrq:
+	if (qp->orrq.max_elements)
+		bnxt_qplib_free_hwq(res->pdev, &qp->orrq);
+fail_buf_free:
+	bnxt_qplib_free_qp_hdr_buf(res, qp);
+fail_rq:
+	bnxt_qplib_free_hwq(res->pdev, &rq->hwq);
+	kfree(rq->swq);
+fail_sq:
+	bnxt_qplib_free_hwq(res->pdev, &sq->hwq);
+	kfree(sq->swq);
+exit:
+	return rc;
+}
+
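+/* Filter the requested modify_flags down to what the firmware accepts
+ * for the current->new state transition, and fill in mandatory defaults
+ * (path MTU, SGID index, rd_atomic minimums) that the caller may omit.
+ */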
+static void __filter_modify_flags(struct bnxt_qplib_qp *qp)
+{
+	switch (qp->cur_qp_state) {
+	case CMDQ_MODIFY_QP_NEW_STATE_RESET:
+		switch (qp->state) {
+		case CMDQ_MODIFY_QP_NEW_STATE_INIT:
+			break;
+		default:
+			break;
+		}
+		break;
+	case CMDQ_MODIFY_QP_NEW_STATE_INIT:
+		switch (qp->state) {
+		case CMDQ_MODIFY_QP_NEW_STATE_RTR:
+			/* INIT->RTR, configure the path_mtu to the default
+			 * 2048 if not being requested
+			 */
+			if (!(qp->modify_flags &
+			      CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU)) {
+				qp->modify_flags |=
+					CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+				qp->path_mtu = CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
+			}
+			qp->modify_flags &=
+				~CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID;
+			/* Bono FW requires the max_dest_rd_atomic to be >= 1 */
+			if (qp->max_dest_rd_atomic < 1)
+				qp->max_dest_rd_atomic = 1;
+			qp->modify_flags &= ~CMDQ_MODIFY_QP_MODIFY_MASK_SRC_MAC;
+			/* Bono FW 20.6.5 requires SGID_INDEX configuration */
+			if (!(qp->modify_flags &
+			      CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX)) {
+				qp->modify_flags |=
+					CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX;
+				qp->ah.sgid_index = 0;
+			}
+			break;
+		default:
+			break;
+		}
+		break;
+	case CMDQ_MODIFY_QP_NEW_STATE_RTR:
+		switch (qp->state) {
+		case CMDQ_MODIFY_QP_NEW_STATE_RTS:
+			/* Bono FW requires the max_rd_atomic to be >= 1 */
+			if (qp->max_rd_atomic < 1)
+				qp->max_rd_atomic = 1;
+			/* Bono FW does not allow PKEY_INDEX,
+			 * DGID, FLOW_LABEL, SGID_INDEX, HOP_LIMIT,
+			 * TRAFFIC_CLASS, DEST_MAC, PATH_MTU, RQ_PSN,
+			 * MIN_RNR_TIMER, MAX_DEST_RD_ATOMIC, DEST_QP_ID
+			 * modification
+			 */
+			qp->modify_flags &=
+				~(CMDQ_MODIFY_QP_MODIFY_MASK_PKEY |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_DGID |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER |
+				  CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC
+				  | CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID);
+			break;
+		default:
+			break;
+		}
+		break;
+	case CMDQ_MODIFY_QP_NEW_STATE_RTS:
+		break;
+	case CMDQ_MODIFY_QP_NEW_STATE_SQD:
+		break;
+	case CMDQ_MODIFY_QP_NEW_STATE_SQE:
+		break;
+	case CMDQ_MODIFY_QP_NEW_STATE_ERR:
+		break;
+	default:
+		break;
+	}
+}
+
+int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_modify_qp req;
+	struct creq_modify_qp_resp *resp;
+	u16 cmd_flags = 0, pkey;
+	u32 temp32[4];
+	u32 bmask;
+
+	RCFW_CMD_PREP(req, MODIFY_QP, cmd_flags);
+
+	/* Filter out the qp_attr_mask based on the state->new transition */
+	__filter_modify_flags(qp);
+	bmask = qp->modify_flags;
+	req.modify_mask = cpu_to_le64(qp->modify_flags);
+	req.qp_cid = cpu_to_le32(qp->id);
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_STATE) {
+		req.network_type_en_sqd_async_notify_new_state =
+				(qp->state & CMDQ_MODIFY_QP_NEW_STATE_MASK) |
+				(qp->en_sqd_async_notify ?
+					CMDQ_MODIFY_QP_EN_SQD_ASYNC_NOTIFY : 0);
+	}
+	req.network_type_en_sqd_async_notify_new_state |= qp->nw_type;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_ACCESS)
+		req.access = qp->access;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_PKEY) {
+		if (!bnxt_qplib_get_pkey(res, &res->pkey_tbl,
+					 qp->pkey_index, &pkey))
+			req.pkey = cpu_to_le16(pkey);
+	}
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_QKEY)
+		req.qkey = cpu_to_le32(qp->qkey);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_DGID) {
+		memcpy(temp32, qp->ah.dgid.data, sizeof(struct bnxt_qplib_gid));
+		req.dgid[0] = cpu_to_le32(temp32[0]);
+		req.dgid[1] = cpu_to_le32(temp32[1]);
+		req.dgid[2] = cpu_to_le32(temp32[2]);
+		req.dgid[3] = cpu_to_le32(temp32[3]);
+	}
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL)
+		req.flow_label = cpu_to_le32(qp->ah.flow_label);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX)
+		req.sgid_index = cpu_to_le16(res->sgid_tbl.hw_id
+					     [qp->ah.sgid_index]);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT)
+		req.hop_limit = qp->ah.hop_limit;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS)
+		req.traffic_class = qp->ah.traffic_class;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC)
+		memcpy(req.dest_mac, qp->ah.dmac, 6);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU)
+		req.path_mtu = cpu_to_le16(qp->path_mtu);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_TIMEOUT)
+		req.timeout = qp->timeout;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_RETRY_CNT)
+		req.retry_cnt = qp->retry_cnt;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_RNR_RETRY)
+		req.rnr_retry = qp->rnr_retry;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER)
+		req.min_rnr_timer = qp->min_rnr_timer;
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN)
+		req.rq_psn = cpu_to_le32(qp->rq.psn);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN)
+		req.sq_psn = cpu_to_le32(qp->sq.psn);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_MAX_RD_ATOMIC)
+		req.max_rd_atomic =
+			ORD_LIMIT_TO_ORRQ_SLOTS(qp->max_rd_atomic);
+
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC)
+		req.max_dest_rd_atomic =
+			IRD_LIMIT_TO_IRRQ_SLOTS(qp->max_dest_rd_atomic);
+
+	req.sq_size = cpu_to_le32(qp->sq.hwq.max_elements);
+	req.rq_size = cpu_to_le32(qp->rq.hwq.max_elements);
+	req.sq_sge = cpu_to_le16(qp->sq.max_sge);
+	req.rq_sge = cpu_to_le16(qp->rq.max_sge);
+	req.max_inline_data = cpu_to_le32(qp->max_inline_data);
+	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID)
+		req.dest_qp_id = cpu_to_le32(qp->dest_qpn);
+
+	req.vlan_pcp_vlan_dei_vlan_id = cpu_to_le16(qp->vlan_id);
+
+	resp = (struct creq_modify_qp_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: MODIFY_QP send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: MODIFY_QP timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: MODIFY_QP failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	qp->cur_qp_state = qp->state;
+	return 0;
+}
+
+int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_query_qp req;
+	struct creq_query_qp_resp *resp;
+	struct creq_query_qp_resp_sb *sb;
+	u16 cmd_flags = 0;
+	u32 temp32[4];
+	int i;
+
+	RCFW_CMD_PREP(req, QUERY_QP, cmd_flags);
+
+	req.qp_cid = cpu_to_le32(qp->id);
+	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
+	resp = (struct creq_query_qp_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     (void **)&sb, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: QUERY_QP send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: QUERY_QP timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: QUERY_QP failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+	/* Extract the context from the side buffer */
+	qp->state = sb->en_sqd_async_notify_state &
+			CREQ_QUERY_QP_RESP_SB_STATE_MASK;
+	qp->en_sqd_async_notify = !!(sb->en_sqd_async_notify_state &
+				  CREQ_QUERY_QP_RESP_SB_EN_SQD_ASYNC_NOTIFY);
+	qp->access = sb->access;
+	qp->pkey_index = le16_to_cpu(sb->pkey);
+	qp->qkey = le32_to_cpu(sb->qkey);
+
+	temp32[0] = le32_to_cpu(sb->dgid[0]);
+	temp32[1] = le32_to_cpu(sb->dgid[1]);
+	temp32[2] = le32_to_cpu(sb->dgid[2]);
+	temp32[3] = le32_to_cpu(sb->dgid[3]);
+	memcpy(qp->ah.dgid.data, temp32, sizeof(qp->ah.dgid.data));
+
+	qp->ah.flow_label = le32_to_cpu(sb->flow_label);
+
+	qp->ah.sgid_index = 0;
+	for (i = 0; i < res->sgid_tbl.max; i++) {
+		if (res->sgid_tbl.hw_id[i] == le16_to_cpu(sb->sgid_index)) {
+			qp->ah.sgid_index = i;
+			break;
+		}
+	}
+	if (i == res->sgid_tbl.max)
+		dev_warn(&res->pdev->dev, "QPLIB: SGID not found");
+
+	qp->ah.hop_limit = sb->hop_limit;
+	qp->ah.traffic_class = sb->traffic_class;
+	memcpy(qp->ah.dmac, sb->dest_mac, 6);
+	qp->ah.vlan_id = le16_to_cpu((sb->path_mtu_dest_vlan_id &
+				CREQ_QUERY_QP_RESP_SB_VLAN_ID_MASK) >>
+				CREQ_QUERY_QP_RESP_SB_VLAN_ID_SFT);
+	qp->path_mtu = sb->path_mtu_dest_vlan_id &
+				    CREQ_QUERY_QP_RESP_SB_PATH_MTU_MASK;
+	qp->timeout = sb->timeout;
+	qp->retry_cnt = sb->retry_cnt;
+	qp->rnr_retry = sb->rnr_retry;
+	qp->min_rnr_timer = sb->min_rnr_timer;
+	qp->rq.psn = le32_to_cpu(sb->rq_psn);
+	qp->max_rd_atomic = ORRQ_SLOTS_TO_ORD_LIMIT(sb->max_rd_atomic);
+	qp->sq.psn = le32_to_cpu(sb->sq_psn);
+	qp->max_dest_rd_atomic =
+			IRRQ_SLOTS_TO_IRD_LIMIT(sb->max_dest_rd_atomic);
+	qp->sq.max_wqe = qp->sq.hwq.max_elements;
+	qp->rq.max_wqe = qp->rq.hwq.max_elements;
+	qp->sq.max_sge = le16_to_cpu(sb->sq_sge);
+	qp->rq.max_sge = le32_to_cpu(sb->rq_sge);
+	qp->max_inline_data = le32_to_cpu(sb->max_inline_data);
+	qp->dest_qpn = le32_to_cpu(sb->dest_qp_id);
+	memcpy(qp->smac, sb->src_mac, 6);
+	qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id);
+	return 0;
+}
+
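+/* Walk every valid CQE in the ring and zero its qp_handle if it belongs
+ * to the QP being destroyed, so its stale completions are never reported.
+ */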
+static void __clean_cq(struct bnxt_qplib_cq *cq, u64 qp)
+{
+	struct bnxt_qplib_hwq *cq_hwq = &cq->hwq;
+	struct cq_base *hw_cqe, **hw_cqe_ptr;
+	int i;
+
+	for (i = 0; i < cq_hwq->max_elements; i++) {
+		hw_cqe_ptr = (struct cq_base **)cq_hwq->pbl_ptr;
+		hw_cqe = &hw_cqe_ptr[CQE_PG(i)][CQE_IDX(i)];
+		if (!CQE_CMP_VALID(hw_cqe, i, cq_hwq->max_elements))
+			continue;
+		switch (hw_cqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK) {
+		case CQ_BASE_CQE_TYPE_REQ:
+		case CQ_BASE_CQE_TYPE_TERMINAL:
+		{
+			struct cq_req *cqe = (struct cq_req *)hw_cqe;
+
+			if (qp == le64_to_cpu(cqe->qp_handle))
+				cqe->qp_handle = 0;
+			break;
+		}
+		case CQ_BASE_CQE_TYPE_RES_RC:
+		case CQ_BASE_CQE_TYPE_RES_UD:
+		case CQ_BASE_CQE_TYPE_RES_RAWETH_QP1:
+		{
+			struct cq_res_rc *cqe = (struct cq_res_rc *)hw_cqe;
+
+			if (qp == le64_to_cpu(cqe->qp_handle))
+				cqe->qp_handle = 0;
+			break;
+		}
+		default:
+			break;
+		}
+	}
+}
+
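+/* Lock both CQs of a QP: the send CQ first (with IRQs saved), then the
+ * receive CQ if it is distinct, so that __clean_cq() cannot race with
+ * the completion path; bnxt_qplib_unlock_cqs() releases in reverse order.
+ */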
+static unsigned long bnxt_qplib_lock_cqs(struct bnxt_qplib_qp *qp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&qp->scq->hwq.lock, flags);
+	if (qp->rcq && qp->rcq != qp->scq)
+		spin_lock(&qp->rcq->hwq.lock);
+
+	return flags;
+}
+
+static void bnxt_qplib_unlock_cqs(struct bnxt_qplib_qp *qp,
+				  unsigned long flags)
+{
+	if (qp->rcq && qp->rcq != qp->scq)
+		spin_unlock(&qp->rcq->hwq.lock);
+	spin_unlock_irqrestore(&qp->scq->hwq.lock, flags);
+}
+
+int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
+			  struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+	struct cmdq_destroy_qp req;
+	struct creq_destroy_qp_resp *resp;
+	unsigned long flags;
+	u16 cmd_flags = 0;
+
+	RCFW_CMD_PREP(req, DESTROY_QP, cmd_flags);
+
+	req.qp_cid = cpu_to_le32(qp->id);
+	resp = (struct creq_destroy_qp_resp *)
+			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+						     NULL, 0);
+	if (!resp) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_QP send failed");
+		return -EINVAL;
+	}
+	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
+		/* Cmd timed out */
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_QP timed out");
+		return -ETIMEDOUT;
+	}
+	if (RCFW_RESP_STATUS(resp) ||
+	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_QP failed ");
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
+			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
+			RCFW_RESP_COOKIE(resp));
+		return -EINVAL;
+	}
+
+	/* Must walk the associated CQs to nullify the QP ptr */
+	flags = bnxt_qplib_lock_cqs(qp);
+	__clean_cq(qp->scq, (u64)qp);
+	if (qp->rcq != qp->scq)
+		__clean_cq(qp->rcq, (u64)qp);
+	bnxt_qplib_unlock_cqs(qp, flags);
+
+	bnxt_qplib_free_qp_hdr_buf(res, qp);
+	bnxt_qplib_free_hwq(res->pdev, &qp->sq.hwq);
+	kfree(qp->sq.swq);
+
+	bnxt_qplib_free_hwq(res->pdev, &qp->rq.hwq);
+	kfree(qp->rq.swq);
+
+	if (qp->irrq.max_elements)
+		bnxt_qplib_free_hwq(res->pdev, &qp->irrq);
+	if (qp->orrq.max_elements)
+		bnxt_qplib_free_hwq(res->pdev, &qp->orrq);
+
+	return 0;
+}
+
 /* CQ */
 
 /* Spinlock must be held */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
index 1991eaa..f6d2be5 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -38,8 +38,246 @@
 
 #ifndef __BNXT_QPLIB_FP_H__
 #define __BNXT_QPLIB_FP_H__
+struct bnxt_qplib_sge {
+	u64				addr;
+	u32				lkey;
+	u32				size;
+};
+
+#define BNXT_QPLIB_MAX_SQE_ENTRY_SIZE	sizeof(struct sq_send)
+
+#define SQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_SQE_ENTRY_SIZE)
+#define SQE_MAX_IDX_PER_PG	(SQE_CNT_PER_PG - 1)
+#define SQE_PG(x)		(((x) & ~SQE_MAX_IDX_PER_PG) / SQE_CNT_PER_PG)
+#define SQE_IDX(x)		((x) & SQE_MAX_IDX_PER_PG)
+
+#define BNXT_QPLIB_MAX_PSNE_ENTRY_SIZE	sizeof(struct sq_psn_search)
+
+#define PSNE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_PSNE_ENTRY_SIZE)
+#define PSNE_MAX_IDX_PER_PG	(PSNE_CNT_PER_PG - 1)
+#define PSNE_PG(x)		(((x) & ~PSNE_MAX_IDX_PER_PG) / PSNE_CNT_PER_PG)
+#define PSNE_IDX(x)		((x) & PSNE_MAX_IDX_PER_PG)
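+
+/* Each *_PG()/*_IDX() pair splits a ring index into a page number and an
+ * in-page slot (x / *_CNT_PER_PG and x % *_CNT_PER_PG), relying on the
+ * per-page entry counts being powers of two.
+ */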
+
+#define BNXT_QPLIB_QP_MAX_SGL	6
+
+struct bnxt_qplib_swq {
+	u64				wr_id;
+	u8				type;
+	u8				flags;
+	u32				start_psn;
+	u32				next_psn;
+	struct sq_psn_search		*psn_search;
+};
+
+struct bnxt_qplib_swqe {
+	/* General */
+	u64				wr_id;
+	u8				reqs_type;
+	u8				type;
+#define BNXT_QPLIB_SWQE_TYPE_SEND			0
+#define BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM		1
+#define BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV		2
+#define BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE			4
+#define BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM	5
+#define BNXT_QPLIB_SWQE_TYPE_RDMA_READ			6
+#define BNXT_QPLIB_SWQE_TYPE_ATOMIC_CMP_AND_SWP		8
+#define BNXT_QPLIB_SWQE_TYPE_ATOMIC_FETCH_AND_ADD	11
+#define BNXT_QPLIB_SWQE_TYPE_LOCAL_INV			12
+#define BNXT_QPLIB_SWQE_TYPE_FAST_REG_MR		13
+#define BNXT_QPLIB_SWQE_TYPE_REG_MR			13
+#define BNXT_QPLIB_SWQE_TYPE_BIND_MW			14
+#define BNXT_QPLIB_SWQE_TYPE_RECV			128
+#define BNXT_QPLIB_SWQE_TYPE_RECV_RDMA_IMM		129
+	u8				flags;
+#define BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP		BIT(0)
+#define BNXT_QPLIB_SWQE_FLAGS_RD_ATOMIC_FENCE		BIT(1)
+#define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE			BIT(2)
+#define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT		BIT(3)
+#define BNXT_QPLIB_SWQE_FLAGS_INLINE			BIT(4)
+	struct bnxt_qplib_sge		sg_list[BNXT_QPLIB_QP_MAX_SGL];
+	int				num_sge;
+	/* Max inline data is 96 bytes */
+	u32				inline_len;
+#define BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH		96
+	u8		inline_data[BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH];
+
+	union {
+		/* Send, with imm, inval key */
+		struct {
+			u32		imm_data_or_inv_key;
+			u32		q_key;
+			u32		dst_qp;
+			u16		avid;
+		} send;
+
+		/* Send Raw Ethernet and QP1 */
+		struct {
+			u16		lflags;
+			u16		cfa_action;
+			u32		cfa_meta;
+		} rawqp1;
+
+		/* RDMA write, with imm, read */
+		struct {
+			u32		imm_data_or_inv_key;
+			u64		remote_va;
+			u32		r_key;
+		} rdma;
+
+		/* Atomic cmp/swap, fetch/add */
+		struct {
+			u64		remote_va;
+			u32		r_key;
+			u64		swap_data;
+			u64		cmp_data;
+		} atomic;
+
+		/* Local Invalidate */
+		struct {
+			u32		inv_l_key;
+		} local_inv;
+
+		/* FR-PMR */
+		struct {
+			u8		access_cntl;
+			u8		pg_sz_log;
+			bool		zero_based;
+			u32		l_key;
+			u32		length;
+			u8		pbl_pg_sz_log;
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_4K			0
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_8K			1
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_64K			4
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_256K			6
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_1M			8
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_2M			9
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_4M			10
+#define BNXT_QPLIB_SWQE_PAGE_SIZE_1G			18
+			u8		levels;
+#define PAGE_SHIFT_4K	12
+			u64		*pbl_ptr;
+			dma_addr_t	pbl_dma_ptr;
+			u64		*page_list;
+			u16		page_list_len;
+			u64		va;
+		} frmr;
+
+		/* Bind */
+		struct {
+			u8		access_cntl;
+#define BNXT_QPLIB_BIND_SWQE_ACCESS_LOCAL_WRITE		BIT(0)
+#define BNXT_QPLIB_BIND_SWQE_ACCESS_REMOTE_READ		BIT(1)
+#define BNXT_QPLIB_BIND_SWQE_ACCESS_REMOTE_WRITE	BIT(2)
+#define BNXT_QPLIB_BIND_SWQE_ACCESS_REMOTE_ATOMIC	BIT(3)
+#define BNXT_QPLIB_BIND_SWQE_ACCESS_WINDOW_BIND		BIT(4)
+			bool		zero_based;
+			u8		mw_type;
+			u32		parent_l_key;
+			u32		r_key;
+			u64		va;
+			u32		length;
+		} bind;
+	};
+};
+
+#define BNXT_QPLIB_MAX_RQE_ENTRY_SIZE	sizeof(struct rq_wqe)
+
+#define RQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_RQE_ENTRY_SIZE)
+#define RQE_MAX_IDX_PER_PG	(RQE_CNT_PER_PG - 1)
+#define RQE_PG(x)		(((x) & ~RQE_MAX_IDX_PER_PG) / RQE_CNT_PER_PG)
+#define RQE_IDX(x)		((x) & RQE_MAX_IDX_PER_PG)
+
+struct bnxt_qplib_q {
+	struct bnxt_qplib_hwq		hwq;
+	struct bnxt_qplib_swq		*swq;
+	struct scatterlist		*sglist;
+	u32				nmap;
+	u32				max_wqe;
+	u16				max_sge;
+	u32				psn;
+	bool				flush_in_progress;
+};
+
+struct bnxt_qplib_qp {
+	struct bnxt_qplib_pd		*pd;
+	struct bnxt_qplib_dpi		*dpi;
+	u64				qp_handle;
+	u32				id;
+	u8				type;
+	u8				sig_type;
+	u64				modify_flags;
+	u8				state;
+	u8				cur_qp_state;
+	u32				max_inline_data;
+	u32				mtu;
+	u32				path_mtu;
+	bool				en_sqd_async_notify;
+	u16				pkey_index;
+	u32				qkey;
+	u32				dest_qp_id;
+	u8				access;
+	u8				timeout;
+	u8				retry_cnt;
+	u8				rnr_retry;
+	u32				min_rnr_timer;
+	u32				max_rd_atomic;
+	u32				max_dest_rd_atomic;
+	u32				dest_qpn;
+	u8				smac[6];
+	u16				vlan_id;
+	u8				nw_type;
+	struct bnxt_qplib_ah		ah;
+
+#define BTH_PSN_MASK			((1 << 24) - 1)
+	/* SQ */
+	struct bnxt_qplib_q		sq;
+	/* RQ */
+	struct bnxt_qplib_q		rq;
+	/* SRQ */
+	struct bnxt_qplib_srq		*srq;
+	/* CQ */
+	struct bnxt_qplib_cq		*scq;
+	struct bnxt_qplib_cq		*rcq;
+	/* IRRQ and ORRQ */
+	struct bnxt_qplib_hwq		irrq;
+	struct bnxt_qplib_hwq		orrq;
+	/* Header buffer for QP1 */
+	int				sq_hdr_buf_size;
+	int				rq_hdr_buf_size;
+/*
+ * Buffer space for ETH(14), IP or GRH(40), UDP header(8)
+ * and ib_bth + ib_deth(20): 82 bytes max when RoCE V2 is
+ * enabled; the define below reserves 86.
+ */
+#define BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2	86
+	/* Ethernet header	=  14 */
+	/* ib_grh		=  40 (provided by MAD) */
+	/* ib_bth + ib_deth	=  20 */
+	/* MAD			= 256 (provided by MAD) */
+	/* iCRC			=   4 */
+#define BNXT_QPLIB_MAX_QP1_RQ_ETH_HDR_SIZE	14
+#define BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2	512
+#define BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV4	20
+#define BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6	40
+#define BNXT_QPLIB_MAX_QP1_RQ_BDETH_HDR_SIZE	20
+	void				*sq_hdr_buf;
+	dma_addr_t			sq_hdr_buf_map;
+	void				*rq_hdr_buf;
+	dma_addr_t			rq_hdr_buf_map;
+};
+
 #define BNXT_QPLIB_MAX_CQE_ENTRY_SIZE	sizeof(struct cq_base)
 
+#define CQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_CQE_ENTRY_SIZE)
+#define CQE_MAX_IDX_PER_PG	(CQE_CNT_PER_PG - 1)
+#define CQE_PG(x)		(((x) & ~CQE_MAX_IDX_PER_PG) / CQE_CNT_PER_PG)
+#define CQE_IDX(x)		((x) & CQE_MAX_IDX_PER_PG)
+
+#define ROCE_CQE_CMP_V			0
+#define CQE_CMP_VALID(hdr, raw_cons, cp_bit)			\
+	(!!((hdr)->cqe_type_toggle & CQ_BASE_TOGGLE) ==		\
+	   !((raw_cons) & (cp_bit)))
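+/* A CQE is valid when its toggle bit matches the phase implied by the
+ * consumer index: the expected phase flips each time raw_cons wraps
+ * past cp_bit.
+ */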
+
 struct bnxt_qplib_cqe {
 	u8				status;
 	u8				type;
@@ -82,6 +320,13 @@ struct bnxt_qplib_cq {
 	wait_queue_head_t		waitq;
 };
 
+#define BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE	sizeof(struct xrrq_irrq)
+#define BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE	sizeof(struct xrrq_orrq)
+#define IRD_LIMIT_TO_IRRQ_SLOTS(x)	(2 * (x) + 2)
+#define IRRQ_SLOTS_TO_IRD_LIMIT(s)	(((s) >> 1) - 1)
+#define ORD_LIMIT_TO_ORRQ_SLOTS(x)	((x) + 1)
+#define ORRQ_SLOTS_TO_ORD_LIMIT(s)	((s) - 1)
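+/* e.g. an IRD limit of 8 takes 2 * 8 + 2 = 18 IRRQ slots, while an ORD
+ * limit of 8 takes 8 + 1 = 9 ORRQ slots.
+ */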
+
 #define BNXT_QPLIB_MAX_NQE_ENTRY_SIZE	sizeof(struct nq_base)
 
 #define NQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_NQE_ENTRY_SIZE)
@@ -140,6 +385,11 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
 			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
 					     void *srq,
 					     u8 event));
+int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
+int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
+int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
+int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
+int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
 int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index 3a93a88..84af86b 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -64,6 +64,14 @@ struct bnxt_re_work {
 	struct net_device	*vlan_dev;
 };
 
+struct bnxt_re_sqp_entries {
+	struct bnxt_qplib_sge sge;
+	u64 wrid;
+	/* For storing the actual qp1 cqe */
+	struct bnxt_qplib_cqe cqe;
+	struct bnxt_re_qp *qp1_qp;
+};
+
 #define BNXT_RE_MIN_MSIX		2
 #define BNXT_RE_MAX_MSIX		16
 #define BNXT_RE_AEQ_IDX			0
@@ -112,6 +120,12 @@ struct bnxt_re_dev {
 	atomic_t			mw_count;
 	/* Max of 2 lossless traffic class supported per port */
 	u16				cosq[2];
+
+	/* QP for handling QP1 packets */
+	u32				sqp_id;
+	struct bnxt_re_qp		*qp1_sqp;
+	struct bnxt_re_ah		*sqp_ah;
+	struct bnxt_re_sqp_entries sqp_tbl[1024];
 };
 
 #define to_bnxt_re(ptr, type, member)	\
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 5e41317..77860a2 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -649,6 +649,481 @@ int bnxt_re_query_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
 	return 0;
 }
 
+/* Queue Pairs */
+int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
+{
+	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
+	struct bnxt_re_dev *rdev = qp->rdev;
+	int rc;
+
+	rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to destroy HW QP");
+		return rc;
+	}
+	if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp) {
+		rc = bnxt_qplib_destroy_ah(&rdev->qplib_res,
+					   &rdev->sqp_ah->qplib_ah);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev),
+				"Failed to destroy HW AH for shadow QP");
+			return rc;
+		}
+
+		rc = bnxt_qplib_destroy_qp(&rdev->qplib_res,
+					   &rdev->qp1_sqp->qplib_qp);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev),
+				"Failed to destroy Shadow QP");
+			return rc;
+		}
+		mutex_lock(&rdev->qp_lock);
+		list_del(&rdev->qp1_sqp->list);
+		atomic_dec(&rdev->qp_count);
+		mutex_unlock(&rdev->qp_lock);
+
+		kfree(rdev->sqp_ah);
+		kfree(rdev->qp1_sqp);
+	}
+
+	if (qp->rumem && !IS_ERR(qp->rumem))
+		ib_umem_release(qp->rumem);
+	if (qp->sumem && !IS_ERR(qp->sumem))
+		ib_umem_release(qp->sumem);
+
+	mutex_lock(&rdev->qp_lock);
+	list_del(&qp->list);
+	atomic_dec(&rdev->qp_count);
+	mutex_unlock(&rdev->qp_lock);
+	kfree(qp);
+	return 0;
+}
+
+static u8 __from_ib_qp_type(enum ib_qp_type type)
+{
+	switch (type) {
+	case IB_QPT_GSI:
+		return CMDQ_CREATE_QP1_TYPE_GSI;
+	case IB_QPT_RC:
+		return CMDQ_CREATE_QP_TYPE_RC;
+	case IB_QPT_UD:
+		return CMDQ_CREATE_QP_TYPE_UD;
+	case IB_QPT_RAW_ETHERTYPE:
+		return CMDQ_CREATE_QP_TYPE_RAW_ETHERTYPE;
+	default:
+		return IB_QPT_MAX;
+	}
+}
+
+static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd,
+			 struct bnxt_re_qp *qp, struct ib_udata *udata)
+{
+	struct bnxt_re_qp_req ureq;
+	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
+	struct ib_umem *umem;
+	int bytes = 0;
+	struct ib_ucontext *context = pd->ib_pd.uobject->context;
+	struct bnxt_re_ucontext *cntx = to_bnxt_re(context,
+						  struct bnxt_re_ucontext,
+						  ib_uctx);
+
+	if (ib_copy_from_udata(&ureq, udata, sizeof(ureq)))
+		return -EFAULT;
+
+	bytes = (qplib_qp->sq.max_wqe * BNXT_QPLIB_MAX_SQE_ENTRY_SIZE);
+	/* Consider mapping PSN search memory only for RC QPs. */
+	if (qplib_qp->type == CMDQ_CREATE_QP_TYPE_RC)
+		bytes += (qplib_qp->sq.max_wqe * sizeof(struct sq_psn_search));
+	bytes = PAGE_ALIGN(bytes);
+	umem = ib_umem_get(context, ureq.qpsva, bytes,
+			   IB_ACCESS_LOCAL_WRITE, 1);
+	if (IS_ERR(umem))
+		return PTR_ERR(umem);
+
+	qp->sumem = umem;
+	qplib_qp->sq.sglist = umem->sg_head.sgl;
+	qplib_qp->sq.nmap = umem->nmap;
+	qplib_qp->qp_handle = ureq.qp_handle;
+
+	if (!qp->qplib_qp.srq) {
+		bytes = (qplib_qp->rq.max_wqe * BNXT_QPLIB_MAX_RQE_ENTRY_SIZE);
+		bytes = PAGE_ALIGN(bytes);
+		umem = ib_umem_get(context, ureq.qprva, bytes,
+				   IB_ACCESS_LOCAL_WRITE, 1);
+		if (IS_ERR(umem))
+			goto rqfail;
+		qp->rumem = umem;
+		qplib_qp->rq.sglist = umem->sg_head.sgl;
+		qplib_qp->rq.nmap = umem->nmap;
+	}
+
+	qplib_qp->dpi = cntx->dpi;
+	return 0;
+rqfail:
+	ib_umem_release(qp->sumem);
+	qp->sumem = NULL;
+	qplib_qp->sq.sglist = NULL;
+	qplib_qp->sq.nmap = 0;
+
+	return PTR_ERR(umem);
+}
+
+static struct bnxt_re_ah *bnxt_re_create_shadow_qp_ah(struct bnxt_re_pd *pd,
+					       struct bnxt_qplib_res *qp1_res,
+					       struct bnxt_qplib_qp *qp1_qp)
+{
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_ah *ah;
+	union ib_gid sgid;
+	int rc;
+
+	ah = kzalloc(sizeof(*ah), GFP_KERNEL);
+	if (!ah)
+		return NULL;
+
+	ah->rdev = rdev;
+	ah->qplib_ah.pd = &pd->qplib_pd;
+
+	rc = bnxt_re_query_gid(&rdev->ibdev, 1, 0, &sgid);
+	if (rc)
+		goto fail;
+
+	/* Supply the same GID data for the DGID as the SGID */
+	memcpy(ah->qplib_ah.dgid.data, &sgid.raw,
+	       sizeof(union ib_gid));
+	ah->qplib_ah.sgid_index = 0;
+
+	ah->qplib_ah.traffic_class = 0;
+	ah->qplib_ah.flow_label = 0;
+	ah->qplib_ah.hop_limit = 1;
+	ah->qplib_ah.sl = 0;
+	/* Have DMAC same as SMAC */
+	ether_addr_copy(ah->qplib_ah.dmac, rdev->netdev->dev_addr);
+
+	rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev),
+			"Failed to allocate HW AH for Shadow QP");
+		goto fail;
+	}
+
+	return ah;
+
+fail:
+	kfree(ah);
+	return NULL;
+}
+
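+/* The shadow QP is a UD QP that shares QP1's CQs and mirrors its RQ
+ * sizing (the shadow SQ depth equals QP1's RQ depth) so that QP1
+ * traffic can be handled through it with UD semantics.
+ */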
+static struct bnxt_re_qp *bnxt_re_create_shadow_qp(struct bnxt_re_pd *pd,
+					    struct bnxt_qplib_res *qp1_res,
+					    struct bnxt_qplib_qp *qp1_qp)
+{
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_re_qp *qp;
+	int rc;
+
+	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return NULL;
+
+	qp->rdev = rdev;
+
+	/* Initialize the shadow QP structure from the QP1 values */
+	ether_addr_copy(qp->qplib_qp.smac, rdev->netdev->dev_addr);
+
+	qp->qplib_qp.pd = &pd->qplib_pd;
+	qp->qplib_qp.qp_handle = (u64)&qp->qplib_qp;
+	qp->qplib_qp.type = IB_QPT_UD;
+
+	qp->qplib_qp.max_inline_data = 0;
+	qp->qplib_qp.sig_type = true;
+
+	/* Shadow QP SQ depth should be same as QP1 RQ depth */
+	qp->qplib_qp.sq.max_wqe = qp1_qp->rq.max_wqe;
+	qp->qplib_qp.sq.max_sge = 2;
+
+	qp->qplib_qp.scq = qp1_qp->scq;
+	qp->qplib_qp.rcq = qp1_qp->rcq;
+
+	qp->qplib_qp.rq.max_wqe = qp1_qp->rq.max_wqe;
+	qp->qplib_qp.rq.max_sge = qp1_qp->rq.max_sge;
+
+	qp->qplib_qp.mtu = qp1_qp->mtu;
+
+	qp->qplib_qp.sq_hdr_buf_size = 0;
+	qp->qplib_qp.rq_hdr_buf_size = BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6;
+	qp->qplib_qp.dpi = &rdev->dpi_privileged;
+
+	rc = bnxt_qplib_create_qp(qp1_res, &qp->qplib_qp);
+	if (rc)
+		goto fail;
+
+	rdev->sqp_id = qp->qplib_qp.id;
+
+	spin_lock_init(&qp->sq_lock);
+	INIT_LIST_HEAD(&qp->list);
+	mutex_lock(&rdev->qp_lock);
+	list_add_tail(&qp->list, &rdev->qp_list);
+	atomic_inc(&rdev->qp_count);
+	mutex_unlock(&rdev->qp_lock);
+	return qp;
+fail:
+	kfree(qp);
+	return NULL;
+}
+
+struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
+				struct ib_qp_init_attr *qp_init_attr,
+				struct ib_udata *udata)
+{
+	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
+	struct bnxt_re_qp *qp;
+	struct bnxt_re_srq *srq;
+	struct bnxt_re_cq *cq;
+	int rc, entries;
+
+	if ((qp_init_attr->cap.max_send_wr > dev_attr->max_qp_wqes) ||
+	    (qp_init_attr->cap.max_recv_wr > dev_attr->max_qp_wqes) ||
+	    (qp_init_attr->cap.max_send_sge > dev_attr->max_qp_sges) ||
+	    (qp_init_attr->cap.max_recv_sge > dev_attr->max_qp_sges) ||
+	    (qp_init_attr->cap.max_inline_data > dev_attr->max_inline_data))
+		return ERR_PTR(-EINVAL);
+
+	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return ERR_PTR(-ENOMEM);
+
+	qp->rdev = rdev;
+	ether_addr_copy(qp->qplib_qp.smac, rdev->netdev->dev_addr);
+	qp->qplib_qp.pd = &pd->qplib_pd;
+	qp->qplib_qp.qp_handle = (u64)&qp->qplib_qp;
+	qp->qplib_qp.type = __from_ib_qp_type(qp_init_attr->qp_type);
+	if (qp->qplib_qp.type == IB_QPT_MAX) {
+		dev_err(rdev_to_dev(rdev), "QP type 0x%x not supported",
+			qp->qplib_qp.type);
+		rc = -EINVAL;
+		goto fail;
+	}
+	qp->qplib_qp.max_inline_data = qp_init_attr->cap.max_inline_data;
+	qp->qplib_qp.sig_type = (qp_init_attr->sq_sig_type ==
+				 IB_SIGNAL_ALL_WR);
+
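+	/* One extra entry so that posting max WRs does not look empty */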
+	entries = roundup_pow_of_two(qp_init_attr->cap.max_send_wr + 1);
+	if (entries > dev_attr->max_qp_wqes + 1)
+		entries = dev_attr->max_qp_wqes + 1;
+	qp->qplib_qp.sq.max_wqe = entries;
+
+	qp->qplib_qp.sq.max_sge = qp_init_attr->cap.max_send_sge;
+	if (qp->qplib_qp.sq.max_sge > dev_attr->max_qp_sges)
+		qp->qplib_qp.sq.max_sge = dev_attr->max_qp_sges;
+
+	if (qp_init_attr->send_cq) {
+		cq = to_bnxt_re(qp_init_attr->send_cq, struct bnxt_re_cq,
+				ib_cq);
+		if (!cq) {
+			dev_err(rdev_to_dev(rdev), "Send CQ not found");
+			rc = -EINVAL;
+			goto fail;
+		}
+		qp->qplib_qp.scq = &cq->qplib_cq;
+	}
+
+	if (qp_init_attr->recv_cq) {
+		cq = to_bnxt_re(qp_init_attr->recv_cq, struct bnxt_re_cq,
+				ib_cq);
+		if (!cq) {
+			dev_err(rdev_to_dev(rdev), "Receive CQ not found");
+			rc = -EINVAL;
+			goto fail;
+		}
+		qp->qplib_qp.rcq = &cq->qplib_cq;
+	}
+
+	if (qp_init_attr->srq) {
+		dev_err(rdev_to_dev(rdev), "SRQ not supported");
+		rc = -EOPNOTSUPP;
+		goto fail;
+	} else {
+		/* Allocate 1 more than what's provided so posting max doesn't
+		 * mean empty
+		 */
+		entries = roundup_pow_of_two(qp_init_attr->cap.max_recv_wr + 1);
+		if (entries > dev_attr->max_qp_wqes + 1)
+			entries = dev_attr->max_qp_wqes + 1;
+		qp->qplib_qp.rq.max_wqe = entries;
+
+		qp->qplib_qp.rq.max_sge = qp_init_attr->cap.max_recv_sge;
+		if (qp->qplib_qp.rq.max_sge > dev_attr->max_qp_sges)
+			qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
+	}
+
+	qp->qplib_qp.mtu = ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
+
+	if (qp_init_attr->qp_type == IB_QPT_GSI) {
+		qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
+		qp->qplib_qp.sq.max_sge++;
+		if (qp->qplib_qp.sq.max_sge > dev_attr->max_qp_sges)
+			qp->qplib_qp.sq.max_sge = dev_attr->max_qp_sges;
+
+		qp->qplib_qp.rq_hdr_buf_size =
+					BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2;
+
+		qp->qplib_qp.sq_hdr_buf_size =
+					BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2;
+		qp->qplib_qp.dpi = &rdev->dpi_privileged;
+		rc = bnxt_qplib_create_qp1(&rdev->qplib_res, &qp->qplib_qp);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev), "Failed to create HW QP1");
+			goto fail;
+		}
+		/* Create a shadow QP to handle the QP1 traffic */
+		rdev->qp1_sqp = bnxt_re_create_shadow_qp(pd, &rdev->qplib_res,
+							 &qp->qplib_qp);
+		if (!rdev->qp1_sqp) {
+			rc = -EINVAL;
+			dev_err(rdev_to_dev(rdev),
+				"Failed to create Shadow QP for QP1");
+			goto qp_destroy;
+		}
+		rdev->sqp_ah = bnxt_re_create_shadow_qp_ah(pd, &rdev->qplib_res,
+							   &qp->qplib_qp);
+		if (!rdev->sqp_ah) {
+			bnxt_qplib_destroy_qp(&rdev->qplib_res,
+					      &rdev->qp1_sqp->qplib_qp);
+			rc = -EINVAL;
+			dev_err(rdev_to_dev(rdev),
+				"Failed to create AH entry for ShadowQP");
+			goto qp_destroy;
+		}
+
+	} else {
+		qp->qplib_qp.max_rd_atomic = dev_attr->max_qp_rd_atom;
+		qp->qplib_qp.max_dest_rd_atomic = dev_attr->max_qp_init_rd_atom;
+		if (udata) {
+			rc = bnxt_re_init_user_qp(rdev, pd, qp, udata);
+			if (rc)
+				goto fail;
+		} else {
+			qp->qplib_qp.dpi = &rdev->dpi_privileged;
+		}
+
+		rc = bnxt_qplib_create_qp(&rdev->qplib_res, &qp->qplib_qp);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev), "Failed to create HW QP");
+			goto fail;
+		}
+	}
+
+	qp->ib_qp.qp_num = qp->qplib_qp.id;
+	spin_lock_init(&qp->sq_lock);
+
+	if (udata) {
+		struct bnxt_re_qp_resp resp;
+
+		resp.qpid = qp->ib_qp.qp_num;
+		rc = bnxt_re_copy_to_udata(rdev, &resp, sizeof(resp), udata);
+		if (rc) {
+			dev_err(rdev_to_dev(rdev), "Failed to copy QP udata");
+			goto qp_destroy;
+		}
+	}
+	INIT_LIST_HEAD(&qp->list);
+	mutex_lock(&rdev->qp_lock);
+	list_add_tail(&qp->list, &rdev->qp_list);
+	atomic_inc(&rdev->qp_count);
+	mutex_unlock(&rdev->qp_lock);
+
+	return &qp->ib_qp;
+qp_destroy:
+	bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
+fail:
+	kfree(qp);
+	return ERR_PTR(rc);
+}
+
+static u8 __from_ib_qp_state(enum ib_qp_state state)
+{
+	switch (state) {
+	case IB_QPS_RESET:
+		return CMDQ_MODIFY_QP_NEW_STATE_RESET;
+	case IB_QPS_INIT:
+		return CMDQ_MODIFY_QP_NEW_STATE_INIT;
+	case IB_QPS_RTR:
+		return CMDQ_MODIFY_QP_NEW_STATE_RTR;
+	case IB_QPS_RTS:
+		return CMDQ_MODIFY_QP_NEW_STATE_RTS;
+	case IB_QPS_SQD:
+		return CMDQ_MODIFY_QP_NEW_STATE_SQD;
+	case IB_QPS_SQE:
+		return CMDQ_MODIFY_QP_NEW_STATE_SQE;
+	case IB_QPS_ERR:
+	default:
+		return CMDQ_MODIFY_QP_NEW_STATE_ERR;
+	}
+}
+
+static enum ib_qp_state __to_ib_qp_state(u8 state)
+{
+	switch (state) {
+	case CMDQ_MODIFY_QP_NEW_STATE_RESET:
+		return IB_QPS_RESET;
+	case CMDQ_MODIFY_QP_NEW_STATE_INIT:
+		return IB_QPS_INIT;
+	case CMDQ_MODIFY_QP_NEW_STATE_RTR:
+		return IB_QPS_RTR;
+	case CMDQ_MODIFY_QP_NEW_STATE_RTS:
+		return IB_QPS_RTS;
+	case CMDQ_MODIFY_QP_NEW_STATE_SQD:
+		return IB_QPS_SQD;
+	case CMDQ_MODIFY_QP_NEW_STATE_SQE:
+		return IB_QPS_SQE;
+	case CMDQ_MODIFY_QP_NEW_STATE_ERR:
+	default:
+		return IB_QPS_ERR;
+	}
+}
+
+static u32 __from_ib_mtu(enum ib_mtu mtu)
+{
+	switch (mtu) {
+	case IB_MTU_256:
+		return CMDQ_MODIFY_QP_PATH_MTU_MTU_256;
+	case IB_MTU_512:
+		return CMDQ_MODIFY_QP_PATH_MTU_MTU_512;
+	case IB_MTU_1024:
+		return CMDQ_MODIFY_QP_PATH_MTU_MTU_1024;
+	case IB_MTU_2048:
+		return CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
+	case IB_MTU_4096:
+		return CMDQ_MODIFY_QP_PATH_MTU_MTU_4096;
+	default:
+		return CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
+	}
+}
+
+static enum ib_mtu __to_ib_mtu(u32 mtu)
+{
+	switch (mtu & CREQ_QUERY_QP_RESP_SB_PATH_MTU_MASK) {
+	case CMDQ_MODIFY_QP_PATH_MTU_MTU_256:
+		return IB_MTU_256;
+	case CMDQ_MODIFY_QP_PATH_MTU_MTU_512:
+		return IB_MTU_512;
+	case CMDQ_MODIFY_QP_PATH_MTU_MTU_1024:
+		return IB_MTU_1024;
+	case CMDQ_MODIFY_QP_PATH_MTU_MTU_2048:
+		return IB_MTU_2048;
+	case CMDQ_MODIFY_QP_PATH_MTU_MTU_4096:
+		return IB_MTU_4096;
+	default:
+		return IB_MTU_2048;
+	}
+}
+
 static int __from_ib_access_flags(int iflags)
 {
 	int qflags = 0;
@@ -690,6 +1165,293 @@ static enum ib_access_flags __to_ib_access_flags(int qflags)
 		iflags |= IB_ACCESS_ON_DEMAND;
 	return iflags;
 };
+
+static int bnxt_re_modify_shadow_qp(struct bnxt_re_dev *rdev,
+			     struct bnxt_re_qp *qp1_qp,
+			     int qp_attr_mask)
+{
+	struct bnxt_re_qp *qp = rdev->qp1_sqp;
+	int rc = 0;
+
+	if (qp_attr_mask & IB_QP_STATE) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_STATE;
+		qp->qplib_qp.state = qp1_qp->qplib_qp.state;
+	}
+	if (qp_attr_mask & IB_QP_PKEY_INDEX) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY;
+		qp->qplib_qp.pkey_index = qp1_qp->qplib_qp.pkey_index;
+	}
+
+	if (qp_attr_mask & IB_QP_QKEY) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_QKEY;
+		/* Use a random QKEY */
+		qp->qplib_qp.qkey = 0x81818181;
+	}
+	if (qp_attr_mask & IB_QP_SQ_PSN) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN;
+		qp->qplib_qp.sq.psn = qp1_qp->qplib_qp.sq.psn;
+	}
+
+	rc = bnxt_qplib_modify_qp(&rdev->qplib_res, &qp->qplib_qp);
+	if (rc)
+		dev_err(rdev_to_dev(rdev),
+			"Failed to modify Shadow QP for QP1");
+	return rc;
+}
+
+int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+		      int qp_attr_mask, struct ib_udata *udata)
+{
+	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
+	struct bnxt_re_dev *rdev = qp->rdev;
+	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
+	enum ib_qp_state curr_qp_state, new_qp_state;
+	int rc, entries;
+	int status;
+	union ib_gid sgid;
+	struct ib_gid_attr sgid_attr;
+	u8 nw_type;
+
+	qp->qplib_qp.modify_flags = 0;
+	if (qp_attr_mask & IB_QP_STATE) {
+		curr_qp_state = __to_ib_qp_state(qp->qplib_qp.cur_qp_state);
+		new_qp_state = qp_attr->qp_state;
+		if (!ib_modify_qp_is_ok(curr_qp_state, new_qp_state,
+					ib_qp->qp_type, qp_attr_mask,
+					IB_LINK_LAYER_ETHERNET)) {
+			dev_err(rdev_to_dev(rdev),
+				"Invalid attribute mask: %#x specified ",
+				qp_attr_mask);
+			dev_err(rdev_to_dev(rdev),
+				"for qpn: %#x type: %#x",
+				ib_qp->qp_num, ib_qp->qp_type);
+			dev_err(rdev_to_dev(rdev),
+				"curr_qp_state=0x%x, new_qp_state=0x%x\n",
+				curr_qp_state, new_qp_state);
+			return -EINVAL;
+		}
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_STATE;
+		qp->qplib_qp.state = __from_ib_qp_state(qp_attr->qp_state);
+	}
+	if (qp_attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_EN_SQD_ASYNC_NOTIFY;
+		qp->qplib_qp.en_sqd_async_notify = true;
+	}
+	if (qp_attr_mask & IB_QP_ACCESS_FLAGS) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_ACCESS;
+		qp->qplib_qp.access =
+			__from_ib_access_flags(qp_attr->qp_access_flags);
+		/* LOCAL_WRITE access must be set to allow RC receive */
+		qp->qplib_qp.access |= BNXT_QPLIB_ACCESS_LOCAL_WRITE;
+	}
+	if (qp_attr_mask & IB_QP_PKEY_INDEX) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY;
+		qp->qplib_qp.pkey_index = qp_attr->pkey_index;
+	}
+	if (qp_attr_mask & IB_QP_QKEY) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_QKEY;
+		qp->qplib_qp.qkey = qp_attr->qkey;
+	}
+	if (qp_attr_mask & IB_QP_AV) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_DGID |
+				     CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL |
+				     CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX |
+				     CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT |
+				     CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS |
+				     CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC |
+				     CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID;
+		memcpy(qp->qplib_qp.ah.dgid.data, qp_attr->ah_attr.grh.dgid.raw,
+		       sizeof(qp->qplib_qp.ah.dgid.data));
+		qp->qplib_qp.ah.flow_label = qp_attr->ah_attr.grh.flow_label;
+		/* If RoCE V2 is enabled, the stack will have two entries
+		 * for each GID entry. Avoid this duplicate entry in HW by
+		 * dividing the GID index by 2 for RoCE V2.
+		 */
+		qp->qplib_qp.ah.sgid_index =
+					qp_attr->ah_attr.grh.sgid_index / 2;
+		qp->qplib_qp.ah.host_sgid_index =
+					qp_attr->ah_attr.grh.sgid_index;
+		qp->qplib_qp.ah.hop_limit = qp_attr->ah_attr.grh.hop_limit;
+		qp->qplib_qp.ah.traffic_class =
+					qp_attr->ah_attr.grh.traffic_class;
+		qp->qplib_qp.ah.sl = qp_attr->ah_attr.sl;
+		ether_addr_copy(qp->qplib_qp.ah.dmac, qp_attr->ah_attr.dmac);
+
+		status = ib_get_cached_gid(&rdev->ibdev, 1,
+					   qp_attr->ah_attr.grh.sgid_index,
+					   &sgid, &sgid_attr);
+		if (!status && sgid_attr.ndev) {
+			memcpy(qp->qplib_qp.smac, sgid_attr.ndev->dev_addr,
+			       ETH_ALEN);
+			dev_put(sgid_attr.ndev);
+			nw_type = ib_gid_to_network_type(sgid_attr.gid_type,
+							 &sgid);
+			switch (nw_type) {
+			case RDMA_NETWORK_IPV4:
+				qp->qplib_qp.nw_type =
+					CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV2_IPV4;
+				break;
+			case RDMA_NETWORK_IPV6:
+				qp->qplib_qp.nw_type =
+					CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV2_IPV6;
+				break;
+			default:
+				qp->qplib_qp.nw_type =
+					CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV1;
+				break;
+			}
+		}
+	}
+
+	if (qp_attr_mask & IB_QP_PATH_MTU) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+		qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu);
+	} else if (qp_attr->qp_state == IB_QPS_RTR) {
+		qp->qplib_qp.modify_flags |=
+			CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+		qp->qplib_qp.path_mtu =
+			__from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
+	}
+
+	if (qp_attr_mask & IB_QP_TIMEOUT) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_TIMEOUT;
+		qp->qplib_qp.timeout = qp_attr->timeout;
+	}
+	if (qp_attr_mask & IB_QP_RETRY_CNT) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_RETRY_CNT;
+		qp->qplib_qp.retry_cnt = qp_attr->retry_cnt;
+	}
+	if (qp_attr_mask & IB_QP_RNR_RETRY) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_RNR_RETRY;
+		qp->qplib_qp.rnr_retry = qp_attr->rnr_retry;
+	}
+	if (qp_attr_mask & IB_QP_MIN_RNR_TIMER) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER;
+		qp->qplib_qp.min_rnr_timer = qp_attr->min_rnr_timer;
+	}
+	if (qp_attr_mask & IB_QP_RQ_PSN) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN;
+		qp->qplib_qp.rq.psn = qp_attr->rq_psn;
+	}
+	if (qp_attr_mask & IB_QP_MAX_QP_RD_ATOMIC) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_MAX_RD_ATOMIC;
+		qp->qplib_qp.max_rd_atomic = qp_attr->max_rd_atomic;
+	}
+	if (qp_attr_mask & IB_QP_SQ_PSN) {
+		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN;
+		qp->qplib_qp.sq.psn = qp_attr->sq_psn;
+	}
+	if (qp_attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC;
+		qp->qplib_qp.max_dest_rd_atomic = qp_attr->max_dest_rd_atomic;
+	}
+	if (qp_attr_mask & IB_QP_CAP) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_SQ_SIZE |
+				CMDQ_MODIFY_QP_MODIFY_MASK_RQ_SIZE |
+				CMDQ_MODIFY_QP_MODIFY_MASK_SQ_SGE |
+				CMDQ_MODIFY_QP_MODIFY_MASK_RQ_SGE |
+				CMDQ_MODIFY_QP_MODIFY_MASK_MAX_INLINE_DATA;
+		if ((qp_attr->cap.max_send_wr >= dev_attr->max_qp_wqes) ||
+		    (qp_attr->cap.max_recv_wr >= dev_attr->max_qp_wqes) ||
+		    (qp_attr->cap.max_send_sge >= dev_attr->max_qp_sges) ||
+		    (qp_attr->cap.max_recv_sge >= dev_attr->max_qp_sges) ||
+		    (qp_attr->cap.max_inline_data >=
+						dev_attr->max_inline_data)) {
+			dev_err(rdev_to_dev(rdev),
+				"Modify QP failed - max exceeded");
+			return -EINVAL;
+		}
+		entries = roundup_pow_of_two(qp_attr->cap.max_send_wr);
+		if (entries > dev_attr->max_qp_wqes)
+			entries = dev_attr->max_qp_wqes;
+		qp->qplib_qp.sq.max_wqe = entries;
+		qp->qplib_qp.sq.max_sge = qp_attr->cap.max_send_sge;
+		if (qp->qplib_qp.rq.max_wqe) {
+			entries = roundup_pow_of_two(qp_attr->cap.max_recv_wr);
+			if (entries > dev_attr->max_qp_wqes)
+				entries = dev_attr->max_qp_wqes;
+			qp->qplib_qp.rq.max_wqe = entries;
+			qp->qplib_qp.rq.max_sge = qp_attr->cap.max_recv_sge;
+		} else {
+			/* SRQ was used prior, just ignore the RQ caps */
+		}
+	}
+	if (qp_attr_mask & IB_QP_DEST_QPN) {
+		qp->qplib_qp.modify_flags |=
+				CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID;
+		qp->qplib_qp.dest_qpn = qp_attr->dest_qp_num;
+	}
+	rc = bnxt_qplib_modify_qp(&rdev->qplib_res, &qp->qplib_qp);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to modify HW QP");
+		return rc;
+	}
+	if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp)
+		rc = bnxt_re_modify_shadow_qp(rdev, qp, qp_attr_mask);
+	return rc;
+}
+
+int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+{
+	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
+	struct bnxt_re_dev *rdev = qp->rdev;
+	struct bnxt_qplib_qp qplib_qp;
+	int rc;
+
+	memset(&qplib_qp, 0, sizeof(struct bnxt_qplib_qp));
+	qplib_qp.id = qp->qplib_qp.id;
+	qplib_qp.ah.host_sgid_index = qp->qplib_qp.ah.host_sgid_index;
+
+	rc = bnxt_qplib_query_qp(&rdev->qplib_res, &qplib_qp);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Failed to query HW QP");
+		return rc;
+	}
+	qp_attr->qp_state = __to_ib_qp_state(qplib_qp.state);
+	qp_attr->en_sqd_async_notify = qplib_qp.en_sqd_async_notify ? 1 : 0;
+	qp_attr->qp_access_flags = __to_ib_access_flags(qplib_qp.access);
+	qp_attr->pkey_index = qplib_qp.pkey_index;
+	qp_attr->qkey = qplib_qp.qkey;
+	memcpy(qp_attr->ah_attr.grh.dgid.raw, qplib_qp.ah.dgid.data,
+	       sizeof(qplib_qp.ah.dgid.data));
+	qp_attr->ah_attr.grh.flow_label = qplib_qp.ah.flow_label;
+	qp_attr->ah_attr.grh.sgid_index = qplib_qp.ah.host_sgid_index;
+	qp_attr->ah_attr.grh.hop_limit = qplib_qp.ah.hop_limit;
+	qp_attr->ah_attr.grh.traffic_class = qplib_qp.ah.traffic_class;
+	qp_attr->ah_attr.sl = qplib_qp.ah.sl;
+	ether_addr_copy(qp_attr->ah_attr.dmac, qplib_qp.ah.dmac);
+	qp_attr->path_mtu = __to_ib_mtu(qplib_qp.path_mtu);
+	qp_attr->timeout = qplib_qp.timeout;
+	qp_attr->retry_cnt = qplib_qp.retry_cnt;
+	qp_attr->rnr_retry = qplib_qp.rnr_retry;
+	qp_attr->min_rnr_timer = qplib_qp.min_rnr_timer;
+	qp_attr->rq_psn = qplib_qp.rq.psn;
+	qp_attr->max_rd_atomic = qplib_qp.max_rd_atomic;
+	qp_attr->sq_psn = qplib_qp.sq.psn;
+	qp_attr->max_dest_rd_atomic = qplib_qp.max_dest_rd_atomic;
+	qp_init_attr->sq_sig_type = qplib_qp.sig_type ? IB_SIGNAL_ALL_WR :
+							IB_SIGNAL_REQ_WR;
+	qp_attr->dest_qp_num = qplib_qp.dest_qpn;
+
+	qp_attr->cap.max_send_wr = qp->qplib_qp.sq.max_wqe;
+	qp_attr->cap.max_send_sge = qp->qplib_qp.sq.max_sge;
+	qp_attr->cap.max_recv_wr = qp->qplib_qp.rq.max_wqe;
+	qp_attr->cap.max_recv_sge = qp->qplib_qp.rq.max_sge;
+	qp_attr->cap.max_inline_data = qp->qplib_qp.max_inline_data;
+	qp_init_attr->cap = qp_attr->cap;
+
+	return 0;
+}
+
 /* Completion Queues */
 int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index ba9a4c9..75ee88a 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -57,6 +57,19 @@ struct bnxt_re_ah {
 	struct bnxt_qplib_ah	qplib_ah;
 };
 
+struct bnxt_re_qp {
+	struct list_head	list;
+	struct bnxt_re_dev	*rdev;
+	struct ib_qp		ib_qp;
+	spinlock_t		sq_lock;	/* protect sq */
+	struct bnxt_qplib_qp	qplib_qp;
+	struct ib_umem		*sumem;
+	struct ib_umem		*rumem;
+	/* QP1 */
+	u32			send_psn;
+	struct ib_ud_header	qp1_hdr;
+};
+
 struct bnxt_re_cq {
 	struct bnxt_re_dev	*rdev;
 	spinlock_t              cq_lock;	/* protect cq */
@@ -141,6 +154,14 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *pd,
 int bnxt_re_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
 int bnxt_re_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
 int bnxt_re_destroy_ah(struct ib_ah *ah);
+struct ib_qp *bnxt_re_create_qp(struct ib_pd *pd,
+				struct ib_qp_init_attr *qp_init_attr,
+				struct ib_udata *udata);
+int bnxt_re_modify_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
+		      int qp_attr_mask, struct ib_udata *udata);
+int bnxt_re_query_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
+		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr);
+int bnxt_re_destroy_qp(struct ib_qp *qp);
 struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 				const struct ib_cq_init_attr *attr,
 				struct ib_ucontext *context,
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 3d1504e..5facacc 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -445,6 +445,12 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->modify_ah		= bnxt_re_modify_ah;
 	ibdev->query_ah			= bnxt_re_query_ah;
 	ibdev->destroy_ah		= bnxt_re_destroy_ah;
+
+	ibdev->create_qp		= bnxt_re_create_qp;
+	ibdev->modify_qp		= bnxt_re_modify_qp;
+	ibdev->query_qp			= bnxt_re_query_qp;
+	ibdev->destroy_qp		= bnxt_re_destroy_qp;
+
 	ibdev->create_cq		= bnxt_re_create_cq;
 	ibdev->destroy_cq		= bnxt_re_destroy_cq;
 	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
diff --git a/include/uapi/rdma/bnxt_re_uverbs_abi.h b/include/uapi/rdma/bnxt_re_uverbs_abi.h
index 5444eff..e6732f8 100644
--- a/include/uapi/rdma/bnxt_re_uverbs_abi.h
+++ b/include/uapi/rdma/bnxt_re_uverbs_abi.h
@@ -66,6 +66,16 @@ struct bnxt_re_cq_resp {
 	__u32 phase;
 } __packed;
 
+struct bnxt_re_qp_req {
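+	/* qpsva/qprva: userspace SQ/RQ buffer addresses that the
+	 * driver pins on behalf of the user QP
+	 */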
+	__u64 qpsva;
+	__u64 qprva;
+	__u64 qp_handle;
+} __packed;
+
+struct bnxt_re_qp_resp {
+	__u32 qpid;
+} __packed;
+
 enum bnxt_re_shpg_offt {
 	BNXT_RE_BEG_RESV_OFFT	= 0x00,
 	BNXT_RE_AVID_OFFT	= 0x10,
-- 
2.5.5


* [PATCH V2  14/22] bnxt_re: Support post_send verb
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (10 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 13/22] bnxt_re: Support QP verbs Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 15/22] bnxt_re: Support post_recv Selvin Xavier
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

Enables the ib_post_send fastpath verb for posting Send work requests on QPs.

v2: Fixed some sparse warnings
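
Not part of this patch: for reviewers, a minimal consumer-side sketch of
posting a signaled SEND through this verb. dma_addr, len, mr, qp and
MY_COOKIE are hypothetical names standing in for a ULP's own state:

	struct ib_sge sge = {
		.addr   = dma_addr,	/* assumed DMA-mapped send buffer */
		.length = len,
		.lkey   = mr->lkey,
	};
	struct ib_send_wr wr = {
		.wr_id      = MY_COOKIE,	/* hypothetical completion cookie */
		.sg_list    = &sge,
		.num_sge    = 1,
		.opcode     = IB_WR_SEND,
		.send_flags = IB_SEND_SIGNALED,
	};
	struct ib_send_wr *bad_wr;
	int rc = ib_post_send(qp, &wr, &bad_wr);
	/* on success, the completion is reported later through the CQ */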

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 267 ++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |   8 +
 drivers/infiniband/hw/bnxtre/bnxt_re.h          |   5 +
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 542 +++++++++++++++++++++++-
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |   2 +
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   1 +
 6 files changed, 822 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
index edc9411..419efe2 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -1088,6 +1088,273 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
 	return 0;
 }
 
+void *bnxt_qplib_get_qp1_sq_buf(struct bnxt_qplib_qp *qp,
+				struct bnxt_qplib_sge *sge)
+{
+	struct bnxt_qplib_q *sq = &qp->sq;
+	u32 sw_prod;
+
+	memset(sge, 0, sizeof(*sge));
+
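+	/* QP1 keeps a per-WQE header buffer; return the slot at the producer */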
+	if (qp->sq_hdr_buf) {
+		sw_prod = HWQ_CMP(sq->hwq.prod, &sq->hwq);
+		sge->addr = (dma_addr_t)(qp->sq_hdr_buf_map +
+					 sw_prod * qp->sq_hdr_buf_size);
+		sge->lkey = 0xFFFFFFFF;
+		sge->size = qp->sq_hdr_buf_size;
+		return qp->sq_hdr_buf + sw_prod * sge->size;
+	}
+	return NULL;
+}
+
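+/* Ring the SQ doorbell with the current producer index */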
+void bnxt_qplib_post_send_db(struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_q *sq = &qp->sq;
+	struct dbr_dbr db_msg = { 0 };
+	u32 sw_prod;
+
+	sw_prod = HWQ_CMP(sq->hwq.prod, &sq->hwq);
+
+	db_msg.index = cpu_to_le32((sw_prod << DBR_DBR_INDEX_SFT) &
+				   DBR_DBR_INDEX_MASK);
+	db_msg.type_xid =
+		cpu_to_le32(((qp->id << DBR_DBR_XID_SFT) & DBR_DBR_XID_MASK) |
+			    DBR_DBR_TYPE_SQ);
+	/* Flush all the WQE writes to HW */
+	wmb();
+	__iowrite64_copy(qp->dpi->dbr, &db_msg, sizeof(db_msg) / sizeof(u64));
+}
+
+int bnxt_qplib_post_send(struct bnxt_qplib_qp *qp,
+			 struct bnxt_qplib_swqe *wqe)
+{
+	struct bnxt_qplib_q *sq = &qp->sq;
+	struct bnxt_qplib_swq *swq;
+	struct sq_send *hw_sq_send_hdr, **hw_sq_send_ptr;
+	struct sq_sge *hw_sge;
+	u32 sw_prod;
+	u8 wqe_size16;
+	int i, rc = 0, data_len = 0, pkt_num = 0;
+	u32 temp32;
+
+	if (qp->state != CMDQ_MODIFY_QP_NEW_STATE_RTS) {
+		rc = -EINVAL;
+		goto done;
+	}
+	if (HWQ_CMP((sq->hwq.prod + 1), &sq->hwq) ==
+	    HWQ_CMP(sq->hwq.cons, &sq->hwq)) {
+		rc = -ENOMEM;
+		goto done;
+	}
+	sw_prod = HWQ_CMP(sq->hwq.prod, &sq->hwq);
+	swq = &sq->swq[sw_prod];
+	swq->wr_id = wqe->wr_id;
+	swq->type = wqe->type;
+	swq->flags = wqe->flags;
+	if (qp->sig_type)
+		swq->flags |= SQ_SEND_FLAGS_SIGNAL_COMP;
+	swq->start_psn = sq->psn & BTH_PSN_MASK;
+
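+	/* SQE_PG()/SQE_IDX() resolve the WQE's page and offset in the paged SQ */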
+	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
+	hw_sq_send_hdr = &hw_sq_send_ptr[SQE_PG(sw_prod)][SQE_IDX(sw_prod)];
+
+	memset(hw_sq_send_hdr, 0, BNXT_QPLIB_MAX_SQE_ENTRY_SIZE);
+
+	if (wqe->flags & BNXT_QPLIB_SWQE_FLAGS_INLINE) {
+		/* Copy the inline data */
+		if (wqe->inline_len > BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH) {
+			dev_warn(&sq->hwq.pdev->dev,
+				 "QPLIB: Inline data length > 96 detected");
+			data_len = BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH;
+		} else {
+			data_len = wqe->inline_len;
+		}
+		memcpy(hw_sq_send_hdr->data, wqe->inline_data, data_len);
+		wqe_size16 = (data_len + 15) >> 4;
+	} else {
+		for (i = 0, hw_sge = (struct sq_sge *)hw_sq_send_hdr->data;
+		     i < wqe->num_sge; i++, hw_sge++) {
+			hw_sge->va_or_pa = cpu_to_le64(wqe->sg_list[i].addr);
+			hw_sge->l_key = cpu_to_le32(wqe->sg_list[i].lkey);
+			hw_sge->size = cpu_to_le32(wqe->sg_list[i].size);
+			/* Accumulate from the CPU-order copy, not the LE field */
+			data_len += wqe->sg_list[i].size;
+		}
+		/* Each SGE entry = 1 WQE size16 */
+		wqe_size16 = wqe->num_sge;
+	}
+
+	/* Specifics */
+	switch (wqe->type) {
+	case BNXT_QPLIB_SWQE_TYPE_SEND:
+		if (qp->type == CMDQ_CREATE_QP1_TYPE_GSI) {
+			/* Assemble info for Raw Ethertype QPs */
+			struct sq_send_raweth_qp1 *sqe =
+				(struct sq_send_raweth_qp1 *)hw_sq_send_hdr;
+
+			sqe->wqe_type = wqe->type;
+			sqe->flags = wqe->flags;
+			sqe->wqe_size = wqe_size16 +
+				((offsetof(typeof(*sqe), data) + 15) >> 4);
+			sqe->cfa_action = cpu_to_le16(wqe->rawqp1.cfa_action);
+			sqe->lflags = cpu_to_le16(wqe->rawqp1.lflags);
+			sqe->length = cpu_to_le32(data_len);
+			sqe->cfa_meta = cpu_to_le32((wqe->rawqp1.cfa_meta &
+				SQ_SEND_RAWETH_QP1_CFA_META_VLAN_VID_MASK) <<
+				SQ_SEND_RAWETH_QP1_CFA_META_VLAN_VID_SFT);
+
+			break;
+		}
+		/* else, just fall thru */
+	case BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM:
+	case BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV:
+	{
+		struct sq_send *sqe = (struct sq_send *)hw_sq_send_hdr;
+
+		sqe->wqe_type = wqe->type;
+		sqe->flags = wqe->flags;
+		sqe->wqe_size = wqe_size16 +
+				((offsetof(typeof(*sqe), data) + 15) >> 4);
+		sqe->inv_key_or_imm_data = cpu_to_le32(
+						wqe->send.imm_data_or_inv_key);
+		if (qp->type == CMDQ_CREATE_QP_TYPE_UD) {
+			sqe->q_key = cpu_to_le32(wqe->send.q_key);
+			sqe->dst_qp = cpu_to_le32(
+					wqe->send.dst_qp & SQ_SEND_DST_QP_MASK);
+			sqe->length = cpu_to_le32(data_len);
+			sqe->avid = cpu_to_le32(wqe->send.avid &
+						SQ_SEND_AVID_MASK);
+			sq->psn = (sq->psn + 1) & BTH_PSN_MASK;
+		} else {
+			sqe->length = cpu_to_le32(data_len);
+			sqe->dst_qp = 0;
+			sqe->avid = 0;
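+			/* RC: advance the PSN by one per MTU-sized packet */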
+			if (qp->mtu)
+				pkt_num = (data_len + qp->mtu - 1) / qp->mtu;
+			if (!pkt_num)
+				pkt_num = 1;
+			sq->psn = (sq->psn + pkt_num) & BTH_PSN_MASK;
+		}
+		break;
+	}
+	case BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE:
+	case BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM:
+	case BNXT_QPLIB_SWQE_TYPE_RDMA_READ:
+	{
+		struct sq_rdma *sqe = (struct sq_rdma *)hw_sq_send_hdr;
+
+		sqe->wqe_type = wqe->type;
+		sqe->flags = wqe->flags;
+		sqe->wqe_size = wqe_size16 +
+				((offsetof(typeof(*sqe), data) + 15) >> 4);
+		sqe->imm_data = cpu_to_le32(wqe->rdma.imm_data_or_inv_key);
+		sqe->length = cpu_to_le32((u32)data_len);
+		sqe->remote_va = cpu_to_le64(wqe->rdma.remote_va);
+		sqe->remote_key = cpu_to_le32(wqe->rdma.r_key);
+		if (qp->mtu)
+			pkt_num = (data_len + qp->mtu - 1) / qp->mtu;
+		if (!pkt_num)
+			pkt_num = 1;
+		sq->psn = (sq->psn + pkt_num) & BTH_PSN_MASK;
+		break;
+	}
+	case BNXT_QPLIB_SWQE_TYPE_ATOMIC_CMP_AND_SWP:
+	case BNXT_QPLIB_SWQE_TYPE_ATOMIC_FETCH_AND_ADD:
+	{
+		struct sq_atomic *sqe = (struct sq_atomic *)hw_sq_send_hdr;
+
+		sqe->wqe_type = wqe->type;
+		sqe->flags = wqe->flags;
+		sqe->remote_key = cpu_to_le32(wqe->atomic.r_key);
+		sqe->remote_va = cpu_to_le64(wqe->atomic.remote_va);
+		sqe->swap_data = cpu_to_le64(wqe->atomic.swap_data);
+		sqe->cmp_data = cpu_to_le64(wqe->atomic.cmp_data);
+		if (qp->mtu)
+			pkt_num = (data_len + qp->mtu - 1) / qp->mtu;
+		if (!pkt_num)
+			pkt_num = 1;
+		sq->psn = (sq->psn + pkt_num) & BTH_PSN_MASK;
+		break;
+	}
+	case BNXT_QPLIB_SWQE_TYPE_LOCAL_INV:
+	{
+		struct sq_localinvalidate *sqe =
+				(struct sq_localinvalidate *)hw_sq_send_hdr;
+
+		sqe->wqe_type = wqe->type;
+		sqe->flags = wqe->flags;
+		sqe->inv_l_key = cpu_to_le32(wqe->local_inv.inv_l_key);
+
+		break;
+	}
+	case BNXT_QPLIB_SWQE_TYPE_FAST_REG_MR:
+	{
+		struct sq_fr_pmr *sqe = (struct sq_fr_pmr *)hw_sq_send_hdr;
+
+		sqe->wqe_type = wqe->type;
+		sqe->flags = wqe->flags;
+		sqe->access_cntl = wqe->frmr.access_cntl |
+				   SQ_FR_PMR_ACCESS_CNTL_LOCAL_WRITE;
+		sqe->zero_based_page_size_log =
+			(wqe->frmr.pg_sz_log & SQ_FR_PMR_PAGE_SIZE_LOG_MASK) <<
+			SQ_FR_PMR_PAGE_SIZE_LOG_SFT |
+			(wqe->frmr.zero_based ? SQ_FR_PMR_ZERO_BASED : 0);
+		sqe->l_key = cpu_to_le32(wqe->frmr.l_key);
+		temp32 = cpu_to_le32(wqe->frmr.length);
+		memcpy(sqe->length, &temp32, sizeof(wqe->frmr.length));
+		sqe->numlevels_pbl_page_size_log =
+			((wqe->frmr.pbl_pg_sz_log <<
+					SQ_FR_PMR_PBL_PAGE_SIZE_LOG_SFT) &
+					SQ_FR_PMR_PBL_PAGE_SIZE_LOG_MASK) |
+			((wqe->frmr.levels << SQ_FR_PMR_NUMLEVELS_SFT) &
+					SQ_FR_PMR_NUMLEVELS_MASK);
+
+		for (i = 0; i < wqe->frmr.page_list_len; i++)
+			wqe->frmr.pbl_ptr[i] = cpu_to_le64(
+						wqe->frmr.page_list[i] |
+						PTU_PTE_VALID);
+		sqe->pblptr = cpu_to_le64(wqe->frmr.pbl_dma_ptr);
+		sqe->va = cpu_to_le64(wqe->frmr.va);
+
+		break;
+	}
+	case BNXT_QPLIB_SWQE_TYPE_BIND_MW:
+	{
+		struct sq_bind *sqe = (struct sq_bind *)hw_sq_send_hdr;
+
+		sqe->wqe_type = wqe->type;
+		sqe->flags = wqe->flags;
+		sqe->access_cntl = wqe->bind.access_cntl;
+		sqe->mw_type_zero_based = wqe->bind.mw_type |
+			(wqe->bind.zero_based ? SQ_BIND_ZERO_BASED : 0);
+		sqe->parent_l_key = cpu_to_le32(wqe->bind.parent_l_key);
+		sqe->l_key = cpu_to_le32(wqe->bind.r_key);
+		sqe->va = cpu_to_le64(wqe->bind.va);
+		temp32 = cpu_to_le32(wqe->bind.length);
+		memcpy(&sqe->length, &temp32, sizeof(wqe->bind.length));
+		break;
+	}
+	default:
+		/* Bad wqe, return error */
+		rc = -EINVAL;
+		goto done;
+	}
+	swq->next_psn = sq->psn & BTH_PSN_MASK;
+	if (swq->psn_search) {
+		swq->psn_search->opcode_start_psn = cpu_to_le32(
+			((swq->start_psn << SQ_PSN_SEARCH_START_PSN_SFT) &
+			 SQ_PSN_SEARCH_START_PSN_MASK) |
+			((wqe->type << SQ_PSN_SEARCH_OPCODE_SFT) &
+			 SQ_PSN_SEARCH_OPCODE_MASK));
+		swq->psn_search->flags_next_psn = cpu_to_le32(
+			((swq->next_psn << SQ_PSN_SEARCH_NEXT_PSN_SFT) &
+			 SQ_PSN_SEARCH_NEXT_PSN_MASK));
+	}
+
+	sq->hwq.prod++;
+done:
+	return rc;
+}
+
 /* CQ */
 
 /* Spinlock must be held */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
index f6d2be5..7fe98db 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -38,6 +38,9 @@
 
 #ifndef __BNXT_QPLIB_FP_H__
 #define __BNXT_QPLIB_FP_H__
+
+#define BNXT_QPLIB_ETHTYPE_ROCEV1      0x8915
+
 struct bnxt_qplib_sge {
 	u64				addr;
 	u32				lkey;
@@ -390,6 +393,11 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
 int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
 int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
 int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
+void *bnxt_qplib_get_qp1_sq_buf(struct bnxt_qplib_qp *qp,
+				struct bnxt_qplib_sge *sge);
+void bnxt_qplib_post_send_db(struct bnxt_qplib_qp *qp);
+int bnxt_qplib_post_send(struct bnxt_qplib_qp *qp,
+			 struct bnxt_qplib_swqe *wqe);
 int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index 84af86b..30f4b30 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -133,6 +133,11 @@ struct bnxt_re_dev {
 
 #define to_bnxt_re_dev(ptr, member)	\
 	container_of((ptr), struct bnxt_re_dev, member)
+
+#define BNXT_RE_ROCE_V1_PACKET		0
+#define BNXT_RE_ROCEV2_IPV4_PACKET	2
+#define BNXT_RE_ROCEV2_IPV6_PACKET	3
+
 #define	rdev_to_dev(rdev)	((rdev) ? (&(rdev)->ibdev.dev) : NULL)
 
 #endif
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 77860a2..540f2f2 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -70,6 +70,20 @@ static int bnxt_re_copy_to_udata(struct bnxt_re_dev *rdev, void *data, int len,
 	return rc ? -EFAULT : 0;
 }
 
+static int bnxt_re_build_sgl(struct ib_sge *ib_sg_list,
+			     struct bnxt_qplib_sge *sg_list, int num)
+{
+	int i, total = 0;
+
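+	/* Copy each IB SGE into qplib form and accumulate the payload total */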
+	for (i = 0; i < num; i++) {
+		sg_list[i].addr = ib_sg_list[i].addr;
+		sg_list[i].lkey = ib_sg_list[i].lkey;
+		sg_list[i].size = ib_sg_list[i].length;
+		total += sg_list[i].size;
+	}
+	return total;
+}
+
 /* Device */
 struct net_device *bnxt_re_get_netdev(struct ib_device *ibdev, u8 port_num)
 {
@@ -708,8 +722,6 @@ static u8 __from_ib_qp_type(enum ib_qp_type type)
 		return CMDQ_CREATE_QP_TYPE_RC;
 	case IB_QPT_UD:
 		return CMDQ_CREATE_QP_TYPE_UD;
-	case IB_QPT_RAW_ETHERTYPE:
-		return CMDQ_CREATE_QP_TYPE_RAW_ETHERTYPE;
 	default:
 		return IB_QPT_MAX;
 	}
@@ -881,7 +893,6 @@ struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
 	struct bnxt_re_dev *rdev = pd->rdev;
 	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
 	struct bnxt_re_qp *qp;
-	struct bnxt_re_srq *srq;
 	struct bnxt_re_cq *cq;
 	int rc, entries;
 
@@ -1452,6 +1463,531 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
 	return 0;
 }
 
+/* Routine for sending QP1 packets for RoCE V1 and V2 */
+static int bnxt_re_build_qp1_send_v2(struct bnxt_re_qp *qp,
+				     struct ib_send_wr *wr,
+				     struct bnxt_qplib_swqe *wqe,
+				     int payload_size)
+{
+	struct ib_device *ibdev = &qp->rdev->ibdev;
+	struct bnxt_re_ah *ah = to_bnxt_re(ud_wr(wr)->ah, struct bnxt_re_ah,
+					   ib_ah);
+	struct bnxt_qplib_ah *qplib_ah = &ah->qplib_ah;
+	struct bnxt_qplib_sge sge;
+	union ib_gid sgid;
+	u8 nw_type;
+	u16 ether_type;
+	struct ib_gid_attr sgid_attr;
+	union ib_gid dgid;
+	bool is_eth = false;
+	bool is_vlan = false;
+	bool is_grh = false;
+	bool is_udp = false;
+	u8 ip_version = 0;
+	u16 vlan_id = 0xFFFF;
+	void *buf;
+	int i, rc = 0, size;
+
+	memset(&qp->qp1_hdr, 0, sizeof(qp->qp1_hdr));
+
+	rc = ib_get_cached_gid(ibdev, 1,
+			       qplib_ah->host_sgid_index, &sgid,
+			       &sgid_attr);
+	if (rc) {
+		dev_err(rdev_to_dev(qp->rdev),
+			"Failed to query gid at index %d",
+			qplib_ah->host_sgid_index);
+		return rc;
+	}
+	if (sgid_attr.ndev) {
+		if (is_vlan_dev(sgid_attr.ndev))
+			vlan_id = vlan_dev_vlan_id(sgid_attr.ndev);
+		dev_put(sgid_attr.ndev);
+	}
+	/* Get network header type for this GID */
+	nw_type = ib_gid_to_network_type(sgid_attr.gid_type, &sgid);
+	switch (nw_type) {
+	case RDMA_NETWORK_IPV4:
+		nw_type = BNXT_RE_ROCEV2_IPV4_PACKET;
+		break;
+	case RDMA_NETWORK_IPV6:
+		nw_type = BNXT_RE_ROCEV2_IPV6_PACKET;
+		break;
+	default:
+		nw_type = BNXT_RE_ROCE_V1_PACKET;
+		break;
+	}
+	memcpy(&dgid.raw, &qplib_ah->dgid, 16);
+	is_udp = sgid_attr.gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP;
+	if (is_udp) {
+		if (ipv6_addr_v4mapped((struct in6_addr *)&sgid)) {
+			ip_version = 4;
+			ether_type = ETH_P_IP;
+		} else {
+			ip_version = 6;
+			ether_type = ETH_P_IPV6;
+		}
+		is_grh = false;
+	} else {
+		ether_type = BNXT_QPLIB_ETHTYPE_ROCEV1;
+		is_grh = true;
+	}
+
+	is_eth = true;
+	is_vlan = (vlan_id && (vlan_id < 0x1000)) ? true : false;
+
+	ib_ud_header_init(payload_size, !is_eth, is_eth, is_vlan, is_grh,
+			  ip_version, is_udp, 0, &qp->qp1_hdr);
+
+	/* ETH */
+	ether_addr_copy(qp->qp1_hdr.eth.dmac_h, ah->qplib_ah.dmac);
+	ether_addr_copy(qp->qp1_hdr.eth.smac_h, qp->qplib_qp.smac);
+
+	/* For vlan, check the sgid for vlan existence */
+
+	if (!is_vlan) {
+		qp->qp1_hdr.eth.type = cpu_to_be16(ether_type);
+	} else {
+		qp->qp1_hdr.vlan.type = cpu_to_be16(ether_type);
+		qp->qp1_hdr.vlan.tag = cpu_to_be16(vlan_id);
+	}
+
+	if (is_grh || (ip_version == 6)) {
+		memcpy(qp->qp1_hdr.grh.source_gid.raw, sgid.raw, sizeof(sgid));
+		memcpy(qp->qp1_hdr.grh.destination_gid.raw, qplib_ah->dgid.data,
+		       sizeof(sgid));
+		qp->qp1_hdr.grh.hop_limit     = qplib_ah->hop_limit;
+	}
+
+	if (ip_version == 4) {
+		qp->qp1_hdr.ip4.tos = 0;
+		qp->qp1_hdr.ip4.id = 0;
+		qp->qp1_hdr.ip4.frag_off = htons(IP_DF);
+		qp->qp1_hdr.ip4.ttl = qplib_ah->hop_limit;
+
+		memcpy(&qp->qp1_hdr.ip4.saddr, sgid.raw + 12, 4);
+		memcpy(&qp->qp1_hdr.ip4.daddr, qplib_ah->dgid.data + 12, 4);
+		qp->qp1_hdr.ip4.check = ib_ud_ip4_csum(&qp->qp1_hdr);
+	}
+
+	if (is_udp) {
+		qp->qp1_hdr.udp.dport = htons(ROCE_V2_UDP_DPORT);
+		qp->qp1_hdr.udp.sport = htons(0x8CD1);
+		qp->qp1_hdr.udp.csum = 0;
+	}
+
+	/* BTH */
+	if (wr->opcode == IB_WR_SEND_WITH_IMM) {
+		qp->qp1_hdr.bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
+		qp->qp1_hdr.immediate_present = 1;
+	} else {
+		qp->qp1_hdr.bth.opcode = IB_OPCODE_UD_SEND_ONLY;
+	}
+	if (wr->send_flags & IB_SEND_SOLICITED)
+		qp->qp1_hdr.bth.solicited_event = 1;
+	/* pad_count */
+	qp->qp1_hdr.bth.pad_count = (4 - payload_size) & 3;
+
+	/* P_key for QP1 is for all members */
+	qp->qp1_hdr.bth.pkey = cpu_to_be16(0xFFFF);
+	qp->qp1_hdr.bth.destination_qpn = cpu_to_be32(IB_QP1);
+	qp->qp1_hdr.bth.ack_req = 0;
+	qp->send_psn++;
+	qp->send_psn &= BTH_PSN_MASK;
+	qp->qp1_hdr.bth.psn = cpu_to_be32(qp->send_psn);
+	/* DETH */
+	/* Use the privileged Q_Key for QP1 */
+	qp->qp1_hdr.deth.qkey = cpu_to_be32(IB_QP1_QKEY);
+	qp->qp1_hdr.deth.source_qpn = cpu_to_be32(IB_QP1);
+
+	/* Pack the QP1 to the transmit buffer */
+	buf = bnxt_qplib_get_qp1_sq_buf(&qp->qplib_qp, &sge);
+	if (buf) {
+		size = ib_ud_header_pack(&qp->qp1_hdr, buf);
+		for (i = wqe->num_sge; i; i--) {
+			wqe->sg_list[i].addr = wqe->sg_list[i - 1].addr;
+			wqe->sg_list[i].lkey = wqe->sg_list[i - 1].lkey;
+			wqe->sg_list[i].size = wqe->sg_list[i - 1].size;
+		}
+
+		/*
+		 * Max Header buf size for IPV6 RoCE V2 is 86,
+		 * which is the same as the QP1 SQ header buffer.
+		 * Header buf size for IPV4 RoCE V2 can be 66:
+		 * ETH(14) + VLAN(4) + IP(20) + UDP(8) + BTH(20).
+		 * Subtract 20 bytes from QP1 SQ header buf size
+		 */
+		if (is_udp && ip_version == 4)
+			sge.size -= 20;
+		/*
+		 * Max Header buf size for RoCE V1 is 78.
+		 * ETH(14) + VLAN(4) + GRH(40) + BTH(20).
+		 * Subtract 8 bytes from QP1 SQ header buf size
+		 */
+		if (!is_udp)
+			sge.size -= 8;
+
+		/* Subtract 4 bytes for non vlan packets */
+		if (!is_vlan)
+			sge.size -= 4;
+
+		wqe->sg_list[0].addr = sge.addr;
+		wqe->sg_list[0].lkey = sge.lkey;
+		wqe->sg_list[0].size = sge.size;
+		wqe->num_sge++;
+
+	} else {
+		dev_err(rdev_to_dev(qp->rdev), "QP1 buffer is empty!");
+		rc = -ENOMEM;
+	}
+	return rc;
+}
+
+static int is_ud_qp(struct bnxt_re_qp *qp)
+{
+	return qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_UD;
+}
+
+static int bnxt_re_build_send_wqe(struct bnxt_re_qp *qp,
+				  struct ib_send_wr *wr,
+				  struct bnxt_qplib_swqe *wqe)
+{
+	struct bnxt_re_ah *ah = NULL;
+
+	if (is_ud_qp(qp)) {
+		ah = to_bnxt_re(ud_wr(wr)->ah, struct bnxt_re_ah, ib_ah);
+		wqe->send.q_key = ud_wr(wr)->remote_qkey;
+		wqe->send.dst_qp = ud_wr(wr)->remote_qpn;
+		wqe->send.avid = ah->qplib_ah.id;
+	}
+	switch (wr->opcode) {
+	case IB_WR_SEND:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND;
+		break;
+	case IB_WR_SEND_WITH_IMM:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM;
+		wqe->send.imm_data_or_inv_key = wr->ex.imm_data;
+		break;
+	case IB_WR_SEND_WITH_INV:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV;
+		wqe->send.imm_data_or_inv_key = wr->ex.invalidate_rkey;
+		break;
+	default:
+		return -EINVAL;
+	}
+	if (wr->send_flags & IB_SEND_SIGNALED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
+	if (wr->send_flags & IB_SEND_FENCE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
+	if (wr->send_flags & IB_SEND_SOLICITED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT;
+	if (wr->send_flags & IB_SEND_INLINE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_INLINE;
+
+	return 0;
+}
+
+static int bnxt_re_build_rdma_wqe(struct ib_send_wr *wr,
+				  struct bnxt_qplib_swqe *wqe)
+{
+	switch (wr->opcode) {
+	case IB_WR_RDMA_WRITE:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE;
+		break;
+	case IB_WR_RDMA_WRITE_WITH_IMM:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM;
+		wqe->rdma.imm_data_or_inv_key = wr->ex.imm_data;
+		break;
+	case IB_WR_RDMA_READ:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_READ;
+		wqe->rdma.imm_data_or_inv_key = wr->ex.invalidate_rkey;
+		break;
+	default:
+		return -EINVAL;
+	}
+	wqe->rdma.remote_va = rdma_wr(wr)->remote_addr;
+	wqe->rdma.r_key = rdma_wr(wr)->rkey;
+	if (wr->send_flags & IB_SEND_SIGNALED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
+	if (wr->send_flags & IB_SEND_FENCE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
+	if (wr->send_flags & IB_SEND_SOLICITED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT;
+	if (wr->send_flags & IB_SEND_INLINE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_INLINE;
+
+	return 0;
+}
+
+static int bnxt_re_build_atomic_wqe(struct ib_send_wr *wr,
+				    struct bnxt_qplib_swqe *wqe)
+{
+	switch (wr->opcode) {
+	case IB_WR_ATOMIC_CMP_AND_SWP:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_ATOMIC_CMP_AND_SWP;
+		wqe->atomic.swap_data = atomic_wr(wr)->swap;
+		break;
+	case IB_WR_ATOMIC_FETCH_AND_ADD:
+		wqe->type = BNXT_QPLIB_SWQE_TYPE_ATOMIC_FETCH_AND_ADD;
+		wqe->atomic.cmp_data = atomic_wr(wr)->compare_add;
+		break;
+	default:
+		return -EINVAL;
+	}
+	wqe->atomic.remote_va = atomic_wr(wr)->remote_addr;
+	wqe->atomic.r_key = atomic_wr(wr)->rkey;
+	if (wr->send_flags & IB_SEND_SIGNALED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
+	if (wr->send_flags & IB_SEND_FENCE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
+	if (wr->send_flags & IB_SEND_SOLICITED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT;
+	return 0;
+}
+
+static int bnxt_re_build_inv_wqe(struct ib_send_wr *wr,
+				 struct bnxt_qplib_swqe *wqe)
+{
+	wqe->type = BNXT_QPLIB_SWQE_TYPE_LOCAL_INV;
+	wqe->local_inv.inv_l_key = wr->ex.invalidate_rkey;
+
+	if (wr->send_flags & IB_SEND_SIGNALED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
+	if (wr->send_flags & IB_SEND_FENCE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
+	if (wr->send_flags & IB_SEND_SOLICITED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT;
+
+	return 0;
+}
+
+static int bnxt_re_build_reg_wqe(struct ib_reg_wr *wr,
+				 struct bnxt_qplib_swqe *wqe)
+{
+	struct bnxt_re_mr *mr = to_bnxt_re(wr->mr, struct bnxt_re_mr, ib_mr);
+	struct bnxt_qplib_frpl *qplib_frpl = &mr->qplib_frpl;
+	int access = wr->access;
+
+	wqe->frmr.pbl_ptr = (u64 *)qplib_frpl->hwq.pbl_ptr[0];
+	wqe->frmr.pbl_dma_ptr = qplib_frpl->hwq.pbl_dma_ptr[0];
+	wqe->frmr.page_list = mr->pages;
+	wqe->frmr.page_list_len = mr->npages;
+	wqe->frmr.levels = qplib_frpl->hwq.level + 1;
+	wqe->type = BNXT_QPLIB_SWQE_TYPE_REG_MR;
+
+	if (wr->wr.send_flags & IB_SEND_FENCE)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
+	if (wr->wr.send_flags & IB_SEND_SIGNALED)
+		wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
+
+	if (access & IB_ACCESS_LOCAL_WRITE)
+		wqe->frmr.access_cntl |= SQ_FR_PMR_ACCESS_CNTL_LOCAL_WRITE;
+	if (access & IB_ACCESS_REMOTE_READ)
+		wqe->frmr.access_cntl |= SQ_FR_PMR_ACCESS_CNTL_REMOTE_READ;
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		wqe->frmr.access_cntl |= SQ_FR_PMR_ACCESS_CNTL_REMOTE_WRITE;
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		wqe->frmr.access_cntl |= SQ_FR_PMR_ACCESS_CNTL_REMOTE_ATOMIC;
+	if (access & IB_ACCESS_MW_BIND)
+		wqe->frmr.access_cntl |= SQ_FR_PMR_ACCESS_CNTL_WINDOW_BIND;
+
+	wqe->frmr.l_key = wr->key;
+	wqe->frmr.length = wr->mr->length;
+	wqe->frmr.pbl_pg_sz_log = (wr->mr->page_size >> PAGE_SHIFT_4K) - 1;
+	wqe->frmr.va = wr->mr->iova;
+	return 0;
+}
+
+static int bnxt_re_copy_inline_data(struct bnxt_re_dev *rdev,
+				    struct ib_send_wr *wr,
+				    struct bnxt_qplib_swqe *wqe)
+{
+	/*  Copy the inline data to the data  field */
+	u8 *in_data;
+	u32 i, sge_len;
+	void *sge_addr;
+
+	in_data = wqe->inline_data;
+	for (i = 0; i < wr->num_sge; i++) {
+		sge_addr = (void *)(unsigned long)
+				wr->sg_list[i].addr;
+		sge_len = wr->sg_list[i].length;
+
+		if ((sge_len + wqe->inline_len) >
+		    BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH) {
+			dev_err(rdev_to_dev(rdev),
+				"Inline data size requested > supported value");
+			return -EINVAL;
+		}
+
+		memcpy(in_data, sge_addr, sge_len);
+		in_data += wr->sg_list[i].length;
+		wqe->inline_len += wr->sg_list[i].length;
+	}
+	return wqe->inline_len;
+}
+
+static int bnxt_re_copy_wr_payload(struct bnxt_re_dev *rdev,
+				   struct ib_send_wr *wr,
+				   struct bnxt_qplib_swqe *wqe)
+{
+	int payload_sz = 0;
+
+	if (wr->send_flags & IB_SEND_INLINE)
+		payload_sz = bnxt_re_copy_inline_data(rdev, wr, wqe);
+	else
+		payload_sz = bnxt_re_build_sgl(wr->sg_list, wqe->sg_list,
+					       wqe->num_sge);
+
+	return payload_sz;
+}
+
+static int bnxt_re_post_send_shadow_qp(struct bnxt_re_dev *rdev,
+				struct bnxt_re_qp *qp,
+				struct ib_send_wr *wr)
+{
+	struct bnxt_qplib_swqe wqe;
+	int rc = 0, payload_sz = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&qp->sq_lock, flags);
+	while (wr) {
+		/* House keeping */
+		memset(&wqe, 0, sizeof(wqe));
+
+		/* Common */
+		wqe.num_sge = wr->num_sge;
+		if (wr->num_sge > qp->qplib_qp.sq.max_sge) {
+			dev_err(rdev_to_dev(rdev),
+				"Limit exceeded for Send SGEs");
+			rc = -EINVAL;
+			goto bad;
+		}
+
+		payload_sz = bnxt_re_copy_wr_payload(qp->rdev, wr, &wqe);
+		if (payload_sz < 0) {
+			rc = -EINVAL;
+			goto bad;
+		}
+		wqe.wr_id = wr->wr_id;
+
+		wqe.type = BNXT_QPLIB_SWQE_TYPE_SEND;
+
+		rc = bnxt_re_build_send_wqe(qp, wr, &wqe);
+		if (!rc)
+			rc = bnxt_qplib_post_send(&qp->qplib_qp, &wqe);
+bad:
+		if (rc) {
+			dev_err(rdev_to_dev(rdev),
+				"Post send failed opcode = %#x rc = %d",
+				wr->opcode, rc);
+			break;
+		}
+		wr = wr->next;
+	}
+	bnxt_qplib_post_send_db(&qp->qplib_qp);
+	spin_unlock_irqrestore(&qp->sq_lock, flags);
+	return rc;
+}
+
+int bnxt_re_post_send(struct ib_qp *ib_qp, struct ib_send_wr *wr,
+		      struct ib_send_wr **bad_wr)
+{
+	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
+	struct bnxt_qplib_swqe wqe;
+	int rc = 0, payload_sz = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&qp->sq_lock, flags);
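+	/* Walk the WR chain; on failure, report the culprit through bad_wr */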
+	while (wr) {
+		/* House keeping */
+		memset(&wqe, 0, sizeof(wqe));
+
+		/* Common */
+		wqe.num_sge = wr->num_sge;
+		if (wr->num_sge > qp->qplib_qp.sq.max_sge) {
+			dev_err(rdev_to_dev(qp->rdev),
+				"Limit exceeded for Send SGEs");
+			rc = -EINVAL;
+			goto bad;
+		}
+
+		payload_sz = bnxt_re_copy_wr_payload(qp->rdev, wr, &wqe);
+		if (payload_sz < 0) {
+			rc = -EINVAL;
+			goto bad;
+		}
+		wqe.wr_id = wr->wr_id;
+
+		switch (wr->opcode) {
+		case IB_WR_SEND:
+		case IB_WR_SEND_WITH_IMM:
+			if (ib_qp->qp_type == IB_QPT_GSI) {
+				rc = bnxt_re_build_qp1_send_v2(qp, wr, &wqe,
+							       payload_sz);
+				if (rc)
+					goto bad;
+				wqe.rawqp1.lflags |=
+					SQ_SEND_RAWETH_QP1_LFLAGS_ROCE_CRC;
+			}
+			/* send_flags is a bitmask; test the IP_CSUM bit
+			 * rather than switching on the whole mask
+			 */
+			if (wr->send_flags & IB_SEND_IP_CSUM)
+				wqe.rawqp1.lflags |=
+					SQ_SEND_RAWETH_QP1_LFLAGS_IP_CHKSUM;
+			/* Fall thru to build the wqe */
+		case IB_WR_SEND_WITH_INV:
+			rc = bnxt_re_build_send_wqe(qp, wr, &wqe);
+			break;
+		case IB_WR_RDMA_WRITE:
+		case IB_WR_RDMA_WRITE_WITH_IMM:
+		case IB_WR_RDMA_READ:
+			rc = bnxt_re_build_rdma_wqe(wr, &wqe);
+			break;
+		case IB_WR_ATOMIC_CMP_AND_SWP:
+		case IB_WR_ATOMIC_FETCH_AND_ADD:
+			rc = bnxt_re_build_atomic_wqe(wr, &wqe);
+			break;
+		case IB_WR_RDMA_READ_WITH_INV:
+			dev_err(rdev_to_dev(qp->rdev),
+				"RDMA Read with Invalidate is not supported");
+			rc = -EINVAL;
+			goto bad;
+		case IB_WR_LOCAL_INV:
+			rc = bnxt_re_build_inv_wqe(wr, &wqe);
+			break;
+		case IB_WR_REG_MR:
+			rc = bnxt_re_build_reg_wqe(reg_wr(wr), &wqe);
+			break;
+		default:
+			/* Unsupported WRs */
+			dev_err(rdev_to_dev(qp->rdev),
+				"WR (%#x) is not supported", wr->opcode);
+			rc = -EINVAL;
+			goto bad;
+		}
+		if (!rc)
+			rc = bnxt_qplib_post_send(&qp->qplib_qp, &wqe);
+bad:
+		if (rc) {
+			dev_err(rdev_to_dev(qp->rdev),
+				"post_send failed op:%#x qps = %#x rc = %d\n",
+				wr->opcode, qp->qplib_qp.state, rc);
+			*bad_wr = wr;
+			break;
+		}
+		wr = wr->next;
+	}
+	bnxt_qplib_post_send_db(&qp->qplib_qp);
+	spin_unlock_irqrestore(&qp->sq_lock, flags);
+
+	return rc;
+}
+
 /* Completion Queues */
 int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 75ee88a..becdcdc 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -162,6 +162,8 @@ int bnxt_re_modify_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
 int bnxt_re_query_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
 		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr);
 int bnxt_re_destroy_qp(struct ib_qp *qp);
+int bnxt_re_post_send(struct ib_qp *qp, struct ib_send_wr *send_wr,
+		      struct ib_send_wr **bad_send_wr);
 struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 				const struct ib_cq_init_attr *attr,
 				struct ib_ucontext *context,
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 5facacc..14d1147 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -451,6 +451,7 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->query_qp			= bnxt_re_query_qp;
 	ibdev->destroy_qp		= bnxt_re_destroy_qp;
 
+	ibdev->post_send		= bnxt_re_post_send;
 	ibdev->create_cq		= bnxt_re_create_cq;
 	ibdev->destroy_cq		= bnxt_re_destroy_cq;
 	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
-- 
2.5.5


* [PATCH V2  15/22] bnxt_re: Support post_recv
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (11 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 14/22] bnxt_re: Support post_send verb Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 16/22] bnxt_re: Support poll_cq verb Selvin Xavier
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

Enables the fastpath verb ib_post_recv.
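
Not part of this patch: for reviewers, a minimal consumer-side sketch of
posting a single receive buffer through this verb. dma_addr, len, mr, qp
and MY_COOKIE are hypothetical names standing in for a ULP's own state:

	struct ib_sge sge = {
		.addr   = dma_addr,	/* assumed DMA-mapped receive buffer */
		.length = len,
		.lkey   = mr->lkey,
	};
	struct ib_recv_wr wr = {
		.wr_id   = MY_COOKIE,	/* hypothetical completion cookie */
		.sg_list = &sge,
		.num_sge = 1,
	};
	struct ib_recv_wr *bad_wr;
	int rc = ib_post_recv(qp, &wr, &bad_wr);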

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 100 ++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |   8 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 133 ++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |   2 +
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   2 +
 5 files changed, 245 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
index 419efe2..67188ce 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -1107,6 +1107,37 @@ void *bnxt_qplib_get_qp1_sq_buf(struct bnxt_qplib_qp *qp,
 	return NULL;
 }
 
+u32 bnxt_qplib_get_rq_prod_index(struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_q *rq = &qp->rq;
+
+	return HWQ_CMP(rq->hwq.prod, &rq->hwq);
+}
+
+dma_addr_t bnxt_qplib_get_qp_buf_from_index(struct bnxt_qplib_qp *qp, u32 index)
+{
+	return (qp->rq_hdr_buf_map + index * qp->rq_hdr_buf_size);
+}
+
+void *bnxt_qplib_get_qp1_rq_buf(struct bnxt_qplib_qp *qp,
+				struct bnxt_qplib_sge *sge)
+{
+	struct bnxt_qplib_q *rq = &qp->rq;
+	u32 sw_prod;
+
+	memset(sge, 0, sizeof(*sge));
+
+	if (qp->rq_hdr_buf) {
+		sw_prod = HWQ_CMP(rq->hwq.prod, &rq->hwq);
+		sge->addr = (dma_addr_t)(qp->rq_hdr_buf_map +
+					 sw_prod * qp->rq_hdr_buf_size);
+		sge->lkey = 0xFFFFFFFF;
+		sge->size = qp->rq_hdr_buf_size;
+		return qp->rq_hdr_buf + sw_prod * sge->size;
+	}
+	return NULL;
+}
+
 void bnxt_qplib_post_send_db(struct bnxt_qplib_qp *qp)
 {
 	struct bnxt_qplib_q *sq = &qp->sq;
@@ -1355,6 +1386,75 @@ int bnxt_qplib_post_send(struct bnxt_qplib_qp *qp,
 	return rc;
 }
 
+void bnxt_qplib_post_recv_db(struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_q *rq = &qp->rq;
+	struct dbr_dbr db_msg = { 0 };
+	u32 sw_prod;
+
+	sw_prod = HWQ_CMP(rq->hwq.prod, &rq->hwq);
+	db_msg.index = cpu_to_le32((sw_prod << DBR_DBR_INDEX_SFT) &
+				   DBR_DBR_INDEX_MASK);
+	db_msg.type_xid =
+		cpu_to_le32(((qp->id << DBR_DBR_XID_SFT) & DBR_DBR_XID_MASK) |
+			    DBR_DBR_TYPE_RQ);
+
+	/* Flush the writes to the HW Rx WQE before ringing the Rx DB */
+	wmb();
+	__iowrite64_copy(qp->dpi->dbr, &db_msg, sizeof(db_msg) / sizeof(u64));
+}
+
+int bnxt_qplib_post_recv(struct bnxt_qplib_qp *qp,
+			 struct bnxt_qplib_swqe *wqe)
+{
+	struct bnxt_qplib_q *rq = &qp->rq;
+	struct rq_wqe *rqe, **rqe_ptr;
+	struct sq_sge *hw_sge;
+	u32 sw_prod;
+	int i, rc = 0;
+
+	if (qp->state == CMDQ_MODIFY_QP_NEW_STATE_ERR) {
+		dev_err(&rq->hwq.pdev->dev,
+			"QPLIB: FP: QP (0x%x) is in the 0x%x state",
+			qp->id, qp->state);
+		rc = -EINVAL;
+		goto done;
+	}
+	if (HWQ_CMP((rq->hwq.prod + 1), &rq->hwq) ==
+	    HWQ_CMP(rq->hwq.cons, &rq->hwq)) {
+		dev_err(&rq->hwq.pdev->dev,
+			"QPLIB: FP: QP (0x%x) RQ is full!", qp->id);
+		rc = -EINVAL;
+		goto done;
+	}
+	sw_prod = HWQ_CMP(rq->hwq.prod, &rq->hwq);
+	rq->swq[sw_prod].wr_id = wqe->wr_id;
+
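+	/* RQE_PG()/RQE_IDX() resolve the RQE's page and offset in the paged RQ */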
+	rqe_ptr = (struct rq_wqe **)rq->hwq.pbl_ptr;
+	rqe = &rqe_ptr[RQE_PG(sw_prod)][RQE_IDX(sw_prod)];
+
+	memset(rqe, 0, BNXT_QPLIB_MAX_RQE_ENTRY_SIZE);
+
+	/* Calculate wqe_size16 and data_len */
+	for (i = 0, hw_sge = (struct sq_sge *)rqe->data;
+	     i < wqe->num_sge; i++, hw_sge++) {
+		hw_sge->va_or_pa = cpu_to_le64(wqe->sg_list[i].addr);
+		hw_sge->l_key = cpu_to_le32(wqe->sg_list[i].lkey);
+		hw_sge->size = cpu_to_le32(wqe->sg_list[i].size);
+	}
+	rqe->wqe_type = wqe->type;
+	rqe->flags = wqe->flags;
+	rqe->wqe_size = wqe->num_sge +
+			((offsetof(typeof(*rqe), data) + 15) >> 4);
+
+	/* Supply the rqe->wr_id index to the wr_id_tbl for now */
+	rqe->wr_id[0] = cpu_to_le32(sw_prod);
+
+	rq->hwq.prod++;
+done:
+	return rc;
+}
+
 /* CQ */
 
 /* Spinlock must be held */
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
index 7fe98db..d9f2611 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -395,9 +395,17 @@ int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
 int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
 void *bnxt_qplib_get_qp1_sq_buf(struct bnxt_qplib_qp *qp,
 				struct bnxt_qplib_sge *sge);
+void *bnxt_qplib_get_qp1_rq_buf(struct bnxt_qplib_qp *qp,
+				struct bnxt_qplib_sge *sge);
+u32 bnxt_qplib_get_rq_prod_index(struct bnxt_qplib_qp *qp);
+dma_addr_t bnxt_qplib_get_qp_buf_from_index(struct bnxt_qplib_qp *qp,
+					    u32 index);
 void bnxt_qplib_post_send_db(struct bnxt_qplib_qp *qp);
 int bnxt_qplib_post_send(struct bnxt_qplib_qp *qp,
 			 struct bnxt_qplib_swqe *wqe);
+void bnxt_qplib_post_recv_db(struct bnxt_qplib_qp *qp);
+int bnxt_qplib_post_recv(struct bnxt_qplib_qp *qp,
+			 struct bnxt_qplib_swqe *wqe);
 int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 540f2f2..5d6c89c 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -1644,6 +1644,61 @@ static int bnxt_re_build_qp1_send_v2(struct bnxt_re_qp *qp,
 	return rc;
 }
 
+/* The MAD layer provides a recv SGE sized only for the ib_grh plus the
+ * MAD datagram - no Ethernet headers, Ethertype, BTH, DETH or RoCE
+ * iCRC.  The Cu+ solution must therefore supply a buffer for the
+ * entire receive packet (334 bytes, assuming no VLAN) and then copy
+ * the GRH and the MAD datagram out to the SGE provided by the ULP.
+ */
+static int bnxt_re_build_qp1_shadow_qp_recv(struct bnxt_re_qp *qp,
+					    struct ib_recv_wr *wr,
+					    struct bnxt_qplib_swqe *wqe,
+					    int payload_size)
+{
+	struct bnxt_qplib_sge ref, sge;
+	int rc = 0;
+	u32 rq_prod_index;
+	struct bnxt_re_sqp_entries *sqp_entry;
+
+	rq_prod_index = bnxt_qplib_get_rq_prod_index(&qp->qplib_qp);
+
+	if (bnxt_qplib_get_qp1_rq_buf(&qp->qplib_qp, &sge)) {
+		/* Create 1 SGE to receive the entire
+		 * ethernet packet
+		 */
+		/* Save the reference from ULP */
+		ref.addr = wqe->sg_list[0].addr;
+		ref.lkey = wqe->sg_list[0].lkey;
+		ref.size = wqe->sg_list[0].size;
+
+		sqp_entry = &qp->rdev->sqp_tbl[rq_prod_index];
+
+		/* SGE 1 */
+		wqe->sg_list[0].addr = sge.addr;
+		wqe->sg_list[0].lkey = sge.lkey;
+		wqe->sg_list[0].size = BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2;
+		/* sge.size is unsigned; test before subtracting so the
+		 * check cannot be defeated by wraparound
+		 */
+		if (sge.size < wqe->sg_list[0].size) {
+			dev_err(rdev_to_dev(qp->rdev),
+				"QP1 rq buffer is too small!");
+			rc = -ENOMEM;
+			goto done;
+		}
+		sge.size -= wqe->sg_list[0].size;
+
+		sqp_entry->sge.addr = ref.addr;
+		sqp_entry->sge.lkey = ref.lkey;
+		sqp_entry->sge.size = ref.size;
+		/* Store the wrid for reporting completion */
+		sqp_entry->wrid = wqe->wr_id;
+		/* change the wqe->wrid to table index */
+		wqe->wr_id = rq_prod_index;
+	}
+	return 0;
+done:
+	return rc;
+}
+
 static int is_ud_qp(struct bnxt_re_qp *qp)
 {
 	return qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_UD;
@@ -1988,6 +2043,84 @@ int bnxt_re_post_send(struct ib_qp *ib_qp, struct ib_send_wr *wr,
 	return rc;
 }
 
+int bnxt_re_post_recv_shadow_qp(struct bnxt_re_dev *rdev,
+				struct bnxt_re_qp *qp,
+				struct ib_recv_wr *wr)
+{
+	struct bnxt_qplib_swqe wqe;
+	int rc = 0, payload_sz = 0;
+
+	while (wr) {
+		/* House keeping */
+		memset(&wqe, 0, sizeof(wqe));
+
+		/* Common */
+		wqe.num_sge = wr->num_sge;
+		if (wr->num_sge > qp->qplib_qp.rq.max_sge) {
+			dev_err(rdev_to_dev(rdev),
+				"Limit exceeded for Receive SGEs");
+			rc = -EINVAL;
+			goto bad;
+		}
+		payload_sz = bnxt_re_build_sgl(wr->sg_list, wqe.sg_list,
+					       wr->num_sge);
+		wqe.wr_id = wr->wr_id;
+		wqe.type = BNXT_QPLIB_SWQE_TYPE_RECV;
+
+		if (!rc)
+			rc = bnxt_qplib_post_recv(&qp->qplib_qp, &wqe);
+bad:
+		if (rc)
+			break;
+
+		wr = wr->next;
+	}
+	bnxt_qplib_post_recv_db(&qp->qplib_qp);
+	return rc;
+}
+
+int bnxt_re_post_recv(struct ib_qp *ib_qp, struct ib_recv_wr *wr,
+		      struct ib_recv_wr **bad_wr)
+{
+	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
+	struct bnxt_qplib_swqe wqe;
+	int rc = 0, payload_sz = 0;
+
+	while (wr) {
+		/* House keeping */
+		memset(&wqe, 0, sizeof(wqe));
+
+		/* Common */
+		wqe.num_sge = wr->num_sge;
+		if (wr->num_sge > qp->qplib_qp.rq.max_sge) {
+			dev_err(rdev_to_dev(qp->rdev),
+				"Limit exceeded for Receive SGEs");
+			rc = -EINVAL;
+			goto bad;
+		}
+
+		payload_sz = bnxt_re_build_sgl(wr->sg_list, wqe.sg_list,
+					       wr->num_sge);
+		wqe.wr_id = wr->wr_id;
+		wqe.type = BNXT_QPLIB_SWQE_TYPE_RECV;
+
+		if (ib_qp->qp_type == IB_QPT_GSI)
+			rc = bnxt_re_build_qp1_shadow_qp_recv(qp, wr, &wqe,
+							      payload_sz);
+		if (!rc)
+			rc = bnxt_qplib_post_recv(&qp->qplib_qp, &wqe);
+bad:
+		if (rc) {
+			*bad_wr = wr;
+			break;
+		}
+		wr = wr->next;
+	}
+	bnxt_qplib_post_recv_db(&qp->qplib_qp);
+	return rc;
+}
+
 /* Completion Queues */
 int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index becdcdc..9f3dd49 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -164,6 +164,8 @@ int bnxt_re_query_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
 int bnxt_re_destroy_qp(struct ib_qp *qp);
 int bnxt_re_post_send(struct ib_qp *qp, struct ib_send_wr *send_wr,
 		      struct ib_send_wr **bad_send_wr);
+int bnxt_re_post_recv(struct ib_qp *qp, struct ib_recv_wr *recv_wr,
+		      struct ib_recv_wr **bad_recv_wr);
 struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 				const struct ib_cq_init_attr *attr,
 				struct ib_ucontext *context,
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 14d1147..73dfadd 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -452,6 +452,8 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 	ibdev->destroy_qp		= bnxt_re_destroy_qp;
 
 	ibdev->post_send		= bnxt_re_post_send;
+	ibdev->post_recv		= bnxt_re_post_recv;
+
 	ibdev->create_cq		= bnxt_re_create_cq;
 	ibdev->destroy_cq		= bnxt_re_destroy_cq;
 	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
-- 
2.5.5


* [PATCH V2  16/22] bnxt_re: Support poll_cq verb
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (12 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 15/22] bnxt_re: Support post_recv Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 17/22] bnxt_re: Handling dispatching of events to IB stack Selvin Xavier
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

Enables the fastpath ib_poll_cq verb.

v2: Fixed sparse warnings
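
Not part of this patch: for reviewers, a minimal consumer-side sketch of
draining completions through this verb (the error handling shown is a
hypothetical placeholder for a ULP's own logic):

	struct ib_wc wc[16];
	int i, n;

	do {
		n = ib_poll_cq(cq, ARRAY_SIZE(wc), wc);
		for (i = 0; i < n; i++) {
			if (wc[i].status != IB_WC_SUCCESS)
				pr_err("wr_id %llu failed with status %d\n",
				       wc[i].wr_id, wc[i].status);
		}
	} while (n > 0);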

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 553 +++++++++++++++++++++++-
 drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |   7 +-
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 521 +++++++++++++++++++++-
 drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |   1 +
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |  22 +-
 5 files changed, 1098 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
index 67188ce..3671a5d 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
@@ -210,7 +210,7 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
 int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
 			 int msix_vector, int bar_reg_offset,
 			 int (*cqn_handler)(struct bnxt_qplib_nq *nq,
-					    void *),
+					    struct bnxt_qplib_cq *),
 			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
 					     void *, u8 event))
 {
@@ -1608,6 +1608,557 @@ int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
 	return 0;
 }
 
+static int __flush_sq(struct bnxt_qplib_q *sq, struct bnxt_qplib_qp *qp,
+		      struct bnxt_qplib_cqe **pcqe, int *budget)
+{
+	u32 sw_prod, sw_cons;
+	struct bnxt_qplib_cqe *cqe;
+	int rc = 0;
+
+	/* Now complete all outstanding SQEs with FLUSHED_ERR */
+	sw_prod = HWQ_CMP(sq->hwq.prod, &sq->hwq);
+	cqe = *pcqe;
+	while (*budget) {
+		sw_cons = HWQ_CMP(sq->hwq.cons, &sq->hwq);
+		if (sw_cons == sw_prod) {
+			sq->flush_in_progress = false;
+			break;
+		}
+		memset(cqe, 0, sizeof(*cqe));
+		cqe->status = CQ_REQ_STATUS_WORK_REQUEST_FLUSHED_ERR;
+		cqe->opcode = CQ_BASE_CQE_TYPE_REQ;
+		cqe->qp_handle = (u64)qp;
+		cqe->wr_id = sq->swq[sw_cons].wr_id;
+		cqe->src_qp = qp->id;
+		cqe->type = sq->swq[sw_cons].type;
+		cqe++;
+		(*budget)--;
+		sq->hwq.cons++;
+	}
+	*pcqe = cqe;
+	if (!*budget && HWQ_CMP(sq->hwq.cons, &sq->hwq) != sw_prod)
+		/* Out of budget */
+		rc = -EAGAIN;
+
+	return rc;
+}
+
+static int __flush_rq(struct bnxt_qplib_q *rq, struct bnxt_qplib_qp *qp,
+		      int opcode, struct bnxt_qplib_cqe **pcqe, int *budget)
+{
+	struct bnxt_qplib_cqe *cqe;
+	u32 sw_prod, sw_cons;
+	int rc = 0;
+
+	/* Flush the rest of the RQ */
+	sw_prod = HWQ_CMP(rq->hwq.prod, &rq->hwq);
+	cqe = *pcqe;
+	while (*budget) {
+		sw_cons = HWQ_CMP(rq->hwq.cons, &rq->hwq);
+		if (sw_cons == sw_prod)
+			break;
+		memset(cqe, 0, sizeof(*cqe));
+		cqe->status =
+		    CQ_RES_RC_STATUS_WORK_REQUEST_FLUSHED_ERR;
+		cqe->opcode = opcode;
+		cqe->qp_handle = (u64)qp;
+		cqe->wr_id = rq->swq[sw_cons].wr_id;
+		cqe++;
+		(*budget)--;
+		rq->hwq.cons++;
+	}
+	*pcqe = cqe;
+	if (!*budget && HWQ_CMP(rq->hwq.cons, &rq->hwq) != sw_prod)
+		/* Out of budget */
+		rc = -EAGAIN;
+
+	return rc;
+}
+
+static int bnxt_qplib_cq_process_req(struct bnxt_qplib_cq *cq,
+				     struct cq_req *hwcqe,
+				     struct bnxt_qplib_cqe **pcqe, int *budget)
+{
+	struct bnxt_qplib_qp *qp;
+	struct bnxt_qplib_q *sq;
+	struct bnxt_qplib_cqe *cqe;
+	u32 sw_cons, cqe_cons;
+	int rc = 0;
+
+	qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
+	if (!qp) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: FP: Process Req qp is NULL");
+		return -EINVAL;
+	}
+	sq = &qp->sq;
+
+	cqe_cons = HWQ_CMP(le16_to_cpu(hwcqe->sq_cons_idx), &sq->hwq);
+	if (cqe_cons > sq->hwq.max_elements) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: FP: CQ Process req reported ");
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: sq_cons_idx 0x%x which exceeded max 0x%x",
+			cqe_cons, sq->hwq.max_elements);
+		return -EINVAL;
+	}
+	/* If we were in the middle of flushing the SQ, continue */
+	if (sq->flush_in_progress)
+		goto flush;
+
+	/* Require to walk the sq's swq to fabricate CQEs for all previously
+	 * signaled SWQEs due to CQE aggregation from the current sq cons
+	 * to the cqe_cons
+	 */
+	cqe = *pcqe;
+	while (*budget) {
+		sw_cons = HWQ_CMP(sq->hwq.cons, &sq->hwq);
+		if (sw_cons == cqe_cons)
+			break;
+		memset(cqe, 0, sizeof(*cqe));
+		cqe->opcode = CQ_BASE_CQE_TYPE_REQ;
+		cqe->qp_handle = (u64)qp;
+		cqe->src_qp = qp->id;
+		cqe->wr_id = sq->swq[sw_cons].wr_id;
+		cqe->type = sq->swq[sw_cons].type;
+
+		/* For the last CQE, check for status.  For errors, regardless
+		 * of the request being signaled or not, it must complete with
+		 * the hwcqe error status
+		 */
+		if (HWQ_CMP((sw_cons + 1), &sq->hwq) == cqe_cons &&
+		    hwcqe->status != CQ_REQ_STATUS_OK) {
+			cqe->status = hwcqe->status;
+			dev_err(&cq->hwq.pdev->dev,
+				"QPLIB: FP: CQ Processed Req ");
+			dev_err(&cq->hwq.pdev->dev,
+				"QPLIB: wr_id[%d] = 0x%llx with status 0x%x",
+				sw_cons, cqe->wr_id, cqe->status);
+			cqe++;
+			(*budget)--;
+			sq->flush_in_progress = true;
+			/* Must block new posting of SQ and RQ */
+			qp->state = CMDQ_MODIFY_QP_NEW_STATE_ERR;
+		} else {
+			if (sq->swq[sw_cons].flags &
+			    SQ_SEND_FLAGS_SIGNAL_COMP) {
+				cqe->status = CQ_REQ_STATUS_OK;
+				cqe++;
+				(*budget)--;
+			}
+		}
+		sq->hwq.cons++;
+	}
+	*pcqe = cqe;
+	if (!*budget && HWQ_CMP(sq->hwq.cons, &sq->hwq) != cqe_cons) {
+		/* Out of budget */
+		rc = -EAGAIN;
+		goto done;
+	}
+	if (!sq->flush_in_progress)
+		goto done;
+flush:
+	/* Need to walk the sq's swq to fabricate CQEs for all
+	 * previously posted SWQEs, since an error CQE was received.
+	 */
+	rc = __flush_sq(sq, qp, pcqe, budget);
+	if (!rc)
+		sq->flush_in_progress = false;
+done:
+	return rc;
+}
+
+static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+					struct cq_res_rc *hwcqe,
+					struct bnxt_qplib_cqe **pcqe,
+					int *budget)
+{
+	struct bnxt_qplib_qp *qp;
+	struct bnxt_qplib_q *rq;
+	struct bnxt_qplib_cqe *cqe;
+	u64 wr_id_idx;
+	int rc = 0;
+
+	qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
+	if (!qp) {
+		dev_err(&cq->hwq.pdev->dev, "QPLIB: process_cq RC qp is NULL");
+		return -EINVAL;
+	}
+	cqe = *pcqe;
+	cqe->opcode = hwcqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK;
+	cqe->length = le32_to_cpu(hwcqe->length);
+	cqe->immdata_or_invrkey = le32_to_cpu(hwcqe->imm_data_or_inv_r_key);
+	cqe->mr_handle = le64_to_cpu(hwcqe->mr_handle);
+	cqe->flags = le16_to_cpu(hwcqe->flags);
+	cqe->status = hwcqe->status;
+	cqe->qp_handle = (u64)qp;
+
+	wr_id_idx = le64_to_cpu(hwcqe->srq_or_rq_wr_id) &
+		    CQ_RES_RC_SRQ_OR_RQ_WR_ID_MASK;
+	rq = &qp->rq;
+	if (wr_id_idx > rq->hwq.max_elements) {
+		dev_err(&cq->hwq.pdev->dev, "QPLIB: FP: CQ Process RC ");
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: wr_id idx 0x%llx exceeded RQ max 0x%x",
+			wr_id_idx, rq->hwq.max_elements);
+		return -EINVAL;
+	}
+	if (rq->flush_in_progress)
+		goto flush_rq;
+
+	cqe->wr_id = rq->swq[wr_id_idx].wr_id;
+	cqe++;
+	(*budget)--;
+	rq->hwq.cons++;
+	*pcqe = cqe;
+
+	if (hwcqe->status != CQ_RES_RC_STATUS_OK) {
+		rq->flush_in_progress = true;
+flush_rq:
+		rc = __flush_rq(rq, qp, CQ_BASE_CQE_TYPE_RES_RC, pcqe, budget);
+		if (!rc)
+			rq->flush_in_progress = false;
+	}
+	return rc;
+}
+
+static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+					struct cq_res_ud *hwcqe,
+					struct bnxt_qplib_cqe **pcqe,
+					int *budget)
+{
+	struct bnxt_qplib_qp *qp;
+	struct bnxt_qplib_q *rq;
+	struct bnxt_qplib_cqe *cqe;
+	u64 wr_id_idx;
+	int rc = 0;
+
+	qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
+	if (!qp) {
+		dev_err(&cq->hwq.pdev->dev, "QPLIB: process_cq UD qp is NULL");
+		return -EINVAL;
+	}
+	cqe = *pcqe;
+	cqe->opcode = hwcqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK;
+	cqe->length = le32_to_cpu(hwcqe->length);
+	cqe->immdata_or_invrkey = le32_to_cpu(hwcqe->imm_data);
+	cqe->flags = le16_to_cpu(hwcqe->flags);
+	cqe->status = hwcqe->status;
+	cqe->qp_handle = (u64)qp;
+	memcpy(cqe->smac, hwcqe->src_mac, 6);
+	wr_id_idx = le64_to_cpu(hwcqe->src_qp_high_srq_or_rq_wr_id) &
+		    CQ_RES_UD_SRQ_OR_RQ_WR_ID_MASK;
+	cqe->src_qp = le16_to_cpu(hwcqe->src_qp_low) |
+		      ((le64_to_cpu(hwcqe->src_qp_high_srq_or_rq_wr_id) &
+			CQ_RES_UD_SRC_QP_HIGH_MASK) >> 8);
+
+	rq = &qp->rq;
+	if (wr_id_idx > rq->hwq.max_elements) {
+		dev_err(&cq->hwq.pdev->dev, "QPLIB: FP: CQ Process UD ");
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: wr_id idx 0x%llx exceeded RQ max 0x%x",
+			wr_id_idx, rq->hwq.max_elements);
+		return -EINVAL;
+	}
+	if (rq->flush_in_progress)
+		goto flush_rq;
+
+	cqe->wr_id = rq->swq[wr_id_idx].wr_id;
+	cqe++;
+	(*budget)--;
+	rq->hwq.cons++;
+	*pcqe = cqe;
+
+	if (hwcqe->status != CQ_RES_UD_STATUS_OK) {
+		rq->flush_in_progress = true;
+flush_rq:
+		rc = __flush_rq(rq, qp, CQ_BASE_CQE_TYPE_RES_UD, pcqe, budget);
+		if (!rc)
+			rq->flush_in_progress = false;
+	}
+	return rc;
+}
+
+static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+						struct cq_res_raweth_qp1 *hwcqe,
+						struct bnxt_qplib_cqe **pcqe,
+						int *budget)
+{
+	struct bnxt_qplib_qp *qp;
+	struct bnxt_qplib_q *rq;
+	struct bnxt_qplib_cqe *cqe;
+	u64 wr_id_idx;
+	int rc = 0;
+
+	qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
+	if (!qp) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: process_cq Raw/QP1 qp is NULL");
+		return -EINVAL;
+	}
+	cqe = *pcqe;
+	cqe->opcode = hwcqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK;
+	cqe->flags = le16_to_cpu(hwcqe->flags);
+	cqe->qp_handle = (u64)qp;
+
+	wr_id_idx =
+		le64_to_cpu(hwcqe->raweth_qp1_payload_offset_srq_or_rq_wr_id) &
+		CQ_RES_RAWETH_QP1_SRQ_OR_RQ_WR_ID_MASK;
+	cqe->src_qp = qp->id;
+	if (qp->id == 1 && !cqe->length) {
+		/* Add workaround for the length misdetection */
+		cqe->length = 296;
+	} else {
+		cqe->length = le16_to_cpu(hwcqe->length);
+	}
+	cqe->pkey_index = qp->pkey_index;
+	memcpy(cqe->smac, qp->smac, 6);
+
+	cqe->raweth_qp1_flags = le16_to_cpu(hwcqe->raweth_qp1_flags);
+	cqe->raweth_qp1_flags2 = le16_to_cpu(hwcqe->raweth_qp1_flags2);
+
+	rq = &qp->rq;
+	if (wr_id_idx > rq->hwq.max_elements) {
+		dev_err(&cq->hwq.pdev->dev, "QPLIB: FP: CQ Process Raw/QP1 RQ wr_id ");
+		dev_err(&cq->hwq.pdev->dev, "QPLIB: idx 0x%llx exceeded RQ max 0x%x",
+			wr_id_idx, rq->hwq.max_elements);
+		return -EINVAL;
+	}
+	if (rq->flush_in_progress)
+		goto flush_rq;
+
+	cqe->wr_id = rq->swq[wr_id_idx].wr_id;
+	cqe++;
+	(*budget)--;
+	rq->hwq.cons++;
+	*pcqe = cqe;
+
+	if (hwcqe->status != CQ_RES_RAWETH_QP1_STATUS_OK) {
+		rq->flush_in_progress = true;
+flush_rq:
+		rc = __flush_rq(rq, qp, CQ_BASE_CQE_TYPE_RES_RAWETH_QP1, pcqe,
+				budget);
+		if (!rc)
+			rq->flush_in_progress = false;
+	}
+	return rc;
+}
+
+static int bnxt_qplib_cq_process_terminal(struct bnxt_qplib_cq *cq,
+					  struct cq_terminal *hwcqe,
+					  struct bnxt_qplib_cqe **pcqe,
+					  int *budget)
+{
+	struct bnxt_qplib_qp *qp;
+	struct bnxt_qplib_q *sq, *rq;
+	struct bnxt_qplib_cqe *cqe;
+	u32 sw_cons, cqe_cons;
+	int rc = 0;
+	u8 opcode = 0;
+
+	/* Check the Status */
+	if (hwcqe->status != CQ_TERMINAL_STATUS_OK)
+		dev_warn(&cq->hwq.pdev->dev,
+			 "QPLIB: FP: CQ Process Terminal Error status = 0x%x",
+			 hwcqe->status);
+
+	qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
+	if (!qp) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: FP: CQ Process terminal qp is NULL");
+		return -EINVAL;
+	}
+	/* Must block new posting of SQ and RQ */
+	qp->state = CMDQ_MODIFY_QP_NEW_STATE_ERR;
+
+	sq = &qp->sq;
+	rq = &qp->rq;
+
+	cqe_cons = le16_to_cpu(hwcqe->sq_cons_idx);
+	if (cqe_cons == 0xFFFF)
+		goto do_rq;
+
+	if (cqe_cons > sq->hwq.max_elements) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: FP: CQ Process terminal reported ");
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: sq_cons_idx 0x%x which exceeded max 0x%x",
+			cqe_cons, sq->hwq.max_elements);
+		goto do_rq;
+	}
+	/* If we were in the middle of flushing, continue */
+	if (sq->flush_in_progress)
+		goto flush_sq;
+
+	/* Terminal CQE can also include aggregated successful CQEs prior.
+	 * So we must complete all CQEs from the current sq's cons to the
+	 * cq_cons with status OK
+	 */
+	cqe = *pcqe;
+	while (*budget) {
+		sw_cons = HWQ_CMP(sq->hwq.cons, &sq->hwq);
+		if (sw_cons == cqe_cons)
+			break;
+		if (sq->swq[sw_cons].flags & SQ_SEND_FLAGS_SIGNAL_COMP) {
+			memset(cqe, 0, sizeof(*cqe));
+			cqe->status = CQ_REQ_STATUS_OK;
+			cqe->opcode = CQ_BASE_CQE_TYPE_REQ;
+			cqe->qp_handle = (u64)qp;
+			cqe->src_qp = qp->id;
+			cqe->wr_id = sq->swq[sw_cons].wr_id;
+			cqe->type = sq->swq[sw_cons].type;
+			cqe++;
+			(*budget)--;
+		}
+		sq->hwq.cons++;
+	}
+	*pcqe = cqe;
+	if (!*budget && HWQ_CMP(sq->hwq.cons, &sq->hwq) != cqe_cons) {
+		/* Out of budget */
+		rc = -EAGAIN;
+		goto sq_done;
+	}
+	sq->flush_in_progress = true;
+flush_sq:
+	rc = __flush_sq(sq, qp, pcqe, budget);
+	if (!rc)
+		sq->flush_in_progress = false;
+sq_done:
+	if (rc)
+		return rc;
+do_rq:
+	cqe_cons = le16_to_cpu(hwcqe->rq_cons_idx);
+	if (cqe_cons == 0xFFFF) {
+		goto done;
+	} else if (cqe_cons > rq->hwq.max_elements) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: FP: CQ Processed terminal ");
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: reported rq_cons_idx 0x%x exceeds max 0x%x",
+			cqe_cons, rq->hwq.max_elements);
+		goto done;
+	}
+	/* Terminal CQE requires all posted RQEs to complete with FLUSHED_ERR
+	 * from the current rq->cons to the rq->prod regardless what the
+	 * rq->cons the terminal CQE indicates
+	 */
+	rq->flush_in_progress = true;
+	switch (qp->type) {
+	case CMDQ_CREATE_QP1_TYPE_GSI:
+		opcode = CQ_BASE_CQE_TYPE_RES_RAWETH_QP1;
+		break;
+	case CMDQ_CREATE_QP_TYPE_RC:
+		opcode = CQ_BASE_CQE_TYPE_RES_RC;
+		break;
+	case CMDQ_CREATE_QP_TYPE_UD:
+		opcode = CQ_BASE_CQE_TYPE_RES_UD;
+		break;
+	}
+
+	rc = __flush_rq(rq, qp, opcode, pcqe, budget);
+	if (!rc)
+		rq->flush_in_progress = false;
+done:
+	return rc;
+}
+
+static int bnxt_qplib_cq_process_cutoff(struct bnxt_qplib_cq *cq,
+					struct cq_cutoff *hwcqe)
+{
+	/* Check the Status */
+	if (hwcqe->status != CQ_CUTOFF_STATUS_OK) {
+		dev_err(&cq->hwq.pdev->dev,
+			"QPLIB: FP: CQ Process Cutoff Error status = 0x%x",
+			hwcqe->status);
+		return -EINVAL;
+	}
+	clear_bit(CQ_FLAGS_RESIZE_IN_PROG, &cq->flags);
+	wake_up_interruptible(&cq->waitq);
+
+	return 0;
+}
+
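+/* Drain up to num_cqes completions from the hardware CQ into the
+ * caller-supplied cqe array, dispatching each hardware CQE to its
+ * type-specific handler above.  Returns the number of entries filled;
+ * the CQ is re-armed only if the consumer index actually moved.
+ */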
+int bnxt_qplib_poll_cq(struct bnxt_qplib_cq *cq, struct bnxt_qplib_cqe *cqe,
+		       int num_cqes)
+{
+	struct cq_base *hw_cqe, **hw_cqe_ptr;
+	unsigned long flags;
+	u32 sw_cons, raw_cons;
+	int budget, rc = 0;
+
+	spin_lock_irqsave(&cq->hwq.lock, flags);
+	raw_cons = cq->hwq.cons;
+	budget = num_cqes;
+
+	while (budget) {
+		sw_cons = HWQ_CMP(raw_cons, &cq->hwq);
+		hw_cqe_ptr = (struct cq_base **)cq->hwq.pbl_ptr;
+		hw_cqe = &hw_cqe_ptr[CQE_PG(sw_cons)][CQE_IDX(sw_cons)];
+
+		/* Check for Valid bit */
+		if (!CQE_CMP_VALID(hw_cqe, raw_cons, cq->hwq.max_elements))
+			break;
+
+		/* From the device's respective CQE format to qplib_wc */
+		switch (hw_cqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK) {
+		case CQ_BASE_CQE_TYPE_REQ:
+			rc = bnxt_qplib_cq_process_req(cq,
+						       (struct cq_req *)hw_cqe,
+						       &cqe, &budget);
+			break;
+		case CQ_BASE_CQE_TYPE_RES_RC:
+			rc = bnxt_qplib_cq_process_res_rc(cq,
+							  (struct cq_res_rc *)
+							  hw_cqe, &cqe,
+							  &budget);
+			break;
+		case CQ_BASE_CQE_TYPE_RES_UD:
+			rc = bnxt_qplib_cq_process_res_ud
+					(cq, (struct cq_res_ud *)hw_cqe, &cqe,
+					 &budget);
+			break;
+		case CQ_BASE_CQE_TYPE_RES_RAWETH_QP1:
+			rc = bnxt_qplib_cq_process_res_raweth_qp1
+					(cq, (struct cq_res_raweth_qp1 *)
+					 hw_cqe, &cqe, &budget);
+			break;
+		case CQ_BASE_CQE_TYPE_TERMINAL:
+			rc = bnxt_qplib_cq_process_terminal
+					(cq, (struct cq_terminal *)hw_cqe,
+					 &cqe, &budget);
+			break;
+		case CQ_BASE_CQE_TYPE_CUT_OFF:
+			bnxt_qplib_cq_process_cutoff
+					(cq, (struct cq_cutoff *)hw_cqe);
+			/* Done processing this CQ */
+			goto exit;
+		default:
+			dev_err(&cq->hwq.pdev->dev,
+				"QPLIB: process_cq unknown type 0x%x",
+				hw_cqe->cqe_type_toggle &
+				CQ_BASE_CQE_TYPE_MASK);
+			rc = -EINVAL;
+			break;
+		}
+		if (rc < 0) {
+			if (rc == -EAGAIN)
+				break;
+			/* Error while processing the CQE, just skip to the
+			 * next one
+			 */
+			dev_err(&cq->hwq.pdev->dev,
+				"QPLIB: process_cqe error rc = 0x%x", rc);
+		}
+		raw_cons++;
+	}
+	if (cq->hwq.cons != raw_cons) {
+		cq->hwq.cons = raw_cons;
+		bnxt_qplib_arm_cq(cq, DBR_DBR_TYPE_CQ);
+	}
+exit:
+	spin_unlock_irqrestore(&cq->hwq.lock, flags);
+	return num_cqes - budget;
+}
+
 void bnxt_qplib_req_notify_cq(struct bnxt_qplib_cq *cq, u32 arm_type)
 {
 	unsigned long flags;
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
index d9f2611..37d7d1e 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
@@ -373,7 +373,7 @@ struct bnxt_qplib_nq {
 
 	int				(*cqn_handler)
 						(struct bnxt_qplib_nq *nq,
-						 void *cq);
+						 struct bnxt_qplib_cq *cq);
 	int				(*srqn_handler)
 						(struct bnxt_qplib_nq *nq,
 						 void *srq,
@@ -384,7 +384,7 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq);
 int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
 			 int msix_vector, int bar_reg_offset,
 			 int (*cqn_handler)(struct bnxt_qplib_nq *nq,
-					    void *cq),
+					    struct bnxt_qplib_cq *cq),
 			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
 					     void *srq,
 					     u8 event));
@@ -408,7 +408,8 @@ int bnxt_qplib_post_recv(struct bnxt_qplib_qp *qp,
 			 struct bnxt_qplib_swqe *wqe);
 int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
 int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
-
+int bnxt_qplib_poll_cq(struct bnxt_qplib_cq *cq, struct bnxt_qplib_cqe *cqe,
+		       int num);
 void bnxt_qplib_req_notify_cq(struct bnxt_qplib_cq *cq, u32 arm_type);
 void bnxt_qplib_free_nq(struct bnxt_qplib_nq *nq);
 int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq);
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
index 5d6c89c..01cb833 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
@@ -2043,7 +2043,7 @@ int bnxt_re_post_send(struct ib_qp *ib_qp, struct ib_send_wr *wr,
 	return rc;
 }
 
-int bnxt_re_post_recv_shadow_qp(struct bnxt_re_dev *rdev,
+static int bnxt_re_post_recv_shadow_qp(struct bnxt_re_dev *rdev,
 				struct bnxt_re_qp *qp,
 				struct ib_recv_wr *wr)
 {
@@ -2247,6 +2247,525 @@ struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 	return ERR_PTR(rc);
 }
 
+static u8 __req_to_ib_wc_status(u8 qstatus)
+{
+	switch (qstatus) {
+	case CQ_REQ_STATUS_OK:
+		return IB_WC_SUCCESS;
+	case CQ_REQ_STATUS_BAD_RESPONSE_ERR:
+		return IB_WC_BAD_RESP_ERR;
+	case CQ_REQ_STATUS_LOCAL_LENGTH_ERR:
+		return IB_WC_LOC_LEN_ERR;
+	case CQ_REQ_STATUS_LOCAL_QP_OPERATION_ERR:
+		return IB_WC_LOC_QP_OP_ERR;
+	case CQ_REQ_STATUS_LOCAL_PROTECTION_ERR:
+		return IB_WC_LOC_PROT_ERR;
+	case CQ_REQ_STATUS_MEMORY_MGT_OPERATION_ERR:
+		return IB_WC_GENERAL_ERR;
+	case CQ_REQ_STATUS_REMOTE_INVALID_REQUEST_ERR:
+		return IB_WC_REM_INV_REQ_ERR;
+	case CQ_REQ_STATUS_REMOTE_ACCESS_ERR:
+		return IB_WC_REM_ACCESS_ERR;
+	case CQ_REQ_STATUS_REMOTE_OPERATION_ERR:
+		return IB_WC_REM_OP_ERR;
+	case CQ_REQ_STATUS_RNR_NAK_RETRY_CNT_ERR:
+		return IB_WC_RNR_RETRY_EXC_ERR;
+	case CQ_REQ_STATUS_TRANSPORT_RETRY_CNT_ERR:
+		return IB_WC_RETRY_EXC_ERR;
+	case CQ_REQ_STATUS_WORK_REQUEST_FLUSHED_ERR:
+		return IB_WC_WR_FLUSH_ERR;
+	default:
+		return IB_WC_GENERAL_ERR;
+	}
+}
+
+static u8 __rawqp1_to_ib_wc_status(u8 qstatus)
+{
+	switch (qstatus) {
+	case CQ_RES_RAWETH_QP1_STATUS_OK:
+		return IB_WC_SUCCESS;
+	case CQ_RES_RAWETH_QP1_STATUS_LOCAL_ACCESS_ERROR:
+		return IB_WC_LOC_ACCESS_ERR;
+	case CQ_RES_RAWETH_QP1_STATUS_HW_LOCAL_LENGTH_ERR:
+		return IB_WC_LOC_LEN_ERR;
+	case CQ_RES_RAWETH_QP1_STATUS_LOCAL_PROTECTION_ERR:
+		return IB_WC_LOC_PROT_ERR;
+	case CQ_RES_RAWETH_QP1_STATUS_LOCAL_QP_OPERATION_ERR:
+		return IB_WC_LOC_QP_OP_ERR;
+	case CQ_RES_RAWETH_QP1_STATUS_MEMORY_MGT_OPERATION_ERR:
+		return IB_WC_GENERAL_ERR;
+	case CQ_RES_RAWETH_QP1_STATUS_WORK_REQUEST_FLUSHED_ERR:
+		return IB_WC_WR_FLUSH_ERR;
+	case CQ_RES_RAWETH_QP1_STATUS_HW_FLUSH_ERR:
+		return IB_WC_WR_FLUSH_ERR;
+	default:
+		return IB_WC_GENERAL_ERR;
+	}
+}
+
+static u8 __rc_to_ib_wc_status(u8 qstatus)
+{
+	switch (qstatus) {
+	case CQ_RES_RC_STATUS_OK:
+		return IB_WC_SUCCESS;
+	case CQ_RES_RC_STATUS_LOCAL_ACCESS_ERROR:
+		return IB_WC_LOC_ACCESS_ERR;
+	case CQ_RES_RC_STATUS_LOCAL_LENGTH_ERR:
+		return IB_WC_LOC_LEN_ERR;
+	case CQ_RES_RC_STATUS_LOCAL_PROTECTION_ERR:
+		return IB_WC_LOC_PROT_ERR;
+	case CQ_RES_RC_STATUS_LOCAL_QP_OPERATION_ERR:
+		return IB_WC_LOC_QP_OP_ERR;
+	case CQ_RES_RC_STATUS_MEMORY_MGT_OPERATION_ERR:
+		return IB_WC_GENERAL_ERR;
+	case CQ_RES_RC_STATUS_REMOTE_INVALID_REQUEST_ERR:
+		return IB_WC_REM_INV_REQ_ERR;
+	case CQ_RES_RC_STATUS_WORK_REQUEST_FLUSHED_ERR:
+		return IB_WC_WR_FLUSH_ERR;
+	case CQ_RES_RC_STATUS_HW_FLUSH_ERR:
+		return IB_WC_WR_FLUSH_ERR;
+	default:
+		return IB_WC_GENERAL_ERR;
+	}
+}
+
+static void bnxt_re_process_req_wc(struct ib_wc *wc, struct bnxt_qplib_cqe *cqe)
+{
+	switch (cqe->type) {
+	case BNXT_QPLIB_SWQE_TYPE_SEND:
+		wc->opcode = IB_WC_SEND;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM:
+		wc->opcode = IB_WC_SEND;
+		wc->wc_flags |= IB_WC_WITH_IMM;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV:
+		wc->opcode = IB_WC_SEND;
+		wc->wc_flags |= IB_WC_WITH_INVALIDATE;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE:
+		wc->opcode = IB_WC_RDMA_WRITE;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM:
+		wc->opcode = IB_WC_RDMA_WRITE;
+		wc->wc_flags |= IB_WC_WITH_IMM;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_RDMA_READ:
+		wc->opcode = IB_WC_RDMA_READ;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_ATOMIC_CMP_AND_SWP:
+		wc->opcode = IB_WC_COMP_SWAP;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_ATOMIC_FETCH_AND_ADD:
+		wc->opcode = IB_WC_FETCH_ADD;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_LOCAL_INV:
+		wc->opcode = IB_WC_LOCAL_INV;
+		break;
+	case BNXT_QPLIB_SWQE_TYPE_REG_MR:
+		wc->opcode = IB_WC_REG_MR;
+		break;
+	default:
+		wc->opcode = IB_WC_SEND;
+		break;
+	}
+
+	wc->status = __req_to_ib_wc_status(cqe->status);
+}
+
+static int bnxt_re_check_packet_type(u16 raweth_qp1_flags,
+				     u16 raweth_qp1_flags2)
+{
+	bool is_udp = false, is_ipv6 = false, is_ipv4 = false;
+
+	/* raweth_qp1_flags Bit 9-6 indicates itype */
+	if ((raweth_qp1_flags & CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_ROCE)
+	    != CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS_ITYPE_ROCE)
+		return -1;
+
+	if (raweth_qp1_flags2 &
+	    CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_IP_CS_CALC &&
+	    raweth_qp1_flags2 &
+	    CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_L4_CS_CALC) {
+		is_udp = true;
+		/* raweth_qp1_flags2 Bit 8 indicates ip_type. 0 - v4, 1 - v6 */
+		if (raweth_qp1_flags2 &
+		    CQ_RES_RAWETH_QP1_RAWETH_QP1_FLAGS2_IP_TYPE)
+			is_ipv6 = true;
+		else
+			is_ipv4 = true;
+		return is_ipv6 ? BNXT_RE_ROCEV2_IPV6_PACKET :
+				 BNXT_RE_ROCEV2_IPV4_PACKET;
+	} else {
+		return BNXT_RE_ROCE_V1_PACKET;
+	}
+}
+
+static int bnxt_re_to_ib_nw_type(int nw_type)
+{
+	u8 nw_hdr_type = 0xFF;
+
+	switch (nw_type) {
+	case BNXT_RE_ROCE_V1_PACKET:
+		nw_hdr_type = RDMA_NETWORK_ROCE_V1;
+		break;
+	case BNXT_RE_ROCEV2_IPV4_PACKET:
+		nw_hdr_type = RDMA_NETWORK_IPV4;
+		break;
+	case BNXT_RE_ROCEV2_IPV6_PACKET:
+		nw_hdr_type = RDMA_NETWORK_IPV6;
+		break;
+	}
+	return nw_hdr_type;
+}
+
+static bool bnxt_re_is_loopback_packet(struct bnxt_re_dev *rdev,
+				       void *rq_hdr_buf)
+{
+	u8 *tmp_buf = NULL;
+	struct ethhdr *eth_hdr;
+	u16 eth_type;
+	bool rc = false;
+
+	tmp_buf = (u8 *)rq_hdr_buf;
+	/*
+	 * If the destination MAC is not the same as the I/F MAC, this
+	 * could be a loopback or a multicast address; check whether it
+	 * is a loopback packet.
+	 */
+	if (!ether_addr_equal(tmp_buf, rdev->netdev->dev_addr)) {
+		tmp_buf += 4;
+		/* Check the ether type */
+		eth_hdr = (struct ethhdr *)tmp_buf;
+		eth_type = ntohs(eth_hdr->h_proto);
+		switch (eth_type) {
+		case BNXT_QPLIB_ETHTYPE_ROCEV1:
+			rc = true;
+			break;
+		case ETH_P_IP:
+		case ETH_P_IPV6: {
+			u32 len;
+			struct udphdr *udp_hdr;
+
+			len = (eth_type == ETH_P_IP ? sizeof(struct iphdr) :
+						      sizeof(struct ipv6hdr));
+			tmp_buf += sizeof(struct ethhdr) + len;
+			udp_hdr = (struct udphdr *)tmp_buf;
+			if (ntohs(udp_hdr->dest) == ROCE_V2_UDP_DPORT)
+				rc = true;
+			break;
+		}
+		default:
+			break;
+		}
+	}
+
+	return rc;
+}
+
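+/* A receive completed on the real QP1.  Stash the CQE in the sqp_tbl
+ * slot keyed by its wr_id, re-post a receive buffer on the shadow QP
+ * and loop the payload back to it, so that the consumer later sees a
+ * normal UD completion on the shadow QP with the stored metadata.
+ */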
+static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *qp1_qp,
+					 struct bnxt_qplib_cqe *cqe)
+{
+	struct bnxt_re_dev *rdev = qp1_qp->rdev;
+	struct bnxt_re_sqp_entries *sqp_entry = NULL;
+	struct bnxt_re_qp *qp = rdev->qp1_sqp;
+	struct ib_send_wr *swr;
+	struct ib_ud_wr udwr;
+	struct ib_recv_wr rwr;
+	int pkt_type;
+	u32 tbl_idx;
+	void *rq_hdr_buf;
+	dma_addr_t rq_hdr_buf_map;
+	dma_addr_t shrq_hdr_buf_map;
+	u32 offset = 0;
+	u32 skip_bytes = 0;
+	struct ib_sge s_sge[2];
+	struct ib_sge r_sge[2];
+	int rc;
+
+	memset(&udwr, 0, sizeof(udwr));
+	memset(&rwr, 0, sizeof(rwr));
+	memset(&s_sge, 0, sizeof(s_sge));
+	memset(&r_sge, 0, sizeof(r_sge));
+
+	swr = &udwr.wr;
+	tbl_idx = cqe->wr_id;
+
+	rq_hdr_buf = qp1_qp->qplib_qp.rq_hdr_buf +
+			(tbl_idx * qp1_qp->qplib_qp.rq_hdr_buf_size);
+	rq_hdr_buf_map = bnxt_qplib_get_qp_buf_from_index(&qp1_qp->qplib_qp,
+							  tbl_idx);
+
+	/* Shadow QP header buffer */
+	shrq_hdr_buf_map = bnxt_qplib_get_qp_buf_from_index(&qp->qplib_qp,
+							    tbl_idx);
+	sqp_entry = &rdev->sqp_tbl[tbl_idx];
+
+	/* Store this cqe */
+	memcpy(&sqp_entry->cqe, cqe, sizeof(struct bnxt_qplib_cqe));
+	sqp_entry->qp1_qp = qp1_qp;
+
+	/* Find packet type from the cqe */
+
+	pkt_type = bnxt_re_check_packet_type(cqe->raweth_qp1_flags,
+					     cqe->raweth_qp1_flags2);
+	if (pkt_type < 0) {
+		dev_err(rdev_to_dev(rdev), "Invalid packet\n");
+		return -EINVAL;
+	}
+
+	/* Adjust the offset for the user buffer and post in the rq */
+
+	if (pkt_type == BNXT_RE_ROCEV2_IPV4_PACKET)
+		offset = 20;
+
+	/*
+	 * QP1 loopback packet has 4 bytes of internal header before
+	 * ether header. Skip these four bytes.
+	 */
+	if (bnxt_re_is_loopback_packet(rdev, rq_hdr_buf))
+		skip_bytes = 4;
+
+	/* First send SGE. Skip the ether header */
+	s_sge[0].addr = rq_hdr_buf_map + BNXT_QPLIB_MAX_QP1_RQ_ETH_HDR_SIZE
+			+ skip_bytes;
+	s_sge[0].lkey = 0xFFFFFFFF;
+	s_sge[0].length = offset ? BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV4 :
+				BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6;
+
+	/* Second Send SGE */
+	s_sge[1].addr = s_sge[0].addr + s_sge[0].length +
+			BNXT_QPLIB_MAX_QP1_RQ_BDETH_HDR_SIZE;
+	if (pkt_type != BNXT_RE_ROCE_V1_PACKET)
+		s_sge[1].addr += 8;
+	s_sge[1].lkey = 0xFFFFFFFF;
+	s_sge[1].length = 256;
+
+	/* First recv SGE */
+
+	r_sge[0].addr = shrq_hdr_buf_map;
+	r_sge[0].lkey = 0xFFFFFFFF;
+	r_sge[0].length = 40;
+
+	r_sge[1].addr = sqp_entry->sge.addr + offset;
+	r_sge[1].lkey = sqp_entry->sge.lkey;
+	r_sge[1].length = BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6 + 256 - offset;
+
+	/* Create receive work request */
+	rwr.num_sge = 2;
+	rwr.sg_list = r_sge;
+	rwr.wr_id = tbl_idx;
+	rwr.next = NULL;
+
+	rc = bnxt_re_post_recv_shadow_qp(rdev, qp, &rwr);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev),
+			"Failed to post Rx buffers to shadow QP");
+		return -ENOMEM;
+	}
+
+	swr->num_sge = 2;
+	swr->sg_list = s_sge;
+	swr->wr_id = tbl_idx;
+	swr->opcode = IB_WR_SEND;
+	swr->next = NULL;
+
+	udwr.ah = &rdev->sqp_ah->ib_ah;
+	udwr.remote_qpn = rdev->qp1_sqp->qplib_qp.id;
+	udwr.remote_qkey = rdev->qp1_sqp->qplib_qp.qkey;
+
+	/* Post the data received in the send queue */
+	rc = bnxt_re_post_send_shadow_qp(rdev, qp, swr);
+
+	return rc;
+}
+
+static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+					  struct bnxt_qplib_cqe *cqe)
+{
+	wc->opcode = IB_WC_RECV;
+	wc->status = __rawqp1_to_ib_wc_status(cqe->status);
+	wc->wc_flags |= IB_WC_GRH;
+}
+
+static void bnxt_re_process_res_rc_wc(struct ib_wc *wc,
+				      struct bnxt_qplib_cqe *cqe)
+{
+	wc->opcode = IB_WC_RECV;
+	wc->status = __rc_to_ib_wc_status(cqe->status);
+
+	if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
+		wc->wc_flags |= IB_WC_WITH_IMM;
+	if (cqe->flags & CQ_RES_RC_FLAGS_INV)
+		wc->wc_flags |= IB_WC_WITH_INVALIDATE;
+	if ((cqe->flags & (CQ_RES_RC_FLAGS_RDMA | CQ_RES_RC_FLAGS_IMM)) ==
+	    (CQ_RES_RC_FLAGS_RDMA | CQ_RES_RC_FLAGS_IMM))
+		wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+}
+
+static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *qp,
+					     struct ib_wc *wc,
+					     struct bnxt_qplib_cqe *cqe)
+{
+	u32 tbl_idx;
+	struct bnxt_re_dev *rdev = qp->rdev;
+	struct bnxt_re_qp *qp1_qp = NULL;
+	struct bnxt_qplib_cqe *orig_cqe = NULL;
+	struct bnxt_re_sqp_entries *sqp_entry = NULL;
+	int nw_type;
+
+	tbl_idx = cqe->wr_id;
+
+	sqp_entry = &rdev->sqp_tbl[tbl_idx];
+	qp1_qp = sqp_entry->qp1_qp;
+	orig_cqe = &sqp_entry->cqe;
+
+	wc->wr_id = sqp_entry->wrid;
+	wc->byte_len = orig_cqe->length;
+	wc->qp = &qp1_qp->ib_qp;
+
+	wc->ex.imm_data = orig_cqe->immdata_or_invrkey;
+	wc->src_qp = orig_cqe->src_qp;
+	memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+	wc->port_num = 1;
+	wc->vendor_err = orig_cqe->status;
+
+	wc->opcode = IB_WC_RECV;
+	wc->status = __rawqp1_to_ib_wc_status(orig_cqe->status);
+	wc->wc_flags |= IB_WC_GRH;
+
+	nw_type = bnxt_re_check_packet_type(orig_cqe->raweth_qp1_flags,
+					    orig_cqe->raweth_qp1_flags2);
+	if (nw_type >= 0) {
+		wc->network_hdr_type = bnxt_re_to_ib_nw_type(nw_type);
+		wc->wc_flags |= IB_WC_WITH_NETWORK_HDR_TYPE;
+	}
+}
+
+static void bnxt_re_process_res_ud_wc(struct ib_wc *wc,
+				      struct bnxt_qplib_cqe *cqe)
+{
+	wc->opcode = IB_WC_RECV;
+	wc->status = __rc_to_ib_wc_status(cqe->status);
+
+	if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
+		wc->wc_flags |= IB_WC_WITH_IMM;
+	if (cqe->flags & CQ_RES_RC_FLAGS_INV)
+		wc->wc_flags |= IB_WC_WITH_INVALIDATE;
+	if ((cqe->flags & (CQ_RES_RC_FLAGS_RDMA | CQ_RES_RC_FLAGS_IMM)) ==
+	    (CQ_RES_RC_FLAGS_RDMA | CQ_RES_RC_FLAGS_IMM))
+		wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+}
+
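+/* ib poll_cq hook: transcribe up to num_entries qplib CQEs into ib_wc
+ * entries.  Completions belonging to the QP1/shadow QP pair are either
+ * suppressed here or replayed from the stored shadow completion.
+ */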
+int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
+{
+	struct bnxt_re_cq *cq = to_bnxt_re(ib_cq, struct bnxt_re_cq, ib_cq);
+	struct bnxt_re_qp *qp;
+	struct bnxt_qplib_cqe *cqe;
+	int i, ncqe, budget;
+	u32 tbl_idx;
+	struct bnxt_re_sqp_entries *sqp_entry = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cq->cq_lock, flags);
+	budget = min_t(u32, num_entries, cq->max_cql);
+	if (!cq->cql) {
+		dev_err(rdev_to_dev(cq->rdev), "POLL CQ : no CQL to use");
+		goto exit;
+	}
+	cqe = &cq->cql[0];
+	while (budget) {
+		ncqe = bnxt_qplib_poll_cq(&cq->qplib_cq, cqe, budget);
+		if (!ncqe)
+			break;
+
+		for (i = 0; i < ncqe; i++, cqe++) {
+			/* Transcribe each qplib_wqe back to ib_wc */
+			memset(wc, 0, sizeof(*wc));
+
+			wc->wr_id = cqe->wr_id;
+			wc->byte_len = cqe->length;
+			qp = to_bnxt_re((struct bnxt_qplib_qp *)cqe->qp_handle,
+					struct bnxt_re_qp, qplib_qp);
+			if (!qp) {
+				dev_err(rdev_to_dev(cq->rdev),
+					"POLL CQ : bad QP handle");
+				continue;
+			}
+			wc->qp = &qp->ib_qp;
+			wc->ex.imm_data = cqe->immdata_or_invrkey;
+			wc->src_qp = cqe->src_qp;
+			memcpy(wc->smac, cqe->smac, ETH_ALEN);
+			wc->port_num = 1;
+			wc->vendor_err = cqe->status;
+
+			switch (cqe->opcode) {
+			case CQ_BASE_CQE_TYPE_REQ:
+				if (qp->qplib_qp.id ==
+				    qp->rdev->qp1_sqp->qplib_qp.id) {
+					/* Handle this completion with
+					 * the stored completion
+					 */
+					memset(wc, 0, sizeof(*wc));
+					continue;
+				}
+				bnxt_re_process_req_wc(wc, cqe);
+				break;
+			case CQ_BASE_CQE_TYPE_RES_RAWETH_QP1:
+				if (!cqe->status) {
+					int rc = 0;
+
+					rc = bnxt_re_process_raw_qp_pkt_rx
+								(qp, cqe);
+					if (!rc) {
+						memset(wc, 0, sizeof(*wc));
+						continue;
+					}
+					cqe->status = -1;
+				}
+				/* Errors need not be looped back.
+				 * But change the wr_id to the one
+				 * stored in the table
+				 */
+				tbl_idx = cqe->wr_id;
+				sqp_entry = &cq->rdev->sqp_tbl[tbl_idx];
+				wc->wr_id = sqp_entry->wrid;
+				bnxt_re_process_res_rawqp1_wc(wc, cqe);
+				break;
+			case CQ_BASE_CQE_TYPE_RES_RC:
+				bnxt_re_process_res_rc_wc(wc, cqe);
+				break;
+			case CQ_BASE_CQE_TYPE_RES_UD:
+				if (qp->qplib_qp.id ==
+				    qp->rdev->qp1_sqp->qplib_qp.id) {
+					/* Handle this completion with
+					 * the stored completion
+					 */
+					if (cqe->status) {
+						continue;
+					} else {
+						bnxt_re_process_res_shadow_qp_wc
+								(qp, wc, cqe);
+						break;
+					}
+				}
+				bnxt_re_process_res_ud_wc(wc, cqe);
+				break;
+			default:
+				dev_err(rdev_to_dev(cq->rdev),
+					"POLL CQ : type 0x%x not handled",
+					cqe->opcode);
+				continue;
+			}
+			wc++;
+			budget--;
+		}
+	}
+exit:
+	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	return num_entries - budget;
+}
+
 int bnxt_re_req_notify_cq(struct ib_cq *ib_cq,
 			  enum ib_cq_notify_flags ib_cqn_flags)
 {
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
index 9f3dd49..cc04994 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
@@ -171,6 +171,7 @@ struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
 				struct ib_ucontext *context,
 				struct ib_udata *udata);
 int bnxt_re_destroy_cq(struct ib_cq *cq);
+int bnxt_re_poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc);
 int bnxt_re_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags);
 struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
 
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 73dfadd..fd52e65 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -456,6 +456,7 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 
 	ibdev->create_cq		= bnxt_re_create_cq;
 	ibdev->destroy_cq		= bnxt_re_destroy_cq;
+	ibdev->poll_cq			= bnxt_re_poll_cq;
 	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
 
 	ibdev->get_dma_mr		= bnxt_re_get_dma_mr;
@@ -605,6 +606,25 @@ static int bnxt_re_aeq_handler(struct bnxt_qplib_rcfw *rcfw,
 	return 0;
 }
 
+static int bnxt_re_cqn_handler(struct bnxt_qplib_nq *nq,
+			       struct bnxt_qplib_cq *handle)
+{
+	struct bnxt_re_cq *cq = to_bnxt_re(handle, struct bnxt_re_cq,
+					   qplib_cq);
+
+	if (!cq) {
+		pr_err("%s: CQ is NULL, CQN not handled",
+		       ROCE_DRV_MODULE_NAME);
+		return -EINVAL;
+	}
+	if (cq->ib_cq.comp_handler) {
+		/* Lock comp_handler? */
+		(*cq->ib_cq.comp_handler)(&cq->ib_cq, cq->ib_cq.cq_context);
+	}
+
+	return 0;
+}
+
 static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
 {
 	if (rdev->nq.hwq.max_elements)
@@ -626,7 +646,7 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
 	rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nq,
 				  rdev->msix_entries[BNXT_RE_NQ_IDX].vector,
 				  rdev->msix_entries[BNXT_RE_NQ_IDX].db_offset,
-				  NULL,
+				  &bnxt_re_cqn_handler,
 				  NULL);
 
 	if (rc)
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  17/22] bnxt_re: Handling dispatching of events to IB stack
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (13 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 16/22] bnxt_re: Support poll_cq verb Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 18/22] bnxt_re: Support for DCB Selvin Xavier
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements dispatching of the appropriate events to the IB
stack based on the NETDEV events received.

v2: Removed cleanup of the resources during driver unload since we are calling
    unregister_netdevice_notifier first in the exit.
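
For kernel consumers, the dispatched events surface through the normal
IB event machinery; a minimal sketch of the consumer side follows (the
handler name and body are illustrative, not part of this patch):

    static struct ib_event_handler ev_handler;

    static void my_event_handler(struct ib_event_handler *handler,
                                 struct ib_event *event)
    {
            pr_info("port %u got event %d\n",
                    event->element.port_num, event->event);
    }

    /* at consumer init time, ibdev is the registered ib_device */
    INIT_IB_EVENT_HANDLER(&ev_handler, ibdev, my_event_handler);
    ib_register_event_handler(&ev_handler);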

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c | 64 +++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index fd52e65..260dec1 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -720,6 +720,60 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
 	return rc;
 }
 
+static void bnxt_re_dispatch_event(struct ib_device *ibdev, struct ib_qp *qp,
+				   u8 port_num, enum ib_event_type event)
+{
+	struct ib_event ib_event;
+
+	ib_event.device = ibdev;
+	if (qp)
+		ib_event.element.qp = qp;
+	else
+		ib_event.element.port_num = port_num;
+	ib_event.event = event;
+	ib_dispatch_event(&ib_event);
+}
+
+static bool bnxt_re_is_qp1_or_shadow_qp(struct bnxt_re_dev *rdev,
+					struct bnxt_re_qp *qp)
+{
+	return (qp->ib_qp.qp_type == IB_QPT_GSI) || (qp == rdev->qp1_sqp);
+}
+
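+/* Move every QP except QP1 and the shadow QP to the error state and
+ * raise IB_EVENT_QP_FATAL for each, then optionally give applications
+ * a short grace period to tear their QPs down.
+ */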
+static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev, bool qp_wait)
+{
+	int mask = IB_QP_STATE, qp_count, count = 1;
+	struct ib_qp_attr qp_attr;
+	struct bnxt_re_qp *qp;
+
+	qp_attr.qp_state = IB_QPS_ERR;
+	mutex_lock(&rdev->qp_lock);
+	list_for_each_entry(qp, &rdev->qp_list, list) {
+		/* Modify the state of all QPs except QP1/Shadow QP */
+		if (!bnxt_re_is_qp1_or_shadow_qp(rdev, qp)) {
+			if (qp->qplib_qp.state !=
+			    CMDQ_MODIFY_QP_NEW_STATE_RESET &&
+			    qp->qplib_qp.state !=
+			    CMDQ_MODIFY_QP_NEW_STATE_ERR) {
+				bnxt_re_dispatch_event(&rdev->ibdev, &qp->ib_qp,
+						       1, IB_EVENT_QP_FATAL);
+				bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, mask,
+						  NULL);
+			}
+		}
+	}
+
+	mutex_unlock(&rdev->qp_lock);
+	if (qp_wait) {
+		/* Give the application some time to clean up */
+		do {
+			qp_count = atomic_read(&rdev->qp_count);
+			msleep(100);
+		} while ((qp_count != atomic_read(&rdev->qp_count)) &&
+			  count--);
+	}
+}
+
 static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
 {
 	int i, rc;
@@ -878,6 +932,8 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 		}
 	}
 	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+	bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1, IB_EVENT_PORT_ACTIVE);
+	bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1, IB_EVENT_GID_CHANGE);
 
 	return 0;
 free_sctx:
@@ -956,10 +1012,18 @@ static void bnxt_re_task(struct work_struct *work)
 				"Failed to register with IB: %#x", rc);
 			break;
 	case NETDEV_UP:
+		bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1,
+				       IB_EVENT_PORT_ACTIVE);
 		break;
 	case NETDEV_DOWN:
+		bnxt_re_dev_stop(rdev, false);
 		break;
 	case NETDEV_CHANGE:
+		if (!netif_carrier_ok(rdev->netdev))
+			bnxt_re_dev_stop(rdev, false);
+		else
+			bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1,
+					       IB_EVENT_PORT_ACTIVE);
 		break;
 	default:
 		break;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  18/22] bnxt_re: Support for DCB
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (14 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 17/22] bnxt_re: Handling dispatching of events to IB stack Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-10 13:50   ` Or Gerlitz
  2016-12-09  6:48 ` [PATCH V2 19/22] bnxt_re: Support debugfs Selvin Xavier
                   ` (6 subsequent siblings)
  22 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch queries the configured RoCE APP Priority on the host
using the dcbnl API and programs the RoCE FW with the corresponding
Traffic Class(es) for the priority.

v2: Fixed some sparse warnings and cleaned up the function
bnxt_re_query_hwrm_pri2cos
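
To illustrate the dcbnl query this patch builds on: the priorities
assigned to RoCE traffic are looked up by ethertype (RoCE v1) and by
UDP port (RoCE v2), roughly as below (a sketch, not the driver code
itself; netdev is the corresponding net_device):

    struct dcb_app app = {
            .selector = IEEE_8021QAZ_APP_SEL_DGRAM,
            .protocol = 4791,               /* RoCE v2 UDP port */
    };
    u8 prio_mask = dcb_ieee_getapp_mask(netdev, &app);

Each set bit in prio_mask is an 802.1p priority carrying RoCE v2
traffic; bnxt_re_setup_qos() then maps those priorities to CoS queue
ids and pushes the result to the FW.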

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h |   3 +-
 drivers/infiniband/hw/bnxtre/bnxt_re.h       |   6 ++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c  | 140 +++++++++++++++++++++++++++
 3 files changed, 148 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
index 3358f6d..a72dfab 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.h
@@ -156,4 +156,5 @@ int bnxt_qplib_alloc_fast_reg_page_list(struct bnxt_qplib_res *res,
 					struct bnxt_qplib_frpl *frpl, int max);
 int bnxt_qplib_free_fast_reg_page_list(struct bnxt_qplib_res *res,
 				       struct bnxt_qplib_frpl *frpl);
-#endif /* __BNXT_QPLIB_SP_H__*/
+int bnxt_qplib_map_tc2cos(struct bnxt_qplib_res *res, u16 *cids);
+#endif
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
index 30f4b30..36b1d4f 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
@@ -45,6 +45,9 @@
 #define BNXT_RE_REF_WAIT_COUNT		10
 #define BNXT_RE_DESC	"Broadcom NetXtreme-C/E RoCE Driver"
 
+#define BNXT_RE_ROCE_V1_ETH_TYPE	0x8915
+#define BNXT_RE_ROCE_V2_PORT_NO		4791
+
 #define BNXT_RE_PAGE_SIZE_4K		BIT(12)
 #define BNXT_RE_PAGE_SIZE_8K		BIT(13)
 #define BNXT_RE_PAGE_SIZE_64K		BIT(16)
@@ -95,6 +98,9 @@ struct bnxt_re_dev {
 
 	int				id;
 
+	struct delayed_work		worker;
+	u8				cur_prio_map;
+
 	/* FP Notification Queue (CQ & SRQ) */
 	struct tasklet_struct		nq_task;
 
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 260dec1..f3ce02c 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -44,6 +44,7 @@
 #include <linux/rculist.h>
 #include <linux/spinlock.h>
 #include <linux/pci.h>
+#include <net/dcbnl.h>
 #include <net/ipv6.h>
 #include <net/addrconf.h>
 
@@ -734,6 +735,50 @@ static void bnxt_re_dispatch_event(struct ib_device *ibdev, struct ib_qp *qp,
 	ib_dispatch_event(&ib_event);
 }
 
+#define HWRM_QUEUE_PRI2COS_QCFG_INPUT_FLAGS_IVLAN      0x02
+static int bnxt_re_query_hwrm_pri2cos(struct bnxt_re_dev *rdev, u8 dir,
+				      u64 *cid_map)
+{
+	struct hwrm_queue_pri2cos_qcfg_input req = {0};
+	struct bnxt *bp = netdev_priv(rdev->netdev);
+	struct hwrm_queue_pri2cos_qcfg_output resp;
+	struct bnxt_en_dev *en_dev = rdev->en_dev;
+	struct bnxt_fw_msg fw_msg;
+	u32 flags = 0;
+	u8 *qcfgmap, *tmp_map;
+	int rc = 0, i;
+
+	if (!cid_map)
+		return -EINVAL;
+
+	memset(&fw_msg, 0, sizeof(fw_msg));
+	bnxt_re_init_hwrm_hdr(rdev, (void *)&req,
+			      HWRM_QUEUE_PRI2COS_QCFG, -1, -1);
+	flags |= (dir & 0x01);
+	flags |= HWRM_QUEUE_PRI2COS_QCFG_INPUT_FLAGS_IVLAN;
+	req.flags = cpu_to_le32(flags);
+	req.port_id = bp->pf.port_id;
+
+	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	if (rc)
+		return rc;
+
+	if (resp.queue_cfg_info) {
+		dev_warn(rdev_to_dev(rdev),
+			 "Asymmetric cos queue configuration detected");
+		dev_warn(rdev_to_dev(rdev),
+			 " on device, QoS may not be fully functional\n");
+	}
+	qcfgmap = &resp.pri0_cos_queue_id;
+	tmp_map = (u8 *)cid_map;
+	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+		tmp_map[i] = qcfgmap[i];
+
+	return rc;
+}
+
 static bool bnxt_re_is_qp1_or_shadow_qp(struct bnxt_re_dev *rdev,
 					struct bnxt_re_qp *qp)
 {
@@ -774,6 +819,80 @@ static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev, bool qp_wait)
 	}
 }
 
+static u32 bnxt_re_get_priority_mask(struct bnxt_re_dev *rdev)
+{
+	u32 prio_map = 0, tmp_map = 0;
+	struct net_device *netdev;
+	struct dcb_app app;
+
+	netdev = rdev->netdev;
+
+	memset(&app, 0, sizeof(app));
+	app.selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE;
+	app.protocol = BNXT_RE_ROCE_V1_ETH_TYPE;
+	tmp_map = dcb_ieee_getapp_mask(netdev, &app);
+	prio_map = tmp_map;
+
+	app.selector = IEEE_8021QAZ_APP_SEL_DGRAM;
+	app.protocol = BNXT_RE_ROCE_V2_PORT_NO;
+	tmp_map = dcb_ieee_getapp_mask(netdev, &app);
+	prio_map |= tmp_map;
+
+	if (!prio_map)
+		prio_map = -EFAULT;
+	return prio_map;
+}
+
+static void bnxt_re_parse_cid_map(u8 prio_map, u8 *cid_map, u16 *cosq)
+{
+	u16 prio;
+	u8 id;
+
+	for (prio = 0, id = 0; prio < 8; prio++) {
+		if (prio_map & (1 << prio)) {
+			cosq[id] = cid_map[prio];
+			id++;
+			if (id == 2) /* Max 2 tcs supported */
+				break;
+		}
+	}
+}
+
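+/* Resolve the RoCE priority mask via dcbnl, translate the priorities
+ * to hardware CoS queue ids and program the mapping into the FW.
+ * Called once at registration time and then periodically from the
+ * worker so that runtime DCB reconfiguration is picked up.
+ */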
+static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev)
+{
+	u8 prio_map = 0;
+	u64 cid_map;
+	int rc;
+
+	/* Get priority for roce */
+	rc = bnxt_re_get_priority_mask(rdev);
+	if (rc < 0)
+		return rc;
+	prio_map = (u8)rc;
+
+	if (prio_map == rdev->cur_prio_map)
+		return 0;
+	rdev->cur_prio_map = prio_map;
+	/* Get cosq id for this priority */
+	rc = bnxt_re_query_hwrm_pri2cos(rdev, 0, &cid_map);
+	if (rc) {
+		dev_warn(rdev_to_dev(rdev), "no cos for p_mask %x\n", prio_map);
+		return rc;
+	}
+	/* Parse CoS IDs for app priority */
+	bnxt_re_parse_cid_map(prio_map, (u8 *)&cid_map, rdev->cosq);
+
+	/* Config BONO. */
+	rc = bnxt_qplib_map_tc2cos(&rdev->qplib_res, rdev->cosq);
+	if (rc) {
+		dev_warn(rdev_to_dev(rdev), "no tc for cos{%x, %x}\n",
+			 rdev->cosq[0], rdev->cosq[1]);
+		return rc;
+	}
+
+	return 0;
+}
+
 static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
 {
 	int i, rc;
@@ -785,6 +904,9 @@ static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
 		/* Cleanup ib dev */
 		bnxt_re_unregister_ib(rdev);
 	}
+	if (test_and_clear_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags))
+		cancel_delayed_work(&rdev->worker);
+
 	bnxt_re_cleanup_res(rdev);
 	bnxt_re_free_res(rdev, lock_wait);
 
@@ -827,6 +949,16 @@ static void bnxt_re_set_resource_limits(struct bnxt_re_dev *rdev)
 		rdev->dev_attr.tqm_alloc_reqs[i];
 }
 
+/* Worker for polling periodic events; currently used for QoS programming */
+static void bnxt_re_worker(struct work_struct *work)
+{
+	struct bnxt_re_dev *rdev = container_of(work, struct bnxt_re_dev,
+						worker.work);
+
+	bnxt_re_setup_qos(rdev);
+	schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
+}
+
 static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 {
 	int i, j, rc;
@@ -911,6 +1043,14 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 		goto fail;
 	}
 
+	rc = bnxt_re_setup_qos(rdev);
+	if (rc)
+		pr_info("RoCE priority not yet configured\n");
+
+	INIT_DELAYED_WORK(&rdev->worker, bnxt_re_worker);
+	set_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags);
+	schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
+
 	/* Register ib dev */
 	rc = bnxt_re_register_ib(rdev);
 	if (rc) {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  19/22] bnxt_re: Support debugfs
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (15 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 18/22] bnxt_re: Support for DCB Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 20/22] bnxt_re: Set uverbs command mask Selvin Xavier
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch exports some of the FW debug counters through debugfs.
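
Once the driver is loaded, the counters can be read from the "info"
node created under the driver's debugfs directory (typically
/sys/kernel/debug/bnxt_re/info, assuming debugfs is mounted at the
usual location).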

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c | 159 +++++++++++++++++++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h |  48 ++++++++
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c    |   8 +-
 3 files changed, 213 insertions(+), 2 deletions(-)
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c
 create mode 100644 drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c
new file mode 100644
index 0000000..b38c83a
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c
@@ -0,0 +1,159 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: DebugFS specifics
+ */
+
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/netdevice.h>
+
+#include <rdma/ib_verbs.h>
+#include "bnxt_re_hsi.h"
+#include "bnxt_ulp.h"
+#include "bnxt_qplib_res.h"
+#include "bnxt_qplib_sp.h"
+#include "bnxt_qplib_fp.h"
+#include "bnxt_qplib_rcfw.h"
+
+#include "bnxt_re.h"
+#include "bnxt_re_debugfs.h"
+
+static struct dentry *bnxt_re_debugfs_root;
+static struct dentry *bnxt_re_debugfs_info;
+
+static ssize_t bnxt_re_debugfs_clear(struct file *fil, const char __user *u,
+				     size_t size, loff_t *off)
+{
+	return size;
+}
+
+static int bnxt_re_debugfs_show(struct seq_file *s, void *unused)
+{
+	struct bnxt_re_dev *rdev;
+
+	seq_puts(s, "bnxt_re debug info:\n");
+
+	mutex_lock(&bnxt_re_dev_lock);
+	list_for_each_entry(rdev, &bnxt_re_dev_list, list) {
+		struct ctx_hw_stats *stats = rdev->qplib_ctx.stats.dma;
+
+		seq_printf(s, "=====[ IBDEV %s ]=============================\n",
+			   rdev->ibdev.name);
+		if (rdev->netdev)
+			seq_printf(s, "\tlink state: %s\n",
+				   test_bit(__LINK_STATE_START,
+					    &rdev->netdev->state) ?
+				   (test_bit(__LINK_STATE_NOCARRIER,
+					     &rdev->netdev->state) ?
+				    "DOWN" : "UP") : "DOWN");
+		seq_printf(s, "\tMax QP: 0x%x\n", rdev->dev_attr.max_qp);
+		seq_printf(s, "\tMax SRQ: 0x%x\n", rdev->dev_attr.max_srq);
+		seq_printf(s, "\tMax CQ: 0x%x\n", rdev->dev_attr.max_cq);
+		seq_printf(s, "\tMax MR: 0x%x\n", rdev->dev_attr.max_mr);
+		seq_printf(s, "\tMax MW: 0x%x\n", rdev->dev_attr.max_mw);
+
+		seq_printf(s, "\tActive QP: %d\n",
+			   atomic_read(&rdev->qp_count));
+		seq_printf(s, "\tActive SRQ: %d\n",
+			   atomic_read(&rdev->srq_count));
+		seq_printf(s, "\tActive CQ: %d\n",
+			   atomic_read(&rdev->cq_count));
+		seq_printf(s, "\tActive MR: %d\n",
+			   atomic_read(&rdev->mr_count));
+		seq_printf(s, "\tActive MW: %d\n",
+			   atomic_read(&rdev->mw_count));
+		seq_printf(s, "\tRx Pkts: %lld\n",
+			   stats ? stats->rx_ucast_pkts : 0);
+		seq_printf(s, "\tRx Bytes: %lld\n",
+			   stats ? stats->rx_ucast_bytes : 0);
+		seq_printf(s, "\tTx Pkts: %lld\n",
+			   stats ? stats->tx_ucast_pkts : 0);
+		seq_printf(s, "\tTx Bytes: %lld\n",
+			   stats ? stats->tx_ucast_bytes : 0);
+		seq_printf(s, "\tRecoverable Errors: %lld\n",
+			   stats ? stats->tx_bcast_pkts : 0);
+		seq_puts(s, "\n");
+	}
+	mutex_unlock(&bnxt_re_dev_lock);
+	return 0;
+}
+
+static int bnxt_re_debugfs_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, bnxt_re_debugfs_show, NULL);
+}
+
+static int bnxt_re_debugfs_release(struct inode *inode, struct file *file)
+{
+	return single_release(inode, file);
+}
+
+static const struct file_operations bnxt_re_dbg_ops = {
+	.owner		= THIS_MODULE,
+	.open		= bnxt_re_debugfs_open,
+	.read		= seq_read,
+	.write		= bnxt_re_debugfs_clear,
+	.llseek		= seq_lseek,
+	.release	= bnxt_re_debugfs_release,
+};
+
+void bnxt_re_debugfs_remove(void)
+{
+	debugfs_remove_recursive(bnxt_re_debugfs_root);
+	bnxt_re_debugfs_root = NULL;
+}
+
+void bnxt_re_debugfs_init(void)
+{
+	bnxt_re_debugfs_root = debugfs_create_dir(ROCE_DRV_MODULE_NAME, NULL);
+	if (IS_ERR_OR_NULL(bnxt_re_debugfs_root)) {
+		pr_debug("%s: Unable to create debugfs root directory ",
+			 ROCE_DRV_MODULE_NAME);
+		pr_debug("with err 0x%lx", PTR_ERR(bnxt_re_debugfs_root));
+		return;
+	}
+	bnxt_re_debugfs_info = debugfs_create_file("info", 0400,
+						   bnxt_re_debugfs_root, NULL,
+						   &bnxt_re_dbg_ops);
+	if (IS_ERR_OR_NULL(bnxt_re_debugfs_info)) {
+		pr_debug("%s: Unable to create debugfs info node ",
+			 ROCE_DRV_MODULE_NAME);
+		pr_debug("with err 0x%lx", PTR_ERR(bnxt_re_debugfs_info));
+		bnxt_re_debugfs_remove();
+	}
+}
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h
new file mode 100644
index 0000000..06ba5cd
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.h
@@ -0,0 +1,48 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2016, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: DebugFS header
+ */
+
+#ifndef __BNXT_RE_DEBUGFS__
+#define __BNXT_RE_DEBUGFS__
+
+extern struct list_head bnxt_re_dev_list;
+extern struct mutex bnxt_re_dev_lock;
+
+void bnxt_re_debugfs_init(void);
+void bnxt_re_debugfs_remove(void);
+
+#endif
diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index f3ce02c..3b3a063 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -60,6 +60,7 @@
 #include "bnxt_qplib_fp.h"
 #include "bnxt_qplib_rcfw.h"
 #include "bnxt_re.h"
+#include "bnxt_re_debugfs.h"
 #include "bnxt_re_ib_verbs.h"
 #include "bnxt.h"
 static char version[] =
@@ -72,8 +73,8 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_VERSION(ROCE_DRV_MODULE_VERSION);
 
 /* globals */
-static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
-static DEFINE_MUTEX(bnxt_re_dev_lock);
+struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
+DEFINE_MUTEX(bnxt_re_dev_lock);
 static struct workqueue_struct *bnxt_re_wq;
 
 /* for handling bnxt_en callbacks later */
@@ -1264,6 +1265,7 @@ static int __init bnxt_re_mod_init(void)
 	if (!bnxt_re_wq)
 		return -ENOMEM;
 
+	bnxt_re_debugfs_init();
 	INIT_LIST_HEAD(&bnxt_re_dev_list);
 
 	rc = register_netdevice_notifier(&bnxt_re_netdev_notifier);
@@ -1275,6 +1277,7 @@ static int __init bnxt_re_mod_init(void)
 	return 0;
 
 err_netdev:
+	bnxt_re_debugfs_remove();
 	destroy_workqueue(bnxt_re_wq);
 
 	return rc;
@@ -1282,6 +1285,7 @@ static int __init bnxt_re_mod_init(void)
 static void __exit bnxt_re_mod_exit(void)
 {
 	unregister_netdevice_notifier(&bnxt_re_netdev_notifier);
+	bnxt_re_debugfs_remove();
 	if (bnxt_re_wq)
 		destroy_workqueue(bnxt_re_wq);
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  20/22] bnxt_re: Set uverbs command mask
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (16 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 19/22] bnxt_re: Support debugfs Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
  2016-12-09  6:48 ` [PATCH V2 21/22] bnxt_re: Add QP event handling Selvin Xavier
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch exports the available uverbs command mask to the IB stack.
It also populates some of the missing parameters in the ibdev structure
used for registration.
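
For background, the uverbs layer consults this bitmask before
dispatching a user command, roughly along the lines of the following
check (paraphrasing the ib_uverbs write path, not code from this
patch):

    if (!(ib_dev->uverbs_cmd_mask & (1ull << command)))
            return -ENOSYS;

so a verb left out of the mask is rejected before the driver entry
point is ever called.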

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_re_main.c | 36 +++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
index 3b3a063..0474f6a 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
@@ -62,6 +62,7 @@
 #include "bnxt_re.h"
 #include "bnxt_re_debugfs.h"
 #include "bnxt_re_ib_verbs.h"
+#include <rdma/bnxt_re_uverbs_abi.h>
 #include "bnxt.h"
 static char version[] =
 		BNXT_RE_DESC " v" ROCE_DRV_MODULE_VERSION "\n";
@@ -425,8 +426,42 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 		strlen(BNXT_RE_DESC) + 5);
 	ibdev->phys_port_cnt = 1;
 
+	bnxt_qplib_get_guid(rdev->netdev->dev_addr, (u8 *)&ibdev->node_guid);
+
 	ibdev->num_comp_vectors	= 1;
 	ibdev->dma_device = &rdev->en_dev->pdev->dev;
+	ibdev->local_dma_lkey = BNXT_QPLIB_RSVD_LKEY;
+
+	/* User space */
+	ibdev->uverbs_abi_ver = BNXT_RE_ABI_VERSION;
+	ibdev->uverbs_cmd_mask =
+			(1ull << IB_USER_VERBS_CMD_GET_CONTEXT)		|
+			(1ull << IB_USER_VERBS_CMD_QUERY_DEVICE)	|
+			(1ull << IB_USER_VERBS_CMD_QUERY_PORT)		|
+			(1ull << IB_USER_VERBS_CMD_ALLOC_PD)		|
+			(1ull << IB_USER_VERBS_CMD_DEALLOC_PD)		|
+			(1ull << IB_USER_VERBS_CMD_REG_MR)		|
+			(1ull << IB_USER_VERBS_CMD_REREG_MR)		|
+			(1ull << IB_USER_VERBS_CMD_DEREG_MR)		|
+			(1ull << IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL) |
+			(1ull << IB_USER_VERBS_CMD_CREATE_CQ)		|
+			(1ull << IB_USER_VERBS_CMD_RESIZE_CQ)		|
+			(1ull << IB_USER_VERBS_CMD_DESTROY_CQ)		|
+			(1ull << IB_USER_VERBS_CMD_CREATE_QP)		|
+			(1ull << IB_USER_VERBS_CMD_MODIFY_QP)		|
+			(1ull << IB_USER_VERBS_CMD_QUERY_QP)		|
+			(1ull << IB_USER_VERBS_CMD_DESTROY_QP)		|
+			(1ull << IB_USER_VERBS_CMD_CREATE_SRQ)		|
+			(1ull << IB_USER_VERBS_CMD_MODIFY_SRQ)		|
+			(1ull << IB_USER_VERBS_CMD_QUERY_SRQ)		|
+			(1ull << IB_USER_VERBS_CMD_DESTROY_SRQ)		|
+			(1ull << IB_USER_VERBS_CMD_CREATE_AH)		|
+			(1ull << IB_USER_VERBS_CMD_MODIFY_AH)		|
+			(1ull << IB_USER_VERBS_CMD_QUERY_AH)		|
+			(1ull << IB_USER_VERBS_CMD_DESTROY_AH);
+	/* POLL_CQ and REQ_NOTIFY_CQ are directly handled in libbnxt_re */
+
+	/* Kernel verbs */
 	ibdev->query_device		= bnxt_re_query_device;
 	ibdev->modify_device		= bnxt_re_modify_device;
 
@@ -1058,6 +1093,7 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 		pr_err("Failed to register with IB: %#x\n", rc);
 		goto fail;
 	}
+	dev_info(rdev_to_dev(rdev), "Device registered successfully");
 	for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) {
 		rc = device_create_file(&rdev->ibdev.dev,
 					bnxt_re_attributes[i]);
-- 
2.5.5

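For context on what the mask buys: the uverbs core consults uverbs_cmd_mask at
dispatch time and rejects any command whose bit the driver left clear, which is
why the POLL_CQ and REQ_NOTIFY_CQ bits can be omitted when libbnxt_re drives
those paths from user space. A minimal sketch of the gating logic (simplified
from memory of the core's dispatch check, not a verbatim quote of
ib_uverbs_main.c):

	/* sketch: the core refuses any command the driver did not opt into */
	static bool bnxt_re_cmd_supported(struct ib_device *ibdev, u32 command)
	{
		return !!(ibdev->uverbs_cmd_mask & (1ull << command));
	}
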
^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  21/22] bnxt_re: Add QP event handling
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (17 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 20/22] bnxt_re: Set uverbs command mask Selvin Xavier
@ 2016-12-09  6:48 ` Selvin Xavier
       [not found]   ` <1481266096-23331-22-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
  2016-12-09 15:26 ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) David Miller
                   ` (3 subsequent siblings)
  22 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford, linux-rdma
  Cc: netdev, Selvin Xavier, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

This patch implements a callback handler for processing affiliated async
events of a QP. It also implements the control-path command completion
handling.

Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c | 49 ++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
index 5b71acd..3e6bb3f 100644
--- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
@@ -246,6 +246,46 @@ static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
 	return 0;
 }
 
+static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+				       struct creq_qp_event *qp_event)
+{
+	struct bnxt_qplib_crsq *crsq = &rcfw->crsq;
+	struct bnxt_qplib_hwq *cmdq = &rcfw->cmdq;
+	struct bnxt_qplib_crsqe *crsqe;
+	u16 cbit, cookie, blocked = 0;
+	unsigned long flags;
+	u32 sw_cons;
+
+	switch (qp_event->event) {
+	case CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION:
+		break;
+	default:
+	{
+		/* Command Response */
+		spin_lock_irqsave(&cmdq->lock, flags);
+		sw_cons = HWQ_CMP(crsq->cons, crsq);
+		crsqe = &crsq->crsq[sw_cons];
+		crsq->cons++;
+		memcpy(&crsqe->qp_event, qp_event, sizeof(crsqe->qp_event));
+
+		cookie = le16_to_cpu(crsqe->qp_event.cookie);
+		blocked = cookie & RCFW_CMD_IS_BLOCKING;
+		cookie &= RCFW_MAX_COOKIE_VALUE;
+		cbit = cookie % RCFW_MAX_OUTSTANDING_CMD;
+		if (!test_and_clear_bit(cbit, rcfw->cmdq_bitmap))
+			dev_warn(&rcfw->pdev->dev,
+				 "QPLIB: CMD bit %d was not requested", cbit);
+
+		cmdq->cons += crsqe->req_size;
+		spin_unlock_irqrestore(&cmdq->lock, flags);
+		if (!blocked)
+			wake_up(&rcfw->waitq);
+		break;
+	}
+	}
+	return 0;
+}
+
 /* SP - CREQ Completion handlers */
 static void bnxt_qplib_service_creq(unsigned long data)
 {
@@ -269,6 +309,15 @@ static void bnxt_qplib_service_creq(unsigned long data)
 		type = creqe->type & CREQ_BASE_TYPE_MASK;
 		switch (type) {
 		case CREQ_BASE_TYPE_QP_EVENT:
+			if (!bnxt_qplib_process_qp_event
+			    (rcfw, (struct creq_qp_event *)creqe))
+				rcfw->creq_qp_event_processed++;
+			else {
+				dev_warn(&rcfw->pdev->dev, "QPLIB: crsqe with");
+				dev_warn(&rcfw->pdev->dev,
+					 "QPLIB: type = 0x%x not handled",
+					 type);
+			}
 			break;
 		case CREQ_BASE_TYPE_FUNC_EVENT:
 			if (!bnxt_qplib_process_func_event
-- 
2.5.5

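To see how the handler above pairs with the submit side: it clears the
cookie's bit in cmdq_bitmap and wakes rcfw->waitq, so the issuer can sleep
until its response lands. A sketch of such a waiter, consistent with the
bnxt_qplib_rcfw_wait_for_resp() call sites visible elsewhere in the series
(the body and the 10 s timeout are assumptions, not quoted from the patch):

	/* sketch: block until bnxt_qplib_process_qp_event() clears our bit */
	static int bnxt_qplib_rcfw_wait_for_resp(struct bnxt_qplib_rcfw *rcfw,
						 u16 cookie)
	{
		u16 cbit = (cookie & RCFW_MAX_COOKIE_VALUE) %
			   RCFW_MAX_OUTSTANDING_CMD;

		/* the bit is set at submit time, cleared by the CREQ handler */
		return wait_event_timeout(rcfw->waitq,
					  !test_bit(cbit, rcfw->cmdq_bitmap),
					  msecs_to_jiffies(10000) /* assumed */);
	}
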
^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH V2  22/22] bnxt_re: Add bnxt_re driver build support
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
  2016-12-09  6:48   ` [PATCH V2 07/22] bnxt_re: Support for query and modify device verbs Selvin Xavier
  2016-12-09  6:48   ` [PATCH V2 10/22] bnxt_re: Support for CQ verbs Selvin Xavier
@ 2016-12-09  6:48   ` Selvin Xavier
  2016-12-09 11:21     ` kbuild test robot
  2016-12-10  5:36   ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09  6:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, Selvin Xavier

Makefile and Kconfig changes for enabling bnxt_re compilation

Signed-off-by: Selvin Xavier <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/Kconfig            | 2 ++
 drivers/infiniband/hw/Makefile        | 1 +
 drivers/infiniband/hw/bnxtre/Kconfig  | 9 +++++++++
 drivers/infiniband/hw/bnxtre/Makefile | 6 ++++++
 4 files changed, 18 insertions(+)
 create mode 100644 drivers/infiniband/hw/bnxtre/Kconfig
 create mode 100644 drivers/infiniband/hw/bnxtre/Makefile

diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index fb3fb89..a4fab22 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -91,4 +91,6 @@ source "drivers/infiniband/hw/hfi1/Kconfig"
 
 source "drivers/infiniband/hw/qedr/Kconfig"
 
+source "drivers/infiniband/hw/bnxtre/Kconfig"
+
 endif # INFINIBAND
diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
index e7a5ed9..7227b36 100644
--- a/drivers/infiniband/hw/Makefile
+++ b/drivers/infiniband/hw/Makefile
@@ -11,3 +11,4 @@ obj-$(CONFIG_INFINIBAND_USNIC)		+= usnic/
 obj-$(CONFIG_INFINIBAND_HFI1)		+= hfi1/
 obj-$(CONFIG_INFINIBAND_HNS)		+= hns/
 obj-$(CONFIG_INFINIBAND_QEDR)		+= qedr/
+obj-$(CONFIG_INFINIBAND_BNXTRE)		+= bnxtre/
diff --git a/drivers/infiniband/hw/bnxtre/Kconfig b/drivers/infiniband/hw/bnxtre/Kconfig
new file mode 100644
index 0000000..b1f153f
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/Kconfig
@@ -0,0 +1,9 @@
+config INFINIBAND_BNXTRE
+    tristate "Broadcom Netxtreme HCA support"
+    depends on ETHERNET && NETDEVICES && PCI && INET
+    select NET_VENDOR_BROADCOM
+    select BNXT
+    ---help---
+	  This driver supports Broadcom NetXtreme-E 10/25/40/50 gigabit
+	  RoCE HCAs.  To compile this driver as a module, choose M here:
+	  the module will be called bnxt_re.
diff --git a/drivers/infiniband/hw/bnxtre/Makefile b/drivers/infiniband/hw/bnxtre/Makefile
new file mode 100644
index 0000000..39df4f1
--- /dev/null
+++ b/drivers/infiniband/hw/bnxtre/Makefile
@@ -0,0 +1,6 @@
+
+ccflags-y := -Idrivers/net/ethernet/broadcom/bnxt
+obj-$(CONFIG_INFINIBAND_BNXTRE) += bnxt_re.o
+bnxt_re-y := bnxt_re_main.o bnxt_re_ib_verbs.o bnxt_re_debugfs.o \
+	     bnxt_qplib_res.o bnxt_qplib_rcfw.o	\
+	     bnxt_qplib_sp.o bnxt_qplib_fp.o
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 22/22] bnxt_re: Add bnxt_re driver build support
  2016-12-09  6:48   ` [PATCH V2 22/22] bnxt_re: Add bnxt_re driver build support Selvin Xavier
@ 2016-12-09 11:21     ` kbuild test robot
  0 siblings, 0 replies; 46+ messages in thread
From: kbuild test robot @ 2016-12-09 11:21 UTC (permalink / raw)
  To: Selvin Xavier; +Cc: kbuild-all, dledford, linux-rdma, netdev, Selvin Xavier

[-- Attachment #1: Type: text/plain, Size: 7977 bytes --]

Hi Selvin,

[auto build test ERROR on rdma/master]
[also build test ERROR on v4.9-rc8 next-20161208]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Selvin-Xavier/Broadcom-RoCE-Driver-bnxt_re/20161209-154823
base:   https://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git master
config: parisc-allyesconfig (attached as .config)
compiler: hppa-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=parisc 

All errors (new ones prefixed by >>):

   drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c: In function 'bnxt_qplib_creq_irq':
>> drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c:359:2: error: implicit declaration of function 'prefetch' [-Werror=implicit-function-declaration]
     prefetch(&creq_ptr[CREQ_PG(sw_cons)][CREQ_IDX(sw_cons)]);
     ^~~~~~~~
   cc1: some warnings being treated as errors
--
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_service_nq':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:145:29: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
       bnxt_qplib_arm_cq_enable((struct bnxt_qplib_cq *)
                                ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:147:29: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
       if (!nq->cqn_handler(nq, (struct bnxt_qplib_cq *)
                                ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_nq_irq':
>> drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:182:2: error: implicit declaration of function 'prefetch' [-Werror=implicit-function-declaration]
     prefetch(&nq_ptr[NQE_PG(sw_cons)][NQE_IDX(sw_cons)]);
     ^~~~~~~~
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_create_qp':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:484:16: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
      psn_search = (unsigned long long int)
                   ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_destroy_qp':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1071:22: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     __clean_cq(qp->scq, (u64)qp);
                         ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1073:23: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
      __clean_cq(qp->rcq, (u64)qp);
                          ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function '__flush_sq':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1630:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
      cqe->qp_handle = (u64)qp;
                       ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function '__flush_rq':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1664:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
      cqe->qp_handle = (u64)qp;
                       ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_cq_process_req':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1688:7: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
          ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1720:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
      cqe->qp_handle = (u64)qp;
                       ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_cq_process_res_rc':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1782:7: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
          ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1794:19: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     cqe->qp_handle = (u64)qp;
                      ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_cq_process_res_ud':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1836:7: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
          ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1847:19: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     cqe->qp_handle = (u64)qp;
                      ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_cq_process_res_raweth_qp1':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1893:7: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
          ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1902:19: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     cqe->qp_handle = (u64)qp;
                      ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c: In function 'bnxt_qplib_cq_process_terminal':
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1964:7: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     qp = (struct bnxt_qplib_qp *)le64_to_cpu(hwcqe->qp_handle);
          ^
   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:2005:21: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
       cqe->qp_handle = (u64)qp;
                        ^
   cc1: some warnings being treated as errors

vim +/prefetch +359 drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c

bd53aa95 Selvin Xavier 2016-12-08  343  		CREQ_DB_REARM(rcfw->creq_bar_reg_iomem, raw_cons,
bd53aa95 Selvin Xavier 2016-12-08  344  			      creq->max_elements);
bd53aa95 Selvin Xavier 2016-12-08  345  	}
bd53aa95 Selvin Xavier 2016-12-08  346  	spin_unlock_irqrestore(&creq->lock, flags);
bd53aa95 Selvin Xavier 2016-12-08  347  }
bd53aa95 Selvin Xavier 2016-12-08  348  
bd53aa95 Selvin Xavier 2016-12-08  349  static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
bd53aa95 Selvin Xavier 2016-12-08  350  {
bd53aa95 Selvin Xavier 2016-12-08  351  	struct bnxt_qplib_rcfw *rcfw = dev_instance;
bd53aa95 Selvin Xavier 2016-12-08  352  	struct bnxt_qplib_hwq *creq = &rcfw->creq;
bd53aa95 Selvin Xavier 2016-12-08  353  	struct creq_base **creq_ptr;
bd53aa95 Selvin Xavier 2016-12-08  354  	u32 sw_cons;
bd53aa95 Selvin Xavier 2016-12-08  355  
bd53aa95 Selvin Xavier 2016-12-08  356  	/* Prefetch the CREQ element */
bd53aa95 Selvin Xavier 2016-12-08  357  	sw_cons = HWQ_CMP(creq->cons, creq);
bd53aa95 Selvin Xavier 2016-12-08  358  	creq_ptr = (struct creq_base **)rcfw->creq.pbl_ptr;
bd53aa95 Selvin Xavier 2016-12-08 @359  	prefetch(&creq_ptr[CREQ_PG(sw_cons)][CREQ_IDX(sw_cons)]);
bd53aa95 Selvin Xavier 2016-12-08  360  
bd53aa95 Selvin Xavier 2016-12-08  361  	tasklet_schedule(&rcfw->worker);
bd53aa95 Selvin Xavier 2016-12-08  362  
bd53aa95 Selvin Xavier 2016-12-08  363  	return IRQ_HANDLED;
bd53aa95 Selvin Xavier 2016-12-08  364  }
bd53aa95 Selvin Xavier 2016-12-08  365  
bd53aa95 Selvin Xavier 2016-12-08  366  /* RCFW */
bd53aa95 Selvin Xavier 2016-12-08  367  int bnxt_qplib_deinit_rcfw(struct bnxt_qplib_rcfw *rcfw)

:::::: The code at line 359 was first introduced by commit
:::::: bd53aa95a8998c3192837b1d2d6c6e795adf3276 bnxt_re: Enabling RoCE control path

:::::: TO: Selvin Xavier <selvin.xavier@broadcom.com>
:::::: CC: 0day robot <fengguang.wu@intel.com>

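The implicit-declaration error above is the classic missing-include case:
prefetch() is declared in <linux/prefetch.h>, so the likely fix is a one-line
include at the top of both bnxt_qplib_rcfw.c and bnxt_qplib_fp.c:

	#include <linux/prefetch.h>	/* declares prefetch() */
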
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 47317 bytes --]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 21/22] bnxt_re: Add QP event handling
       [not found]   ` <1481266096-23331-22-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
@ 2016-12-09 12:12     ` Sergei Shtylyov
  0 siblings, 0 replies; 46+ messages in thread
From: Sergei Shtylyov @ 2016-12-09 12:12 UTC (permalink / raw)
  To: Selvin Xavier, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, Eddie Wai, Devesh Sharma,
	Somnath Kotur, Sriharsha Basavapatna

Hello!

On 12/9/2016 9:48 AM, Selvin Xavier wrote:

> This patch implements a callback handler for processing affiliated async
> events of a QP. It also implements the control-path command completion
> handling.
>
> Signed-off-by: Eddie Wai <eddie.wai-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Selvin Xavier <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> ---
>  drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c | 49 ++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
>
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
> index 5b71acd..3e6bb3f 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
> @@ -246,6 +246,46 @@ static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
>  	return 0;
>  }
>
> +static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
> +				       struct creq_qp_event *qp_event)
> +{
> +	struct bnxt_qplib_crsq *crsq = &rcfw->crsq;
> +	struct bnxt_qplib_hwq *cmdq = &rcfw->cmdq;
> +	struct bnxt_qplib_crsqe *crsqe;
> +	u16 cbit, cookie, blocked = 0;
> +	unsigned long flags;
> +	u32 sw_cons;
> +
> +	switch (qp_event->event) {
> +	case CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION:
> +		break;
> +	default:
> +	{
> +		/* Command Response */
> +		spin_lock_irqsave(&cmdq->lock, flags);
> +		sw_cons = HWQ_CMP(crsq->cons, crsq);
> +		crsqe = &crsq->crsq[sw_cons];
> +		crsq->cons++;
> +		memcpy(&crsqe->qp_event, qp_event, sizeof(crsqe->qp_event));
> +
> +		cookie = le16_to_cpu(crsqe->qp_event.cookie);
> +		blocked = cookie & RCFW_CMD_IS_BLOCKING;
> +		cookie &= RCFW_MAX_COOKIE_VALUE;
> +		cbit = cookie % RCFW_MAX_OUTSTANDING_CMD;
> +		if (!test_and_clear_bit(cbit, rcfw->cmdq_bitmap))
> +			dev_warn(&rcfw->pdev->dev,
> +				 "QPLIB: CMD bit %d was not requested", cbit);
> +
> +		cmdq->cons += crsqe->req_size;
> +		spin_unlock_irqrestore(&cmdq->lock, flags);
> +		if (!blocked)
> +			wake_up(&rcfw->waitq);
> +		break;
> +	}
> +	}

    Hmm, strange indentation... I don't see why you need {} in the *default*
arm at all...

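For reference, a sketch of the same arm with the braces dropped; every local
it uses is already declared at function scope, so the extra block buys
nothing:

	switch (qp_event->event) {
	case CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION:
		break;
	default:
		/* Command Response */
		spin_lock_irqsave(&cmdq->lock, flags);
		sw_cons = HWQ_CMP(crsq->cons, crsq);
		crsqe = &crsq->crsq[sw_cons];
		crsq->cons++;
		memcpy(&crsqe->qp_event, qp_event, sizeof(crsqe->qp_event));

		cookie = le16_to_cpu(crsqe->qp_event.cookie);
		blocked = cookie & RCFW_CMD_IS_BLOCKING;
		cookie &= RCFW_MAX_COOKIE_VALUE;
		cbit = cookie % RCFW_MAX_OUTSTANDING_CMD;
		if (!test_and_clear_bit(cbit, rcfw->cmdq_bitmap))
			dev_warn(&rcfw->pdev->dev,
				 "QPLIB: CMD bit %d was not requested", cbit);

		cmdq->cons += crsqe->req_size;
		spin_unlock_irqrestore(&cmdq->lock, flags);
		if (!blocked)
			wake_up(&rcfw->waitq);
		break;
	}
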
> +	return 0;
> +}
> +
>  /* SP - CREQ Completion handlers */
>  static void bnxt_qplib_service_creq(unsigned long data)
>  {
> @@ -269,6 +309,15 @@ static void bnxt_qplib_service_creq(unsigned long data)
>  		type = creqe->type & CREQ_BASE_TYPE_MASK;
>  		switch (type) {
>  		case CREQ_BASE_TYPE_QP_EVENT:
> +			if (!bnxt_qplib_process_qp_event
> +			    (rcfw, (struct creq_qp_event *)creqe))
> +				rcfw->creq_qp_event_processed++;
> +			else {

    CodingStyle: {} should be used on all branches if it is used on at
least one branch of an *if*.

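A sketch of the shape that rule asks for here, with both branches braced (the
two dev_warn calls could also collapse into one):

	if (!bnxt_qplib_process_qp_event(rcfw,
					 (struct creq_qp_event *)creqe)) {
		rcfw->creq_qp_event_processed++;
	} else {
		dev_warn(&rcfw->pdev->dev,
			 "QPLIB: crsqe with type = 0x%x not handled", type);
	}
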
> +				dev_warn(&rcfw->pdev->dev, "QPLIB: crsqe with");
> +				dev_warn(&rcfw->pdev->dev,
> +					 "QPLIB: type = 0x%x not handled",
> +					 type);
> +			}
>  			break;
>  		case CREQ_BASE_TYPE_FUNC_EVENT:
>  			if (!bnxt_qplib_process_func_event

MBR, Sergei


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (18 preceding siblings ...)
  2016-12-09  6:48 ` [PATCH V2 21/22] bnxt_re: Add QP event handling Selvin Xavier
@ 2016-12-09 15:26 ` David Miller
  2016-12-09 17:52   ` Selvin Xavier
  2016-12-09 16:27 ` Leon Romanovsky
                   ` (2 subsequent siblings)
  22 siblings, 1 reply; 46+ messages in thread
From: David Miller @ 2016-12-09 15:26 UTC (permalink / raw)
  To: selvin.xavier; +Cc: dledford, linux-rdma, netdev

From: Selvin Xavier <selvin.xavier@broadcom.com>
Date: Thu,  8 Dec 2016 22:47:54 -0800

> This series introduces the RoCE driver for the Broadcom
> NetXtreme-E 10/25/40/50 gigabit RoCE HCAs. 
> This driver is dependent on the bnxt_en NIC driver and is 
> based on the bnxt_re branch in Doug's repository. bnxt_en changes
> required for this patch series is already available in this branch.
> 
> I am preparing a git repository with these changes as per Jason's
> comment and will share the details later today.

If this is targeted at the net-next tree, it is too late, as I closed
the net-next tree two nights ago.

Please resubmit this after the upcoming merge window closes.

Thanks.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2  00/22] Broadcom RoCE Driver (bnxt_re)
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (19 preceding siblings ...)
  2016-12-09 15:26 ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) David Miller
@ 2016-12-09 16:27 ` Leon Romanovsky
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
  2016-12-13  6:54 ` Or Gerlitz
  22 siblings, 0 replies; 46+ messages in thread
From: Leon Romanovsky @ 2016-12-09 16:27 UTC (permalink / raw)
  To: Selvin Xavier; +Cc: dledford, linux-rdma, netdev

[-- Attachment #1: Type: text/plain, Size: 1178 bytes --]

On Thu, Dec 08, 2016 at 10:47:54PM -0800, Selvin Xavier wrote:
> 

...

>  create mode 100644 include/uapi/rdma/bnxt_re_uverbs_abi.h

Please use the already-established naming format for this file.
It will simplify our future integration with the rdma-core library.

Thanks

➜  linux-rdma git:(master) ls -l include/uapi/rdma/*-abi.h 
-rw-r--r-- 1 leonro leonro 2291 Dec  7 13:07 include/uapi/rdma/cxgb3-abi.h
-rw-r--r-- 1 leonro leonro 2488 Dec  7 13:07 include/uapi/rdma/cxgb4-abi.h
-rw-r--r-- 1 leonro leonro 2864 Dec  7 13:07 include/uapi/rdma/mlx4-abi.h
-rw-r--r-- 1 leonro leonro 6103 Dec  8 12:52 include/uapi/rdma/mlx5-abi.h
-rw-r--r-- 1 leonro leonro 2932 Dec  7 13:07 include/uapi/rdma/mthca-abi.h
-rw-r--r-- 1 leonro leonro 3380 Dec  7 13:07 include/uapi/rdma/nes-abi.h
-rw-r--r-- 1 leonro leonro 3918 Dec  7 13:07 include/uapi/rdma/ocrdma-abi.h
-rw-r--r-- 1 leonro leonro 2559 Dec  7 13:07 include/uapi/rdma/qedr-abi.h

> 
> -- 
> 2.5.5
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
  2016-12-09 15:26 ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) David Miller
@ 2016-12-09 17:52   ` Selvin Xavier
  0 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-09 17:52 UTC (permalink / raw)
  To: David Miller; +Cc: Doug Ledford, linux-rdma, netdev

On Fri, Dec 9, 2016 at 8:56 PM, David Miller <davem@davemloft.net> wrote:
> From: Selvin Xavier <selvin.xavier@broadcom.com>
> Date: Thu,  8 Dec 2016 22:47:54 -0800
>
>> This series introduces the RoCE driver for the Broadcom
>> NetXtreme-E 10/25/40/50 gigabit RoCE HCAs.
>> This driver is dependent on the bnxt_en NIC driver and is
>> based on the bnxt_re branch in Doug's repository. bnxt_en changes
>> required for this patch series is already available in this branch.
>>
>> I am preparing a git repository with these changes as per Jason's
>> comment and will share the details later today.
>
> If this is targetted at the net-next tree, it is too late as I've
> closed the net-next tree two nights ago.
>

This patch series targets the linux-rdma tree. netdev is copied since
this series depends on bnxt_en.

Thanks
Selvin

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 03/22] bnxt_re: register with the NIC driver
  2016-12-09  6:47 ` [PATCH V2 03/22] bnxt_re: register with the NIC driver Selvin Xavier
@ 2016-12-10  0:03   ` Jonathan Toppins
  0 siblings, 0 replies; 46+ messages in thread
From: Jonathan Toppins @ 2016-12-10  0:03 UTC (permalink / raw)
  To: Selvin Xavier, dledford, linux-rdma
  Cc: netdev, Eddie Wai, Devesh Sharma, Somnath Kotur, Sriharsha Basavapatna

On 12/09/2016 01:47 AM, Selvin Xavier wrote:
> This patch handles registration with the bnxt_en driver. The driver registers
> with the netdev notifier chain. Upon receiving a NETDEV_REGISTER event, the
> driver in turn registers with the bnxt_en driver.
> 	1. bnxt_en's ulp_probe function returns a structure that contains
> 	   information about the device and additional entry points.
> 	2. The bnxt_en driver returns a 'struct bnxt_eth_dev' that contains the
> 	   set of operation vectors that the RoCE driver invokes later.
> 	3. bnxt_request_msix() allows the RoCE driver to specify the number of
> 	   MSI-X vectors that are needed.
> 	4. bnxt_send_fw_msg() can be used to send messages to the FW.
> 	5. bnxt_register_async_events() can be used to register for async event
> 	   callbacks.
> 
> v2: Remove some sparse warnings. Also remove some unused code from the unreg path.
> 
> Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
> Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
> Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
> ---
>  drivers/infiniband/hw/bnxtre/bnxt_re.h      |  48 +++
>  drivers/infiniband/hw/bnxtre/bnxt_re_main.c | 436 ++++++++++++++++++++++++++++
>  2 files changed, 484 insertions(+)
> 

[...]

>  #endif
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
> index ebe1c69..029824a 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
> +
> +static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
> +{
> +	int i, j, rc;
> +
> +	/* Registered a new RoCE device instance to netdev */
> +	rc = bnxt_re_register_netdev(rdev);
> +	if (rc) {
> +		pr_err("Failed to register with netedev: %#x\n", rc);
> +		return -EINVAL;
> +	}
> +	set_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
> +
> +	rc = bnxt_re_request_msix(rdev);
> +	if (rc) {
> +		pr_err("Failed to get MSI-X vectors: %#x\n", rc);
> +		rc = -EINVAL;
> +		goto fail;
> +	}
> +	set_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags);

Though this exit path looks correct (need to verify) once all the patches
are applied, it looks incorrect when considering only this specific patch.
I think you need the following:

+ return 0;

> +
> +fail:
> +	bnxt_re_ib_unreg(rdev, true);
> +	return rc;
> +}
> +

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
                     ` (2 preceding siblings ...)
  2016-12-09  6:48   ` [PATCH V2 22/22] bnxt_re: Add bnxt_re driver build support Selvin Xavier
@ 2016-12-10  5:36   ` Selvin Xavier
       [not found]     ` <CA+sbYW1ZEa5fndGkvN8OXr-orcUx4jaL73Di8zBJQX_uCdK=Ww-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2016-12-12 16:54   ` Jonathan Toppins
  2016-12-12 23:52   ` Doug Ledford
  5 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-10  5:36 UTC (permalink / raw)
  To: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, Selvin Xavier

On Fri, Dec 9, 2016 at 12:17 PM, Selvin Xavier
<selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> wrote:
> I am preparing a git repository with these changes as per Jason's
> comment and will share the details later today.

Please use bnxt_re branch in this git repository.

https://github.com/Broadcom/linux-rdma-nxt.git

Thanks,
Selvin Xavier

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 18/22] bnxt_re: Support for DCB
  2016-12-09  6:48 ` [PATCH V2 18/22] bnxt_re: Support for DCB Selvin Xavier
@ 2016-12-10 13:50   ` Or Gerlitz
  2016-12-13  6:25     ` Selvin Xavier
  0 siblings, 1 reply; 46+ messages in thread
From: Or Gerlitz @ 2016-12-10 13:50 UTC (permalink / raw)
  To: Selvin Xavier
  Cc: Doug Ledford, linux-rdma, Linux Netdev List, Eddie Wai,
	Devesh Sharma, Somnath Kotur, Sriharsha Basavapatna

On Fri, Dec 9, 2016 at 8:48 AM, Selvin Xavier
<selvin.xavier@broadcom.com> wrote:
> This patch queries the configured RoCE APP Priority on the host
> using the dcbnl API and programs the RoCE FW with the corresponding
> Traffic Class(es) for the priority.

> +#define BNXT_RE_ROCE_V1_ETH_TYPE       0x8915
> +#define BNXT_RE_ROCE_V2_PORT_NO                4791

I believe these two are already defined; try a git grep on each under include/.

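For the v2 port at least, a kernel-wide definition already exists
(ROCE_V2_UDP_DPORT in include/rdma/ib_verbs.h, if memory serves), so the
driver could presumably reuse it instead of carrying a local copy:

	#include <rdma/ib_verbs.h>

	/* sketch: reuse the existing constant rather than redefining 4791 */
	#define BNXT_RE_ROCE_V2_PORT_NO		ROCE_V2_UDP_DPORT
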
^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
                     ` (3 preceding siblings ...)
  2016-12-10  5:36   ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
@ 2016-12-12 16:54   ` Jonathan Toppins
       [not found]     ` <9cf03e2b-a16d-19ec-a8ce-14f24272bf6a-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2016-12-12 23:52   ` Doug Ledford
  5 siblings, 1 reply; 46+ messages in thread
From: Jonathan Toppins @ 2016-12-12 16:54 UTC (permalink / raw)
  To: Selvin Xavier, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA

On 12/09/2016 01:47 AM, Selvin Xavier wrote:
> This series introduces the RoCE driver for the Broadcom
> NetXtreme-E 10/25/40/50 gigabit RoCE HCAs. 
> This driver is dependent on the bnxt_en NIC driver and is 
> based on the bnxt_re branch in Doug's repository. bnxt_en changes
> required for this patch series is already available in this branch.
> 
> I am preparing a git repository with these changes as per Jason's
> comment and will share the details later today.
> 
> v1-> v2:
>   * The license text in each file updated to reflect Dual license.
>   * Makefile and Kconfig changes are pushed to the last patch
>   * Moved bnxt_re_uverbs_abi.h to include/uapi/rdma folder
>   * Remove duplicate structure definitions from bnxt_re_hsi.h as
>     it is available in the corresponding bnxt_en header file (bnxt_hsi.h)
>   * Removed some unused code reported during code review.
>   * Fixed few sparse warnings
> 

I get the following sparse errors (filtered to only the bnxt_re ones);
please let me know if they are false positives:

$ make C=2  drivers/net/ethernet/broadcom/bnxt/bnxt_en.ko
drivers/infiniband/hw/bnxtre/bnxt_re.ko
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHECK   arch/x86/purgatory/purgatory.c
[...]
  CHECK   arch/x86/purgatory/sha256.c
  CHECK   arch/x86/purgatory/string.c
[...]
  CHK     include/generated/bounds.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHECK   scripts/mod/empty.c
  CHECK   drivers/net/ethernet/broadcom/bnxt/bnxt.c
  CHECK   drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
  CHECK   drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
  CHECK   drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
  CHECK   drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
  MODPOST 2 modules
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_re_main.c
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
[...]
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c:729:6: warning: symbol
'bnxt_qplib_cleanup_pkey_tbl' was not declared. Should it be static?
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
  CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1015:22: warning: context
imbalance in 'bnxt_qplib_lock_cqs' - wrong count at exit
drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1030:28: warning: context
imbalance in 'bnxt_qplib_unlock_cqs' - unexpected unlock
  MODPOST 2 modules

-Jon
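
The first warning is usually fixed by marking the symbol static (or declaring
it in a header if it really is shared); the two context-imbalance warnings are
typically silenced with sparse's lock annotations rather than a code change.
A generic sketch of the annotation pattern, not the driver's actual functions:

	/* sketch: tell sparse these helpers intentionally leave locks held */
	static void example_lock_pair(spinlock_t *a, spinlock_t *b)
		__acquires(a) __acquires(b)
	{
		spin_lock(a);
		spin_lock_nested(b, SINGLE_DEPTH_NESTING);
	}

	static void example_unlock_pair(spinlock_t *a, spinlock_t *b)
		__releases(a) __releases(b)
	{
		spin_unlock(b);
		spin_unlock(a);
	}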

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found]     ` <CA+sbYW1ZEa5fndGkvN8OXr-orcUx4jaL73Di8zBJQX_uCdK=Ww-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-12-12 17:07       ` Jason Gunthorpe
       [not found]         ` <20161212170701.GA28387-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  0 siblings, 1 reply; 46+ messages in thread
From: Jason Gunthorpe @ 2016-12-12 17:07 UTC (permalink / raw)
  To: Selvin Xavier
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	netdev-u79uwXL29TY76Z2rM5mHXA

On Sat, Dec 10, 2016 at 11:06:58AM +0530, Selvin Xavier wrote:
> On Fri, Dec 9, 2016 at 12:17 PM, Selvin Xavier
> <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> wrote:
> > I am preparing a git repository with these changes as per Jason's
> > comment and will share the details later today.
> 
> Please use bnxt_re branch in this git repository.
> 
> https://github.com/Broadcom/linux-rdma-nxt.git

Why are you using __packed in bnxt_re_uverbs_abi.h? That doesn't seem
necessary. It is a good idea to make sure all those structures are a
multiple of 64 bits (add explicit reserved fields), and make sure you
test 32-bit verbs as well.

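A hypothetical illustration of that layout rule (this struct is invented for
the example, not taken from the driver): keep every field naturally aligned
and pad the tail to a 64-bit multiple, and the 32-bit and 64-bit ABIs agree
without __packed:

	#include <linux/types.h>

	struct example_uresp {
		__u32 dev_id;
		__u32 max_qp;	/* naturally aligned, no hidden holes */
		__u64 comp_mask;
		__u32 flags;
		__u32 rsvd;	/* explicit pad: size stays a multiple of 8 */
	};
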
Why are you using debugfs just to export counters? Isn't the core code
counter framework good enough?

Please try to avoid writing functions as defines (e.g. rdev_to_dev,
to_bnxt_re, SQE_PG, RCFW_CMDQ_COOKIE, PTR_PG, etc.).

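For the first of those, a sketch of the usual replacement, a static inline
with real types (the body is inferred from how the macro is used in the
series, not quoted from it):

	static inline struct device *rdev_to_dev(struct bnxt_re_dev *rdev)
	{
		return rdev ? &rdev->ibdev.dev : NULL;
	}
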
There is something wrong with the tabs and spaces (see
https://github.com/Broadcom/linux-rdma-nxt/blob/03e23b087f7e86ea28656273994e065827210ce5/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h)

FWIW, I really dislike the column-alignment style; it is so hard to
maintain...

Jason

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2  13/22] bnxt_re: Support QP verbs
       [not found]   ` <1481266096-23331-14-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
@ 2016-12-12 18:27     ` Leon Romanovsky
       [not found]       ` <20161212182737.GC8204-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  0 siblings, 1 reply; 46+ messages in thread
From: Leon Romanovsky @ 2016-12-12 18:27 UTC (permalink / raw)
  To: Selvin Xavier
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, netdev-u79uwXL29TY76Z2rM5mHXA,
	Eddie Wai, Devesh Sharma, Somnath Kotur, Sriharsha Basavapatna

[-- Attachment #1: Type: text/plain, Size: 68951 bytes --]

On Thu, Dec 08, 2016 at 10:48:07PM -0800, Selvin Xavier wrote:
> This patch implements create_qp, destroy_qp, query_qp and modify_qp verbs.
>
> v2: Fixed sparse warnings
>
> Signed-off-by: Eddie Wai <eddie.wai-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Selvin Xavier <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> ---
>  drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 873 ++++++++++++++++++++++++
>  drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    | 250 +++++++
>  drivers/infiniband/hw/bnxtre/bnxt_re.h          |  14 +
>  drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 762 +++++++++++++++++++++
>  drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  21 +
>  drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   6 +
>  include/uapi/rdma/bnxt_re_uverbs_abi.h          |  10 +
>  7 files changed, 1936 insertions(+)
>
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
> index 636306f..edc9411 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
> @@ -50,6 +50,69 @@
>  #include "bnxt_qplib_fp.h"
>
>  static void bnxt_qplib_arm_cq_enable(struct bnxt_qplib_cq *cq);
> +
> +static void bnxt_qplib_free_qp_hdr_buf(struct bnxt_qplib_res *res,
> +				       struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_q *rq = &qp->rq;
> +	struct bnxt_qplib_q *sq = &qp->sq;
> +
> +	if (qp->rq_hdr_buf)
> +		dma_free_coherent(&res->pdev->dev,
> +				  rq->hwq.max_elements * qp->rq_hdr_buf_size,
> +				  qp->rq_hdr_buf, qp->rq_hdr_buf_map);
> +	if (qp->sq_hdr_buf)
> +		dma_free_coherent(&res->pdev->dev,
> +				  sq->hwq.max_elements * qp->sq_hdr_buf_size,
> +				  qp->sq_hdr_buf, qp->sq_hdr_buf_map);
> +	qp->rq_hdr_buf = NULL;
> +	qp->sq_hdr_buf = NULL;
> +	qp->rq_hdr_buf_map = 0;
> +	qp->sq_hdr_buf_map = 0;
> +	qp->sq_hdr_buf_size = 0;
> +	qp->rq_hdr_buf_size = 0;
> +}
> +
> +static int bnxt_qplib_alloc_qp_hdr_buf(struct bnxt_qplib_res *res,
> +				       struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_q *rq = &qp->rq;
> +	struct bnxt_qplib_q *sq = &qp->sq;
> +	int rc = 0;
> +
> +	if (qp->sq_hdr_buf_size && sq->hwq.max_elements) {
> +		qp->sq_hdr_buf = dma_alloc_coherent(&res->pdev->dev,
> +					sq->hwq.max_elements *
> +					qp->sq_hdr_buf_size,
> +					&qp->sq_hdr_buf_map, GFP_KERNEL);
> +		if (!qp->sq_hdr_buf) {
> +			rc = -ENOMEM;
> +			dev_err(&res->pdev->dev,
> +				"QPLIB: Failed to create sq_hdr_buf");
> +			goto fail;
> +		}
> +	}
> +
> +	if (qp->rq_hdr_buf_size && rq->hwq.max_elements) {
> +		qp->rq_hdr_buf = dma_alloc_coherent(&res->pdev->dev,
> +						    rq->hwq.max_elements *
> +						    qp->rq_hdr_buf_size,
> +						    &qp->rq_hdr_buf_map,
> +						    GFP_KERNEL);
> +		if (!qp->rq_hdr_buf) {
> +			rc = -ENOMEM;
> +			dev_err(&res->pdev->dev,
> +				"QPLIB: Failed to create rq_hdr_buf");
> +			goto fail;
> +		}
> +	}
> +	return 0;
> +
> +fail:
> +	bnxt_qplib_free_qp_hdr_buf(res, qp);
> +	return rc;
> +}
> +
>  static void bnxt_qplib_service_nq(unsigned long data)
>  {
>  	struct bnxt_qplib_nq *nq = (struct bnxt_qplib_nq *)data;
> @@ -215,6 +278,816 @@ int bnxt_qplib_alloc_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq)
>  	return 0;
>  }
>
> +/* QP */
> +int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
> +	struct cmdq_create_qp1 req;
> +	struct creq_create_qp1_resp *resp;
> +	struct bnxt_qplib_pbl *pbl;
> +	struct bnxt_qplib_q *sq = &qp->sq;
> +	struct bnxt_qplib_q *rq = &qp->rq;
> +	int rc;
> +	u16 cmd_flags = 0;
> +	u32 qp_flags = 0;
> +
> +	RCFW_CMD_PREP(req, CREATE_QP1, cmd_flags);
> +
> +	/* General */
> +	req.type = qp->type;
> +	req.dpi = cpu_to_le32(qp->dpi->dpi);
> +	req.qp_handle = cpu_to_le64(qp->qp_handle);
> +
> +	/* SQ */
> +	sq->hwq.max_elements = sq->max_wqe;
> +	rc = bnxt_qplib_alloc_init_hwq(res->pdev, &sq->hwq, NULL, 0,
> +				       &sq->hwq.max_elements,
> +				       BNXT_QPLIB_MAX_SQE_ENTRY_SIZE, 0,
> +				       PAGE_SIZE, HWQ_TYPE_QUEUE);
> +	if (rc)
> +		goto exit;
> +
> +	sq->swq = kcalloc(sq->hwq.max_elements, sizeof(*sq->swq), GFP_KERNEL);
> +	if (!sq->swq) {
> +		rc = -ENOMEM;
> +		goto fail_sq;
> +	}
> +	pbl = &sq->hwq.pbl[PBL_LVL_0];
> +	req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
> +	req.sq_pg_size_sq_lvl =
> +		((sq->hwq.level & CMDQ_CREATE_QP1_SQ_LVL_MASK)
> +				<<  CMDQ_CREATE_QP1_SQ_LVL_SFT) |
> +		(pbl->pg_size == ROCE_PG_SIZE_4K ?
> +				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_4K :
> +		 pbl->pg_size == ROCE_PG_SIZE_8K ?
> +				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_8K :
> +		 pbl->pg_size == ROCE_PG_SIZE_64K ?
> +				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_64K :
> +		 pbl->pg_size == ROCE_PG_SIZE_2M ?
> +				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_2M :
> +		 pbl->pg_size == ROCE_PG_SIZE_8M ?
> +				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_8M :
> +		 pbl->pg_size == ROCE_PG_SIZE_1G ?
> +				CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_1G :
> +		 CMDQ_CREATE_QP1_SQ_PG_SIZE_PG_4K);
> +
> +	if (qp->scq)
> +		req.scq_cid = cpu_to_le32(qp->scq->id);
> +
> +	qp_flags |= CMDQ_CREATE_QP1_QP_FLAGS_RESERVED_LKEY_ENABLE;
> +
> +	/* RQ */
> +	if (rq->max_wqe) {
> +		rq->hwq.max_elements = qp->rq.max_wqe;
> +		rc = bnxt_qplib_alloc_init_hwq(res->pdev, &rq->hwq, NULL, 0,
> +					       &rq->hwq.max_elements,
> +					       BNXT_QPLIB_MAX_RQE_ENTRY_SIZE, 0,
> +					       PAGE_SIZE, HWQ_TYPE_QUEUE);
> +		if (rc)
> +			goto fail_sq;
> +
> +		rq->swq = kcalloc(rq->hwq.max_elements, sizeof(*rq->swq),
> +				  GFP_KERNEL);
> +		if (!rq->swq) {
> +			rc = -ENOMEM;
> +			goto fail_rq;
> +		}
> +		pbl = &rq->hwq.pbl[PBL_LVL_0];
> +		req.rq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
> +		req.rq_pg_size_rq_lvl =
> +			((rq->hwq.level & CMDQ_CREATE_QP1_RQ_LVL_MASK) <<
> +			 CMDQ_CREATE_QP1_RQ_LVL_SFT) |
> +				(pbl->pg_size == ROCE_PG_SIZE_4K ?
> +					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_4K :
> +				 pbl->pg_size == ROCE_PG_SIZE_8K ?
> +					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_8K :
> +				 pbl->pg_size == ROCE_PG_SIZE_64K ?
> +					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_64K :
> +				 pbl->pg_size == ROCE_PG_SIZE_2M ?
> +					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_2M :
> +				 pbl->pg_size == ROCE_PG_SIZE_8M ?
> +					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_8M :
> +				 pbl->pg_size == ROCE_PG_SIZE_1G ?
> +					CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_1G :
> +				 CMDQ_CREATE_QP1_RQ_PG_SIZE_PG_4K);
> +		if (qp->rcq)
> +			req.rcq_cid = cpu_to_le32(qp->rcq->id);
> +	}
> +
> +	/* Header buffer - allow hdr_buf pass in */
> +	rc = bnxt_qplib_alloc_qp_hdr_buf(res, qp);
> +	if (rc) {
> +		rc = -ENOMEM;
> +		goto fail;
> +	}
> +	req.qp_flags = cpu_to_le32(qp_flags);
> +	req.sq_size = cpu_to_le32(sq->hwq.max_elements);
> +	req.rq_size = cpu_to_le32(rq->hwq.max_elements);
> +
> +	req.sq_fwo_sq_sge =
> +		cpu_to_le16((sq->max_sge & CMDQ_CREATE_QP1_SQ_SGE_MASK) <<
> +			    CMDQ_CREATE_QP1_SQ_SGE_SFT);
> +	req.rq_fwo_rq_sge =
> +		cpu_to_le16((rq->max_sge & CMDQ_CREATE_QP1_RQ_SGE_MASK) <<
> +			    CMDQ_CREATE_QP1_RQ_SGE_SFT);
> +
> +	req.pd_id = cpu_to_le32(qp->pd->id);
> +
> +	resp = (struct creq_create_qp1_resp *)
> +			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
> +						     NULL, 0);
> +	if (!resp) {
> +		dev_err(&res->pdev->dev, "QPLIB: FP: CREATE_QP1 send failed");
> +		rc = -EINVAL;
> +		goto fail;
> +	}
> +	/**/

It looks like you forgot to add text to the comment section.

> +	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
> +		/* Cmd timed out */
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP1 timed out");
> +		rc = -ETIMEDOUT;
> +		goto fail;
> +	}
> +	if (RCFW_RESP_STATUS(resp) ||
> +	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP1 failed ");
> +		dev_err(&rcfw->pdev->dev,
> +			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
> +			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
> +			RCFW_RESP_COOKIE(resp));
> +		rc = -EINVAL;
> +		goto fail;
> +	}
> +	qp->id = le32_to_cpu(resp->xid);
> +	qp->cur_qp_state = CMDQ_MODIFY_QP_NEW_STATE_RESET;
> +	sq->flush_in_progress = false;
> +	rq->flush_in_progress = false;
> +
> +	return 0;
> +
> +fail:
> +	bnxt_qplib_free_qp_hdr_buf(res, qp);
> +fail_rq:
> +	bnxt_qplib_free_hwq(res->pdev, &rq->hwq);
> +	kfree(rq->swq);
> +fail_sq:
> +	bnxt_qplib_free_hwq(res->pdev, &sq->hwq);
> +	kfree(sq->swq);
> +exit:
> +	return rc;
> +}
> +
> +int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
> +	struct sq_send *hw_sq_send_hdr, **hw_sq_send_ptr;
> +	struct cmdq_create_qp req;
> +	struct creq_create_qp_resp *resp;
> +	struct bnxt_qplib_pbl *pbl;
> +	struct sq_psn_search **psn_search_ptr;
> +	unsigned long long int psn_search, poff = 0;
> +	struct bnxt_qplib_q *sq = &qp->sq;
> +	struct bnxt_qplib_q *rq = &qp->rq;
> +	struct bnxt_qplib_hwq *xrrq;
> +	int i, rc, req_size, psn_sz;
> +	u16 cmd_flags = 0, max_ssge;
> +	u32 sw_prod, qp_flags = 0;
> +
> +	RCFW_CMD_PREP(req, CREATE_QP, cmd_flags);
> +
> +	/* General */
> +	req.type = qp->type;
> +	req.dpi = cpu_to_le32(qp->dpi->dpi);
> +	req.qp_handle = cpu_to_le64(qp->qp_handle);
> +
> +	/* SQ */
> +	psn_sz = (qp->type == CMDQ_CREATE_QP_TYPE_RC) ?
> +		 sizeof(struct sq_psn_search) : 0;
> +	sq->hwq.max_elements = sq->max_wqe;
> +	rc = bnxt_qplib_alloc_init_hwq(res->pdev, &sq->hwq, sq->sglist,
> +				       sq->nmap, &sq->hwq.max_elements,
> +				       BNXT_QPLIB_MAX_SQE_ENTRY_SIZE,
> +				       psn_sz,
> +				       PAGE_SIZE, HWQ_TYPE_QUEUE);
> +	if (rc)
> +		goto exit;
> +
> +	sq->swq = kcalloc(sq->hwq.max_elements, sizeof(*sq->swq), GFP_KERNEL);
> +	if (!sq->swq) {
> +		rc = -ENOMEM;
> +		goto fail_sq;
> +	}
> +	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
> +	if (psn_sz) {
> +		psn_search_ptr = (struct sq_psn_search **)
> +				  &hw_sq_send_ptr[SQE_PG(sq->hwq.max_elements)];
> +		psn_search = (unsigned long long int)
> +			      &hw_sq_send_ptr[SQE_PG(sq->hwq.max_elements)]
> +			      [SQE_IDX(sq->hwq.max_elements)];
> +		if (psn_search & ~PAGE_MASK) {
> +			/* If the psn_search does not start on a page boundary,
> +			 * then calculate the offset
> +			 */
> +			poff = (psn_search & ~PAGE_MASK) /
> +				BNXT_QPLIB_MAX_PSNE_ENTRY_SIZE;
> +		}
> +		for (i = 0; i < sq->hwq.max_elements; i++)
> +			sq->swq[i].psn_search =
> +				&psn_search_ptr[PSNE_PG(i + poff)]
> +					       [PSNE_IDX(i + poff)];
> +	}
> +	pbl = &sq->hwq.pbl[PBL_LVL_0];
> +	req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
> +	req.sq_pg_size_sq_lvl =
> +		((sq->hwq.level & CMDQ_CREATE_QP_SQ_LVL_MASK)
> +				 <<  CMDQ_CREATE_QP_SQ_LVL_SFT) |
> +		(pbl->pg_size == ROCE_PG_SIZE_4K ?
> +				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K :
> +		 pbl->pg_size == ROCE_PG_SIZE_8K ?
> +				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_8K :
> +		 pbl->pg_size == ROCE_PG_SIZE_64K ?
> +				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_64K :
> +		 pbl->pg_size == ROCE_PG_SIZE_2M ?
> +				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_2M :
> +		 pbl->pg_size == ROCE_PG_SIZE_8M ?
> +				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_8M :
> +		 pbl->pg_size == ROCE_PG_SIZE_1G ?
> +				CMDQ_CREATE_QP_SQ_PG_SIZE_PG_1G :
> +		 CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K);
> +
> +	/* initialize all SQ WQEs to LOCAL_INVALID (sq prep for hw fetch) */
> +	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
> +	for (sw_prod = 0; sw_prod < sq->hwq.max_elements; sw_prod++) {
> +		hw_sq_send_hdr = &hw_sq_send_ptr[SQE_PG(sw_prod)]
> +						[SQE_IDX(sw_prod)];
> +		hw_sq_send_hdr->wqe_type = SQ_BASE_WQE_TYPE_LOCAL_INVALID;
> +	}
> +
> +	if (qp->scq)
> +		req.scq_cid = cpu_to_le32(qp->scq->id);
> +
> +	qp_flags |= CMDQ_CREATE_QP_QP_FLAGS_RESERVED_LKEY_ENABLE;
> +	qp_flags |= CMDQ_CREATE_QP_QP_FLAGS_FR_PMR_ENABLED;
> +	if (qp->sig_type)
> +		qp_flags |= CMDQ_CREATE_QP_QP_FLAGS_FORCE_COMPLETION;
> +
> +	/* RQ */
> +	if (rq->max_wqe) {
> +		rq->hwq.max_elements = rq->max_wqe;
> +		rc = bnxt_qplib_alloc_init_hwq(res->pdev, &rq->hwq, rq->sglist,
> +					       rq->nmap, &rq->hwq.max_elements,
> +					       BNXT_QPLIB_MAX_RQE_ENTRY_SIZE, 0,
> +					       PAGE_SIZE, HWQ_TYPE_QUEUE);
> +		if (rc)
> +			goto fail_sq;
> +
> +		rq->swq = kcalloc(rq->hwq.max_elements, sizeof(*rq->swq),
> +				  GFP_KERNEL);
> +		if (!rq->swq) {
> +			rc = -ENOMEM;
> +			goto fail_rq;
> +		}
> +		pbl = &rq->hwq.pbl[PBL_LVL_0];
> +		req.rq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
> +		req.rq_pg_size_rq_lvl =
> +			((rq->hwq.level & CMDQ_CREATE_QP_RQ_LVL_MASK) <<
> +			 CMDQ_CREATE_QP_RQ_LVL_SFT) |
> +				(pbl->pg_size == ROCE_PG_SIZE_4K ?
> +					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_4K :
> +				 pbl->pg_size == ROCE_PG_SIZE_8K ?
> +					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_8K :
> +				 pbl->pg_size == ROCE_PG_SIZE_64K ?
> +					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_64K :
> +				 pbl->pg_size == ROCE_PG_SIZE_2M ?
> +					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_2M :
> +				 pbl->pg_size == ROCE_PG_SIZE_8M ?
> +					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_8M :
> +				 pbl->pg_size == ROCE_PG_SIZE_1G ?
> +					CMDQ_CREATE_QP_RQ_PG_SIZE_PG_1G :
> +				 CMDQ_CREATE_QP_RQ_PG_SIZE_PG_4K);
> +	}
> +
> +	if (qp->rcq)
> +		req.rcq_cid = cpu_to_le32(qp->rcq->id);
> +	req.qp_flags = cpu_to_le32(qp_flags);
> +	req.sq_size = cpu_to_le32(sq->hwq.max_elements);
> +	req.rq_size = cpu_to_le32(rq->hwq.max_elements);
> +	qp->sq_hdr_buf = NULL;
> +	qp->rq_hdr_buf = NULL;
> +
> +	rc = bnxt_qplib_alloc_qp_hdr_buf(res, qp);
> +	if (rc)
> +		goto fail_rq;
> +
> +	/* CTRL-22434: Irrespective of the requested SGE count on the SQ
> +	 * always create the QP with max send sges possible if the requested
> +	 * inline size is greater than 0.
> +	 */
> +	max_ssge = qp->max_inline_data ? 6 : sq->max_sge;
> +	req.sq_fwo_sq_sge = cpu_to_le16(
> +				((max_ssge & CMDQ_CREATE_QP_SQ_SGE_MASK)
> +				 << CMDQ_CREATE_QP_SQ_SGE_SFT) | 0);
> +	req.rq_fwo_rq_sge = cpu_to_le16(
> +				((rq->max_sge & CMDQ_CREATE_QP_RQ_SGE_MASK)
> +				 << CMDQ_CREATE_QP_RQ_SGE_SFT) | 0);
> +	/* ORRQ and IRRQ */
> +	if (psn_sz) {
> +		xrrq = &qp->orrq;
> +		xrrq->max_elements =
> +			ORD_LIMIT_TO_ORRQ_SLOTS(qp->max_rd_atomic);
> +		req_size = xrrq->max_elements *
> +			   BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE + PAGE_SIZE - 1;
> +		req_size &= ~(PAGE_SIZE - 1);
> +		rc = bnxt_qplib_alloc_init_hwq(res->pdev, xrrq, NULL, 0,
> +					       &xrrq->max_elements,
> +					       BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE,
> +					       0, req_size, HWQ_TYPE_CTX);
> +		if (rc)
> +			goto fail_buf_free;
> +		pbl = &xrrq->pbl[PBL_LVL_0];
> +		req.orrq_addr = cpu_to_le64(pbl->pg_map_arr[0]);
> +
> +		xrrq = &qp->irrq;
> +		xrrq->max_elements = IRD_LIMIT_TO_IRRQ_SLOTS(
> +						qp->max_dest_rd_atomic);
> +		req_size = xrrq->max_elements *
> +			   BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE + PAGE_SIZE - 1;
> +		req_size &= ~(PAGE_SIZE - 1);
> +
> +		rc = bnxt_qplib_alloc_init_hwq(res->pdev, xrrq, NULL, 0,
> +					       &xrrq->max_elements,
> +					       BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE,
> +					       0, req_size, HWQ_TYPE_CTX);
> +		if (rc)
> +			goto fail_orrq;
> +
> +		pbl = &xrrq->pbl[PBL_LVL_0];
> +		req.irrq_addr = cpu_to_le64(pbl->pg_map_arr[0]);
> +	}
> +	req.pd_id = cpu_to_le32(qp->pd->id);
> +
> +	resp = (struct creq_create_qp_resp *)
> +			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
> +						     NULL, 0);
> +	if (!resp) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP send failed");
> +		rc = -EINVAL;
> +		goto fail;
> +	}
> +	/**/
> +	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
> +		/* Cmd timed out */
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP timed out");
> +		rc = -ETIMEDOUT;
> +		goto fail;
> +	}
> +	if (RCFW_RESP_STATUS(resp) ||
> +	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: CREATE_QP failed ");
> +		dev_err(&rcfw->pdev->dev,
> +			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
> +			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
> +			RCFW_RESP_COOKIE(resp));
> +		rc = -EINVAL;
> +		goto fail;
> +	}
> +	qp->id = le32_to_cpu(resp->xid);
> +	qp->cur_qp_state = CMDQ_MODIFY_QP_NEW_STATE_RESET;
> +	sq->flush_in_progress = false;
> +	rq->flush_in_progress = false;
> +
> +	return 0;
> +
> +fail:
> +	if (qp->irrq.max_elements)
> +		bnxt_qplib_free_hwq(res->pdev, &qp->irrq);
> +fail_orrq:
> +	if (qp->orrq.max_elements)
> +		bnxt_qplib_free_hwq(res->pdev, &qp->orrq);
> +fail_buf_free:
> +	bnxt_qplib_free_qp_hdr_buf(res, qp);
> +fail_rq:
> +	bnxt_qplib_free_hwq(res->pdev, &rq->hwq);
> +	kfree(rq->swq);
> +fail_sq:
> +	bnxt_qplib_free_hwq(res->pdev, &sq->hwq);
> +	kfree(sq->swq);
> +exit:
> +	return rc;
> +}
> +
> +static void __filter_modify_flags(struct bnxt_qplib_qp *qp)
> +{

It would help review if you broke this function into smaller pieces and
got rid of the switch->switch->if construction.

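A sketch of the kind of split being suggested: one small helper per real
transition plus a flat dispatcher, so the empty arms disappear (the helper
names are hypothetical):

	static void __filter_init_to_rtr(struct bnxt_qplib_qp *qp); /* hypothetical */
	static void __filter_rtr_to_rts(struct bnxt_qplib_qp *qp); /* hypothetical */

	static void __filter_modify_flags(struct bnxt_qplib_qp *qp)
	{
		if (qp->cur_qp_state == CMDQ_MODIFY_QP_NEW_STATE_INIT &&
		    qp->state == CMDQ_MODIFY_QP_NEW_STATE_RTR)
			__filter_init_to_rtr(qp);
		else if (qp->cur_qp_state == CMDQ_MODIFY_QP_NEW_STATE_RTR &&
			 qp->state == CMDQ_MODIFY_QP_NEW_STATE_RTS)
			__filter_rtr_to_rts(qp);
	}

Each helper would then carry only the mask fixups from the corresponding arm
of the switch below.
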
> +	switch (qp->cur_qp_state) {
> +	case CMDQ_MODIFY_QP_NEW_STATE_RESET:
> +		switch (qp->state) {
> +		case CMDQ_MODIFY_QP_NEW_STATE_INIT:
> +			break;
> +		default:
> +			break;
> +		}
> +		break;
> +	case CMDQ_MODIFY_QP_NEW_STATE_INIT:
> +		switch (qp->state) {
> +		case CMDQ_MODIFY_QP_NEW_STATE_RTR:
> +			/* INIT->RTR, configure the path_mtu to the default
> +			 * 2048 if not being requested
> +			 */
> +			if (!(qp->modify_flags &
> +			      CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU)) {
> +				qp->modify_flags |=
> +					CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
> +				qp->path_mtu = CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
> +			}
> +			qp->modify_flags &=
> +				~CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID;
> +			/* Bono FW requires the max_dest_rd_atomic to be >= 1 */
> +			if (qp->max_dest_rd_atomic < 1)
> +				qp->max_dest_rd_atomic = 1;
> +			qp->modify_flags &= ~CMDQ_MODIFY_QP_MODIFY_MASK_SRC_MAC;
> +			/* Bono FW 20.6.5 requires SGID_INDEX configuration */
> +			if (!(qp->modify_flags &
> +			      CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX)) {
> +				qp->modify_flags |=
> +					CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX;
> +				qp->ah.sgid_index = 0;
> +			}
> +			break;
> +		default:
> +			break;
> +		}
> +		break;
> +	case CMDQ_MODIFY_QP_NEW_STATE_RTR:
> +		switch (qp->state) {
> +		case CMDQ_MODIFY_QP_NEW_STATE_RTS:
> +			/* Bono FW requires the max_rd_atomic to be >= 1 */
> +			if (qp->max_rd_atomic < 1)
> +				qp->max_rd_atomic = 1;
> +			/* Bono FW does not allow PKEY_INDEX,
> +			 * DGID, FLOW_LABEL, SGID_INDEX, HOP_LIMIT,
> +			 * TRAFFIC_CLASS, DEST_MAC, PATH_MTU, RQ_PSN,
> +			 * MIN_RNR_TIMER, MAX_DEST_RD_ATOMIC, DEST_QP_ID
> +			 * modification
> +			 */
> +			qp->modify_flags &=
> +				~(CMDQ_MODIFY_QP_MODIFY_MASK_PKEY |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_DGID |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER |
> +				  CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC
> +				  | CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID);
> +			break;
> +		default:
> +			break;
> +		}
> +		break;
> +	case CMDQ_MODIFY_QP_NEW_STATE_RTS:
> +		break;
> +	case CMDQ_MODIFY_QP_NEW_STATE_SQD:
> +		break;
> +	case CMDQ_MODIFY_QP_NEW_STATE_SQE:
> +		break;
> +	case CMDQ_MODIFY_QP_NEW_STATE_ERR:
> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> +int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
> +	struct cmdq_modify_qp req;
> +	struct creq_modify_qp_resp *resp;
> +	u16 cmd_flags = 0, pkey;
> +	u32 temp32[4];
> +	u32 bmask;
> +
> +	RCFW_CMD_PREP(req, MODIFY_QP, cmd_flags);
> +
> +	/* Filter out the qp_attr_mask based on the state->new transition */
> +	__filter_modify_flags(qp);
> +	bmask = qp->modify_flags;
> +	req.modify_mask = cpu_to_le64(qp->modify_flags);
> +	req.qp_cid = cpu_to_le32(qp->id);
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_STATE) {
> +		req.network_type_en_sqd_async_notify_new_state =
> +				(qp->state & CMDQ_MODIFY_QP_NEW_STATE_MASK) |
> +				(qp->en_sqd_async_notify ?
> +					CMDQ_MODIFY_QP_EN_SQD_ASYNC_NOTIFY : 0);
> +	}
> +	req.network_type_en_sqd_async_notify_new_state |= qp->nw_type;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_ACCESS)
> +		req.access = qp->access;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_PKEY) {
> +		if (!bnxt_qplib_get_pkey(res, &res->pkey_tbl,
> +					 qp->pkey_index, &pkey))
> +			req.pkey = cpu_to_le16(pkey);
> +	}
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_QKEY)
> +		req.qkey = cpu_to_le32(qp->qkey);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_DGID) {
> +		memcpy(temp32, qp->ah.dgid.data, sizeof(struct bnxt_qplib_gid));
> +		req.dgid[0] = cpu_to_le32(temp32[0]);
> +		req.dgid[1] = cpu_to_le32(temp32[1]);
> +		req.dgid[2] = cpu_to_le32(temp32[2]);
> +		req.dgid[3] = cpu_to_le32(temp32[3]);
> +	}
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL)
> +		req.flow_label = cpu_to_le32(qp->ah.flow_label);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX)
> +		req.sgid_index =
> +			cpu_to_le16(res->sgid_tbl.hw_id[qp->ah.sgid_index]);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT)
> +		req.hop_limit = qp->ah.hop_limit;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS)
> +		req.traffic_class = qp->ah.traffic_class;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC)
> +		memcpy(req.dest_mac, qp->ah.dmac, 6);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU)
> +		req.path_mtu = cpu_to_le16(qp->path_mtu);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_TIMEOUT)
> +		req.timeout = qp->timeout;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_RETRY_CNT)
> +		req.retry_cnt = qp->retry_cnt;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_RNR_RETRY)
> +		req.rnr_retry = qp->rnr_retry;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER)
> +		req.min_rnr_timer = qp->min_rnr_timer;
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN)
> +		req.rq_psn = cpu_to_le32(qp->rq.psn);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN)
> +		req.sq_psn = cpu_to_le32(qp->sq.psn);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_MAX_RD_ATOMIC)
> +		req.max_rd_atomic =
> +			ORD_LIMIT_TO_ORRQ_SLOTS(qp->max_rd_atomic);
> +
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC)
> +		req.max_dest_rd_atomic =
> +			IRD_LIMIT_TO_IRRQ_SLOTS(qp->max_dest_rd_atomic);
> +
> +	req.sq_size = cpu_to_le32(qp->sq.hwq.max_elements);
> +	req.rq_size = cpu_to_le32(qp->rq.hwq.max_elements);
> +	req.sq_sge = cpu_to_le16(qp->sq.max_sge);
> +	req.rq_sge = cpu_to_le16(qp->rq.max_sge);
> +	req.max_inline_data = cpu_to_le32(qp->max_inline_data);
> +	if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID)
> +		req.dest_qp_id = cpu_to_le32(qp->dest_qpn);
> +
> +	req.vlan_pcp_vlan_dei_vlan_id = cpu_to_le16(qp->vlan_id);
> +
> +	resp = (struct creq_modify_qp_resp *)
> +			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
> +						     NULL, 0);
> +	if (!resp) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: MODIFY_QP send failed");
> +		return -EINVAL;
> +	}
> +	/* Wait for the firmware to process the command */
> +	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
> +		/* Cmd timed out */
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: MODIFY_QP timed out");
> +		return -ETIMEDOUT;
> +	}
> +	if (RCFW_RESP_STATUS(resp) ||
> +	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: MODIFY_QP failed ");
> +		dev_err(&rcfw->pdev->dev,
> +			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
> +			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
> +			RCFW_RESP_COOKIE(resp));
> +		return -EINVAL;
> +	}
> +	qp->cur_qp_state = qp->state;
> +	return 0;
> +}
> +
> +int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
> +	struct cmdq_query_qp req;
> +	struct creq_query_qp_resp *resp;
> +	struct creq_query_qp_resp_sb *sb;
> +	u16 cmd_flags = 0;
> +	u32 temp32[4];
> +	int i;
> +
> +	RCFW_CMD_PREP(req, QUERY_QP, cmd_flags);
> +
> +	req.qp_cid = cpu_to_le32(qp->id);
> +	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
> +	resp = (struct creq_query_qp_resp *)
> +			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
> +						     (void **)&sb, 0);
> +	if (!resp) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: QUERY_QP send failed");
> +		return -EINVAL;
> +	}
> +	/* Wait for the firmware to process the command */
> +	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
> +		/* Cmd timed out */
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: QUERY_QP timed out");
> +		return -ETIMEDOUT;
> +	}
> +	if (RCFW_RESP_STATUS(resp) ||
> +	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: QUERY_QP failed ");
> +		dev_err(&rcfw->pdev->dev,
> +			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
> +			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
> +			RCFW_RESP_COOKIE(resp));
> +		return -EINVAL;
> +	}
> +	/* Extract the context from the side buffer */
> +	qp->state = sb->en_sqd_async_notify_state &
> +			CREQ_QUERY_QP_RESP_SB_STATE_MASK;
> +	qp->en_sqd_async_notify = sb->en_sqd_async_notify_state &
> +				  CREQ_QUERY_QP_RESP_SB_EN_SQD_ASYNC_NOTIFY ?
> +				  true : false;
> +	qp->access = sb->access;
> +	qp->pkey_index = le16_to_cpu(sb->pkey);
> +	qp->qkey = le32_to_cpu(sb->qkey);
> +
> +	temp32[0] = le32_to_cpu(sb->dgid[0]);
> +	temp32[1] = le32_to_cpu(sb->dgid[1]);
> +	temp32[2] = le32_to_cpu(sb->dgid[2]);
> +	temp32[3] = le32_to_cpu(sb->dgid[3]);
> +	memcpy(qp->ah.dgid.data, temp32, sizeof(qp->ah.dgid.data));
> +
> +	qp->ah.flow_label = le32_to_cpu(sb->flow_label);
> +
> +	qp->ah.sgid_index = 0;
> +	for (i = 0; i < res->sgid_tbl.max; i++) {
> +		if (res->sgid_tbl.hw_id[i] == le16_to_cpu(sb->sgid_index)) {
> +			qp->ah.sgid_index = i;
> +			break;
> +		}
> +	}
> +	if (i == res->sgid_tbl.max)
> +		dev_warn(&res->pdev->dev, "QPLIB: SGID not found");
> +
> +	qp->ah.hop_limit = sb->hop_limit;
> +	qp->ah.traffic_class = sb->traffic_class;
> +	memcpy(qp->ah.dmac, sb->dest_mac, 6);
> +	qp->ah.vlan_id = le16_to_cpu((sb->path_mtu_dest_vlan_id &
> +				CREQ_QUERY_QP_RESP_SB_VLAN_ID_MASK) >>
> +				CREQ_QUERY_QP_RESP_SB_VLAN_ID_SFT);
> +	qp->path_mtu = sb->path_mtu_dest_vlan_id &
> +				    CREQ_QUERY_QP_RESP_SB_PATH_MTU_MASK;
> +	qp->timeout = sb->timeout;
> +	qp->retry_cnt = sb->retry_cnt;
> +	qp->rnr_retry = sb->rnr_retry;
> +	qp->min_rnr_timer = sb->min_rnr_timer;
> +	qp->rq.psn = le32_to_cpu(sb->rq_psn);
> +	qp->max_rd_atomic = ORRQ_SLOTS_TO_ORD_LIMIT(sb->max_rd_atomic);
> +	qp->sq.psn = le32_to_cpu(sb->sq_psn);
> +	qp->max_dest_rd_atomic =
> +			IRRQ_SLOTS_TO_IRD_LIMIT(sb->max_dest_rd_atomic);
> +	qp->sq.max_wqe = qp->sq.hwq.max_elements;
> +	qp->rq.max_wqe = qp->rq.hwq.max_elements;
> +	qp->sq.max_sge = le16_to_cpu(sb->sq_sge);
> +	qp->rq.max_sge = le32_to_cpu(sb->rq_sge);
> +	qp->max_inline_data = le32_to_cpu(sb->max_inline_data);
> +	qp->dest_qpn = le32_to_cpu(sb->dest_qp_id);
> +	memcpy(qp->smac, sb->src_mac, 6);
> +	qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id);
> +	return 0;
> +}
> +
> +static void __clean_cq(struct bnxt_qplib_cq *cq, u64 qp)
> +{
> +	struct bnxt_qplib_hwq *cq_hwq = &cq->hwq;
> +	struct cq_base *hw_cqe, **hw_cqe_ptr;
> +	int i;
> +
> +	for (i = 0; i < cq_hwq->max_elements; i++) {
> +		hw_cqe_ptr = (struct cq_base **)cq_hwq->pbl_ptr;
> +		hw_cqe = &hw_cqe_ptr[CQE_PG(i)][CQE_IDX(i)];
> +		if (!CQE_CMP_VALID(hw_cqe, i, cq_hwq->max_elements))
> +			continue;
> +		switch (hw_cqe->cqe_type_toggle & CQ_BASE_CQE_TYPE_MASK) {
> +		case CQ_BASE_CQE_TYPE_REQ:
> +		case CQ_BASE_CQE_TYPE_TERMINAL:
> +		{
> +			struct cq_req *cqe = (struct cq_req *)hw_cqe;
> +
> +			if (qp == le64_to_cpu(cqe->qp_handle))
> +				cqe->qp_handle = 0;
> +			break;
> +		}
> +		case CQ_BASE_CQE_TYPE_RES_RC:
> +		case CQ_BASE_CQE_TYPE_RES_UD:
> +		case CQ_BASE_CQE_TYPE_RES_RAWETH_QP1:
> +		{
> +			struct cq_res_rc *cqe = (struct cq_res_rc *)hw_cqe;
> +
> +			if (qp == le64_to_cpu(cqe->qp_handle))
> +				cqe->qp_handle = 0;
> +			break;
> +		}
> +		default:
> +			break;
> +		}
> +	}
> +}
> +
> +static unsigned long bnxt_qplib_lock_cqs(struct bnxt_qplib_qp *qp)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&qp->scq->hwq.lock, flags);
> +	if (qp->rcq && qp->rcq != qp->scq)
> +		spin_lock(&qp->rcq->hwq.lock);
> +
> +	return flags;
> +}
> +
> +static void bnxt_qplib_unlock_cqs(struct bnxt_qplib_qp *qp,
> +				  unsigned long flags)
> +{
> +	if (qp->rcq && qp->rcq != qp->scq)
> +		spin_unlock(&qp->rcq->hwq.lock);
> +	spin_unlock_irqrestore(&qp->scq->hwq.lock, flags);
> +}
> +
> +int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
> +			  struct bnxt_qplib_qp *qp)
> +{
> +	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
> +	struct cmdq_destroy_qp req;
> +	struct creq_destroy_qp_resp *resp;
> +	unsigned long flags;
> +	u16 cmd_flags = 0;
> +
> +	RCFW_CMD_PREP(req, DESTROY_QP, cmd_flags);
> +
> +	req.qp_cid = cpu_to_le32(qp->id);
> +	resp = (struct creq_destroy_qp_resp *)
> +			bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
> +						     NULL, 0);
> +	if (!resp) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_QP send failed");
> +		return -EINVAL;
> +	}
> +	/* Wait for the firmware to process the command */
> +	if (!bnxt_qplib_rcfw_wait_for_resp(rcfw, le16_to_cpu(req.cookie))) {
> +		/* Cmd timed out */
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_QP timed out");
> +		return -ETIMEDOUT;
> +	}
> +	if (RCFW_RESP_STATUS(resp) ||
> +	    RCFW_RESP_COOKIE(resp) != RCFW_CMDQ_COOKIE(req)) {
> +		dev_err(&rcfw->pdev->dev, "QPLIB: FP: DESTROY_QP failed ");
> +		dev_err(&rcfw->pdev->dev,
> +			"QPLIB: with status 0x%x cmdq 0x%x resp 0x%x",
> +			RCFW_RESP_STATUS(resp), RCFW_CMDQ_COOKIE(req),
> +			RCFW_RESP_COOKIE(resp));
> +		return -EINVAL;
> +	}
> +
> +	/* Must walk the associated CQs to nullify the QP ptr */
> +	flags = bnxt_qplib_lock_cqs(qp);
> +	__clean_cq(qp->scq, (u64)qp);
> +	if (qp->rcq != qp->scq)
> +		__clean_cq(qp->rcq, (u64)qp);
> +	bnxt_qplib_unlock_cqs(qp, flags);
> +
> +	bnxt_qplib_free_qp_hdr_buf(res, qp);
> +	bnxt_qplib_free_hwq(res->pdev, &qp->sq.hwq);
> +	kfree(qp->sq.swq);
> +
> +	bnxt_qplib_free_hwq(res->pdev, &qp->rq.hwq);
> +	kfree(qp->rq.swq);
> +
> +	if (qp->irrq.max_elements)
> +		bnxt_qplib_free_hwq(res->pdev, &qp->irrq);
> +	if (qp->orrq.max_elements)
> +		bnxt_qplib_free_hwq(res->pdev, &qp->orrq);
> +
> +	return 0;
> +}
> +
>  /* CQ */
>
>  /* Spinlock must be held */
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
> index 1991eaa..f6d2be5 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h
> @@ -38,8 +38,246 @@
>
>  #ifndef __BNXT_QPLIB_FP_H__
>  #define __BNXT_QPLIB_FP_H__
> +struct bnxt_qplib_sge {
> +	u64				addr;
> +	u32				lkey;
> +	u32				size;
> +};
> +
> +#define BNXT_QPLIB_MAX_SQE_ENTRY_SIZE	sizeof(struct sq_send)
> +
> +#define SQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_SQE_ENTRY_SIZE)
> +#define SQE_MAX_IDX_PER_PG	(SQE_CNT_PER_PG - 1)
> +#define SQE_PG(x)		(((x) & ~SQE_MAX_IDX_PER_PG) / SQE_CNT_PER_PG)
> +#define SQE_IDX(x)		((x) & SQE_MAX_IDX_PER_PG)
> +
> +#define BNXT_QPLIB_MAX_PSNE_ENTRY_SIZE	sizeof(struct sq_psn_search)
> +
> +#define PSNE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_PSNE_ENTRY_SIZE)
> +#define PSNE_MAX_IDX_PER_PG	(PSNE_CNT_PER_PG - 1)
> +#define PSNE_PG(x)		(((x) & ~PSNE_MAX_IDX_PER_PG) / PSNE_CNT_PER_PG)
> +#define PSNE_IDX(x)		((x) & PSNE_MAX_IDX_PER_PG)
> +
> +#define BNXT_QPLIB_QP_MAX_SGL	6
> +
> +struct bnxt_qplib_swq {
> +	u64				wr_id;
> +	u8				type;
> +	u8				flags;
> +	u32				start_psn;
> +	u32				next_psn;
> +	struct sq_psn_search		*psn_search;
> +};
> +
> +struct bnxt_qplib_swqe {
> +	/* General */
> +	u64				wr_id;
> +	u8				reqs_type;
> +	u8				type;
> +#define BNXT_QPLIB_SWQE_TYPE_SEND			0
> +#define BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM		1
> +#define BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV		2
> +#define BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE			4
> +#define BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM	5
> +#define BNXT_QPLIB_SWQE_TYPE_RDMA_READ			6
> +#define BNXT_QPLIB_SWQE_TYPE_ATOMIC_CMP_AND_SWP		8
> +#define BNXT_QPLIB_SWQE_TYPE_ATOMIC_FETCH_AND_ADD	11
> +#define BNXT_QPLIB_SWQE_TYPE_LOCAL_INV			12
> +#define BNXT_QPLIB_SWQE_TYPE_FAST_REG_MR		13
> +#define BNXT_QPLIB_SWQE_TYPE_REG_MR			13
> +#define BNXT_QPLIB_SWQE_TYPE_BIND_MW			14
> +#define BNXT_QPLIB_SWQE_TYPE_RECV			128
> +#define BNXT_QPLIB_SWQE_TYPE_RECV_RDMA_IMM		129
> +	u8				flags;
> +#define BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP		BIT(0)
> +#define BNXT_QPLIB_SWQE_FLAGS_RD_ATOMIC_FENCE		BIT(1)
> +#define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE			BIT(2)
> +#define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT		BIT(3)
> +#define BNXT_QPLIB_SWQE_FLAGS_INLINE			BIT(4)
> +	struct bnxt_qplib_sge		sg_list[BNXT_QPLIB_QP_MAX_SGL];
> +	int				num_sge;
> +	/* Max inline data is 96 bytes */
> +	u32				inline_len;
> +#define BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH		96
> +	u8		inline_data[BNXT_QPLIB_SWQE_MAX_INLINE_LENGTH];
> +
> +	union {
> +		/* Send, with imm, inval key */
> +		struct {
> +			u32		imm_data_or_inv_key;
> +			u32		q_key;
> +			u32		dst_qp;
> +			u16		avid;
> +		} send;
> +
> +		/* Send Raw Ethernet and QP1 */
> +		struct {
> +			u16		lflags;
> +			u16		cfa_action;
> +			u32		cfa_meta;
> +		} rawqp1;
> +
> +		/* RDMA write, with imm, read */
> +		struct {
> +			u32		imm_data_or_inv_key;
> +			u64		remote_va;
> +			u32		r_key;
> +		} rdma;
> +
> +		/* Atomic cmp/swap, fetch/add */
> +		struct {
> +			u64		remote_va;
> +			u32		r_key;
> +			u64		swap_data;
> +			u64		cmp_data;
> +		} atomic;
> +
> +		/* Local Invalidate */
> +		struct {
> +			u32		inv_l_key;
> +		} local_inv;
> +
> +		/* FR-PMR */
> +		struct {
> +			u8		access_cntl;
> +			u8		pg_sz_log;
> +			bool		zero_based;
> +			u32		l_key;
> +			u32		length;
> +			u8		pbl_pg_sz_log;
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_4K			0
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_8K			1
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_64K			4
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_256K			6
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_1M			8
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_2M			9
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_4M			10
> +#define BNXT_QPLIB_SWQE_PAGE_SIZE_1G			18
> +			u8		levels;
> +#define PAGE_SHIFT_4K	12
> +			u64		*pbl_ptr;
> +			dma_addr_t	pbl_dma_ptr;
> +			u64		*page_list;
> +			u16		page_list_len;
> +			u64		va;
> +		} frmr;
> +
> +		/* Bind */
> +		struct {
> +			u8		access_cntl;
> +#define BNXT_QPLIB_BIND_SWQE_ACCESS_LOCAL_WRITE		BIT(0)
> +#define BNXT_QPLIB_BIND_SWQE_ACCESS_REMOTE_READ		BIT(1)
> +#define BNXT_QPLIB_BIND_SWQE_ACCESS_REMOTE_WRITE	BIT(2)
> +#define BNXT_QPLIB_BIND_SWQE_ACCESS_REMOTE_ATOMIC	BIT(3)
> +#define BNXT_QPLIB_BIND_SWQE_ACCESS_WINDOW_BIND		BIT(4)
> +			bool		zero_based;
> +			u8		mw_type;
> +			u32		parent_l_key;
> +			u32		r_key;
> +			u64		va;
> +			u32		length;
> +		} bind;
> +	};
> +};
> +
> +#define BNXT_QPLIB_MAX_RQE_ENTRY_SIZE	sizeof(struct rq_wqe)
> +
> +#define RQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_RQE_ENTRY_SIZE)
> +#define RQE_MAX_IDX_PER_PG	(RQE_CNT_PER_PG - 1)
> +#define RQE_PG(x)		(((x) & ~RQE_MAX_IDX_PER_PG) / RQE_CNT_PER_PG)
> +#define RQE_IDX(x)		((x) & RQE_MAX_IDX_PER_PG)
> +
> +struct bnxt_qplib_q {
> +	struct bnxt_qplib_hwq		hwq;
> +	struct bnxt_qplib_swq		*swq;
> +	struct scatterlist		*sglist;
> +	u32				nmap;
> +	u32				max_wqe;
> +	u16				max_sge;
> +	u32				psn;
> +	bool				flush_in_progress;
> +};
> +
> +struct bnxt_qplib_qp {
> +	struct bnxt_qplib_pd		*pd;
> +	struct bnxt_qplib_dpi		*dpi;
> +	u64				qp_handle;
> +	u32				id;
> +	u8				type;
> +	u8				sig_type;
> +	u64				modify_flags;
> +	u8				state;
> +	u8				cur_qp_state;
> +	u32				max_inline_data;
> +	u32				mtu;
> +	u32				path_mtu;
> +	bool				en_sqd_async_notify;
> +	u16				pkey_index;
> +	u32				qkey;
> +	u32				dest_qp_id;
> +	u8				access;
> +	u8				timeout;
> +	u8				retry_cnt;
> +	u8				rnr_retry;
> +	u32				min_rnr_timer;
> +	u32				max_rd_atomic;
> +	u32				max_dest_rd_atomic;
> +	u32				dest_qpn;
> +	u8				smac[6];
> +	u16				vlan_id;
> +	u8				nw_type;
> +	struct bnxt_qplib_ah		ah;
> +
> +#define BTH_PSN_MASK			((1 << 24) - 1)
> +	/* SQ */
> +	struct bnxt_qplib_q		sq;
> +	/* RQ */
> +	struct bnxt_qplib_q		rq;
> +	/* SRQ */
> +	struct bnxt_qplib_srq		*srq;
> +	/* CQ */
> +	struct bnxt_qplib_cq		*scq;
> +	struct bnxt_qplib_cq		*rcq;
> +	/* IRRQ and ORRQ */
> +	struct bnxt_qplib_hwq		irrq;
> +	struct bnxt_qplib_hwq		orrq;
> +	/* Header buffer for QP1 */
> +	int				sq_hdr_buf_size;
> +	int				rq_hdr_buf_size;
> +/*
> + * Buffer space for ETH(14), IP or GRH(40), UDP header(8)
> + * and ib_bth + ib_deth (20).
> + * Max required is 82 when RoCE V2 is enabled; the define below
> + * rounds this up to 86.
> + */
> +#define BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2	86
> +	/* Ethernet header	=  14 */
> +	/* ib_grh		=  40 (provided by MAD) */
> +	/* ib_bth + ib_deth	=  20 */
> +	/* MAD			= 256 (provided by MAD) */
> +	/* iCRC			=   4 */
> +#define BNXT_QPLIB_MAX_QP1_RQ_ETH_HDR_SIZE	14
> +#define BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2	512
> +#define BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV4	20
> +#define BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6	40
> +#define BNXT_QPLIB_MAX_QP1_RQ_BDETH_HDR_SIZE	20
> +	void				*sq_hdr_buf;
> +	dma_addr_t			sq_hdr_buf_map;
> +	void				*rq_hdr_buf;
> +	dma_addr_t			rq_hdr_buf_map;
> +};
> +
>  #define BNXT_QPLIB_MAX_CQE_ENTRY_SIZE	sizeof(struct cq_base)
>
> +#define CQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_CQE_ENTRY_SIZE)
> +#define CQE_MAX_IDX_PER_PG	(CQE_CNT_PER_PG - 1)
> +#define CQE_PG(x)		(((x) & ~CQE_MAX_IDX_PER_PG) / CQE_CNT_PER_PG)
> +#define CQE_IDX(x)		((x) & CQE_MAX_IDX_PER_PG)
> +
> +#define ROCE_CQE_CMP_V			0
> +#define CQE_CMP_VALID(hdr, raw_cons, cp_bit)			\
> +	(!!((hdr)->cqe_type_toggle & CQ_BASE_TOGGLE) ==		\
> +	   !((raw_cons) & (cp_bit)))
> +
>  struct bnxt_qplib_cqe {
>  	u8				status;
>  	u8				type;
> @@ -82,6 +320,13 @@ struct bnxt_qplib_cq {
>  	wait_queue_head_t		waitq;
>  };
>
> +#define BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE	sizeof(struct xrrq_irrq)
> +#define BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE	sizeof(struct xrrq_orrq)
> +#define IRD_LIMIT_TO_IRRQ_SLOTS(x)	(2 * (x) + 2)
> +#define IRRQ_SLOTS_TO_IRD_LIMIT(s)	(((s) >> 1) - 1)
> +#define ORD_LIMIT_TO_ORRQ_SLOTS(x)	((x) + 1)
> +#define ORRQ_SLOTS_TO_ORD_LIMIT(s)	((s) - 1)
> +
>  #define BNXT_QPLIB_MAX_NQE_ENTRY_SIZE	sizeof(struct nq_base)
>
>  #define NQE_CNT_PER_PG		(PAGE_SIZE / BNXT_QPLIB_MAX_NQE_ENTRY_SIZE)
> @@ -140,6 +385,11 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
>  			 int (*srqn_handler)(struct bnxt_qplib_nq *nq,
>  					     void *srq,
>  					     u8 event));
> +int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
> +int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
> +int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
> +int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
> +int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp);
>  int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
>  int bnxt_qplib_destroy_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq);
>
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re.h b/drivers/infiniband/hw/bnxtre/bnxt_re.h
> index 3a93a88..84af86b 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_re.h
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_re.h
> @@ -64,6 +64,14 @@ struct bnxt_re_work {
>  	struct net_device	*vlan_dev;
>  };
>
> +struct bnxt_re_sqp_entries {
> +	struct bnxt_qplib_sge sge;
> +	u64 wrid;
> +	/* For storing the actual qp1 cqe */
> +	struct bnxt_qplib_cqe cqe;
> +	struct bnxt_re_qp *qp1_qp;
> +};
> +
>  #define BNXT_RE_MIN_MSIX		2
>  #define BNXT_RE_MAX_MSIX		16
>  #define BNXT_RE_AEQ_IDX			0
> @@ -112,6 +120,12 @@ struct bnxt_re_dev {
>  	atomic_t			mw_count;
>  	/* Max of 2 lossless traffic class supported per port */
>  	u16				cosq[2];
> +
> +	/* QP for handling QP1 packets */
> +	u32				sqp_id;
> +	struct bnxt_re_qp		*qp1_sqp;
> +	struct bnxt_re_ah		*sqp_ah;
> +	struct bnxt_re_sqp_entries sqp_tbl[1024];
>  };
>
>  #define to_bnxt_re(ptr, type, member)	\
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
> index 5e41317..77860a2 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
> @@ -649,6 +649,481 @@ int bnxt_re_query_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
>  	return 0;
>  }
>
> +/* Queue Pairs */
> +int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
> +{
> +	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
> +	struct bnxt_re_dev *rdev = qp->rdev;
> +	int rc;
> +
> +	rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
> +	if (rc) {
> +		dev_err(rdev_to_dev(rdev), "Failed to destroy HW QP");
> +		return rc;
> +	}
> +	if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp) {
> +		rc = bnxt_qplib_destroy_ah(&rdev->qplib_res,
> +					   &rdev->sqp_ah->qplib_ah);
> +		if (rc) {
> +			dev_err(rdev_to_dev(rdev),
> +				"Failed to destroy HW AH for shadow QP");
> +			return rc;
> +		}
> +
> +		rc = bnxt_qplib_destroy_qp(&rdev->qplib_res,
> +					   &rdev->qp1_sqp->qplib_qp);
> +		if (rc) {
> +			dev_err(rdev_to_dev(rdev),
> +				"Failed to destroy Shadow QP");
> +			return rc;
> +		}
> +		mutex_lock(&rdev->qp_lock);
> +		list_del(&rdev->qp1_sqp->list);
> +		atomic_dec(&rdev->qp_count);
> +		mutex_unlock(&rdev->qp_lock);
> +
> +		kfree(rdev->sqp_ah);
> +		kfree(rdev->qp1_sqp);
> +	}
> +
> +	if (qp->rumem && !IS_ERR(qp->rumem))
> +		ib_umem_release(qp->rumem);
> +	if (qp->sumem && !IS_ERR(qp->sumem))
> +		ib_umem_release(qp->sumem);
> +
> +	mutex_lock(&rdev->qp_lock);
> +	list_del(&qp->list);
> +	atomic_dec(&rdev->qp_count);
> +	mutex_unlock(&rdev->qp_lock);
> +	kfree(qp);
> +	return 0;
> +}
> +
> +static u8 __from_ib_qp_type(enum ib_qp_type type)
> +{
> +	switch (type) {
> +	case IB_QPT_GSI:
> +		return CMDQ_CREATE_QP1_TYPE_GSI;
> +	case IB_QPT_RC:
> +		return CMDQ_CREATE_QP_TYPE_RC;
> +	case IB_QPT_UD:
> +		return CMDQ_CREATE_QP_TYPE_UD;
> +	case IB_QPT_RAW_ETHERTYPE:
> +		return CMDQ_CREATE_QP_TYPE_RAW_ETHERTYPE;
> +	default:
> +		return IB_QPT_MAX;
> +	}
> +}
> +
> +static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd,
> +			 struct bnxt_re_qp *qp, struct ib_udata *udata)
> +{
> +	struct bnxt_re_qp_req ureq;
> +	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
> +	struct ib_umem *umem;
> +	int bytes = 0;
> +	struct ib_ucontext *context = pd->ib_pd.uobject->context;
> +	struct bnxt_re_ucontext *cntx = to_bnxt_re(context,
> +						  struct bnxt_re_ucontext,
> +						  ib_uctx);
> +	if (ib_copy_from_udata(&ureq, udata, sizeof(ureq)))
> +		return -EFAULT;
> +
> +	bytes = (qplib_qp->sq.max_wqe * BNXT_QPLIB_MAX_SQE_ENTRY_SIZE);
> +	/* Consider mapping PSN search memory only for RC QPs. */
> +	if (qplib_qp->type == CMDQ_CREATE_QP_TYPE_RC)
> +		bytes += (qplib_qp->sq.max_wqe * sizeof(struct sq_psn_search));
> +	bytes = PAGE_ALIGN(bytes);
> +	umem = ib_umem_get(context, ureq.qpsva, bytes,
> +			   IB_ACCESS_LOCAL_WRITE, 1);
> +	if (IS_ERR(umem))
> +		return PTR_ERR(umem);
> +
> +	qp->sumem = umem;
> +	qplib_qp->sq.sglist = umem->sg_head.sgl;
> +	qplib_qp->sq.nmap = umem->nmap;
> +	qplib_qp->qp_handle = ureq.qp_handle;
> +
> +	if (!qp->qplib_qp.srq) {
> +		bytes = (qplib_qp->rq.max_wqe * BNXT_QPLIB_MAX_RQE_ENTRY_SIZE);
> +		bytes = PAGE_ALIGN(bytes);
> +		umem = ib_umem_get(context, ureq.qprva, bytes,
> +				   IB_ACCESS_LOCAL_WRITE, 1);
> +		if (IS_ERR(umem))
> +			goto rqfail;
> +		qp->rumem = umem;
> +		qplib_qp->rq.sglist = umem->sg_head.sgl;
> +		qplib_qp->rq.nmap = umem->nmap;
> +	}
> +
> +	qplib_qp->dpi = cntx->dpi;
> +	return 0;
> +rqfail:
> +	ib_umem_release(qp->sumem);
> +	qp->sumem = NULL;
> +	qplib_qp->sq.sglist = NULL;
> +	qplib_qp->sq.nmap = 0;
> +
> +	return PTR_ERR(umem);
> +}
> +
> +static struct bnxt_re_ah *bnxt_re_create_shadow_qp_ah(struct bnxt_re_pd *pd,
> +					       struct bnxt_qplib_res *qp1_res,
> +					       struct bnxt_qplib_qp *qp1_qp)
> +{
> +	struct bnxt_re_dev *rdev = pd->rdev;
> +	struct bnxt_re_ah *ah;
> +	union ib_gid sgid;
> +	int rc;
> +
> +	ah = kzalloc(sizeof(*ah), GFP_KERNEL);
> +	if (!ah)
> +		return NULL;
> +
> +	ah->rdev = rdev;
> +	ah->qplib_ah.pd = &pd->qplib_pd;
> +
> +	rc = bnxt_re_query_gid(&rdev->ibdev, 1, 0, &sgid);
> +	if (rc)
> +		goto fail;
> +
> +	/* Supply the DGID with the same data as the SGID */
> +	memcpy(ah->qplib_ah.dgid.data, &sgid.raw,
> +	       sizeof(union ib_gid));
> +	ah->qplib_ah.sgid_index = 0;
> +
> +	ah->qplib_ah.traffic_class = 0;
> +	ah->qplib_ah.flow_label = 0;
> +	ah->qplib_ah.hop_limit = 1;
> +	ah->qplib_ah.sl = 0;
> +	/* Have DMAC same as SMAC */
> +	ether_addr_copy(ah->qplib_ah.dmac, rdev->netdev->dev_addr);
> +
> +	rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah);
> +	if (rc) {
> +		dev_err(rdev_to_dev(rdev),
> +			"Failed to allocate HW AH for Shadow QP");
> +		goto fail;
> +	}
> +
> +	return ah;
> +
> +fail:
> +	kfree(ah);
> +	return NULL;
> +}
> +
> +static struct bnxt_re_qp *bnxt_re_create_shadow_qp(struct bnxt_re_pd *pd,
> +					    struct bnxt_qplib_res *qp1_res,
> +					    struct bnxt_qplib_qp *qp1_qp)
> +{
> +	struct bnxt_re_dev *rdev = pd->rdev;
> +	struct bnxt_re_qp *qp;
> +	int rc;
> +
> +	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
> +	if (!qp)
> +		return NULL;
> +
> +	qp->rdev = rdev;
> +
> +	/* Initialize the shadow QP structure from the QP1 values */
> +	ether_addr_copy(qp->qplib_qp.smac, rdev->netdev->dev_addr);
> +
> +	qp->qplib_qp.pd = &pd->qplib_pd;
> +	qp->qplib_qp.qp_handle = (u64)&qp->qplib_qp;
> +	qp->qplib_qp.type = IB_QPT_UD;
> +
> +	qp->qplib_qp.max_inline_data = 0;
> +	qp->qplib_qp.sig_type = true;
> +
> +	/* Shadow QP SQ depth should be same as QP1 RQ depth */
> +	qp->qplib_qp.sq.max_wqe = qp1_qp->rq.max_wqe;
> +	qp->qplib_qp.sq.max_sge = 2;
> +
> +	qp->qplib_qp.scq = qp1_qp->scq;
> +	qp->qplib_qp.rcq = qp1_qp->rcq;
> +
> +	qp->qplib_qp.rq.max_wqe = qp1_qp->rq.max_wqe;
> +	qp->qplib_qp.rq.max_sge = qp1_qp->rq.max_sge;
> +
> +	qp->qplib_qp.mtu = qp1_qp->mtu;
> +
> +	qp->qplib_qp.sq_hdr_buf_size = 0;
> +	qp->qplib_qp.rq_hdr_buf_size = BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6;
> +	qp->qplib_qp.dpi = &rdev->dpi_privileged;
> +
> +	rc = bnxt_qplib_create_qp(qp1_res, &qp->qplib_qp);
> +	if (rc)
> +		goto fail;
> +
> +	rdev->sqp_id = qp->qplib_qp.id;
> +
> +	spin_lock_init(&qp->sq_lock);
> +	INIT_LIST_HEAD(&qp->list);
> +	mutex_lock(&rdev->qp_lock);
> +	list_add_tail(&qp->list, &rdev->qp_list);
> +	atomic_inc(&rdev->qp_count);
> +	mutex_unlock(&rdev->qp_lock);
> +	return qp;
> +fail:
> +	kfree(qp);
> +	return NULL;
> +}
> +
> +struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
> +				struct ib_qp_init_attr *qp_init_attr,
> +				struct ib_udata *udata)
> +{
> +	struct bnxt_re_pd *pd = to_bnxt_re(ib_pd, struct bnxt_re_pd, ib_pd);
> +	struct bnxt_re_dev *rdev = pd->rdev;
> +	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
> +	struct bnxt_re_qp *qp;
> +	struct bnxt_re_srq *srq;
> +	struct bnxt_re_cq *cq;
> +	int rc, entries;
> +
> +	if ((qp_init_attr->cap.max_send_wr > dev_attr->max_qp_wqes) ||
> +	    (qp_init_attr->cap.max_recv_wr > dev_attr->max_qp_wqes) ||
> +	    (qp_init_attr->cap.max_send_sge > dev_attr->max_qp_sges) ||
> +	    (qp_init_attr->cap.max_recv_sge > dev_attr->max_qp_sges) ||
> +	    (qp_init_attr->cap.max_inline_data > dev_attr->max_inline_data))
> +		return ERR_PTR(-EINVAL);
> +
> +	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
> +	if (!qp)
> +		return ERR_PTR(-ENOMEM);
> +
> +	qp->rdev = rdev;
> +	ether_addr_copy(qp->qplib_qp.smac, rdev->netdev->dev_addr);
> +	qp->qplib_qp.pd = &pd->qplib_pd;
> +	qp->qplib_qp.qp_handle = (u64)&qp->qplib_qp;
> +	qp->qplib_qp.type = __from_ib_qp_type(qp_init_attr->qp_type);
> +	if (qp->qplib_qp.type == IB_QPT_MAX) {
> +		dev_err(rdev_to_dev(rdev), "QP type 0x%x not supported",
> +			qp->qplib_qp.type);
> +		rc = -EINVAL;
> +		goto fail;
> +	}
> +	qp->qplib_qp.max_inline_data = qp_init_attr->cap.max_inline_data;
> +	qp->qplib_qp.sig_type = ((qp_init_attr->sq_sig_type ==
> +				  IB_SIGNAL_ALL_WR) ? true : false);
> +
> +	entries = roundup_pow_of_two(qp_init_attr->cap.max_send_wr + 1);
> +	if (entries > dev_attr->max_qp_wqes + 1)
> +		entries = dev_attr->max_qp_wqes + 1;
> +	qp->qplib_qp.sq.max_wqe = entries;
> +
> +	qp->qplib_qp.sq.max_sge = qp_init_attr->cap.max_send_sge;
> +	if (qp->qplib_qp.sq.max_sge > dev_attr->max_qp_sges)
> +		qp->qplib_qp.sq.max_sge = dev_attr->max_qp_sges;
> +
> +	if (qp_init_attr->send_cq) {
> +		cq = to_bnxt_re(qp_init_attr->send_cq, struct bnxt_re_cq,
> +				ib_cq);
> +		if (!cq) {
> +			dev_err(rdev_to_dev(rdev), "Send CQ not found");
> +			rc = -EINVAL;
> +			goto fail;
> +		}
> +		qp->qplib_qp.scq = &cq->qplib_cq;
> +	}
> +
> +	if (qp_init_attr->recv_cq) {
> +		cq = to_bnxt_re(qp_init_attr->recv_cq, struct bnxt_re_cq,
> +				ib_cq);
> +		if (!cq) {
> +			dev_err(rdev_to_dev(rdev), "Receive CQ not found");
> +			rc = -EINVAL;
> +			goto fail;
> +		}
> +		qp->qplib_qp.rcq = &cq->qplib_cq;
> +	}
> +
> +	if (qp_init_attr->srq) {
> +		dev_err(rdev_to_dev(rdev), "SRQ not supported");
> +		rc = -ENOTSUPP;
> +		goto fail;
> +	} else {
> +		/* Allocate 1 more than what's provided so that a full
> +		 * queue is never mistaken for an empty one
> +		 */
> +		entries = roundup_pow_of_two(qp_init_attr->cap.max_recv_wr + 1);
> +		if (entries > dev_attr->max_qp_wqes + 1)
> +			entries = dev_attr->max_qp_wqes + 1;
> +		qp->qplib_qp.rq.max_wqe = entries;
> +
> +		qp->qplib_qp.rq.max_sge = qp_init_attr->cap.max_recv_sge;
> +		if (qp->qplib_qp.rq.max_sge > dev_attr->max_qp_sges)
> +			qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
> +	}
> +
> +	qp->qplib_qp.mtu = ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
> +
> +	if (qp_init_attr->qp_type == IB_QPT_GSI) {
> +		qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
> +		if (qp->qplib_qp.rq.max_sge > dev_attr->max_qp_sges)
> +			qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
> +		qp->qplib_qp.sq.max_sge++;
> +		if (qp->qplib_qp.sq.max_sge > dev_attr->max_qp_sges)
> +			qp->qplib_qp.sq.max_sge = dev_attr->max_qp_sges;
> +
> +		qp->qplib_qp.rq_hdr_buf_size =
> +					BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2;
> +
> +		qp->qplib_qp.sq_hdr_buf_size =
> +					BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2;
> +		qp->qplib_qp.dpi = &rdev->dpi_privileged;
> +		rc = bnxt_qplib_create_qp1(&rdev->qplib_res, &qp->qplib_qp);
> +		if (rc) {
> +			dev_err(rdev_to_dev(rdev), "Failed to create HW QP1");
> +			goto fail;
> +		}
> +		/* Create a shadow QP to handle the QP1 traffic */
> +		rdev->qp1_sqp = bnxt_re_create_shadow_qp(pd, &rdev->qplib_res,
> +							 &qp->qplib_qp);
> +		if (!rdev->qp1_sqp) {
> +			rc = -EINVAL;
> +			dev_err(rdev_to_dev(rdev),
> +				"Failed to create Shadow QP for QP1");
> +			goto qp_destroy;
> +		}
> +		rdev->sqp_ah = bnxt_re_create_shadow_qp_ah(pd, &rdev->qplib_res,
> +							   &qp->qplib_qp);
> +		if (!rdev->sqp_ah) {
> +			bnxt_qplib_destroy_qp(&rdev->qplib_res,
> +					      &rdev->qp1_sqp->qplib_qp);
> +			rc = -EINVAL;
> +			dev_err(rdev_to_dev(rdev),
> +				"Failed to create AH entry for ShadowQP");
> +			goto qp_destroy;
> +		}
> +
> +	} else {
> +		qp->qplib_qp.max_rd_atomic = dev_attr->max_qp_rd_atom;
> +		qp->qplib_qp.max_dest_rd_atomic = dev_attr->max_qp_init_rd_atom;
> +		if (udata) {
> +			rc = bnxt_re_init_user_qp(rdev, pd, qp, udata);
> +			if (rc)
> +				goto fail;
> +		} else {
> +			qp->qplib_qp.dpi = &rdev->dpi_privileged;
> +		}
> +
> +		rc = bnxt_qplib_create_qp(&rdev->qplib_res, &qp->qplib_qp);
> +		if (rc) {
> +			dev_err(rdev_to_dev(rdev), "Failed to create HW QP");
> +			goto fail;
> +		}
> +	}
> +
> +	qp->ib_qp.qp_num = qp->qplib_qp.id;
> +	spin_lock_init(&qp->sq_lock);
> +
> +	if (udata) {
> +		struct bnxt_re_qp_resp resp;
> +
> +		resp.qpid = qp->ib_qp.qp_num;
> +		rc = bnxt_re_copy_to_udata(rdev, &resp, sizeof(resp), udata);
> +		if (rc) {
> +			dev_err(rdev_to_dev(rdev), "Failed to copy QP udata");
> +			goto qp_destroy;
> +		}
> +	}
> +	INIT_LIST_HEAD(&qp->list);
> +	mutex_lock(&rdev->qp_lock);
> +	list_add_tail(&qp->list, &rdev->qp_list);
> +	atomic_inc(&rdev->qp_count);
> +	mutex_unlock(&rdev->qp_lock);
> +
> +	return &qp->ib_qp;
> +qp_destroy:
> +	bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
> +fail:
> +	kfree(qp);
> +	return ERR_PTR(rc);
> +}
> +
> +static u8 __from_ib_qp_state(enum ib_qp_state state)
> +{
> +	switch (state) {
> +	case IB_QPS_RESET:
> +		return CMDQ_MODIFY_QP_NEW_STATE_RESET;
> +	case IB_QPS_INIT:
> +		return CMDQ_MODIFY_QP_NEW_STATE_INIT;
> +	case IB_QPS_RTR:
> +		return CMDQ_MODIFY_QP_NEW_STATE_RTR;
> +	case IB_QPS_RTS:
> +		return CMDQ_MODIFY_QP_NEW_STATE_RTS;
> +	case IB_QPS_SQD:
> +		return CMDQ_MODIFY_QP_NEW_STATE_SQD;
> +	case IB_QPS_SQE:
> +		return CMDQ_MODIFY_QP_NEW_STATE_SQE;
> +	case IB_QPS_ERR:
> +	default:
> +		return CMDQ_MODIFY_QP_NEW_STATE_ERR;
> +	}
> +}
> +
> +static enum ib_qp_state __to_ib_qp_state(u8 state)
> +{
> +	switch (state) {
> +	case CMDQ_MODIFY_QP_NEW_STATE_RESET:
> +		return IB_QPS_RESET;
> +	case CMDQ_MODIFY_QP_NEW_STATE_INIT:
> +		return IB_QPS_INIT;
> +	case CMDQ_MODIFY_QP_NEW_STATE_RTR:
> +		return IB_QPS_RTR;
> +	case CMDQ_MODIFY_QP_NEW_STATE_RTS:
> +		return IB_QPS_RTS;
> +	case CMDQ_MODIFY_QP_NEW_STATE_SQD:
> +		return IB_QPS_SQD;
> +	case CMDQ_MODIFY_QP_NEW_STATE_SQE:
> +		return IB_QPS_SQE;
> +	case CMDQ_MODIFY_QP_NEW_STATE_ERR:
> +	default:
> +		return IB_QPS_ERR;
> +	}
> +}
> +
> +static u32 __from_ib_mtu(enum ib_mtu mtu)
> +{
> +	switch (mtu) {
> +	case IB_MTU_256:
> +		return CMDQ_MODIFY_QP_PATH_MTU_MTU_256;
> +	case IB_MTU_512:
> +		return CMDQ_MODIFY_QP_PATH_MTU_MTU_512;
> +	case IB_MTU_1024:
> +		return CMDQ_MODIFY_QP_PATH_MTU_MTU_1024;
> +	case IB_MTU_2048:
> +		return CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
> +	case IB_MTU_4096:
> +		return CMDQ_MODIFY_QP_PATH_MTU_MTU_4096;
> +	default:
> +		return CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
> +	}
> +}
> +
> +static enum ib_mtu __to_ib_mtu(u32 mtu)
> +{
> +	switch (mtu & CREQ_QUERY_QP_RESP_SB_PATH_MTU_MASK) {
> +	case CMDQ_MODIFY_QP_PATH_MTU_MTU_256:
> +		return IB_MTU_256;
> +	case CMDQ_MODIFY_QP_PATH_MTU_MTU_512:
> +		return IB_MTU_512;
> +	case CMDQ_MODIFY_QP_PATH_MTU_MTU_1024:
> +		return IB_MTU_1024;
> +	case CMDQ_MODIFY_QP_PATH_MTU_MTU_2048:
> +		return IB_MTU_2048;
> +	case CMDQ_MODIFY_QP_PATH_MTU_MTU_4096:
> +		return IB_MTU_4096;
> +	default:
> +		return IB_MTU_2048;
> +	}
> +}
> +
>  static int __from_ib_access_flags(int iflags)
>  {
>  	int qflags = 0;
> @@ -690,6 +1165,293 @@ static enum ib_access_flags __to_ib_access_flags(int qflags)
>  		iflags |= IB_ACCESS_ON_DEMAND;
>  	return iflags;
>  };
> +
> +static int bnxt_re_modify_shadow_qp(struct bnxt_re_dev *rdev,
> +			     struct bnxt_re_qp *qp1_qp,
> +			     int qp_attr_mask)
> +{
> +	struct bnxt_re_qp *qp = rdev->qp1_sqp;
> +	int rc = 0;
> +
> +	if (qp_attr_mask & IB_QP_STATE) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_STATE;
> +		qp->qplib_qp.state = qp1_qp->qplib_qp.state;
> +	}
> +	if (qp_attr_mask & IB_QP_PKEY_INDEX) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY;
> +		qp->qplib_qp.pkey_index = qp1_qp->qplib_qp.pkey_index;
> +	}
> +
> +	if (qp_attr_mask & IB_QP_QKEY) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_QKEY;
> +		/* Use a random QKEY */
> +		qp->qplib_qp.qkey = 0x81818181;
> +	}
> +	if (qp_attr_mask & IB_QP_SQ_PSN) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN;
> +		qp->qplib_qp.sq.psn = qp1_qp->qplib_qp.sq.psn;
> +	}
> +
> +	rc = bnxt_qplib_modify_qp(&rdev->qplib_res, &qp->qplib_qp);
> +	if (rc)
> +		dev_err(rdev_to_dev(rdev),
> +			"Failed to modify Shadow QP for QP1");
> +	return rc;
> +}
> +
> +int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
> +		      int qp_attr_mask, struct ib_udata *udata)
> +{
> +	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
> +	struct bnxt_re_dev *rdev = qp->rdev;
> +	struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
> +	enum ib_qp_state curr_qp_state, new_qp_state;
> +	int rc, entries;
> +	int status;
> +	union ib_gid sgid;
> +	struct ib_gid_attr sgid_attr;
> +	u8 nw_type;
> +
> +	qp->qplib_qp.modify_flags = 0;
> +	if (qp_attr_mask & IB_QP_STATE) {
> +		curr_qp_state = __to_ib_qp_state(qp->qplib_qp.cur_qp_state);
> +		new_qp_state = qp_attr->qp_state;
> +		if (!ib_modify_qp_is_ok(curr_qp_state, new_qp_state,
> +					ib_qp->qp_type, qp_attr_mask,
> +					IB_LINK_LAYER_ETHERNET)) {
> +			dev_err(rdev_to_dev(rdev),
> +				"Invalid attribute mask: %#x specified ",
> +				qp_attr_mask);
> +			dev_err(rdev_to_dev(rdev),
> +				"for qpn: %#x type: %#x",
> +				ib_qp->qp_num, ib_qp->qp_type);
> +			dev_err(rdev_to_dev(rdev),
> +				"curr_qp_state=0x%x, new_qp_state=0x%x\n",
> +				curr_qp_state, new_qp_state);
> +			return -EINVAL;
> +		}
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_STATE;
> +		qp->qplib_qp.state = __from_ib_qp_state(qp_attr->qp_state);
> +	}
> +	if (qp_attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_EN_SQD_ASYNC_NOTIFY;
> +		qp->qplib_qp.en_sqd_async_notify = true;
> +	}
> +	if (qp_attr_mask & IB_QP_ACCESS_FLAGS) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_ACCESS;
> +		qp->qplib_qp.access =
> +			__from_ib_access_flags(qp_attr->qp_access_flags);
> +		/* LOCAL_WRITE access must be set to allow RC receive */
> +		qp->qplib_qp.access |= BNXT_QPLIB_ACCESS_LOCAL_WRITE;
> +	}
> +	if (qp_attr_mask & IB_QP_PKEY_INDEX) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY;
> +		qp->qplib_qp.pkey_index = qp_attr->pkey_index;
> +	}
> +	if (qp_attr_mask & IB_QP_QKEY) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_QKEY;
> +		qp->qplib_qp.qkey = qp_attr->qkey;
> +	}
> +	if (qp_attr_mask & IB_QP_AV) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_DGID |
> +				     CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL |
> +				     CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX |
> +				     CMDQ_MODIFY_QP_MODIFY_MASK_HOP_LIMIT |
> +				     CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS |
> +				     CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC |
> +				     CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID;
> +		memcpy(qp->qplib_qp.ah.dgid.data, qp_attr->ah_attr.grh.dgid.raw,
> +		       sizeof(qp->qplib_qp.ah.dgid.data));
> +		qp->qplib_qp.ah.flow_label = qp_attr->ah_attr.grh.flow_label;
> +		/* If RoCE V2 is enabled, the stack will have two entries
> +		 * for each GID entry. Avoid this duplicate entry in HW by
> +		 * dividing the GID index by 2 for RoCE V2
> +		 */
> +		qp->qplib_qp.ah.sgid_index =
> +					qp_attr->ah_attr.grh.sgid_index / 2;
> +		qp->qplib_qp.ah.host_sgid_index =
> +					qp_attr->ah_attr.grh.sgid_index;
> +		qp->qplib_qp.ah.hop_limit = qp_attr->ah_attr.grh.hop_limit;
> +		qp->qplib_qp.ah.traffic_class =
> +					qp_attr->ah_attr.grh.traffic_class;
> +		qp->qplib_qp.ah.sl = qp_attr->ah_attr.sl;
> +		ether_addr_copy(qp->qplib_qp.ah.dmac, qp_attr->ah_attr.dmac);
> +
> +		status = ib_get_cached_gid(&rdev->ibdev, 1,
> +					   qp_attr->ah_attr.grh.sgid_index,
> +					   &sgid, &sgid_attr);
> +		if (!status && sgid_attr.ndev) {
> +			memcpy(qp->qplib_qp.smac, sgid_attr.ndev->dev_addr,
> +			       ETH_ALEN);
> +			dev_put(sgid_attr.ndev);
> +			nw_type = ib_gid_to_network_type(sgid_attr.gid_type,
> +							 &sgid);
> +			switch (nw_type) {
> +			case RDMA_NETWORK_IPV4:
> +				qp->qplib_qp.nw_type =
> +					CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV2_IPV4;
> +				break;
> +			case RDMA_NETWORK_IPV6:
> +				qp->qplib_qp.nw_type =
> +					CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV2_IPV6;
> +				break;
> +			default:
> +				qp->qplib_qp.nw_type =
> +					CMDQ_MODIFY_QP_NETWORK_TYPE_ROCEV1;
> +				break;
> +			}
> +		}
> +	}
> +
> +	if (qp_attr_mask & IB_QP_PATH_MTU) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
> +		qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu);
> +	} else if (qp_attr->qp_state == IB_QPS_RTR) {
> +		qp->qplib_qp.modify_flags |=
> +			CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
> +		qp->qplib_qp.path_mtu =
> +			__from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
> +	}
> +
> +	if (qp_attr_mask & IB_QP_TIMEOUT) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_TIMEOUT;
> +		qp->qplib_qp.timeout = qp_attr->timeout;
> +	}
> +	if (qp_attr_mask & IB_QP_RETRY_CNT) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_RETRY_CNT;
> +		qp->qplib_qp.retry_cnt = qp_attr->retry_cnt;
> +	}
> +	if (qp_attr_mask & IB_QP_RNR_RETRY) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_RNR_RETRY;
> +		qp->qplib_qp.rnr_retry = qp_attr->rnr_retry;
> +	}
> +	if (qp_attr_mask & IB_QP_MIN_RNR_TIMER) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER;
> +		qp->qplib_qp.min_rnr_timer = qp_attr->min_rnr_timer;
> +	}
> +	if (qp_attr_mask & IB_QP_RQ_PSN) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_RQ_PSN;
> +		qp->qplib_qp.rq.psn = qp_attr->rq_psn;
> +	}
> +	if (qp_attr_mask & IB_QP_MAX_QP_RD_ATOMIC) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_MAX_RD_ATOMIC;
> +		qp->qplib_qp.max_rd_atomic = qp_attr->max_rd_atomic;
> +	}
> +	if (qp_attr_mask & IB_QP_SQ_PSN) {
> +		qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_SQ_PSN;
> +		qp->qplib_qp.sq.psn = qp_attr->sq_psn;
> +	}
> +	if (qp_attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_MAX_DEST_RD_ATOMIC;
> +		qp->qplib_qp.max_dest_rd_atomic = qp_attr->max_dest_rd_atomic;
> +	}
> +	if (qp_attr_mask & IB_QP_CAP) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_SQ_SIZE |
> +				CMDQ_MODIFY_QP_MODIFY_MASK_RQ_SIZE |
> +				CMDQ_MODIFY_QP_MODIFY_MASK_SQ_SGE |
> +				CMDQ_MODIFY_QP_MODIFY_MASK_RQ_SGE |
> +				CMDQ_MODIFY_QP_MODIFY_MASK_MAX_INLINE_DATA;
> +		if ((qp_attr->cap.max_send_wr >= dev_attr->max_qp_wqes) ||
> +		    (qp_attr->cap.max_recv_wr >= dev_attr->max_qp_wqes) ||
> +		    (qp_attr->cap.max_send_sge >= dev_attr->max_qp_sges) ||
> +		    (qp_attr->cap.max_recv_sge >= dev_attr->max_qp_sges) ||
> +		    (qp_attr->cap.max_inline_data >=
> +						dev_attr->max_inline_data)) {
> +			dev_err(rdev_to_dev(rdev),
> +				"Create QP failed - max exceeded");
> +			return -EINVAL;
> +		}
> +		entries = roundup_pow_of_two(qp_attr->cap.max_send_wr);
> +		if (entries > dev_attr->max_qp_wqes)
> +			entries = dev_attr->max_qp_wqes;
> +		qp->qplib_qp.sq.max_wqe = entries;
> +		qp->qplib_qp.sq.max_sge = qp_attr->cap.max_send_sge;
> +		if (qp->qplib_qp.rq.max_wqe) {
> +			entries = roundup_pow_of_two(qp_attr->cap.max_recv_wr);
> +			if (entries > dev_attr->max_qp_wqes)
> +				entries = dev_attr->max_qp_wqes;
> +			qp->qplib_qp.rq.max_wqe = entries;
> +			qp->qplib_qp.rq.max_sge = qp_attr->cap.max_recv_sge;
> +		} else {
> +			/* SRQ was used earlier; just ignore the RQ caps */
> +		}
> +	}
> +	if (qp_attr_mask & IB_QP_DEST_QPN) {
> +		qp->qplib_qp.modify_flags |=
> +				CMDQ_MODIFY_QP_MODIFY_MASK_DEST_QP_ID;
> +		qp->qplib_qp.dest_qpn = qp_attr->dest_qp_num;
> +	}
> +	rc = bnxt_qplib_modify_qp(&rdev->qplib_res, &qp->qplib_qp);
> +	if (rc) {
> +		dev_err(rdev_to_dev(rdev), "Failed to modify HW QP");
> +		return rc;
> +	}
> +	if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp)
> +		rc = bnxt_re_modify_shadow_qp(rdev, qp, qp_attr_mask);
> +	return rc;
> +}
> +
> +int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
> +		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
> +{
> +	struct bnxt_re_qp *qp = to_bnxt_re(ib_qp, struct bnxt_re_qp, ib_qp);
> +	struct bnxt_re_dev *rdev = qp->rdev;
> +	struct bnxt_qplib_qp qplib_qp;
> +	int rc;
> +
> +	memset(&qplib_qp, 0, sizeof(struct bnxt_qplib_qp));
> +	qplib_qp.id = qp->qplib_qp.id;
> +	qplib_qp.ah.host_sgid_index = qp->qplib_qp.ah.host_sgid_index;
> +
> +	rc = bnxt_qplib_query_qp(&rdev->qplib_res, &qplib_qp);
> +	if (rc) {
> +		dev_err(rdev_to_dev(rdev), "Failed to query HW QP");
> +		return rc;
> +	}
> +	qp_attr->qp_state = __to_ib_qp_state(qplib_qp.state);
> +	qp_attr->en_sqd_async_notify = qplib_qp.en_sqd_async_notify ? 1 : 0;
> +	qp_attr->qp_access_flags = __to_ib_access_flags(qplib_qp.access);
> +	qp_attr->pkey_index = qplib_qp.pkey_index;
> +	qp_attr->qkey = qplib_qp.qkey;
> +	memcpy(qp_attr->ah_attr.grh.dgid.raw, qplib_qp.ah.dgid.data,
> +	       sizeof(qplib_qp.ah.dgid.data));
> +	qp_attr->ah_attr.grh.flow_label = qplib_qp.ah.flow_label;
> +	qp_attr->ah_attr.grh.sgid_index = qplib_qp.ah.host_sgid_index;
> +	qp_attr->ah_attr.grh.hop_limit = qplib_qp.ah.hop_limit;
> +	qp_attr->ah_attr.grh.traffic_class = qplib_qp.ah.traffic_class;
> +	qp_attr->ah_attr.sl = qplib_qp.ah.sl;
> +	ether_addr_copy(qp_attr->ah_attr.dmac, qplib_qp.ah.dmac);
> +	qp_attr->path_mtu = __to_ib_mtu(qplib_qp.path_mtu);
> +	qp_attr->timeout = qplib_qp.timeout;
> +	qp_attr->retry_cnt = qplib_qp.retry_cnt;
> +	qp_attr->rnr_retry = qplib_qp.rnr_retry;
> +	qp_attr->min_rnr_timer = qplib_qp.min_rnr_timer;
> +	qp_attr->rq_psn = qplib_qp.rq.psn;
> +	qp_attr->max_rd_atomic = qplib_qp.max_rd_atomic;
> +	qp_attr->sq_psn = qplib_qp.sq.psn;
> +	qp_attr->max_dest_rd_atomic = qplib_qp.max_dest_rd_atomic;
> +	qp_init_attr->sq_sig_type = qplib_qp.sig_type ? IB_SIGNAL_ALL_WR :
> +							IB_SIGNAL_REQ_WR;
> +	qp_attr->dest_qp_num = qplib_qp.dest_qpn;
> +
> +	qp_attr->cap.max_send_wr = qp->qplib_qp.sq.max_wqe;
> +	qp_attr->cap.max_send_sge = qp->qplib_qp.sq.max_sge;
> +	qp_attr->cap.max_recv_wr = qp->qplib_qp.rq.max_wqe;
> +	qp_attr->cap.max_recv_sge = qp->qplib_qp.rq.max_sge;
> +	qp_attr->cap.max_inline_data = qp->qplib_qp.max_inline_data;
> +	qp_init_attr->cap = qp_attr->cap;
> +
> +	return 0;
> +}
> +
>  /* Completion Queues */
>  int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
>  {
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
> index ba9a4c9..75ee88a 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h
> @@ -57,6 +57,19 @@ struct bnxt_re_ah {
>  	struct bnxt_qplib_ah	qplib_ah;
>  };
>
> +struct bnxt_re_qp {
> +	struct list_head	list;
> +	struct bnxt_re_dev	*rdev;
> +	struct ib_qp		ib_qp;
> +	spinlock_t		sq_lock;	/* protect sq */
> +	struct bnxt_qplib_qp	qplib_qp;
> +	struct ib_umem		*sumem;
> +	struct ib_umem		*rumem;
> +	/* QP1 */
> +	u32			send_psn;
> +	struct ib_ud_header	qp1_hdr;
> +};
> +
>  struct bnxt_re_cq {
>  	struct bnxt_re_dev	*rdev;
>  	spinlock_t              cq_lock;	/* protect cq */
> @@ -141,6 +154,14 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *pd,
>  int bnxt_re_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
>  int bnxt_re_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
>  int bnxt_re_destroy_ah(struct ib_ah *ah);
> +struct ib_qp *bnxt_re_create_qp(struct ib_pd *pd,
> +				struct ib_qp_init_attr *qp_init_attr,
> +				struct ib_udata *udata);
> +int bnxt_re_modify_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
> +		      int qp_attr_mask, struct ib_udata *udata);
> +int bnxt_re_query_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
> +		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr);
> +int bnxt_re_destroy_qp(struct ib_qp *qp);
>  struct ib_cq *bnxt_re_create_cq(struct ib_device *ibdev,
>  				const struct ib_cq_init_attr *attr,
>  				struct ib_ucontext *context,
> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
> index 3d1504e..5facacc 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_re_main.c
> @@ -445,6 +445,12 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
>  	ibdev->modify_ah		= bnxt_re_modify_ah;
>  	ibdev->query_ah			= bnxt_re_query_ah;
>  	ibdev->destroy_ah		= bnxt_re_destroy_ah;
> +
> +	ibdev->create_qp		= bnxt_re_create_qp;
> +	ibdev->modify_qp		= bnxt_re_modify_qp;
> +	ibdev->query_qp			= bnxt_re_query_qp;
> +	ibdev->destroy_qp		= bnxt_re_destroy_qp;
> +
>  	ibdev->create_cq		= bnxt_re_create_cq;
>  	ibdev->destroy_cq		= bnxt_re_destroy_cq;
>  	ibdev->req_notify_cq		= bnxt_re_req_notify_cq;
> diff --git a/include/uapi/rdma/bnxt_re_uverbs_abi.h b/include/uapi/rdma/bnxt_re_uverbs_abi.h
> index 5444eff..e6732f8 100644
> --- a/include/uapi/rdma/bnxt_re_uverbs_abi.h
> +++ b/include/uapi/rdma/bnxt_re_uverbs_abi.h
> @@ -66,6 +66,16 @@ struct bnxt_re_cq_resp {
>  	__u32 phase;
>  } __packed;
>
> +struct bnxt_re_qp_req {
> +	__u64 qpsva;
> +	__u64 qprva;
> +	__u64 qp_handle;
> +} __packed;
> +
> +struct bnxt_re_qp_resp {
> +	__u32 qpid;
> +} __packed;
> +
>  enum bnxt_re_shpg_offt {
>  	BNXT_RE_BEG_RESV_OFFT	= 0x00,
>  	BNXT_RE_AVID_OFFT	= 0x10,
> --
> 2.5.5
>

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 10/22] bnxt_re: Support for CQ verbs
       [not found]     ` <1481266096-23331-11-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
@ 2016-12-12 21:03       ` Jonathan Toppins
  0 siblings, 0 replies; 46+ messages in thread
From: Jonathan Toppins @ 2016-12-12 21:03 UTC (permalink / raw)
  To: Selvin Xavier, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, Eddie Wai, Devesh Sharma,
	Somnath Kotur, Sriharsha Basavapatna

On 12/09/2016 01:48 AM, Selvin Xavier wrote:
> This patch implements support for create_cq, destroy_cq and req_notify_cq
> verbs.
> 
> Signed-off-by: Eddie Wai <eddie.wai-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Selvin Xavier <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
> ---
>  drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c    | 183 ++++++++++++++++++++++++
>  drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.h    |  47 ++++++
>  drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c | 154 ++++++++++++++++++++
>  drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.h |  19 +++
>  drivers/infiniband/hw/bnxtre/bnxt_re_main.c     |   4 +
>  include/uapi/rdma/bnxt_re_uverbs_abi.h          |  11 ++
>  6 files changed, 418 insertions(+)

Something I just realized: this patch series does not modify the
MAINTAINERS file. Who from Broadcom will be maintaining this driver?
You probably want to include this info in the v3 series.
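
Something like the following stanza, for example (the M: line below is
only a placeholder; fill in whoever will actually own the driver):

BROADCOM NETXTREME-E ROCE DRIVER
M:	Firstname Lastname <someone@broadcom.com>
L:	linux-rdma@vger.kernel.org
S:	Supported
F:	drivers/infiniband/hw/bnxtre/
F:	include/uapi/rdma/bnxt_re_uverbs_abi.h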

[...]

> diff --git a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
> index 3417829..f316598 100644
> --- a/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
> +++ b/drivers/infiniband/hw/bnxtre/bnxt_re_ib_verbs.c
> @@ -60,6 +60,16 @@
>  #include "bnxt_re_ib_verbs.h"
>  #include <rdma/bnxt_re_uverbs_abi.h>
>  
> +static int bnxt_re_copy_to_udata(struct bnxt_re_dev *rdev, void *data, int len,
> +				 struct ib_udata *udata)
> +{
> +	int rc;
> +
> +	rc = ib_copy_to_udata(udata, data, len);
> +
> +	return rc ? -EFAULT : 0;
> +}

This function seems to provide no value by wrapping ib_copy_to_udata;
is there any reason to keep it? From its two call sites, it looks like
it can be replaced with a direct call to ib_copy_to_udata.
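
I.e. at the bnxt_re_create_qp() call site, something like this untested
sketch, reusing the resp/rc/qp_destroy logic already there:

	if (ib_copy_to_udata(udata, &resp, sizeof(resp))) {
		dev_err(rdev_to_dev(rdev), "Failed to copy QP udata");
		rc = -EFAULT;
		goto qp_destroy;
	}

ib_copy_to_udata() already returns -EFAULT on failure, so the wrapper's
error mapping buys nothing.
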
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
                     ` (4 preceding siblings ...)
  2016-12-12 16:54   ` Jonathan Toppins
@ 2016-12-12 23:52   ` Doug Ledford
  2016-12-13  3:54     ` Selvin Xavier
       [not found]     ` <23e26353-4317-2836-9f94-d1fc3274a770-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  5 siblings, 2 replies; 46+ messages in thread
From: Doug Ledford @ 2016-12-12 23:52 UTC (permalink / raw)
  To: Selvin Xavier, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA


[-- Attachment #1.1: Type: text/plain, Size: 1282 bytes --]

On 12/9/2016 1:47 AM, Selvin Xavier wrote:
> This series introduces the RoCE driver for the Broadcom
> NetXtreme-E 10/25/40/50 gigabit RoCE HCAs. 
> This driver is dependent on the bnxt_en NIC driver and is 
> based on the bnxt_re branch in Doug's repository. bnxt_en changes
> required for this patch series is already available in this branch.
> 
> I am preparing a git repository with these changes as per Jason's
> comment and will share the details later today.
> 
> v1-> v2:
>   * The license text in each file updated to reflect Dual license.
>   * Makefile and Kconfig changes are pushed to the last patch
>   * Moved bnxt_re_uverbs_abi.h to include/uapi/rdma folder
>   * Remove duplicate structure definitions from bnxt_re_hsi.h as
>     it is available in the corresponding bnxt_en header file (bnxt_hsi.h)
>   * Removed some unused code reported during code review.
>   * Fixed few sparse warnings
> 
> Doug,
> Please review and consider applying this to linux-rdma repository.

There are still outstanding review comments to be addressed, and the
v2 patchset doesn't compile for me in 0day testing.  I'm going to bounce
this one to 4.11.


-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG Key ID: 0E572FDD


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 884 bytes --]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
  2016-12-12 23:52   ` Doug Ledford
@ 2016-12-13  3:54     ` Selvin Xavier
       [not found]     ` <23e26353-4317-2836-9f94-d1fc3274a770-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-13  3:54 UTC (permalink / raw)
  To: Doug Ledford; +Cc: linux-rdma, netdev

On Tue, Dec 13, 2016 at 5:22 AM, Doug Ledford <dledford@redhat.com> wrote:
>
> There are outstanding review comments to be addressed still yet, and the
> v2 patchset doesn't compile for me in 0day testing.  I'm going to bounce
> this one to 4.11.

I will address all review comments and fix the 0day compilation error
and post a v3 soon.

Thanks,
Selvin Xavier

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found]     ` <9cf03e2b-a16d-19ec-a8ce-14f24272bf6a-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-13  4:52       ` Selvin Xavier
  2016-12-13  6:41         ` Michael Chan
  0 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-13  4:52 UTC (permalink / raw)
  To: jtoppins-H+wXaHxf7aLQT0dZR+AlfA
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	netdev-u79uwXL29TY76Z2rM5mHXA

On Mon, Dec 12, 2016 at 10:24 PM, Jonathan Toppins <jtoppins-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> CHECK   drivers/infiniband/hw/bnxtre/bnxt_re_debugfs.c
>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c
> drivers/infiniband/hw/bnxtre/bnxt_qplib_res.c:729:6: warning: symbol
> 'bnxt_qplib_cleanup_pkey_tbl' was not declared. Should it be static?
I will fix this warning in the v3 patch set.

>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
> drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1015:22: warning: context
> imbalance in 'bnxt_qplib_lock_cqs' - wrong count at exit
> drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1030:28: warning: context
> imbalance in 'bnxt_qplib_unlock_cqs' - unexpected unlock
The above two are false positives, since the locking and unlocking are
deliberately handled in two separate functions: these are wrappers that
lock/unlock both the SQ and RQ CQ locks. Functionally it is fine, since
bnxt_qplib_unlock_cqs() is called right after the critical section and
both locks are released in the correct order. I think we can ignore
this warning.
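
That said, if we do want to keep sparse quiet, the usual trick is to
annotate the helpers with __acquires()/__releases() and balance the
conditional branch with __acquire()/__release(). An untested sketch
against the current helpers:

static unsigned long bnxt_qplib_lock_cqs(struct bnxt_qplib_qp *qp)
	__acquires(&qp->scq->hwq.lock) __acquires(&qp->rcq->hwq.lock)
{
	unsigned long flags;

	spin_lock_irqsave(&qp->scq->hwq.lock, flags);
	if (qp->rcq && qp->rcq != qp->scq)
		spin_lock(&qp->rcq->hwq.lock);
	else
		/* no-op at runtime, balances the lock count for sparse */
		__acquire(&qp->rcq->hwq.lock);

	return flags;
}

static void bnxt_qplib_unlock_cqs(struct bnxt_qplib_qp *qp,
				  unsigned long flags)
	__releases(&qp->scq->hwq.lock) __releases(&qp->rcq->hwq.lock)
{
	if (qp->rcq && qp->rcq != qp->scq)
		spin_unlock(&qp->rcq->hwq.lock);
	else
		__release(&qp->rcq->hwq.lock);
	spin_unlock_irqrestore(&qp->scq->hwq.lock, flags);
}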


>   MODPOST 2 modules
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found]         ` <20161212170701.GA28387-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2016-12-13  6:04           ` Selvin Xavier
  0 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-13  6:04 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	netdev-u79uwXL29TY76Z2rM5mHXA

On Mon, Dec 12, 2016 at 10:37 PM, Jason Gunthorpe
<jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org> wrote:
> On Sat, Dec 10, 2016 at 11:06:58AM +0530, Selvin Xavier wrote:
>> On Fri, Dec 9, 2016 at 12:17 PM, Selvin Xavier
>> <selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> wrote:
>> > I am preparing a git repository with these changes as per Jason's
>> > comment and will share the details later today.
>>
>> Please use bnxt_re branch in this git repository.
>>
>> https://github.com/Broadcom/linux-rdma-nxt.git
>
> Why are you using __packed in bnxt_re_uverbs_abi.h? That doesn't seem
> necessary. It is a good idea to make sure all those structures are a
> multiple of 64 bits (add explicit reserved fields), and make sure you
> test 32 bit verbs as well.

Will take care in v3.
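
For illustration, a minimal sketch of the struct layout Jason is asking
for; the struct and field names here are invented, not the driver's
actual ABI:

    /* Hypothetical example only: no __packed, naturally aligned
     * fields, and an explicit reserved field so sizeof() is a
     * multiple of 64 bits and identical for 32-bit and 64-bit
     * userspace.
     */
    #include <linux/types.h>

    struct bnxt_re_example_uctx_resp {
            __u32 dev_id;
            __u32 max_qp;           /* two __u32s: no implicit padding */
            __u64 comp_mask;        /* room for future extensions */
            __u32 pg_size;
            __u32 rsvd;             /* explicit pad to a 64-bit multiple */
    };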

>
> Why are you using debugfs just to export counters? Isn't the core code
> counter framework good enough?

I agree that some of the counters exported by this patch set (tx and
rx bytes/pkts etc.) can be exported through the core counter framework.
I will try adding this support in v3; if not, I will post it as a
separate patch. debugfs was introduced more for the future, in case any
HW-specific data needs to be displayed. As of now, it tracks only the
count of resources (CQs/MRs/QPs) active at any given point, so it is OK
to skip this patch from this series.
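
As a rough sketch, assuming the core framework meant here is the
rdma_hw_stats support in ib_verbs.h (the counter names, the rdev fields,
and the to_bnxt_re_dev() helper are all invented for illustration):

    #include <rdma/ib_verbs.h>

    static const char * const bnxt_re_stat_names[] = {
            "active_qps", "active_cqs", "active_mrs",
    };

    static struct rdma_hw_stats *
    bnxt_re_alloc_hw_stats(struct ib_device *ibdev, u8 port_num)
    {
            return rdma_alloc_hw_stats_struct(bnxt_re_stat_names,
                                              ARRAY_SIZE(bnxt_re_stat_names),
                                              RDMA_HW_STATS_DEFAULT_LIFESPAN);
    }

    static int bnxt_re_get_hw_stats(struct ib_device *ibdev,
                                    struct rdma_hw_stats *stats,
                                    u8 port, int index)
    {
            struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev);

            /* Field names assumed; fill from the driver's counters. */
            stats->value[0] = atomic_read(&rdev->qp_count);
            stats->value[1] = atomic_read(&rdev->cq_count);
            stats->value[2] = atomic_read(&rdev->mr_count);
            return ARRAY_SIZE(bnxt_re_stat_names);
    }

    /* at device registration:
     *      ibdev->alloc_hw_stats = bnxt_re_alloc_hw_stats;
     *      ibdev->get_hw_stats   = bnxt_re_get_hw_stats;
     */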

>
> Please try and avoid writing functions as defines (eg rdev_to_dev,
> to_bnxt_re, SQE_PG, RCFW_CMDQ_COOKIE, PTR_PG etc)
>
Sure, will take care in v3.
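
A small illustrative before/after for one of the macros Jason names
(the macro body is assumed; only the macro-to-inline conversion is the
point):

    /* before: no type checking, argument evaluated twice */
    #define rdev_to_dev(rdev)       ((rdev) ? &(rdev)->ibdev.dev : NULL)

    /* after: type-checked and evaluated once */
    static inline struct device *rdev_to_dev(struct bnxt_re_dev *rdev)
    {
            return rdev ? &rdev->ibdev.dev : NULL;
    }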

> There is something wrong with the tabs and spaces (see
> https://github.com/Broadcom/linux-rdma-nxt/blob/03e23b087f7e86ea28656273994e065827210ce5/drivers/infiniband/hw/bnxtre/bnxt_re_hsi.h)
>
> FWIW, I really dislike the column alignment style, it is so hard to
> maintain..
This file contains the macro defines for the FW/HW structures and is
auto-generated. Some of these auto-generated defines are very long,
which pushes the lines past 80 characters. I will fix whatever is
possible and include it in the v3 set.

>
> Jason

Thanks,
Selvin


* Re: [PATCH V2 13/22] bnxt_re: Support QP verbs
       [not found]       ` <20161212182737.GC8204-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2016-12-13  6:08         ` Selvin Xavier
  0 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-13  6:08 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	netdev-u79uwXL29TY76Z2rM5mHXA, Eddie Wai, Devesh Sharma,
	Somnath Kotur, Sriharsha Basavapatna

On Mon, Dec 12, 2016 at 11:57 PM, Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:
> It would help review if you break this function into smaller pieces
> and get rid of the switch->switch->if construction.

Thanks, Leon. I will address this and your previous comments in the v3 patch set.
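
For illustration, a sketch in the direction Leon suggests, with each arm
of the outer switch pulled into its own helper (all names, types, and
bodies here are invented):

    #include <rdma/ib_verbs.h>      /* enum ib_qp_type, IB_QPT_* */

    static int bnxt_re_init_rc_qp(struct bnxt_re_qp *qp)
    {
            /* RC-specific setup would go here */
            return 0;
    }

    static int bnxt_re_init_ud_qp(struct bnxt_re_qp *qp)
    {
            /* UD/GSI-specific setup would go here */
            return 0;
    }

    static int bnxt_re_init_qp_type(struct bnxt_re_qp *qp,
                                    enum ib_qp_type type)
    {
            switch (type) {
            case IB_QPT_RC:
                    return bnxt_re_init_rc_qp(qp);
            case IB_QPT_UD:
            case IB_QPT_GSI:
                    return bnxt_re_init_ud_qp(qp);
            default:
                    return -EOPNOTSUPP;
            }
    }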


* Re: [PATCH V2 18/22] bnxt_re: Support for DCB
  2016-12-10 13:50   ` Or Gerlitz
@ 2016-12-13  6:25     ` Selvin Xavier
       [not found]       ` <CA+sbYW1irBd0cTqJJSGJWRbBi-iFzvX3JpoTfF_daU47EqNtAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 46+ messages in thread
From: Selvin Xavier @ 2016-12-13  6:25 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Doug Ledford, linux-rdma, Linux Netdev List, Eddie Wai,
	Devesh Sharma, Somnath Kotur, Sriharsha Basavapatna

On Sat, Dec 10, 2016 at 7:20 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Fri, Dec 9, 2016 at 8:48 AM, Selvin Xavier
> <selvin.xavier@broadcom.com> wrote:
>> This patch queries the configured RoCE APP Priority on the host
>> using the dcbnl API and programs the RoCE FW with the corresponding
>> Traffic Class(es) for the priority.
>
>> +#define BNXT_RE_ROCE_V1_ETH_TYPE       0x8915
>> +#define BNXT_RE_ROCE_V2_PORT_NO                4791
>
> I believe these two are defined already, try # git grep on each under include

Thanks, Or, for your comments.
The RoCE v2 port number is already defined in ib_verbs.h; I will use
that definition in the next patch set. The RoCE v1 ethertype is not
defined in any shared header, so all vendor drivers carry their own
definition.

Thanks,
Selvin


* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
  2016-12-13  4:52       ` Selvin Xavier
@ 2016-12-13  6:41         ` Michael Chan
  0 siblings, 0 replies; 46+ messages in thread
From: Michael Chan @ 2016-12-13  6:41 UTC (permalink / raw)
  To: Selvin Xavier; +Cc: jtoppins, Doug Ledford, linux-rdma, Netdev

On Mon, Dec 12, 2016 at 8:52 PM, Selvin Xavier
<selvin.xavier@broadcom.com> wrote:

>>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_rcfw.c
>>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_sp.c
>>   CHECK   drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c
>> drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1015:22: warning: context
>> imbalance in 'bnxt_qplib_lock_cqs' - wrong count at exit
>> drivers/infiniband/hw/bnxtre/bnxt_qplib_fp.c:1030:28: warning: context
>> imbalance in 'bnxt_qplib_unlock_cqs' - unexpected unlock
> The above two are false positives, since the locking and unlocking are
> handled in two separate functions. These are wrappers that lock/unlock
> both the SQ and RQ CQ locks. Functionally it is OK, since
> bnxt_qplib_unlock_cqs is called just after the critical section and
> both locks are released in order. I think we can ignore this warning.
>
>
You can use the __releases() and __acquires() macros to annotate these
cases for sparse.
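
For illustration, a minimal sketch of those annotations on the two
wrappers (the argument list and the lock field names are assumptions,
not the driver's actual layout):

    /* Tell sparse that the asymmetric lock count is intentional. */
    static void bnxt_qplib_lock_cqs(struct bnxt_qplib_cq *scq,
                                    struct bnxt_qplib_cq *rcq)
            __acquires(&scq->hwq.lock) __acquires(&rcq->hwq.lock)
    {
            spin_lock(&scq->hwq.lock);
            if (scq != rcq)
                    spin_lock_nested(&rcq->hwq.lock, SINGLE_DEPTH_NESTING);
    }

    static void bnxt_qplib_unlock_cqs(struct bnxt_qplib_cq *scq,
                                      struct bnxt_qplib_cq *rcq)
            __releases(&scq->hwq.lock) __releases(&rcq->hwq.lock)
    {
            if (scq != rcq)
                    spin_unlock(&rcq->hwq.lock);
            spin_unlock(&scq->hwq.lock);
    }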


* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
  2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
                   ` (21 preceding siblings ...)
       [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
@ 2016-12-13  6:54 ` Or Gerlitz
  22 siblings, 0 replies; 46+ messages in thread
From: Or Gerlitz @ 2016-12-13  6:54 UTC (permalink / raw)
  To: Selvin Xavier; +Cc: dledford, linux-rdma, netdev

On 12/9/2016 8:47 AM, Selvin Xavier wrote:
> This series introduces the RoCE driver for the Broadcom
> NetXtreme-E 10/25/40/50 gigabit RoCE HCAs.
> This driver is dependent on the bnxt_en NIC driver and is
> based on the bnxt_re branch in Doug's repository. bnxt_en changes
> required for this patch series is already available in this branch.
>
> I am preparing a git repository with these changes as per Jason's
> comment and will share the details later today.
>
> v1-> v2:
>    * The license text in each file updated to reflect Dual license.
>    * Makefile and Kconfig changes are pushed to the last patch
>    * Moved bnxt_re_uverbs_abi.h to include/uapi/rdma folder
>    * Remove duplicate structure definitions from bnxt_re_hsi.h as
>      it is available in the corresponding bnxt_en header file (bnxt_hsi.h)
>    * Removed some unused code reported during code review.
>    * Fixed few sparse warnings
>
> Doug, Please review and consider applying this to linux-rdma repository.

Hi,

I made a quick, on-the-surface run of some static checkers on the new
driver (Doug, I used the bits in your github bnxt_re branch). There are
a bunch (tons...) of smatch [1] and sparse [2] complaints, along with a
few checkpatch [3] things too.  So... could you address them before this
goes upstream?

BTW, net-next's drivers/net/ethernet/broadcom/bnxt (where you put the
preparatory patches for the RDMA driver) is pretty much clean of all
that.

Or.

[1] smatch

   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_re_main.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_re_debugfs.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:185 bnxt_qplib_del_sgid() 
warn: impossible condition '(*sgid_tbl->hw_id + index == -1) => (0-65535 
== (-1))'
drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c:107 
bnxt_qplib_rcfw_send_message() warn: test_bit() takes a bit number
drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c:116 
bnxt_qplib_rcfw_send_message() warn: test_bit() takes a bit number
drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c:148 
bnxt_qplib_rcfw_send_message() error: double lock 'irqsave:flags'
drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c:204 
bnxt_qplib_rcfw_send_message() error: double unlock 'irqsave:flags'
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:121 
bnxt_re_unregister_netdev() warn: variable dereferenced before check 
'rdev' (see line 118)
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:140 
bnxt_re_register_netdev() warn: variable dereferenced before check 
'rdev' (see line 137)
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:155 bnxt_re_free_msix() 
warn: variable dereferenced before check 'rdev' (see line 152)
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:173 bnxt_re_request_msix() 
warn: variable dereferenced before check 'rdev' (see line 171)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:172 
bnxt_qplib_alloc_init_hwq() warn: unsigned '*elements' is never less 
than zero.
drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c:387 
bnxt_qplib_deinit_rcfw() warn: test_bit() takes a bit number
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:488 
bnxt_qplib_alloc_sgid_tbl() warn: double check that we're allocating 
correct size: 2 vs 4
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:688 
bnxt_qplib_alloc_dpi_tbl() warn: consider using resource_size() here
drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c:496 
bnxt_qplib_init_rcfw() warn: test_bit() takes a bit number
./include/linux/mm.h:1385 stack_guard_page_end() warn: bitwise AND 
condition is false here
./include/linux/mm.h:1379 vma_growsup() warn: bitwise AND condition is 
false here
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:522 show_rev() info: 
ignoring unreachable code.
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:835 bnxt_re_dev_stop() 
warn: was && intended here instead of ||?
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:1048 bnxt_re_ib_reg() warn: 
curly braces intended?
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:1050 bnxt_re_ib_reg() warn: 
inconsistent indenting
drivers/infiniband/hw/bnxt_re/bnxt_re_main.c:1190 bnxt_re_task() warn: 
curly braces intended?
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1639 __flush_sq() warn: 
variable dereferenced before check 'budget' (see line 1621)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1852 
bnxt_qplib_cq_process_res_ud() warn: shift has higher precedence than mask
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1861 
bnxt_qplib_cq_process_res_ud() warn: inconsistent indenting
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:2015 
bnxt_qplib_cq_process_terminal() warn: variable dereferenced before 
check 'budget' (see line 1997)
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1607 
bnxt_re_build_qp1_send_v2 warn: unused return: size = ib_ud_header_pack()
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1681 
bnxt_re_build_qp1_shadow_qp_recv() warn: unsigned 'sge.size' is never 
less than zero.
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2066 
bnxt_re_post_recv_shadow_qp warn: unused return: payload_sz = 
bnxt_re_build_sgl()
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2242 
bnxt_re_create_cq() warn: variable dereferenced before check 'cq->umem' 
(see line 2192)
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2456 
bnxt_re_is_loopback_packet() warn: curly braces intended?
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2512 
bnxt_re_process_raw_qp_pkt_rx() warn: impossible condition '(pkt_type < 
0) => (0-255 < 0)'
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2578 
bnxt_re_process_raw_qp_pkt_rx warn: unused return: rc = 
bnxt_re_post_send_shadow_qp()
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2838 bnxt_re_dereg_mr 
warn: unused return: rc = bnxt_qplib_free_fast_reg_page_list()

[2] sparse

   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_re_main.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_re_debugfs.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_rcfw.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c
   CHECK   drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:111:33: warning: cast to 
restricted __le32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:114:30: warning: 
restricted __le32 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:114:30: warning: cast to 
restricted __le32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:277:28: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:277:28: expected 
restricted __le32 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:277:28: got restricted 
__be32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:278:28: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:278:28: expected 
restricted __le32 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:278:28: got restricted 
__be32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:279:28: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:279:28: expected 
restricted __le32 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:279:28: got restricted 
__be32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:280:28: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:280:28: expected 
restricted __le32 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:280:28: got restricted 
__be32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:282:34: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:282:34: expected 
restricted __le16 [addressable] [assigned] [usertype] vlan
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:282:34: got restricted 
__le32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:289:32: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:289:32: expected 
restricted __le16 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:289:32: got restricted 
__be16 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:290:32: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:290:32: expected 
restricted __le16 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:290:32: got restricted 
__be16 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:291:32: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:291:32: expected 
restricted __le16 <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:291:32: got restricted 
__be16 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:810:18: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:810:18: expected 
restricted __le16 [addressable] [assigned] [usertype] cos0
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:810:18: got unsigned short 
[unsigned] [short] [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:811:18: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:811:18: expected 
restricted __le16 [addressable] [assigned] [usertype] cos1
drivers/infiniband/hw/bnxt_re/bnxt_qplib_sp.c:811:18: got unsigned short 
[unsigned] [short] [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:133:22: warning: 
restricted __le32 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:136:24: warning: 
restricted __le16 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:136:24: warning: cast to 
restricted __le16
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:777:25: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:777:25: expected 
restricted __le32 [addressable] [assigned] [usertype] modify_mask
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:777:25: got restricted 
__le64 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:822:30: warning: incorrect 
type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:822:30: expected unsigned 
char [unsigned] [addressable] [assigned] [usertype] path_mtu
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:822:30: got restricted 
__le16 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:953:26: warning: 
restricted __le16 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:953:26: warning: cast to 
restricted __le16
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:956:26: warning: 
restricted __le16 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:970:26: warning: cast to 
restricted __le32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:970:26: warning: cast from 
restricted __le16
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1211:34: warning: invalid 
assignment: +=
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1211:34: left side has 
type int
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1211:34: right side has 
type restricted __le32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1333:24: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1333:24: expected unsigned 
int [unsigned] [usertype] temp32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1333:24: got restricted 
__le32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1343:46: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1343:46: expected unsigned 
long long [unsigned] [long] [long long] [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1343:46: got restricted 
__le64 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1363:24: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1363:24: expected unsigned 
int [unsigned] [addressable] [usertype] temp32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1363:24: got restricted 
__le32 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1533:27: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1533:27: expected 
restricted __le32 [addressable] [assigned] [usertype] cq_fco_cnq_id
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1533:27: got restricted 
__le16 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1796:21: warning: 
restricted __le32 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1796:21: warning: cast to 
restricted __le64
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1849:21: warning: 
restricted __le32 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1849:21: warning: cast to 
restricted __le64
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1852:39: warning: 
restricted __le32 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1904:21: warning: 
restricted __le32 degrades to integer
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1904:21: warning: cast to 
restricted __le64
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1917:34: warning: cast to 
restricted __le16
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1917:34: warning: cast 
from restricted __le32
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1015:22: warning: context 
imbalance in 'bnxt_qplib_lock_cqs' - wrong count at exit
drivers/infiniband/hw/bnxt_re/bnxt_qplib_fp.c:1030:28: warning: context 
imbalance in 'bnxt_qplib_unlock_cqs' - unexpected unlock
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:410:72: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:410:72: expected unsigned 
long long [unsigned] [long] [long long] [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:410:72: got restricted 
__le64 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:418:56: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:418:56: expected unsigned 
long long [unsigned] [long] [long long] [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:418:56: got restricted 
__le64 [usertype] <noident>
drivers/infiniband/hw/bnxt_re/bnxt_qplib_res.c:729:6: warning: symbol 
'bnxt_qplib_cleanup_pkey_tbl' was not declared. Should it be static?
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1725:47: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1725:47: expected 
unsigned int [unsigned] [usertype] imm_data_or_inv_key
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1725:47: got restricted 
__be32 [usertype] imm_data
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1755:47: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1755:47: expected 
unsigned int [unsigned] [usertype] imm_data_or_inv_key
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:1755:47: got restricted 
__be32 [usertype] imm_data
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2627:25: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2627:25: expected 
restricted __be32 [usertype] imm_data
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2627:25: got unsigned 
int [unsigned] [usertype] immdata_or_invrkey
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2696:41: warning: 
incorrect type in assignment (different base types)
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2696:41: expected 
restricted __be32 [usertype] imm_data
drivers/infiniband/hw/bnxt_re/bnxt_re_ib_verbs.c:2696:41: got unsigned 
int [unsigned] [usertype] immdata_or_invrkey

[3] checkpatch

CHECK: Macro argument reuse 'req' - possible side-effects?
CHECK: Macro argument reuse 'prod' - possible side-effects?
CHECK: Macro argument reuse 'dma_addr' - possible side-effects?
CHECK: Macro argument reuse 'rdev' - possible side-effects?
CHECK: Please don't use multiple blank lines
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Alignment should match open parenthesis
CHECK: Please don't use multiple blank lines
CHECK: DEFINE_MUTEX definition without comment
CHECK: Please use a blank line after function/struct/union/enum declarations
CHECK: Please use a blank line after function/struct/union/enum declarations
CHECK: Please use a blank line after function/struct/union/enum declarations
CHECK: Please use a blank line after function/struct/union/enum declarations
CHECK: Alignment should match open parenthesis
CHECK: Please use a blank line after function/struct/union/enum declarations


* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found]     ` <23e26353-4317-2836-9f94-d1fc3274a770-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-13  7:59       ` Or Gerlitz
       [not found]         ` <CAJ3xEMh98kC1KXGf7uHKD-H91f_NiZXaz-3yTtwQ2s-D7rYqMQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 46+ messages in thread
From: Or Gerlitz @ 2016-12-13  7:59 UTC (permalink / raw)
  To: Doug Ledford, Selvin Xavier
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Linux Netdev List

On Tue, Dec 13, 2016 at 1:52 AM, Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> On 12/9/2016 1:47 AM, Selvin Xavier wrote:
>> This series introduces the RoCE driver for the Broadcom
>> NetXtreme-E 10/25/40/50 gigabit RoCE HCAs.
>> This driver is dependent on the bnxt_en NIC driver and is
>> based on the bnxt_re branch in Doug's repository. bnxt_en changes
>> required for this patch series is already available in this branch.
>>
>> I am preparing a git repository with these changes as per Jason's
>> comment and will share the details later today.
>>
>> v1-> v2:
>>   * The license text in each file updated to reflect Dual license.
>>   * Makefile and Kconfig changes are pushed to the last patch
>>   * Moved bnxt_re_uverbs_abi.h to include/uapi/rdma folder
>>   * Remove duplicate structure definitions from bnxt_re_hsi.h as
>>     it is available in the corresponding bnxt_en header file (bnxt_hsi.h)
>>   * Removed some unused code reported during code review.
>>   * Fixed few sparse warnings
>>
>> Doug,
>> Please review and consider applying this to linux-rdma repository.
>
> There are still outstanding review comments to be addressed, and the
> v2 patchset doesn't compile for me in 0day testing.  I'm going to bounce
> this one to 4.11.


I made a quick, on-the-surface run of some static checkers on the new
driver (Doug, I used the bits in your github bnxt_re branch). There are
a bunch (tons...) of smatch, sparse, and checkpatch complaints; the full
listings are in my earlier reply on this thread. So... could you address
them before this goes upstream?

BTW, net-next's drivers/net/ethernet/broadcom/bnxt (where you put the
preparatory patches for the RDMA driver) is pretty much clean of all
that.

Or.



* Re: [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re)
       [not found]         ` <CAJ3xEMh98kC1KXGf7uHKD-H91f_NiZXaz-3yTtwQ2s-D7rYqMQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-12-13  8:36           ` Selvin Xavier
  0 siblings, 0 replies; 46+ messages in thread
From: Selvin Xavier @ 2016-12-13  8:36 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Linux Netdev List

On Tue, Dec 13, 2016 at 1:29 PM, Or Gerlitz <gerlitz.or-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> I made some quick on-the-surface static checkers etc rub on the new
> driver (Doug, I used
> the bits in your github bnxt_re branch), there are bunch (tons...) of
> smatch [1] and sparse [2]
> complaints along with few checkpatch [3] things too.  So... addressing
> them before this goes upstream?

Thank you for pointing these out. I will include this cleanup as part
of my v3 series.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH V2 18/22] bnxt_re: Support for DCB
       [not found]       ` <CA+sbYW1irBd0cTqJJSGJWRbBi-iFzvX3JpoTfF_daU47EqNtAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-12-13 16:56         ` Jason Gunthorpe
  0 siblings, 0 replies; 46+ messages in thread
From: Jason Gunthorpe @ 2016-12-13 16:56 UTC (permalink / raw)
  To: Selvin Xavier
  Cc: Or Gerlitz, Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	Linux Netdev List, Eddie Wai, Devesh Sharma, Somnath Kotur,
	Sriharsha Basavapatna

On Tue, Dec 13, 2016 at 11:55:55AM +0530, Selvin Xavier wrote:

> v1 eth_type is not defined. All vendor drivers have their own definition.

Send a cleanup patch?

Jason
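
Such a cleanup might look like the following sketch; the ETH_P_IBOE name
is a plausible choice for the shared define, assumed here rather than
taken from an existing header:

    /* include/uapi/linux/if_ether.h: one shared definition */
    #define ETH_P_IBOE      0x8915  /* InfiniBand over Ethernet (RoCE v1) */

    /* each driver then drops its private copy, e.g.: */
    static bool bnxt_re_is_roce_v1(__be16 proto)
    {
            return proto == htons(ETH_P_IBOE);
    }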


End of thread (newest message: 2016-12-13 16:56 UTC)

Thread overview: 46+ messages
2016-12-09  6:47 [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
2016-12-09  6:47 ` [PATCH V2 01/22] bnxt_re: Add bnxt_re RoCE driver files Selvin Xavier
2016-12-09  6:47 ` [PATCH V2 02/22] bnxt_re: Introducing autogenerated Host Software Interface(hsi) file Selvin Xavier
2016-12-09  6:47 ` [PATCH V2 03/22] bnxt_re: register with the NIC driver Selvin Xavier
2016-12-10  0:03   ` Jonathan Toppins
2016-12-09  6:47 ` [PATCH V2 04/22] bnxt_re: Enabling RoCE control path Selvin Xavier
2016-12-09  6:47 ` [PATCH V2 05/22] bnxt_re: Adding Notification Queue support Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 06/22] bnxt_re: Support for PD, ucontext and mmap verbs Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 08/22] bnxt_re: Adding support for port related verbs Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 09/22] bnxt_re: Support for GID " Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 11/22] bnxt_re: Support for AH verbs Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 12/22] bnxt_re: Support memory registration verbs Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 13/22] bnxt_re: Support QP verbs Selvin Xavier
     [not found]   ` <1481266096-23331-14-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2016-12-12 18:27     ` Leon Romanovsky
     [not found]       ` <20161212182737.GC8204-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2016-12-13  6:08         ` Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 14/22] bnxt_re: Support post_send verb Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 15/22] bnxt_re: Support post_recv Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 16/22] bnxt_re: Support poll_cq verb Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 17/22] bnxt_re: Handling dispatching of events to IB stack Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 18/22] bnxt_re: Support for DCB Selvin Xavier
2016-12-10 13:50   ` Or Gerlitz
2016-12-13  6:25     ` Selvin Xavier
     [not found]       ` <CA+sbYW1irBd0cTqJJSGJWRbBi-iFzvX3JpoTfF_daU47EqNtAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-12-13 16:56         ` Jason Gunthorpe
2016-12-09  6:48 ` [PATCH V2 19/22] bnxt_re: Support debugfs Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 20/22] bnxt_re: Set uverbs command mask Selvin Xavier
2016-12-09  6:48 ` [PATCH V2 21/22] bnxt_re: Add QP event handling Selvin Xavier
     [not found]   ` <1481266096-23331-22-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2016-12-09 12:12     ` Sergei Shtylyov
2016-12-09 15:26 ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) David Miller
2016-12-09 17:52   ` Selvin Xavier
2016-12-09 16:27 ` Leon Romanovsky
     [not found] ` <1481266096-23331-1-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2016-12-09  6:48   ` [PATCH V2 07/22] bnxt_re: Support for query and modify device verbs Selvin Xavier
2016-12-09  6:48   ` [PATCH V2 10/22] bnxt_re: Support for CQ verbs Selvin Xavier
     [not found]     ` <1481266096-23331-11-git-send-email-selvin.xavier-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2016-12-12 21:03       ` Jonathan Toppins
2016-12-09  6:48   ` [PATCH V2 22/22] bnxt_re: Add bnxt_re driver build support Selvin Xavier
2016-12-09 11:21     ` kbuild test robot
2016-12-10  5:36   ` [PATCH V2 00/22] Broadcom RoCE Driver (bnxt_re) Selvin Xavier
     [not found]     ` <CA+sbYW1ZEa5fndGkvN8OXr-orcUx4jaL73Di8zBJQX_uCdK=Ww-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-12-12 17:07       ` Jason Gunthorpe
     [not found]         ` <20161212170701.GA28387-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2016-12-13  6:04           ` Selvin Xavier
2016-12-12 16:54   ` Jonathan Toppins
     [not found]     ` <9cf03e2b-a16d-19ec-a8ce-14f24272bf6a-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-13  4:52       ` Selvin Xavier
2016-12-13  6:41         ` Michael Chan
2016-12-12 23:52   ` Doug Ledford
2016-12-13  3:54     ` Selvin Xavier
     [not found]     ` <23e26353-4317-2836-9f94-d1fc3274a770-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-13  7:59       ` Or Gerlitz
     [not found]         ` <CAJ3xEMh98kC1KXGf7uHKD-H91f_NiZXaz-3yTtwQ2s-D7rYqMQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-12-13  8:36           ` Selvin Xavier
2016-12-13  6:54 ` Or Gerlitz
