* [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
@ 2019-06-20 15:03 Jack Wang
  2019-06-20 15:03 ` [PATCH v4 01/25] sysfs: export sysfs_remove_file_self() Jack Wang
                   ` (25 more replies)
  0 siblings, 26 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis, rpenyaev

Hi all,

Here is v4 of the IBNBD/IBTRS patches, which contains only minor changes.

 Changelog
 ---------
v4:
  o Protocol extended to transport IO priorities
  o Support for Mellanox ConnectX-4/X-5
  o Minor sysfs extensions (display access mode on server side)
  o Bug fixes: cleaning up sysfs folders, race on deallocation of resources
  o Style fixes

v3:
  o Sparse fixes:
     - le32 -> le16 conversion
     - pcpu and RCU wrong declaration
     - sysfs: dynamically alloc array of sockaddr structures to reduce
	   size of a stack frame

  o Rename sysfs folder on client and server sides to show source and
    destination addresses of the connection, i.e.:
	   .../<session-name>/paths/<src@dst>/

  o Remove external inclusions from Makefiles.
  * https://lwn.net/Articles/756994/

v2:
  o IBNBD:
     - No legacy request IO mode, only MQ is left.

  o IBTRS:
     - No FMR registration, only FR is left.

  * https://lwn.net/Articles/755075/

v1:
  - IBTRS: load-balancing and IO fail-over using multipath features were added.

  - Major parts of the code were rewritten, simplified and overall code
    size was reduced by a quarter.

  * https://lwn.net/Articles/746342/

v0:
  - Initial submission

  * https://lwn.net/Articles/718181/


 Introduction
 -------------

IBTRS (InfiniBand Transport) is a reliable high-speed transport library
which allows establishing connections between client and server machines
via RDMA. It is based on RDMA-CM, so it is also expected to work over
RoCE and iWARP, but we have mainly tested it in an IB environment. It is
optimized for transferring (reading/writing) IO blocks in the sense that
it follows the BIO semantics of providing the possibility to either write
data from a scatter-gather list to the remote side or to request ("read")
a data transfer from the remote side into a given set of buffers.

IBTRS is multipath capable and provides I/O fail-over and load-balancing
functionality: in IBTRS terminology, an IBTRS path is a set of RDMA CMs,
and a particular path is selected according to the load-balancing policy.
IBTRS can also be used by other components, it is not bound to IBNBD.
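
For illustration, a path is described to the library as a pair of source
and destination addresses; a minimal sketch (the address strings and port
number below are arbitrary) based on the ibtrs_addr_to_sockaddr() helper
from the IBTRS public header introduced in this series:

  struct sockaddr_storage src, dst;
  struct ibtrs_addr path = {
          .src = &src,
          .dst = &dst,
  };
  const char *str = "ip:192.168.1.1,ip:192.168.1.2";
  int err;

  /* parse "src,dst" (or just "dst") into the two sockaddr buffers */
  err = ibtrs_addr_to_sockaddr(str, strlen(str), 1234, &path);
  if (err)
          pr_err("Failed to parse path string\n");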


IBNBD (InfiniBand Network Block Device) is a pair of kernel modules
(client and server) that allow remote access to a block device on
the server over the IBTRS protocol. After being mapped, the remote block
devices can be accessed on the client side as local block devices.
Internally IBNBD uses IBTRS as its RDMA transport library.


   - IBNBD/IBTRS was developed in order to map thin-provisioned volumes,
     thus the internal protocol is simple.
   - IBTRS was developed as an independent RDMA transport library, which
     supports fail-over and load-balancing policies using multipath, thus
     it can be used for other IO needs beyond block devices.
   - IBNBD/IBTRS is fast.
     Old comparison results:
     https://www.spinics.net/lists/linux-rdma/msg48799.html
     New comparison results: see performance measurements section below.

Key features of IBTRS transport library and IBNBD block device:

o High throughput and low latency due to:
   - Only two RDMA messages per IO.
   - IMM InfiniBand messages on responses to reduce round trip latency.
   - Simplified memory management: memory allocation happens once on
     the server side when the IBTRS session is established.

o IO fail-over and load-balancing by using multipath.  According to
  our test loads an additional path brings ~20% more bandwidth.

o Simple configuration of IBNBD:
   - Server side is completely passive: volumes do not need to be
     explicitly exported.
   - Only the IB port GID and the device path are needed on the client
     side to map a block device.
   - A device is remapped automatically, e.g. after a storage reboot.

Commits for kernel can be found here:
   https://github.com/ionos-enterprise/ibnbd/tree/linux-5.2-rc3--ibnbd-v4
The out-of-tree modules are here:
   https://github.com/ionos-enterprise/ibnbd

Vault 2017 presentation:
  https://events.static.linuxfound.org/sites/events/files/slides/IBNBD-Vault-2017.pdf

 Performance measurements
 ------------------------

o IBNBD and NVMEoRDMA

  Performance results for the v5.2-rc3 kernel
  link: https://github.com/ionos-enterprise/ibnbd/tree/develop/performance/v4-v5.2-rc3

Roman Pen (25):
  sysfs: export sysfs_remove_file_self()
  ibtrs: public interface header to establish RDMA connections
  ibtrs: private headers with IBTRS protocol structs and helpers
  ibtrs: core: lib functions shared between client and server modules
  ibtrs: client: private header with client structs and functions
  ibtrs: client: main functionality
  ibtrs: client: statistics functions
  ibtrs: client: sysfs interface functions
  ibtrs: server: private header with server structs and functions
  ibtrs: server: main functionality
  ibtrs: server: statistics functions
  ibtrs: server: sysfs interface functions
  ibtrs: include client and server modules into kernel compilation
  ibtrs: a bit of documentation
  ibnbd: private headers with IBNBD protocol structs and helpers
  ibnbd: client: private header with client structs and functions
  ibnbd: client: main functionality
  ibnbd: client: sysfs interface functions
  ibnbd: server: private header with server structs and functions
  ibnbd: server: main functionality
  ibnbd: server: functionality for IO submission to file or block dev
  ibnbd: server: sysfs interface functions
  ibnbd: include client and server modules into kernel compilation
  ibnbd: a bit of documentation
  MAINTAINERS: Add maintainer for IBNBD/IBTRS modules

 MAINTAINERS                                   |   14 +
 drivers/block/Kconfig                         |    2 +
 drivers/block/Makefile                        |    1 +
 drivers/block/ibnbd/Kconfig                   |   24 +
 drivers/block/ibnbd/Makefile                  |   13 +
 drivers/block/ibnbd/README                    |  315 ++
 drivers/block/ibnbd/ibnbd-clt-sysfs.c         |  691 ++++
 drivers/block/ibnbd/ibnbd-clt.c               | 1832 +++++++++++
 drivers/block/ibnbd/ibnbd-clt.h               |  166 +
 drivers/block/ibnbd/ibnbd-log.h               |   59 +
 drivers/block/ibnbd/ibnbd-proto.h             |  378 +++
 drivers/block/ibnbd/ibnbd-srv-dev.c           |  408 +++
 drivers/block/ibnbd/ibnbd-srv-dev.h           |  143 +
 drivers/block/ibnbd/ibnbd-srv-sysfs.c         |  270 ++
 drivers/block/ibnbd/ibnbd-srv.c               |  945 ++++++
 drivers/block/ibnbd/ibnbd-srv.h               |   94 +
 drivers/infiniband/Kconfig                    |    1 +
 drivers/infiniband/ulp/Makefile               |    1 +
 drivers/infiniband/ulp/ibtrs/Kconfig          |   22 +
 drivers/infiniband/ulp/ibtrs/Makefile         |   15 +
 drivers/infiniband/ulp/ibtrs/README           |  385 +++
 .../infiniband/ulp/ibtrs/ibtrs-clt-stats.c    |  447 +++
 .../infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c    |  514 +++
 drivers/infiniband/ulp/ibtrs/ibtrs-clt.c      | 2844 +++++++++++++++++
 drivers/infiniband/ulp/ibtrs/ibtrs-clt.h      |  308 ++
 drivers/infiniband/ulp/ibtrs/ibtrs-log.h      |   84 +
 drivers/infiniband/ulp/ibtrs/ibtrs-pri.h      |  463 +++
 .../infiniband/ulp/ibtrs/ibtrs-srv-stats.c    |  103 +
 .../infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c    |  303 ++
 drivers/infiniband/ulp/ibtrs/ibtrs-srv.c      | 1998 ++++++++++++
 drivers/infiniband/ulp/ibtrs/ibtrs-srv.h      |  170 +
 drivers/infiniband/ulp/ibtrs/ibtrs.c          |  610 ++++
 drivers/infiniband/ulp/ibtrs/ibtrs.h          |  318 ++
 fs/sysfs/file.c                               |    1 +
 34 files changed, 13942 insertions(+)
 create mode 100644 drivers/block/ibnbd/Kconfig
 create mode 100644 drivers/block/ibnbd/Makefile
 create mode 100644 drivers/block/ibnbd/README
 create mode 100644 drivers/block/ibnbd/ibnbd-clt-sysfs.c
 create mode 100644 drivers/block/ibnbd/ibnbd-clt.c
 create mode 100644 drivers/block/ibnbd/ibnbd-clt.h
 create mode 100644 drivers/block/ibnbd/ibnbd-log.h
 create mode 100644 drivers/block/ibnbd/ibnbd-proto.h
 create mode 100644 drivers/block/ibnbd/ibnbd-srv-dev.c
 create mode 100644 drivers/block/ibnbd/ibnbd-srv-dev.h
 create mode 100644 drivers/block/ibnbd/ibnbd-srv-sysfs.c
 create mode 100644 drivers/block/ibnbd/ibnbd-srv.c
 create mode 100644 drivers/block/ibnbd/ibnbd-srv.h
 create mode 100644 drivers/infiniband/ulp/ibtrs/Kconfig
 create mode 100644 drivers/infiniband/ulp/ibtrs/Makefile
 create mode 100644 drivers/infiniband/ulp/ibtrs/README
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt-stats.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt.h
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-log.h
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-pri.h
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv-stats.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv.h
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs.c
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs.h

-- 
2.17.1



* [PATCH v4 01/25] sysfs: export sysfs_remove_file_self()
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 17:21   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections Jack Wang
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, linux-kernel

From: Roman Pen <roman.penyaev@profitbricks.com>

The function is going to be used by the RDMA transport module (IBTRS)
in subsequent patches.
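
As an illustration of the intended use, here is a minimal, hypothetical
sketch of the usual self-removal pattern this export enables: a sysfs
"delete"-style store handler removes its own attribute before tearing the
object down, so the write does not deadlock against the removal of the
file it is executing from (my_destroy_object() is a placeholder):

  static ssize_t delete_store(struct kobject *kobj,
                              struct kobj_attribute *attr,
                              const char *buf, size_t count)
  {
          /*
           * Remove the sysfs file we are currently executing from.  Only
           * the caller that actually removed the file proceeds with the
           * teardown; concurrent writers simply return.
           */
          if (sysfs_remove_file_self(kobj, &attr->attr))
                  my_destroy_object(kobj);        /* hypothetical teardown */

          return count;
  }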

Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: linux-kernel@vger.kernel.org
---
 fs/sysfs/file.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
index 130fc6fbcc03..1ff4672d7746 100644
--- a/fs/sysfs/file.c
+++ b/fs/sysfs/file.c
@@ -492,6 +492,7 @@ bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr)
 	kernfs_put(kn);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(sysfs_remove_file_self);
 
 void sysfs_remove_files(struct kobject *kobj, const struct attribute * const *ptr)
 {
-- 
2.17.1



* [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
  2019-06-20 15:03 ` [PATCH v4 01/25] sysfs: export sysfs_remove_file_self() Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 17:44   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers Jack Wang
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

Introduce the public header which provides the set of API functions
for establishing RDMA connections from a client to a server machine
using the IBTRS protocol, which manages RDMA connections for each
session and does multipathing and load balancing.

Main functions for client (active) side:

 ibtrs_clt_open() - Creates a set of RDMA connections encapsulated
                    in an IBTRS session and returns a pointer to the
                    IBTRS session object.
 ibtrs_clt_close() - Closes the RDMA connections associated with an
                     IBTRS session.
 ibtrs_clt_request() - Requests zero-copy RDMA transfer to/from
                       server.

Main functions for server (passive) side:

 ibtrs_srv_open() - Starts listening for IBTRS clients on the specified
                    port and invokes IBTRS callbacks for incoming
                    RDMA requests or link events.
 ibtrs_srv_close() - Closes IBTRS server context.
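
For illustration, a minimal, hypothetical client-side sequence built only
from the functions declared in this header; the numeric parameters are
arbitrary, error unwinding is shortened and the callbacks are stubs:

  static void my_link_ev(void *priv, enum ibtrs_clt_link_ev ev)
  {
          /* react to IBTRS_CLT_LINK_EV_RECONNECTED/DISCONNECTED */
  }

  static void my_io_conf(void *priv, int errno)
  {
          /* IO confirmation: errno == 0 on success */
  }

  static int my_do_read(struct ibtrs_addr *path, struct scatterlist *sg,
                        unsigned int sg_cnt, size_t len)
  {
          struct kvec vec = { };  /* no extra user message in this sketch */
          struct ibtrs_clt *sess;
          struct ibtrs_tag *tag;
          int err;

          sess = ibtrs_clt_open(NULL, my_link_ev, "my-sess", path, 1,
                                1234 /* port */, 0 /* pdu_sz */,
                                5 /* reconnect_delay_sec */,
                                64 /* max_segments */,
                                -1 /* max_reconnect_attempts */);
          if (IS_ERR(sess))
                  return PTR_ERR(sess);

          tag = ibtrs_clt_get_tag(sess, IBTRS_IO_CON, IBTRS_TAG_WAIT);
          err = ibtrs_clt_request(READ, my_io_conf, sess, tag, NULL,
                                  &vec, 1, len, sg, sg_cnt);
          /* ... wait for my_io_conf() before reusing @sg ... */
          ibtrs_clt_put_tag(sess, tag);
          ibtrs_clt_close(sess);

          return err;
  }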

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs.h | 318 +++++++++++++++++++++++++++
 1 file changed, 318 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs.h

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs.h b/drivers/infiniband/ulp/ibtrs/ibtrs.h
new file mode 100644
index 000000000000..f5434f0bb85c
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs.h
@@ -0,0 +1,318 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBTRS_H
+#define IBTRS_H
+
+#include <linux/socket.h>
+#include <linux/scatterlist.h>
+
+struct ibtrs_tag;
+struct ibtrs_clt;
+struct ibtrs_srv_ctx;
+struct ibtrs_srv;
+struct ibtrs_srv_op;
+
+/*
+ * Here goes IBTRS client API
+ */
+
+/**
+ * enum ibtrs_clt_link_ev - Events about connectivity state of a client
+ * @IBTRS_CLT_LINK_EV_RECONNECTED:	Client was reconnected.
+ * @IBTRS_CLT_LINK_EV_DISCONNECTED:	Client was disconnected.
+ */
+enum ibtrs_clt_link_ev {
+	IBTRS_CLT_LINK_EV_RECONNECTED,
+	IBTRS_CLT_LINK_EV_DISCONNECTED,
+};
+
+/**
+ * Source and destination address of a path to be established
+ */
+struct ibtrs_addr {
+	struct sockaddr_storage *src;
+	struct sockaddr_storage *dst;
+};
+
+typedef void (link_clt_ev_fn)(void *priv, enum ibtrs_clt_link_ev ev);
+/**
+ * ibtrs_clt_open() - Open a session to an IBTRS server
+ * @priv:		User supplied private data.
+ * @link_ev:		Event notification for connection state changes
+ *	@priv:			user supplied data that was passed to
+ *				ibtrs_clt_open()
+ *	@ev:			Occurred event
+ * @sessname: name of the session
+ * @paths: Paths to be established defined by their src and dst addresses
+ * @path_cnt: Number of elements in the @paths array
+ * @port: port to be used by the IBTRS session
+ * @pdu_sz: Size of extra payload which can be accessed after tag allocation.
+ * @max_segments: Max. number of segments per IO request
+ * @reconnect_delay_sec: time between reconnect tries
+ * @max_reconnect_attempts: Number of times to reconnect on error before giving
+ *			    up, 0 for disabled, -1 for forever
+ *
+ * Starts session establishment with the ibtrs_server. The function can block
+ * up to ~2000ms until it returns.
+ *
+ * Return a valid pointer on success otherwise PTR_ERR.
+ */
+struct ibtrs_clt *ibtrs_clt_open(void *priv, link_clt_ev_fn *link_ev,
+				 const char *sessname,
+				 const struct ibtrs_addr *paths,
+				 size_t path_cnt, short port,
+				 size_t pdu_sz, u8 reconnect_delay_sec,
+				 u16 max_segments,
+				 s16 max_reconnect_attempts);
+
+/**
+ * ibtrs_clt_close() - Close a session
+ * @sess: Session handle, it is freed on return
+ */
+void ibtrs_clt_close(struct ibtrs_clt *sess);
+
+/**
+ * ibtrs_tag_from_pdu() - converts opaque pdu pointer to ibtrs_tag
+ * @pdu: opaque pointer
+ */
+struct ibtrs_tag *ibtrs_tag_from_pdu(void *pdu);
+
+/**
+ * ibtrs_tag_to_pdu() - converts ibtrs_tag to opaque pdu pointer
+ * @tag: IBTRS tag pointer
+ */
+void *ibtrs_tag_to_pdu(struct ibtrs_tag *tag);
+
+enum {
+	IBTRS_TAG_NOWAIT = 0,
+	IBTRS_TAG_WAIT   = 1,
+};
+
+/**
+ * enum ibtrs_clt_con_type - type of IB connection to use with a given tag
+ * @IBTRS_USR_CON:	use the connection reserved for "service" messages
+ * @IBTRS_IO_CON:	use a connection reserved for IO
+ */
+enum ibtrs_clt_con_type {
+	IBTRS_USR_CON,
+	IBTRS_IO_CON
+};
+
+/**
+ * ibtrs_clt_get_tag() - allocates tag for future RDMA operation
+ * @sess:	Current session
+ * @con_type:	Type of connection to use with the tag
+ * @wait:	Wait type
+ *
+ * Description:
+ *    Allocates tag for the following RDMA operation.  Tag is used
+ *    to preallocate all resources and to propagate memory pressure
+ *    up earlier.
+ *
+ * Context:
+ *    Can sleep if @wait == IBTRS_TAG_WAIT
+ */
+struct ibtrs_tag *ibtrs_clt_get_tag(struct ibtrs_clt *sess,
+				    enum ibtrs_clt_con_type con_type,
+				    int wait);
+
+/**
+ * ibtrs_clt_put_tag() - puts allocated tag
+ * @sess:	Current session
+ * @tag:	Tag to be freed
+ *
+ * Context:
+ *    Does not matter
+ */
+void ibtrs_clt_put_tag(struct ibtrs_clt *sess, struct ibtrs_tag *tag);
+
+typedef void (ibtrs_conf_fn)(void *priv, int errno);
+/**
+ * ibtrs_clt_request() - Request data transfer to/from server via RDMA.
+ *
+ * @dir:	READ/WRITE
+ * @conf:	callback function to be called as confirmation
+ * @sess:	Session
+ * @tag:	Preallocated tag
+ * @priv:	User provided data, passed back with corresponding
+ *		@(conf) confirmation.
+ * @vec:	Message that is sent to server together with the request.
+ *		Sum of len of all @vec elements limited to <= IO_MSG_SIZE.
+ *		Since the msg is copied internally it can be allocated on stack.
+ * @nr:		Number of elements in @vec.
+ * @len:	length of data sent to/from server
+ * @sg:		Pages to be sent/received to/from server.
+ * @sg_cnt:	Number of elements in the @sg
+ *
+ * Return:
+ * 0:		Success
+ * <0:		Error
+ *
+ * On dir=READ the ibtrs client will request a data transfer from server to
+ * client.  The data the server responds with will be stored in @sg when the
+ * @conf confirmation callback is invoked.
+ * On dir=WRITE the ibtrs client will rdma-write the data in @sg to the
+ * server side.
+ */
+int ibtrs_clt_request(int dir, ibtrs_conf_fn *conf, struct ibtrs_clt *sess,
+		      struct ibtrs_tag *tag, void *priv, const struct kvec *vec,
+		      size_t nr, size_t len, struct scatterlist *sg,
+		      unsigned int sg_cnt);
+
+/**
+ * ibtrs_attrs - IBTRS session attributes
+ */
+struct ibtrs_attrs {
+	u32	queue_depth;
+	u32	max_io_size;
+	u8	sessname[NAME_MAX];
+	struct kobject *sess_kobj;
+};
+
+/**
+ * ibtrs_clt_query() - queries IBTRS session attributes
+ *
+ * Returns:
+ *    0 on success
+ *    -ECOMM		no connection to the server
+ */
+int ibtrs_clt_query(struct ibtrs_clt *sess, struct ibtrs_attrs *attr);
+
+/*
+ * Here goes IBTRS server API
+ */
+
+/**
+ * enum ibtrs_srv_link_ev - Server link events
+ * @IBTRS_SRV_LINK_EV_CONNECTED:	Connection from client established
+ * @IBTRS_SRV_LINK_EV_DISCONNECTED:	Connection was disconnected, all
+ *					connection IBTRS resources were freed.
+ */
+enum ibtrs_srv_link_ev {
+	IBTRS_SRV_LINK_EV_CONNECTED,
+	IBTRS_SRV_LINK_EV_DISCONNECTED,
+};
+
+/**
+ * rdma_ev_fn():	Event notification for RDMA operations
+ *			If the callback returns a value != 0, an error message
+ *			for the data transfer will be sent to the client.
+ *
+ *	@sess:		Session
+ *	@priv:		Private data set by ibtrs_srv_set_sess_priv()
+ *	@id:		internal IBTRS operation id
+ *	@dir:		READ/WRITE
+ *	@data:		Pointer to (bidirectional) rdma memory area:
+ *			- in case of %IBTRS_SRV_RDMA_EV_RECV contains
+ *			data sent by the client
+ *			- in case of %IBTRS_SRV_RDMA_EV_WRITE_REQ points to the
+ *			memory area where the response is to be written to
+ *	@datalen:	Size of the memory area in @data
+ *	@usr:		The extra user message sent by the client (%vec)
+ *	@usrlen:	Size of the user message
+ */
+typedef int (rdma_ev_fn)(struct ibtrs_srv *sess, void *priv,
+			 struct ibtrs_srv_op *id, int dir,
+			 void *data, size_t datalen, const void *usr,
+			 size_t usrlen);
+
+/**
+ * link_ev_fn():	Events about connectivity state changes
+ *			If the callback returns != 0 for the event
+ *			%IBTRS_SRV_LINK_EV_CONNECTED, the corresponding session
+ *			will be destroyed.
+ *	@sess:		Session
+ *	@ev:		event
+ *	@priv:		Private data from user if previously set with
+ *			ibtrs_srv_set_sess_priv()
+ */
+typedef int (link_ev_fn)(struct ibtrs_srv *sess, enum ibtrs_srv_link_ev ev,
+			 void *priv);
+
+/**
+ * ibtrs_srv_open() - open IBTRS server context
+ * @rdma_ev:		callback for RDMA request events
+ * @link_ev:		callback for link events
+ * @port:		port to listen on
+ *
+ * Creates server context with specified callbacks.
+ *
+ * Return a valid pointer on success otherwise PTR_ERR.
+ */
+struct ibtrs_srv_ctx *ibtrs_srv_open(rdma_ev_fn *rdma_ev, link_ev_fn *link_ev,
+				     unsigned int port);
+
+/**
+ * ibtrs_srv_close() - close IBTRS server context
+ * @ctx: pointer to server context
+ *
+ * Closes IBTRS server context with all client sessions.
+ */
+void ibtrs_srv_close(struct ibtrs_srv_ctx *ctx);
+
+/**
+ * ibtrs_srv_resp_rdma() - Finish an RDMA request
+ *
+ * @id:		Internal IBTRS operation identifier
+ * @errno:	Response code sent to the other side for this operation;
+ *		0 = success, < 0 error
+ *
+ * Finish an RDMA operation. A message is sent to the client and the
+ * corresponding memory areas will be released.
+ */
+void ibtrs_srv_resp_rdma(struct ibtrs_srv_op *id, int errno);
+
+/**
+ * ibtrs_srv_set_sess_priv() - Set private pointer in ibtrs_srv.
+ * @sess:	Session
+ * @priv:	The private pointer that is associated with the session.
+ */
+void ibtrs_srv_set_sess_priv(struct ibtrs_srv *sess, void *priv);
+
+/**
+ * ibtrs_srv_get_queue_depth() - Get ibtrs_srv queue depth.
+ * @sess:	Session
+ */
+int ibtrs_srv_get_queue_depth(struct ibtrs_srv *sess);
+
+/**
+ * ibtrs_srv_get_sess_name() - Get ibtrs_srv peer hostname.
+ * @sess:	Session
+ * @sessname:	Sessname buffer
+ * @len:	Length of sessname buffer
+ */
+int ibtrs_srv_get_sess_name(struct ibtrs_srv *sess, char *sessname, size_t len);
+
+/**
+ * ibtrs_addr_to_sockaddr() - convert path string "src,dst" to sockaddrs
+ * @str		string containing source and destination addr of a path
+ *		separated by comma. I.e. "ip:1.1.1.1,ip:1.1.1.2". If str
+ *		contains only one address it's considered to be destination.
+ * @len		string length
+ * @port	destination port
+ * @addr->dst	will be set to the destination sockaddr.
+ * @addr->src	will be set to the source address or to NULL
+ *		if str doesn't contain any source address.
+ *
+ * Returns zero if conversion successful. Non-zero otherwise.
+ */
+int ibtrs_addr_to_sockaddr(const char *str, size_t len, short port,
+			   struct ibtrs_addr *addr);
+#endif
-- 
2.17.1



* [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
  2019-06-20 15:03 ` [PATCH v4 01/25] sysfs: export sysfs_remove_file_self() Jack Wang
  2019-06-20 15:03 ` [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 22:50   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 04/25] ibtrs: core: lib functions shared between client and server modules Jack Wang
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

These are common private headers with IBTRS protocol structures,
logging, sysfs and other helper functions, which are used on
both client and server sides.
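
For illustration, the 32-bit RDMA immediate defined in ibtrs-pri.h packs
a 4-bit message type and a 28-bit payload; a short, hypothetical round
trip through the helpers for an IO response carrying msg_id 5 and status
-EIO could look like this:

  u32 imm, imm_type, imm_payload, msg_id;
  int errno;

  /* server side: encode the IO response, requesting remote invalidation */
  imm = ibtrs_to_io_rsp_imm(5, -EIO, true);
  /* the value travels as the RDMA-write immediate (cpu_to_be32 on send) */

  /* client side: split type from payload, then decode the response */
  ibtrs_from_imm(imm, &imm_type, &imm_payload);
  if (imm_type == IBTRS_IO_RSP_IMM || imm_type == IBTRS_IO_RSP_W_INV_IMM)
          ibtrs_from_io_rsp_imm(imm_payload, &msg_id, &errno);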

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs-log.h |  84 ++++
 drivers/infiniband/ulp/ibtrs/ibtrs-pri.h | 463 +++++++++++++++++++++++
 2 files changed, 547 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-log.h
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-pri.h

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-log.h b/drivers/infiniband/ulp/ibtrs/ibtrs-log.h
new file mode 100644
index 000000000000..fec816c935bc
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-log.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBTRS_LOG_H
+#define IBTRS_LOG_H
+
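+/*
+ * P1..P4 expand to one..four closing parentheses.  LIST() below appends
+ * P<N>, where N is the number of its arguments, to close the
+ * __builtin_choose_expr( chains that each CASE() entry leaves open.  If
+ * no type matches, the chosen expression ends up calling the deliberately
+ * undefined unknown_type(), so passing an unsupported type to the logging
+ * macros results in a link-time error.
+ */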
+#define P1 )
+#define P2 ))
+#define P3 )))
+#define P4 ))))
+#define P(N) P ## N
+
+#define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)
+#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__
+
+#define LIST(...)						\
+	__VA_ARGS__,						\
+	({ unknown_type(); NULL; })				\
+	CAT(P, COUNT_ARGS(__VA_ARGS__))				\
+
+#define EMPTY()
+#define DEFER(id) id EMPTY()
+
+#define _CASE(obj, type, member)				\
+	__builtin_choose_expr(					\
+	__builtin_types_compatible_p(				\
+		typeof(obj), type),				\
+		((type)obj)->member
+#define CASE(o, t, m) DEFER(_CASE)(o, t, m)
+
+/*
+ * Below we define how the sessname is retrieved from the common IBTRS types.
+ * Client- or server-specific types have to be handled by defining the special
+ * TYPES_TO_SESSNAME macro before including this header.
+ */
+
+void unknown_type(void);
+
+#ifndef TYPES_TO_SESSNAME
+#define TYPES_TO_SESSNAME(...) ({ unknown_type(); NULL; })
+#endif
+
+#define ibtrs_prefix(obj)					\
+	_CASE(obj, struct ibtrs_con *,  sess->sessname),	\
+	_CASE(obj, struct ibtrs_sess *, sessname),		\
+	TYPES_TO_SESSNAME(obj)					\
+	))
+
+#define ibtrs_log(fn, obj, fmt, ...)				\
+	fn("<%s>: " fmt, ibtrs_prefix(obj), ##__VA_ARGS__)
+
+#define ibtrs_err(obj, fmt, ...)	\
+	ibtrs_log(pr_err, obj, fmt, ##__VA_ARGS__)
+#define ibtrs_err_rl(obj, fmt, ...)	\
+	ibtrs_log(pr_err_ratelimited, obj, fmt, ##__VA_ARGS__)
+#define ibtrs_wrn(obj, fmt, ...)	\
+	ibtrs_log(pr_warn, obj, fmt, ##__VA_ARGS__)
+#define ibtrs_wrn_rl(obj, fmt, ...) \
+	ibtrs_log(pr_warn_ratelimited, obj, fmt, ##__VA_ARGS__)
+#define ibtrs_info(obj, fmt, ...) \
+	ibtrs_log(pr_info, obj, fmt, ##__VA_ARGS__)
+#define ibtrs_info_rl(obj, fmt, ...) \
+	ibtrs_log(pr_info_ratelimited, obj, fmt, ##__VA_ARGS__)
+
+#endif /* IBTRS_LOG_H */
diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-pri.h b/drivers/infiniband/ulp/ibtrs/ibtrs-pri.h
new file mode 100644
index 000000000000..5a180e5e19bc
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-pri.h
@@ -0,0 +1,463 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBTRS_PRI_H
+#define IBTRS_PRI_H
+
+#include <linux/uuid.h>
+#include <rdma/rdma_cm.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib.h>
+
+#include "ibtrs.h"
+
+#define IBTRS_PROTO_VER_MAJOR 2
+#define IBTRS_PROTO_VER_MINOR 0
+
+#define IBTRS_PROTO_VER_STRING __stringify(IBTRS_PROTO_VER_MAJOR) "." \
+			       __stringify(IBTRS_PROTO_VER_MINOR)
+
+#ifndef IBTRS_VER_STRING
+#define IBTRS_VER_STRING __stringify(IBTRS_PROTO_VER_MAJOR) "." \
+			 __stringify(IBTRS_PROTO_VER_MINOR)
+#endif
+
+enum ibtrs_imm_const {
+	MAX_IMM_TYPE_BITS = 4,
+	MAX_IMM_TYPE_MASK = ((1 << MAX_IMM_TYPE_BITS) - 1),
+	MAX_IMM_PAYL_BITS = 28,
+	MAX_IMM_PAYL_MASK = ((1 << MAX_IMM_PAYL_BITS) - 1),
+};
+
+enum ibtrs_imm_type {
+	IBTRS_IO_REQ_IMM       = 0, /* client to server */
+	IBTRS_IO_RSP_IMM       = 1, /* server to client */
+	IBTRS_IO_RSP_W_INV_IMM = 2, /* server to client */
+
+	IBTRS_HB_MSG_IMM = 8,
+	IBTRS_HB_ACK_IMM = 9,
+
+	IBTRS_LAST_IMM,
+};
+
+enum {
+	SERVICE_CON_QUEUE_DEPTH = 512,
+
+	MIN_RTR_CNT = 1,
+	MAX_RTR_CNT = 7,
+
+	MAX_PATHS_NUM = 128,
+
+	/*
+	 * With the current size of the tag allocated on the client, 4K
+	 * is the maximum number of tags we can allocate.  This number is
+	 * also used on the client to allocate the IU for the user connection
+	 * to receive the RDMA addresses from the server.
+	 */
+	MAX_SESS_QUEUE_DEPTH = 4096,
+
+	IBTRS_HB_INTERVAL_MS = 5000,
+	IBTRS_HB_MISSED_MAX = 5,
+
+	IBTRS_MAGIC = 0x1BBD,
+	IBTRS_PROTO_VER = (IBTRS_PROTO_VER_MAJOR << 8) | IBTRS_PROTO_VER_MINOR,
+};
+
+struct ibtrs_ib_dev;
+
+struct ibtrs_ib_dev_pool_ops {
+	struct ibtrs_ib_dev *(*alloc)(void);
+	void (*free)(struct ibtrs_ib_dev *);
+	int (*init)(struct ibtrs_ib_dev *);
+	void (*deinit)(struct ibtrs_ib_dev *);
+};
+
+struct ibtrs_ib_dev_pool {
+	struct mutex		mutex;
+	struct list_head	list;
+	enum ib_pd_flags	pd_flags;
+	const struct ibtrs_ib_dev_pool_ops *ops;
+};
+
+struct ibtrs_ib_dev {
+	struct ib_device	 *ib_dev;
+	struct ib_pd		 *ib_pd;
+	struct kref		 ref;
+	struct list_head	 entry;
+	struct ibtrs_ib_dev_pool *pool;
+};
+
+struct ibtrs_con {
+	struct ibtrs_sess	*sess;
+	struct ib_qp		*qp;
+	struct ib_cq		*cq;
+	struct rdma_cm_id	*cm_id;
+	u32			cid;
+};
+
+typedef void (ibtrs_hb_handler_t)(struct ibtrs_con *con, int err);
+
+struct ibtrs_sess {
+	struct list_head	entry;
+	struct sockaddr_storage dst_addr;
+	struct sockaddr_storage src_addr;
+	char			sessname[NAME_MAX];
+	uuid_t			uuid;
+	struct ibtrs_con	**con;
+	u32			con_num;
+	u32		recon_cnt;
+	struct ibtrs_ib_dev	*dev;
+	int			dev_ref;
+	struct ib_cqe		*hb_cqe;
+	ibtrs_hb_handler_t	*hb_err_handler;
+	struct workqueue_struct *hb_wq;
+	struct delayed_work	hb_dwork;
+	u32			hb_interval_ms;
+	u32			hb_missed_cnt;
+	u32			hb_missed_max;
+};
+
+struct ibtrs_iu {
+	struct list_head        list;
+	struct ib_cqe           cqe;
+	dma_addr_t              dma_addr;
+	void                    *buf;
+	size_t                  size;
+	enum dma_data_direction direction;
+	u32			tag;
+};
+
+/**
+ * enum ibtrs_msg_types - IBTRS message types.
+ * @IBTRS_MSG_INFO_REQ:		Client additional info request to the server
+ * @IBTRS_MSG_INFO_RSP:		Server additional info response to the client
+ * @IBTRS_MSG_WRITE:		Client writes data via RDMA to server
+ * @IBTRS_MSG_READ:		Client requests data transfer from server
+ */
+enum ibtrs_msg_types {
+	IBTRS_MSG_INFO_REQ,
+	IBTRS_MSG_INFO_RSP,
+	IBTRS_MSG_WRITE,
+	IBTRS_MSG_READ,
+};
+
+/**
+ * enum ibtrs_msg_flags - IBTRS message flags.
+ * @IBTRS_MSG_NEED_INVAL_F:	Send invalidation in response.
+ */
+enum ibtrs_msg_flags {
+	IBTRS_MSG_NEED_INVAL_F = 1<<0
+};
+
+/**
+ * struct ibtrs_sg_desc - RDMA-Buffer entry description
+ * @addr:	Address of RDMA destination buffer
+ * @key:	Authorization rkey to write to the buffer
+ * @len:	Size of the buffer
+ */
+struct ibtrs_sg_desc {
+	__le64			addr;
+	__le32			key;
+	__le32			len;
+};
+
+/**
+ * struct ibtrs_msg_conn_req - Client connection request to the server
+ * @magic:	   IBTRS magic
+ * @version:	   IBTRS protocol version
+ * @cid:	   Current connection id
+ * @cid_num:	   Number of connections per session
+ * @recon_cnt:	   Reconnections counter
+ * @sess_uuid:	   UUID of a session (path)
+ * @paths_uuid:	   UUID of a group of sessions (paths)
+ *
+ * NOTE: max size 56 bytes, see man rdma_connect().
+ */
+struct ibtrs_msg_conn_req {
+	u8		__cma_version; /* Is set to 0 by cma.c in case of
+					* AF_IB, do not touch that. */
+	u8		__ip_version;  /* On sender side that should be
+					* set to 0, or cma_save_ip_info()
+					* extract garbage and will fail. */
+	__le16		magic;
+	__le16		version;
+	__le16		cid;
+	__le16		cid_num;
+	__le16		recon_cnt;
+	uuid_t		sess_uuid;
+	uuid_t		paths_uuid;
+	u8		reserved[12];
+};
+
+/**
+ * struct ibtrs_msg_conn_rsp - Server connection response to the client
+ * @magic:	   IBTRS magic
+ * @version:	   IBTRS protocol version
+ * @errno:	   0 in the rdma_accept() case, the error code passed to
+ *		   rdma_reject() otherwise
+ * @queue_depth:   max inflight messages (queue-depth) in this session
+ * @max_io_size:   max io size server supports
+ * @max_hdr_size:  max msg header size server supports
+ *
+ * NOTE: size is 56 bytes, max possible is 136 bytes, see man rdma_accept().
+ */
+struct ibtrs_msg_conn_rsp {
+	__le16		magic;
+	__le16		version;
+	__le16		errno;
+	__le16		queue_depth;
+	__le32		max_io_size;
+	__le32		max_hdr_size;
+	u8		reserved[40];
+};
+
+/**
+ * struct ibtrs_msg_info_req - client additional info request to the server
+ * @type:		@IBTRS_MSG_INFO_REQ
+ * @sessname:		Session name chosen by client
+ */
+struct ibtrs_msg_info_req {
+	__le16		type;
+	u8		sessname[NAME_MAX];
+	u8		reserved[15];
+};
+
+/**
+ * struct ibtrs_msg_info_rsp - server additional info response to the client
+ * @type:		@IBTRS_MSG_INFO_RSP
+ * @sg_cnt:		Number of @desc entries
+ * @desc:		RDMA buffers where the client can write to server
+ */
+struct ibtrs_msg_info_rsp {
+	__le16		type;
+	__le16          sg_cnt;
+	u8              reserved[4];
+	struct ibtrs_sg_desc desc[];
+};
+
+/**
+ * struct ibtrs_msg_rdma_read - RDMA data transfer request from client
+ * @type:		always @IBTRS_MSG_READ
+ * @usr_len:		length of user payload
+ * @flags:		IBTRS message flags (enum ibtrs_msg_flags)
+ * @sg_cnt:		number of @desc entries
+ * @desc:		RDMA buffers where the server can write the result to
+ */
+struct ibtrs_msg_rdma_read {
+	__le16			type;
+	__le16			usr_len;
+	__le16			flags;
+	__le16			sg_cnt;
+	struct ibtrs_sg_desc    desc[];
+};
+
+/**
+ * struct ibtrs_msg_rdma_write - Message transferred to server with RDMA-Write
+ * @type:		always @IBTRS_MSG_WRITE
+ * @usr_len:		length of user payload
+ */
+struct ibtrs_msg_rdma_write {
+	__le16			type;
+	__le16			usr_len;
+};
+
+/**
+ * struct ibtrs_msg_rdma_hdr - header for read or write request
+ * @type:		@IBTRS_MSG_WRITE | @IBTRS_MSG_READ
+ */
+struct ibtrs_msg_rdma_hdr {
+	__le16			type;
+};
+
+/* ibtrs.c */
+
+struct ibtrs_iu *ibtrs_iu_alloc(u32 tag, size_t size, gfp_t t,
+				struct ib_device *dev, enum dma_data_direction,
+				void (*done)(struct ib_cq *cq, struct ib_wc *wc));
+void ibtrs_iu_free(struct ibtrs_iu *iu, enum dma_data_direction dir,
+		   struct ib_device *dev);
+int ibtrs_iu_post_recv(struct ibtrs_con *con, struct ibtrs_iu *iu);
+int ibtrs_iu_post_send(struct ibtrs_con *con, struct ibtrs_iu *iu, size_t size,
+		       struct ib_send_wr *head);
+int ibtrs_iu_post_rdma_write_imm(struct ibtrs_con *con, struct ibtrs_iu *iu,
+				 struct ib_sge *sge, unsigned int num_sge,
+				 u32 rkey, u64 rdma_addr, u32 imm_data,
+				 enum ib_send_flags flags,
+				 struct ib_send_wr *head);
+
+int ibtrs_post_recv_empty(struct ibtrs_con *con, struct ib_cqe *cqe);
+int ibtrs_post_recv_empty_x2(struct ibtrs_con *con, struct ib_cqe *cqe);
+int ibtrs_post_rdma_write_imm_empty(struct ibtrs_con *con, struct ib_cqe *cqe,
+				    u32 imm_data, enum ib_send_flags flags,
+				    struct ib_send_wr *head);
+
+int ibtrs_cq_qp_create(struct ibtrs_sess *ibtrs_sess, struct ibtrs_con *con,
+		       u32 max_send_sge, int cq_vector, u16 cq_size,
+		       u16 wr_queue_size, enum ib_poll_context poll_ctx);
+void ibtrs_cq_qp_destroy(struct ibtrs_con *con);
+
+void ibtrs_init_hb(struct ibtrs_sess *sess, struct ib_cqe *cqe,
+		   u32 interval_ms, u32 missed_max,
+		   ibtrs_hb_handler_t *err_handler,
+		   struct workqueue_struct *wq);
+void ibtrs_start_hb(struct ibtrs_sess *sess);
+void ibtrs_stop_hb(struct ibtrs_sess *sess);
+void ibtrs_send_hb_ack(struct ibtrs_sess *sess);
+
+void ibtrs_ib_dev_pool_init(enum ib_pd_flags pd_flags,
+			    struct ibtrs_ib_dev_pool *pool);
+void ibtrs_ib_dev_pool_deinit(struct ibtrs_ib_dev_pool *pool);
+
+struct ibtrs_ib_dev *ibtrs_ib_dev_find_or_add(struct ib_device *ib_dev,
+					      struct ibtrs_ib_dev_pool *pool);
+int ibtrs_ib_dev_put(struct ibtrs_ib_dev *dev);
+
+static inline int sockaddr_cmp(const struct sockaddr *a,
+			       const struct sockaddr *b)
+{
+	switch (a->sa_family) {
+	case AF_IB:
+		return memcmp(&((struct sockaddr_ib *)a)->sib_addr,
+			      &((struct sockaddr_ib *)b)->sib_addr,
+			      sizeof(struct ib_addr));
+	case AF_INET:
+		return memcmp(&((struct sockaddr_in *)a)->sin_addr,
+			      &((struct sockaddr_in *)b)->sin_addr,
+			      sizeof(struct in_addr));
+	case AF_INET6:
+		return memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
+			      &((struct sockaddr_in6 *)b)->sin6_addr,
+			      sizeof(struct in6_addr));
+	default:
+		return -ENOENT;
+	}
+}
+
+static inline int sockaddr_to_str(const struct sockaddr *addr,
+				   char *buf, size_t len)
+{
+	int cnt;
+
+	switch (addr->sa_family) {
+	case AF_IB:
+		cnt = scnprintf(buf, len, "gid:%pI6",
+			&((struct sockaddr_ib *)addr)->sib_addr.sib_raw);
+		return cnt;
+	case AF_INET:
+		cnt = scnprintf(buf, len, "ip:%pI4",
+			&((struct sockaddr_in *)addr)->sin_addr);
+		return cnt;
+	case AF_INET6:
+		cnt = scnprintf(buf, len, "ip:%pI6c",
+			  &((struct sockaddr_in6 *)addr)->sin6_addr);
+		return cnt;
+	}
+	cnt = scnprintf(buf, len, "<invalid address family>");
+	pr_err("Invalid address family\n");
+	return cnt;
+}
+
+/**
+ * ibtrs_invalidate_flag() - returns proper flags for invalidation
+ *
+ * NOTE: This function is needed for the compat layer, so think twice before
+ *       renaming or removing it.
+ */
+static inline u32 ibtrs_invalidate_flag(void)
+{
+	return IBTRS_MSG_NEED_INVAL_F;
+}
+
+static inline u32 ibtrs_to_imm(u32 type, u32 payload)
+{
+	BUILD_BUG_ON(MAX_IMM_PAYL_BITS + MAX_IMM_TYPE_BITS != 32);
+	BUILD_BUG_ON(IBTRS_LAST_IMM > (1<<MAX_IMM_TYPE_BITS));
+	return ((type & MAX_IMM_TYPE_MASK) << MAX_IMM_PAYL_BITS) |
+		(payload & MAX_IMM_PAYL_MASK);
+}
+
+static inline void ibtrs_from_imm(u32 imm, u32 *type, u32 *payload)
+{
+	*payload = (imm & MAX_IMM_PAYL_MASK);
+	*type = (imm >> MAX_IMM_PAYL_BITS);
+}
+
+static inline u32 ibtrs_to_io_req_imm(u32 addr)
+{
+	return ibtrs_to_imm(IBTRS_IO_REQ_IMM, addr);
+}
+
+static inline u32 ibtrs_to_io_rsp_imm(u32 msg_id, int errno, bool w_inval)
+{
+	enum ibtrs_imm_type type;
+	u32 payload;
+
+	/* 9 bits for errno, 19 bits for msg_id */
+	payload = (abs(errno) & 0x1ff) << 19 | (msg_id & 0x7ffff);
+	type = (w_inval ? IBTRS_IO_RSP_W_INV_IMM : IBTRS_IO_RSP_IMM);
+
+	return ibtrs_to_imm(type, payload);
+}
+
+static inline void ibtrs_from_io_rsp_imm(u32 payload, u32 *msg_id, int *errno)
+{
+	/* 9 bits for errno, 19 bits for msg_id */
+	*msg_id = (payload & 0x7ffff);
+	*errno = -(int)((payload >> 19) & 0x1ff);
+}
+
+#define STAT_STORE_FUNC(type, store, reset)				\
+static ssize_t store##_store(struct kobject *kobj,			\
+			     struct kobj_attribute *attr,		\
+			     const char *buf, size_t count)		\
+{									\
+	int ret = -EINVAL;						\
+	type *sess = container_of(kobj, type, kobj_stats);		\
+									\
+	if (sysfs_streq(buf, "1"))					\
+		ret = reset(&sess->stats, true);			\
+	else if (sysfs_streq(buf, "0"))					\
+		ret = reset(&sess->stats, false);			\
+	if (ret)							\
+		return ret;						\
+									\
+	return count;							\
+}
+
+#define STAT_SHOW_FUNC(type, show, print)				\
+static ssize_t show##_show(struct kobject *kobj,			\
+			   struct kobj_attribute *attr,			\
+			   char *page)					\
+{									\
+	type *sess = container_of(kobj, type, kobj_stats);		\
+									\
+	return print(&sess->stats, page, PAGE_SIZE);			\
+}
+
+#define STAT_ATTR(type, stat, print, reset)				\
+STAT_STORE_FUNC(type, stat, reset)					\
+STAT_SHOW_FUNC(type, stat, print)					\
+static struct kobj_attribute stat##_attr =				\
+		__ATTR(stat, 0644,					\
+		       stat##_show,					\
+		       stat##_store)
+
+#endif /* IBTRS_PRI_H */
-- 
2.17.1



* [PATCH v4 04/25] ibtrs: core: lib functions shared between client and server modules
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (2 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 23:03   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 05/25] ibtrs: client: private header with client structs and functions Jack Wang
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is a set of library functions provided as an ibtrs-core module,
used by both the client and the server modules.

Mainly these functions wrap IB and RDMA calls and provide a somewhat
higher abstraction for implementing the IBTRS protocol on the client
or server side.
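
For illustration, a rough, hypothetical call sequence a ULP might follow
for one connection with these helpers, assuming sess and con are already
set up and con->cm_id is resolved; the sizing values are arbitrary and
my_recv_done() is a placeholder completion handler:

  struct ibtrs_iu *iu;
  int err;

  /* one CQ and QP per connection, completions processed in softirq */
  err = ibtrs_cq_qp_create(sess, con, 1 /* max_send_sge */,
                           0 /* cq_vector */, 128 /* cq_size */,
                           128 /* wr_queue_size */, IB_POLL_SOFTIRQ);
  if (err)
          return err;

  /* allocate a DMA-mapped IU and pre-post it for the first message */
  iu = ibtrs_iu_alloc(0, 4096, GFP_KERNEL, sess->dev->ib_dev,
                      DMA_FROM_DEVICE, my_recv_done);
  if (!iu) {
          ibtrs_cq_qp_destroy(con);
          return -ENOMEM;
  }
  err = ibtrs_iu_post_recv(con, iu);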

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs.c | 610 +++++++++++++++++++++++++++
 1 file changed, 610 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs.c b/drivers/infiniband/ulp/ibtrs/ibtrs.c
new file mode 100644
index 000000000000..f6879daa5bb9
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs.c
@@ -0,0 +1,610 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <linux/module.h>
+#include <linux/inet.h>
+
+#include "ibtrs-pri.h"
+#include "ibtrs-log.h"
+
+MODULE_AUTHOR("ibnbd@profitbricks.com");
+MODULE_DESCRIPTION("IBTRS Core");
+MODULE_VERSION(IBTRS_VER_STRING);
+MODULE_LICENSE("GPL");
+
+struct ibtrs_iu *ibtrs_iu_alloc(u32 tag, size_t size, gfp_t gfp_mask,
+				struct ib_device *dma_dev,
+				enum dma_data_direction direction,
+				void (*done)(struct ib_cq *cq,
+					     struct ib_wc *wc))
+{
+	struct ibtrs_iu *iu;
+
+	iu = kmalloc(sizeof(*iu), gfp_mask);
+	if (unlikely(!iu))
+		return NULL;
+
+	iu->buf = kzalloc(size, gfp_mask);
+	if (unlikely(!iu->buf))
+		goto err1;
+
+	iu->dma_addr = ib_dma_map_single(dma_dev, iu->buf, size, direction);
+	if (unlikely(ib_dma_mapping_error(dma_dev, iu->dma_addr)))
+		goto err2;
+
+	iu->cqe.done  = done;
+	iu->size      = size;
+	iu->direction = direction;
+	iu->tag       = tag;
+
+	return iu;
+
+err2:
+	kfree(iu->buf);
+err1:
+	kfree(iu);
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(ibtrs_iu_alloc);
+
+void ibtrs_iu_free(struct ibtrs_iu *iu, enum dma_data_direction dir,
+		   struct ib_device *ibdev)
+{
+	if (!iu)
+		return;
+
+	ib_dma_unmap_single(ibdev, iu->dma_addr, iu->size, dir);
+	kfree(iu->buf);
+	kfree(iu);
+}
+EXPORT_SYMBOL_GPL(ibtrs_iu_free);
+
+int ibtrs_iu_post_recv(struct ibtrs_con *con, struct ibtrs_iu *iu)
+{
+	struct ibtrs_sess *sess = con->sess;
+	struct ib_recv_wr wr;
+	const struct ib_recv_wr *bad_wr;
+	struct ib_sge list;
+
+	list.addr   = iu->dma_addr;
+	list.length = iu->size;
+	list.lkey   = sess->dev->ib_pd->local_dma_lkey;
+
+	if (WARN_ON(list.length == 0)) {
+		ibtrs_wrn(con, "Posting receive work request failed, sg list is empty\n");
+		return -EINVAL;
+	}
+
+	wr.next    = NULL;
+	wr.wr_cqe  = &iu->cqe;
+	wr.sg_list = &list;
+	wr.num_sge = 1;
+
+	return ib_post_recv(con->qp, &wr, &bad_wr);
+}
+EXPORT_SYMBOL_GPL(ibtrs_iu_post_recv);
+
+int ibtrs_post_recv_empty(struct ibtrs_con *con, struct ib_cqe *cqe)
+{
+	struct ib_recv_wr wr;
+	const struct ib_recv_wr *bad_wr;
+
+	wr.next    = NULL;
+	wr.wr_cqe  = cqe;
+	wr.sg_list = NULL;
+	wr.num_sge = 0;
+
+	return ib_post_recv(con->qp, &wr, &bad_wr);
+}
+EXPORT_SYMBOL_GPL(ibtrs_post_recv_empty);
+
+int ibtrs_post_recv_empty_x2(struct ibtrs_con *con, struct ib_cqe *cqe)
+{
+	struct ib_recv_wr wr_arr[2], *wr;
+	const struct ib_recv_wr *bad_wr;
+	int i;
+
+	memset(wr_arr, 0, sizeof(wr_arr));
+	for (i = 0; i < ARRAY_SIZE(wr_arr); i++) {
+		wr = &wr_arr[i];
+		wr->wr_cqe  = cqe;
+		if (i)
+			/* Chain backwards */
+			wr->next = &wr_arr[i - 1];
+	}
+
+	return ib_post_recv(con->qp, wr, &bad_wr);
+}
+EXPORT_SYMBOL_GPL(ibtrs_post_recv_empty_x2);
+
+int ibtrs_iu_post_send(struct ibtrs_con *con, struct ibtrs_iu *iu, size_t size,
+		       struct ib_send_wr *head)
+{
+	struct ibtrs_sess *sess = con->sess;
+	struct ib_send_wr wr;
+	const struct ib_send_wr *bad_wr;
+	struct ib_sge list;
+
+	if (WARN_ON(size == 0))
+		return -EINVAL;
+
+	list.addr   = iu->dma_addr;
+	list.length = size;
+	list.lkey   = sess->dev->ib_pd->local_dma_lkey;
+
+	memset(&wr, 0, sizeof(wr));
+	wr.next       = NULL;
+	wr.wr_cqe     = &iu->cqe;
+	wr.sg_list    = &list;
+	wr.num_sge    = 1;
+	wr.opcode     = IB_WR_SEND;
+	wr.send_flags = IB_SEND_SIGNALED;
+
+	if (head) {
+		struct ib_send_wr *tail = head;
+
+		while (tail->next)
+			tail = tail->next;
+		tail->next = &wr;
+	} else {
+		head = &wr;
+	}
+
+	return ib_post_send(con->qp, head, &bad_wr);
+}
+EXPORT_SYMBOL_GPL(ibtrs_iu_post_send);
+
+int ibtrs_iu_post_rdma_write_imm(struct ibtrs_con *con, struct ibtrs_iu *iu,
+				 struct ib_sge *sge, unsigned int num_sge,
+				 u32 rkey, u64 rdma_addr, u32 imm_data,
+				 enum ib_send_flags flags,
+				 struct ib_send_wr *head)
+{
+	const struct ib_send_wr *bad_wr;
+	struct ib_rdma_wr wr;
+	int i;
+
+	wr.wr.next	  = NULL;
+	wr.wr.wr_cqe	  = &iu->cqe;
+	wr.wr.sg_list	  = sge;
+	wr.wr.num_sge	  = num_sge;
+	wr.rkey		  = rkey;
+	wr.remote_addr	  = rdma_addr;
+	wr.wr.opcode	  = IB_WR_RDMA_WRITE_WITH_IMM;
+	wr.wr.ex.imm_data = cpu_to_be32(imm_data);
+	wr.wr.send_flags  = flags;
+
+	/*
+	 * If one of the sges has 0 size, the operation will fail with a
+	 * length error
+	 */
+	for (i = 0; i < num_sge; i++)
+		if (WARN_ON(sge[i].length == 0))
+			return -EINVAL;
+
+	if (head) {
+		struct ib_send_wr *tail = head;
+
+		while (tail->next)
+			tail = tail->next;
+		tail->next = &wr.wr;
+	} else {
+		head = &wr.wr;
+	}
+
+	return ib_post_send(con->qp, head, &bad_wr);
+}
+EXPORT_SYMBOL_GPL(ibtrs_iu_post_rdma_write_imm);
+
+int ibtrs_post_rdma_write_imm_empty(struct ibtrs_con *con, struct ib_cqe *cqe,
+				    u32 imm_data, enum ib_send_flags flags,
+				    struct ib_send_wr *head)
+{
+	struct ib_send_wr wr;
+	const struct ib_send_wr *bad_wr;
+
+	memset(&wr, 0, sizeof(wr));
+	wr.wr_cqe	= cqe;
+	wr.send_flags	= flags;
+	wr.opcode	= IB_WR_RDMA_WRITE_WITH_IMM;
+	wr.ex.imm_data	= cpu_to_be32(imm_data);
+
+	if (head) {
+		struct ib_send_wr *tail = head;
+
+		while (tail->next)
+			tail = tail->next;
+		tail->next = &wr;
+	} else {
+		head = &wr;
+	}
+
+	return ib_post_send(con->qp, head, &bad_wr);
+}
+EXPORT_SYMBOL_GPL(ibtrs_post_rdma_write_imm_empty);
+
+static void qp_event_handler(struct ib_event *ev, void *ctx)
+{
+	struct ibtrs_con *con = ctx;
+
+	switch (ev->event) {
+	case IB_EVENT_COMM_EST:
+		ibtrs_info(con, "QP event %s (%d) received\n",
+			   ib_event_msg(ev->event), ev->event);
+		rdma_notify(con->cm_id, IB_EVENT_COMM_EST);
+		break;
+	default:
+		ibtrs_info(con, "Unhandled QP event %s (%d) received\n",
+			   ib_event_msg(ev->event), ev->event);
+		break;
+	}
+}
+
+static int create_cq(struct ibtrs_con *con, int cq_vector, u16 cq_size,
+		     enum ib_poll_context poll_ctx)
+{
+	struct rdma_cm_id *cm_id = con->cm_id;
+	struct ib_cq *cq;
+
+	cq = ib_alloc_cq(cm_id->device, con, cq_size,
+			 cq_vector, poll_ctx);
+	if (unlikely(IS_ERR(cq))) {
+		ibtrs_err(con, "Creating completion queue failed, errno: %ld\n",
+			  PTR_ERR(cq));
+		return PTR_ERR(cq);
+	}
+	con->cq = cq;
+
+	return 0;
+}
+
+static int create_qp(struct ibtrs_con *con, struct ib_pd *pd,
+		     u16 wr_queue_size, u32 max_sge)
+{
+	struct ib_qp_init_attr init_attr = { };
+	struct rdma_cm_id *cm_id = con->cm_id;
+	int ret;
+
+	init_attr.cap.max_send_wr = wr_queue_size;
+	init_attr.cap.max_recv_wr = wr_queue_size;
+	init_attr.cap.max_recv_sge = 1;
+	init_attr.event_handler = qp_event_handler;
+	init_attr.qp_context = con;
+	init_attr.cap.max_send_sge = max_sge;
+
+	init_attr.qp_type = IB_QPT_RC;
+	init_attr.send_cq = con->cq;
+	init_attr.recv_cq = con->cq;
+	init_attr.sq_sig_type = IB_SIGNAL_REQ_WR;
+
+	ret = rdma_create_qp(cm_id, pd, &init_attr);
+	if (unlikely(ret)) {
+		ibtrs_err(con, "Creating QP failed, err: %d\n", ret);
+		return ret;
+	}
+	con->qp = cm_id->qp;
+
+	return ret;
+}
+
+int ibtrs_cq_qp_create(struct ibtrs_sess *sess, struct ibtrs_con *con,
+		       u32 max_send_sge, int cq_vector, u16 cq_size,
+		       u16 wr_queue_size, enum ib_poll_context poll_ctx)
+{
+	int err;
+
+	err = create_cq(con, cq_vector, cq_size, poll_ctx);
+	if (unlikely(err))
+		return err;
+
+	err = create_qp(con, sess->dev->ib_pd, wr_queue_size, max_send_sge);
+	if (unlikely(err)) {
+		ib_free_cq(con->cq);
+		con->cq = NULL;
+		return err;
+	}
+	con->sess = sess;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ibtrs_cq_qp_create);
+
+void ibtrs_cq_qp_destroy(struct ibtrs_con *con)
+{
+	if (con->qp) {
+		rdma_destroy_qp(con->cm_id);
+		con->qp = NULL;
+	}
+	if (con->cq) {
+		ib_free_cq(con->cq);
+		con->cq = NULL;
+	}
+}
+EXPORT_SYMBOL_GPL(ibtrs_cq_qp_destroy);
+
+static void schedule_hb(struct ibtrs_sess *sess)
+{
+	queue_delayed_work(sess->hb_wq, &sess->hb_dwork,
+			   msecs_to_jiffies(sess->hb_interval_ms));
+}
+
+void ibtrs_send_hb_ack(struct ibtrs_sess *sess)
+{
+	struct ibtrs_con *usr_con = sess->con[0];
+	u32 imm;
+	int err;
+
+	imm = ibtrs_to_imm(IBTRS_HB_ACK_IMM, 0);
+	err = ibtrs_post_rdma_write_imm_empty(usr_con, sess->hb_cqe, imm,
+					      IB_SEND_SIGNALED, NULL);
+	if (unlikely(err)) {
+		sess->hb_err_handler(usr_con, err);
+		return;
+	}
+}
+EXPORT_SYMBOL_GPL(ibtrs_send_hb_ack);
+
+static void hb_work(struct work_struct *work)
+{
+	struct ibtrs_con *usr_con;
+	struct ibtrs_sess *sess;
+	u32 imm;
+	int err;
+
+	sess = container_of(to_delayed_work(work), typeof(*sess), hb_dwork);
+	usr_con = sess->con[0];
+
+	if (sess->hb_missed_cnt > sess->hb_missed_max) {
+		sess->hb_err_handler(usr_con, -ETIMEDOUT);
+		return;
+	}
+	if (sess->hb_missed_cnt++) {
+		/* Reschedule work without sending hb */
+		schedule_hb(sess);
+		return;
+	}
+	imm = ibtrs_to_imm(IBTRS_HB_MSG_IMM, 0);
+	err = ibtrs_post_rdma_write_imm_empty(usr_con, sess->hb_cqe, imm,
+					      IB_SEND_SIGNALED, NULL);
+	if (unlikely(err)) {
+		sess->hb_err_handler(usr_con, err);
+		return;
+	}
+
+	schedule_hb(sess);
+}
+
+void ibtrs_init_hb(struct ibtrs_sess *sess, struct ib_cqe *cqe,
+		   unsigned int interval_ms, unsigned int missed_max,
+		   ibtrs_hb_handler_t *err_handler,
+		   struct workqueue_struct *wq)
+{
+	sess->hb_cqe = cqe;
+	sess->hb_interval_ms = interval_ms;
+	sess->hb_err_handler = err_handler;
+	sess->hb_wq = wq;
+	sess->hb_missed_max = missed_max;
+	sess->hb_missed_cnt = 0;
+	INIT_DELAYED_WORK(&sess->hb_dwork, hb_work);
+}
+EXPORT_SYMBOL_GPL(ibtrs_init_hb);
+
+void ibtrs_start_hb(struct ibtrs_sess *sess)
+{
+	schedule_hb(sess);
+}
+EXPORT_SYMBOL_GPL(ibtrs_start_hb);
+
+void ibtrs_stop_hb(struct ibtrs_sess *sess)
+{
+	cancel_delayed_work_sync(&sess->hb_dwork);
+	sess->hb_missed_cnt = 0;
+	sess->hb_missed_max = 0;
+}
+EXPORT_SYMBOL_GPL(ibtrs_stop_hb);
+
+static int ibtrs_str_gid_to_sockaddr(const char *addr, size_t len,
+				     short port, struct sockaddr_storage *dst)
+{
+	struct sockaddr_ib *dst_ib = (struct sockaddr_ib *)dst;
+	int ret;
+
+	/*
+	 * We can use some of the IPv6 functions since a GID is a valid
+	 * IPv6 address format
+	 */
+	ret = in6_pton(addr, len, dst_ib->sib_addr.sib_raw, '\0', NULL);
+	if (ret == 0)
+		return -EINVAL;
+
+	dst_ib->sib_family = AF_IB;
+	/*
+	 * Use the same TCP server port number as the IB service ID
+	 * on the IB port space range
+	 */
+	dst_ib->sib_sid = cpu_to_be64(RDMA_IB_IP_PS_IB | port);
+	dst_ib->sib_sid_mask = cpu_to_be64(0xffffffffffffffffULL);
+	dst_ib->sib_pkey = cpu_to_be16(0xffff);
+
+	return 0;
+}
+
+/**
+ * ibtrs_str_to_sockaddr() - Convert ibtrs address string to sockaddr
+ * @addr	String representation of an addr (IPv4, IPv6 or IB GID):
+ *              - "ip:192.168.1.1"
+ *              - "ip:fe80::200:5aee:feaa:20a2"
+ *              - "gid:fe80::200:5aee:feaa:20a2"
+ * @len         String address length
+ * @port	Destination port
+ * @dst		Destination sockaddr structure
+ *
+ * Returns 0 if conversion successful. Non-zero on error.
+ */
+static int ibtrs_str_to_sockaddr(const char *addr, size_t len,
+				 short port, struct sockaddr_storage *dst)
+{
+	if (strncmp(addr, "gid:", 4) == 0) {
+		return ibtrs_str_gid_to_sockaddr(addr + 4, len - 4, port, dst);
+	} else if (strncmp(addr, "ip:", 3) == 0) {
+		char port_str[8];
+		char *cpy;
+		int err;
+
+		snprintf(port_str, sizeof(port_str), "%u", port);
+		cpy = kstrndup(addr + 3, len - 3, GFP_KERNEL);
+		err = cpy ? inet_pton_with_scope(&init_net, AF_UNSPEC,
+						 cpy, port_str, dst) : -ENOMEM;
+		kfree(cpy);
+
+		return err;
+	}
+	return -EPROTONOSUPPORT;
+}
+
+int ibtrs_addr_to_sockaddr(const char *str, size_t len, short port,
+			   struct ibtrs_addr *addr)
+{
+	const char *d;
+	int ret;
+
+	d = strchr(str, ',');
+	if (!d)
+		d = strchr(str, '@');
+	if (d) {
+		if (ibtrs_str_to_sockaddr(str, d - str, 0, addr->src))
+			return -EINVAL;
+		d += 1;
+		len -= d - str;
+		str  = d;
+
+	} else {
+		addr->src = NULL;
+	}
+	ret = ibtrs_str_to_sockaddr(str, len, port, addr->dst);
+
+	return ret;
+}
+EXPORT_SYMBOL(ibtrs_addr_to_sockaddr);
+
+void ibtrs_ib_dev_pool_init(enum ib_pd_flags pd_flags,
+			    struct ibtrs_ib_dev_pool *pool)
+{
+	WARN_ON(pool->ops && (!pool->ops->alloc ^ !pool->ops->free));
+	INIT_LIST_HEAD(&pool->list);
+	mutex_init(&pool->mutex);
+	pool->pd_flags = pd_flags;
+}
+EXPORT_SYMBOL(ibtrs_ib_dev_pool_init);
+
+void ibtrs_ib_dev_pool_deinit(struct ibtrs_ib_dev_pool *pool)
+{
+	WARN_ON(!list_empty(&pool->list));
+}
+EXPORT_SYMBOL(ibtrs_ib_dev_pool_deinit);
+
+static void dev_free(struct kref *ref)
+{
+	struct ibtrs_ib_dev_pool *pool;
+	struct ibtrs_ib_dev *dev;
+
+	dev = container_of(ref, typeof(*dev), ref);
+	pool = dev->pool;
+
+	mutex_lock(&pool->mutex);
+	list_del(&dev->entry);
+	mutex_unlock(&pool->mutex);
+
+	if (pool->ops && pool->ops->deinit)
+		pool->ops->deinit(dev);
+
+	ib_dealloc_pd(dev->ib_pd);
+
+	if (pool->ops && pool->ops->free)
+		pool->ops->free(dev);
+	else
+		kfree(dev);
+}
+
+int ibtrs_ib_dev_put(struct ibtrs_ib_dev *dev)
+{
+	return kref_put(&dev->ref, dev_free);
+}
+EXPORT_SYMBOL(ibtrs_ib_dev_put);
+
+static int ibtrs_ib_dev_get(struct ibtrs_ib_dev *dev)
+{
+	return kref_get_unless_zero(&dev->ref);
+}
+
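+/*
+ * Look up a device in the pool by node_guid and take a reference, or,
+ * if none is found, allocate a new entry with a freshly allocated PD
+ * and add it to the pool.
+ */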
+struct ibtrs_ib_dev *
+ibtrs_ib_dev_find_or_add(struct ib_device *ib_dev,
+			 struct ibtrs_ib_dev_pool *pool)
+{
+	struct ibtrs_ib_dev *dev;
+
+	mutex_lock(&pool->mutex);
+	list_for_each_entry(dev, &pool->list, entry) {
+		if (dev->ib_dev->node_guid == ib_dev->node_guid &&
+		    ibtrs_ib_dev_get(dev))
+			goto out_unlock;
+	}
+	if (pool->ops && pool->ops->alloc)
+		dev = pool->ops->alloc();
+	else
+		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (unlikely(IS_ERR_OR_NULL(dev)))
+		goto out_err;
+
+	kref_init(&dev->ref);
+	dev->pool = pool;
+	dev->ib_dev = ib_dev;
+	dev->ib_pd = ib_alloc_pd(ib_dev, pool->pd_flags);
+	if (unlikely(IS_ERR(dev->ib_pd)))
+		goto out_free_dev;
+
+	if (pool->ops && pool->ops->init && pool->ops->init(dev))
+		goto out_free_pd;
+
+	list_add(&dev->entry, &pool->list);
+out_unlock:
+	mutex_unlock(&pool->mutex);
+	return dev;
+
+out_free_pd:
+	ib_dealloc_pd(dev->ib_pd);
+out_free_dev:
+	if (pool->ops && pool->ops->free)
+		pool->ops->free(dev);
+	else
+		kfree(dev);
+out_err:
+	mutex_unlock(&pool->mutex);
+	return NULL;
+}
+EXPORT_SYMBOL(ibtrs_ib_dev_find_or_add);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 05/25] ibtrs: client: private header with client structs and functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (3 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 04/25] ibtrs: core: lib functions shared between client and server modules Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 23:05   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 06/25] ibtrs: client: main functionality Jack Wang
                   ` (20 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This header declares the main structs and functions used by the
ibtrs-client module, mainly for managing IBTRS sessions,
creating/destroying sysfs entries and accounting statistics on the
client side.

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs-clt.h | 308 +++++++++++++++++++++++
 1 file changed, 308 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt.h

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-clt.h b/drivers/infiniband/ulp/ibtrs/ibtrs-clt.h
new file mode 100644
index 000000000000..f9e65f5eb5ab
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-clt.h
@@ -0,0 +1,308 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBTRS_CLT_H
+#define IBTRS_CLT_H
+
+#include <linux/device.h>
+#include "ibtrs-pri.h"
+
+/**
+ * enum ibtrs_clt_state - Client states.
+ */
+enum ibtrs_clt_state {
+	IBTRS_CLT_CONNECTING,
+	IBTRS_CLT_CONNECTING_ERR,
+	IBTRS_CLT_RECONNECTING,
+	IBTRS_CLT_CONNECTED,
+	IBTRS_CLT_CLOSING,
+	IBTRS_CLT_CLOSED,
+	IBTRS_CLT_DEAD,
+};
+
+static inline const char *ibtrs_clt_state_str(enum ibtrs_clt_state state)
+{
+	switch (state) {
+	case IBTRS_CLT_CONNECTING:
+		return "IBTRS_CLT_CONNECTING";
+	case IBTRS_CLT_CONNECTING_ERR:
+		return "IBTRS_CLT_CONNECTING_ERR";
+	case IBTRS_CLT_RECONNECTING:
+		return "IBTRS_CLT_RECONNECTING";
+	case IBTRS_CLT_CONNECTED:
+		return "IBTRS_CLT_CONNECTED";
+	case IBTRS_CLT_CLOSING:
+		return "IBTRS_CLT_CLOSING";
+	case IBTRS_CLT_CLOSED:
+		return "IBTRS_CLT_CLOSED";
+	case IBTRS_CLT_DEAD:
+		return "IBTRS_CLT_DEAD";
+	default:
+		return "UNKNOWN";
+	}
+}
+
+enum ibtrs_mp_policy {
+	MP_POLICY_RR,
+	MP_POLICY_MIN_INFLIGHT,
+};
+
+struct ibtrs_clt_stats_reconnects {
+	int successful_cnt;
+	int fail_cnt;
+};
+
+struct ibtrs_clt_stats_wc_comp {
+	u32 cnt;
+	u64 total_cnt;
+};
+
+struct ibtrs_clt_stats_cpu_migr {
+	atomic_t from;
+	int to;
+};
+
+struct ibtrs_clt_stats_rdma {
+	struct {
+		u64 cnt;
+		u64 size_total;
+	} dir[2];
+
+	u64 failover_cnt;
+};
+
+struct ibtrs_clt_stats_rdma_lat {
+	u64 read;
+	u64 write;
+};
+
+#define MIN_LOG_SG 2
+#define MAX_LOG_SG 5
+#define MAX_LIN_SG BIT(MIN_LOG_SG)
+#define SG_DISTR_SZ (MAX_LOG_SG - MIN_LOG_SG + MAX_LIN_SG + 2)
+
+#define MAX_LOG_LAT 16
+#define MIN_LOG_LAT 0
+#define LOG_LAT_SZ (MAX_LOG_LAT - MIN_LOG_LAT + 2)
+
+struct ibtrs_clt_stats_pcpu {
+	struct ibtrs_clt_stats_cpu_migr		cpu_migr;
+	struct ibtrs_clt_stats_rdma		rdma;
+	u64					sg_list_total;
+	u64					sg_list_distr[SG_DISTR_SZ];
+	struct ibtrs_clt_stats_rdma_lat		rdma_lat_distr[LOG_LAT_SZ];
+	struct ibtrs_clt_stats_rdma_lat		rdma_lat_max;
+	struct ibtrs_clt_stats_wc_comp		wc_comp;
+};
+
+struct ibtrs_clt_stats {
+	bool					enable_rdma_lat;
+	struct ibtrs_clt_stats_pcpu    __percpu	*pcpu_stats;
+	struct ibtrs_clt_stats_reconnects	reconnects;
+	atomic_t				inflight;
+};
+
+struct ibtrs_clt_con {
+	struct ibtrs_con	c;
+	u32			cpu;
+	atomic_t		io_cnt;
+	int			cm_err;
+};
+
+/**
+ * struct ibtrs_tag - tags the memory allocation for future RDMA operation
+ */
+struct ibtrs_tag {
+	enum ibtrs_clt_con_type con_type;
+	u32			cpu_id;
+	u32			mem_id;
+	u32			mem_off;
+};
+
+struct ibtrs_clt_io_req {
+	struct list_head        list;
+	struct ibtrs_iu		*iu;
+	struct scatterlist	*sglist; /* list holding user data */
+	u32			sg_cnt;
+	u32			sg_size;
+	u32			data_len;
+	u32			usr_len;
+	void			*priv;
+	bool			in_use;
+	struct ibtrs_clt_con	*con;
+	struct ibtrs_sg_desc	*desc;
+	struct ib_sge		*sge;
+	struct ibtrs_tag	*tag;
+	enum dma_data_direction dir;
+	ibtrs_conf_fn		*conf;
+	unsigned long		start_jiffies;
+
+	struct ib_mr		*mr;
+	struct ib_cqe		inv_cqe;
+	struct completion	inv_comp;
+	int			inv_errno;
+	bool			need_inv_comp;
+	bool			need_inv;
+};
+
+struct ibtrs_rbuf {
+	u64 addr;
+	u32 rkey;
+};
+
+struct ibtrs_clt_sess {
+	struct ibtrs_sess	s;
+	struct ibtrs_clt	*clt;
+	wait_queue_head_t	state_wq;
+	enum ibtrs_clt_state	state;
+	atomic_t		connected_cnt;
+	struct mutex		init_mutex;
+	struct ibtrs_clt_io_req	*reqs;
+	struct delayed_work	reconnect_dwork;
+	struct work_struct	close_work;
+	u32			reconnect_attempts;
+	bool			established;
+	struct ibtrs_rbuf	*rbufs;
+	size_t			max_io_size;
+	u32			max_hdr_size;
+	u32			chunk_size;
+	size_t			queue_depth;
+	u32			max_pages_per_mr;
+	int			max_send_sge;
+	struct kobject		kobj;
+	struct kobject		kobj_stats;
+	struct ibtrs_clt_stats  stats;
+	/* cache hca_port and hca_name to display in sysfs */
+	u8			hca_port;
+	char                    hca_name[IB_DEVICE_NAME_MAX];
+	struct list_head __percpu
+				*mp_skip_entry;
+};
+
+struct ibtrs_clt {
+	struct list_head   /* __rcu */ paths_list;
+	size_t			       paths_num;
+	struct ibtrs_clt_sess
+		      __rcu * __percpu *pcpu_path;
+
+	bool			opened;
+	uuid_t			paths_uuid;
+	int			paths_up;
+	struct mutex		paths_mutex;
+	struct mutex		paths_ev_mutex;
+	char			sessname[NAME_MAX];
+	short			port;
+	u32			max_reconnect_attempts;
+	u32		reconnect_delay_sec;
+	u32		max_segments;
+	void			*tags;
+	unsigned long		*tags_map;
+	size_t			queue_depth;
+	size_t			max_io_size;
+	wait_queue_head_t	tags_wait;
+	size_t			pdu_sz;
+	void			*priv;
+	link_clt_ev_fn		*link_ev;
+	struct device		dev;
+	struct kobject		kobj_paths;
+	enum ibtrs_mp_policy	mp_policy;
+};
+
+static inline struct ibtrs_clt_con *to_clt_con(struct ibtrs_con *c)
+{
+	return container_of(c, struct ibtrs_clt_con, c);
+}
+
+static inline struct ibtrs_clt_sess *to_clt_sess(struct ibtrs_sess *s)
+{
+	return container_of(s, struct ibtrs_clt_sess, s);
+}
+
+/* See ibtrs-log.h */
+#define TYPES_TO_SESSNAME(obj)						\
+	LIST(CASE(obj, struct ibtrs_clt_sess *, s.sessname),		\
+	     CASE(obj, struct ibtrs_clt *, sessname))
+
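+/*
+ * Each tag is allocated together with clt->pdu_sz bytes of user PDU
+ * placed right behind it, see ibtrs_tag_to_pdu()/ibtrs_tag_from_pdu().
+ */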
+#define TAG_SIZE(clt) (sizeof(struct ibtrs_tag) + (clt)->pdu_sz)
+#define GET_TAG(clt, idx) ((clt)->tags + TAG_SIZE(clt) * idx)
+
+int ibtrs_clt_reconnect_from_sysfs(struct ibtrs_clt_sess *sess);
+int ibtrs_clt_disconnect_from_sysfs(struct ibtrs_clt_sess *sess);
+int ibtrs_clt_create_path_from_sysfs(struct ibtrs_clt *clt,
+				     struct ibtrs_addr *addr);
+int ibtrs_clt_remove_path_from_sysfs(struct ibtrs_clt_sess *sess,
+				     const struct attribute *sysfs_self);
+
+void ibtrs_clt_set_max_reconnect_attempts(struct ibtrs_clt *clt, int value);
+int ibtrs_clt_get_max_reconnect_attempts(const struct ibtrs_clt *clt);
+
+/* ibtrs-clt-stats.c */
+
+int ibtrs_clt_init_stats(struct ibtrs_clt_stats *stats);
+void ibtrs_clt_free_stats(struct ibtrs_clt_stats *stats);
+
+void ibtrs_clt_decrease_inflight(struct ibtrs_clt_stats *s);
+void ibtrs_clt_inc_failover_cnt(struct ibtrs_clt_stats *s);
+
+void ibtrs_clt_update_rdma_lat(struct ibtrs_clt_stats *s, bool read,
+			       unsigned long ms);
+void ibtrs_clt_update_wc_stats(struct ibtrs_clt_con *con);
+void ibtrs_clt_update_all_stats(struct ibtrs_clt_io_req *req, int dir);
+
+int ibtrs_clt_reset_sg_list_distr_stats(struct ibtrs_clt_stats *stats,
+					bool enable);
+int ibtrs_clt_stats_sg_list_distr_to_str(struct ibtrs_clt_stats *stats,
+					 char *buf, size_t len);
+int ibtrs_clt_reset_rdma_lat_distr_stats(struct ibtrs_clt_stats *stats,
+					 bool enable);
+ssize_t ibtrs_clt_stats_rdma_lat_distr_to_str(struct ibtrs_clt_stats *stats,
+					      char *page, size_t len);
+int ibtrs_clt_reset_cpu_migr_stats(struct ibtrs_clt_stats *stats, bool enable);
+int ibtrs_clt_stats_migration_cnt_to_str(struct ibtrs_clt_stats *stats,
+					 char *buf, size_t len);
+int ibtrs_clt_reset_reconnects_stat(struct ibtrs_clt_stats *stats, bool enable);
+int ibtrs_clt_stats_reconnects_to_str(struct ibtrs_clt_stats *stats, char *buf,
+				      size_t len);
+int ibtrs_clt_reset_wc_comp_stats(struct ibtrs_clt_stats *stats, bool enable);
+int ibtrs_clt_stats_wc_completion_to_str(struct ibtrs_clt_stats *stats,
+					 char *buf, size_t len);
+int ibtrs_clt_reset_rdma_stats(struct ibtrs_clt_stats *stats, bool enable);
+ssize_t ibtrs_clt_stats_rdma_to_str(struct ibtrs_clt_stats *stats,
+				    char *page, size_t len);
+bool ibtrs_clt_sess_is_connected(const struct ibtrs_clt_sess *sess);
+int ibtrs_clt_reset_all_stats(struct ibtrs_clt_stats *stats, bool enable);
+ssize_t ibtrs_clt_reset_all_help(struct ibtrs_clt_stats *stats,
+				 char *page, size_t len);
+
+/* ibtrs-clt-sysfs.c */
+
+int ibtrs_clt_create_sysfs_root_folders(struct ibtrs_clt *clt);
+int ibtrs_clt_create_sysfs_root_files(struct ibtrs_clt *clt);
+void ibtrs_clt_destroy_sysfs_root_folders(struct ibtrs_clt *clt);
+void ibtrs_clt_destroy_sysfs_root_files(struct ibtrs_clt *clt);
+
+int ibtrs_clt_create_sess_files(struct ibtrs_clt_sess *sess);
+void ibtrs_clt_destroy_sess_files(struct ibtrs_clt_sess *sess,
+				  const struct attribute *sysfs_self);
+
+#endif /* IBTRS_CLT_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 06/25] ibtrs: client: main functionality
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (4 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 05/25] ibtrs: client: private header with client structs and functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 21:51   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 07/25] ibtrs: client: statistics functions Jack Wang
                   ` (19 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the main functionality of the ibtrs-client module: it manages
the set of RDMA connections for each IBTRS session and performs
multipathing, load balancing and failover of RDMA requests.

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs-clt.c | 2844 ++++++++++++++++++++++
 1 file changed, 2844 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-clt.c b/drivers/infiniband/ulp/ibtrs/ibtrs-clt.c
new file mode 100644
index 000000000000..721f9a4fafdc
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-clt.c
@@ -0,0 +1,2844 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <linux/module.h>
+#include <linux/rculist.h>
+#include <linux/blkdev.h> /* for BLK_MAX_SEGMENT_SIZE */
+
+#include "ibtrs-clt.h"
+#include "ibtrs-log.h"
+
+#define IBTRS_CONNECT_TIMEOUT_MS 30000
+
+MODULE_AUTHOR("ibnbd@profitbricks.com");
+MODULE_DESCRIPTION("IBTRS Client");
+MODULE_VERSION(IBTRS_VER_STRING);
+MODULE_LICENSE("GPL");
+
+static ushort nr_cons_per_session;
+module_param(nr_cons_per_session, ushort, 0444);
+MODULE_PARM_DESC(nr_cons_per_session, "Number of connections per session."
+		 " (default: nr_cpu_ids)");
+
+static int retry_cnt = 7;
+module_param_named(retry_cnt, retry_cnt, int, 0644);
+MODULE_PARM_DESC(retry_cnt, "Number of times to send the message if the"
+		 " remote side didn't respond with Ack or Nack (default: 7,"
+		 " min: " __stringify(MIN_RTR_CNT) ", max: "
+		 __stringify(MAX_RTR_CNT) ")");
+
+static int __read_mostly noreg_cnt;
+module_param_named(noreg_cnt, noreg_cnt, int, 0444);
+MODULE_PARM_DESC(noreg_cnt, "Max number of SG entries when MR registration "
+		 "does not happen (default: 0)");
+
+static const struct ibtrs_ib_dev_pool_ops dev_pool_ops;
+static struct ibtrs_ib_dev_pool dev_pool = {
+	.ops = &dev_pool_ops
+};
+
+static struct workqueue_struct *ibtrs_wq;
+static struct class *ibtrs_dev_class;
+
+static void ibtrs_rdma_error_recovery(struct ibtrs_clt_con *con);
+static int ibtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
+				     struct rdma_cm_event *ev);
+static void ibtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc);
+static void complete_rdma_req(struct ibtrs_clt_io_req *req, int errno,
+			      bool notify, bool can_wait);
+static int ibtrs_clt_write_req(struct ibtrs_clt_io_req *req);
+static int ibtrs_clt_read_req(struct ibtrs_clt_io_req *req);
+
+bool ibtrs_clt_sess_is_connected(const struct ibtrs_clt_sess *sess)
+{
+	return sess->state == IBTRS_CLT_CONNECTED;
+}
+
+static inline bool ibtrs_clt_is_connected(const struct ibtrs_clt *clt)
+{
+	struct ibtrs_clt_sess *sess;
+	bool connected = false;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(sess, &clt->paths_list, s.entry)
+		connected |= ibtrs_clt_sess_is_connected(sess);
+	rcu_read_unlock();
+
+	return connected;
+}
+
+static inline struct ibtrs_tag *
+__ibtrs_get_tag(struct ibtrs_clt *clt, enum ibtrs_clt_con_type con_type)
+{
+	size_t max_depth = clt->queue_depth;
+	struct ibtrs_tag *tag;
+	int cpu, bit;
+
+	cpu = get_cpu();
+	do {
+		bit = find_first_zero_bit(clt->tags_map, max_depth);
+		if (unlikely(bit >= max_depth)) {
+			put_cpu();
+			return NULL;
+		}
+
+	} while (unlikely(test_and_set_bit_lock(bit, clt->tags_map)));
+	put_cpu();
+
+	tag = GET_TAG(clt, bit);
+	WARN_ON(tag->mem_id != bit);
+	tag->cpu_id = cpu;
+	tag->con_type = con_type;
+
+	return tag;
+}
+
+static inline void __ibtrs_put_tag(struct ibtrs_clt *clt,
+				   struct ibtrs_tag *tag)
+{
+	clear_bit_unlock(tag->mem_id, clt->tags_map);
+}
+
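+/**
+ * ibtrs_clt_get_tag() - allocate a free tag
+ * @clt:	session container
+ * @con_type:	connection type the tag will be used on
+ * @can_wait:	if true, sleep until a tag becomes free
+ *
+ * Returns a tag on success, NULL if all tags are busy and @can_wait is false.
+ */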
+struct ibtrs_tag *ibtrs_clt_get_tag(struct ibtrs_clt *clt,
+				    enum ibtrs_clt_con_type con_type,
+				    int can_wait)
+{
+	struct ibtrs_tag *tag;
+	DEFINE_WAIT(wait);
+
+	tag = __ibtrs_get_tag(clt, con_type);
+	if (likely(tag) || !can_wait)
+		return tag;
+
+	do {
+		prepare_to_wait(&clt->tags_wait, &wait, TASK_UNINTERRUPTIBLE);
+		tag = __ibtrs_get_tag(clt, con_type);
+		if (likely(tag))
+			break;
+
+		io_schedule();
+	} while (1);
+
+	finish_wait(&clt->tags_wait, &wait);
+
+	return tag;
+}
+EXPORT_SYMBOL(ibtrs_clt_get_tag);
+
+void ibtrs_clt_put_tag(struct ibtrs_clt *clt, struct ibtrs_tag *tag)
+{
+	if (WARN_ON(!test_bit(tag->mem_id, clt->tags_map)))
+		return;
+
+	__ibtrs_put_tag(clt, tag);
+
+	/*
+	 * Putting a tag is a barrier, so we will observe
+	 * a new entry in the wait list, no worries.
+	 */
+	if (waitqueue_active(&clt->tags_wait))
+		wake_up(&clt->tags_wait);
+}
+EXPORT_SYMBOL(ibtrs_clt_put_tag);
+
+struct ibtrs_tag *ibtrs_tag_from_pdu(void *pdu)
+{
+	return pdu - sizeof(struct ibtrs_tag);
+}
+EXPORT_SYMBOL(ibtrs_tag_from_pdu);
+
+void *ibtrs_tag_to_pdu(struct ibtrs_tag *tag)
+{
+	return tag + 1;
+}
+EXPORT_SYMBOL(ibtrs_tag_to_pdu);
+
+/**
+ * ibtrs_tag_to_clt_con() - returns RDMA connection id by the tag
+ *
+ * Note:
+ *     IO connections start from 1.
+ *     Connection 0 is for user messages.
+ */
+static struct ibtrs_clt_con *ibtrs_tag_to_clt_con(struct ibtrs_clt_sess *sess,
+						  struct ibtrs_tag *tag)
+{
+	int id = 0;
+
+	if (likely(tag->con_type == IBTRS_IO_CON))
+		id = (tag->cpu_id % (sess->s.con_num - 1)) + 1;
+
+	return to_clt_con(sess->s.con[id]);
+}
+
+static void ibtrs_clt_fast_reg_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_clt_con *con = cq->cq_context;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "Failed IB_WR_REG_MR: %s\n",
+			  ib_wc_status_msg(wc->status));
+		ibtrs_rdma_error_recovery(con);
+	}
+}
+
+static struct ib_cqe fast_reg_cqe = {
+	.done = ibtrs_clt_fast_reg_done
+};
+
+static void ibtrs_clt_inv_rkey_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_clt_io_req *req =
+		container_of(wc->wr_cqe, typeof(*req), inv_cqe);
+	struct ibtrs_clt_con *con = cq->cq_context;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "Failed IB_WR_LOCAL_INV: %s\n",
+			  ib_wc_status_msg(wc->status));
+		ibtrs_rdma_error_recovery(con);
+	}
+	req->need_inv = false;
+	if (likely(req->need_inv_comp))
+		complete(&req->inv_comp);
+	else
+		/* Complete request from INV callback */
+		complete_rdma_req(req, req->inv_errno, true, false);
+}
+
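+/* Post an IB_WR_LOCAL_INV WR to invalidate the rkey of @req->mr ourselves */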
+static int ibtrs_inv_rkey(struct ibtrs_clt_io_req *req)
+{
+	struct ibtrs_clt_con *con = req->con;
+	const struct ib_send_wr *bad_wr;
+	struct ib_send_wr wr = {
+		.opcode		    = IB_WR_LOCAL_INV,
+		.wr_cqe		    = &req->inv_cqe,
+		.next		    = NULL,
+		.num_sge	    = 0,
+		.send_flags	    = IB_SEND_SIGNALED,
+		.ex.invalidate_rkey = req->mr->rkey,
+	};
+	req->inv_cqe.done = ibtrs_clt_inv_rkey_done;
+
+	return ib_post_send(con->c.qp, &wr, &bad_wr);
+}
+
+static int ibtrs_post_send_rdma(struct ibtrs_clt_con *con,
+				struct ibtrs_clt_io_req *req,
+				struct ibtrs_rbuf *rbuf, u32 off,
+				u32 imm, struct ib_send_wr *wr)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	enum ib_send_flags flags;
+	struct ib_sge sge;
+
+	if (unlikely(!req->sg_size)) {
+		ibtrs_wrn(sess, "Doing RDMA Write failed, no data supplied\n");
+		return -EINVAL;
+	}
+
+	/* user data and user message in the first list element */
+	sge.addr   = req->iu->dma_addr;
+	sge.length = req->sg_size;
+	sge.lkey   = sess->s.dev->ib_pd->local_dma_lkey;
+
+	/*
+	 * From time to time we have to post signalled sends,
+	 * or send queue will fill up and only QP reset can help.
+	 */
+	flags = atomic_inc_return(&con->io_cnt) % sess->queue_depth ?
+			0 : IB_SEND_SIGNALED;
+
+	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, req->iu->dma_addr,
+				      req->sg_size, DMA_TO_DEVICE);
+
+	return ibtrs_iu_post_rdma_write_imm(&con->c, req->iu, &sge, 1,
+					    rbuf->rkey, rbuf->addr + off,
+					    imm, flags, wr);
+}
+
+static void complete_rdma_req(struct ibtrs_clt_io_req *req, int errno,
+			      bool notify, bool can_wait)
+{
+	struct ibtrs_clt_con *con = req->con;
+	struct ibtrs_clt_sess *sess;
+	struct ibtrs_clt *clt;
+	int err;
+
+	if (WARN_ON(!req->in_use))
+		return;
+	if (WARN_ON(!req->con))
+		return;
+	sess = to_clt_sess(con->c.sess);
+	clt = sess->clt;
+
+	if (req->sg_cnt) {
+		if (unlikely(req->dir == DMA_FROM_DEVICE && req->need_inv)) {
+			/*
+			 * We are here to invalidate RDMA read requests
+			 * ourselves.  In the normal scenario the server
+			 * sends INV for all requested RDMA reads, but
+			 * since we got here, one of two things happened:
+			 *
+			 *    1.  this is failover, when errno != 0
+			 *        and can_wait == 1,
+			 *
+			 *    2.  something totally bad happened and
+			 *        server forgot to send INV, so we
+			 *        should do that ourselves.
+			 */
+
+			if (likely(can_wait)) {
+				req->need_inv_comp = true;
+			} else {
+				/* This should be IO path, so always notify */
+				WARN_ON(!notify);
+				/* Save errno for INV callback */
+				req->inv_errno = errno;
+			}
+
+			err = ibtrs_inv_rkey(req);
+			if (unlikely(err)) {
+				ibtrs_err(sess, "Send INV WR key=%#x: %d\n",
+					  req->mr->rkey, err);
+			} else if (likely(can_wait)) {
+				wait_for_completion(&req->inv_comp);
+			} else {
+				/*
+				 * Something went wrong, so request will be
+				 * completed from INV callback.
+				 */
+				WARN_ON_ONCE(1);
+
+				return;
+			}
+		}
+		ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
+				req->sg_cnt, req->dir);
+	}
+	if (sess->stats.enable_rdma_lat)
+		ibtrs_clt_update_rdma_lat(&sess->stats,
+				req->dir == DMA_FROM_DEVICE,
+				jiffies_to_msecs(jiffies - req->start_jiffies));
+	ibtrs_clt_decrease_inflight(&sess->stats);
+
+	req->in_use = false;
+	req->con = NULL;
+
+	if (notify)
+		req->conf(req->priv, errno);
+}
+
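+/*
+ * Complete the IO request with index @msg_id: @errno is the status reported
+ * by the server, @w_inval tells whether the server has already invalidated
+ * the rkey for us.
+ */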
+static void process_io_rsp(struct ibtrs_clt_sess *sess, u32 msg_id,
+			   s16 errno, bool w_inval)
+{
+	struct ibtrs_clt_io_req *req;
+
+	if (WARN_ON(msg_id >= sess->queue_depth))
+		return;
+
+	req = &sess->reqs[msg_id];
+	/* Drop need_inv if server responded with invalidation */
+	req->need_inv &= !w_inval;
+	complete_rdma_req(req, errno, true, false);
+}
+
+static struct ib_cqe io_comp_cqe = {
+	.done = ibtrs_clt_rdma_done
+};
+
+static void ibtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_clt_con *con = cq->cq_context;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	u32 imm_type, imm_payload;
+	bool w_inval = false;
+	int err;
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		if (wc->status != IB_WC_WR_FLUSH_ERR) {
+			ibtrs_err(sess, "RDMA failed: %s\n",
+				  ib_wc_status_msg(wc->status));
+			ibtrs_rdma_error_recovery(con);
+		}
+		return;
+	}
+	ibtrs_clt_update_wc_stats(con);
+
+	switch (wc->opcode) {
+	case IB_WC_RDMA_WRITE:
+		/*
+		 * post_send() RDMA write completions of IO reqs (read/write)
+		 * and hb
+		 */
+		break;
+	case IB_WC_RECV:
+		/*
+		 * Key invalidations from server side
+		 */
+		WARN_ON(!(wc->wc_flags & IB_WC_WITH_INVALIDATE));
+		WARN_ON(wc->wr_cqe != &io_comp_cqe);
+		break;
+
+	case IB_WC_RECV_RDMA_WITH_IMM:
+		/*
+		 * post_recv() RDMA write completions of IO reqs (read/write)
+		 * and hb
+		 */
+		if (WARN_ON(wc->wr_cqe != &io_comp_cqe))
+			return;
+
+		ibtrs_from_imm(be32_to_cpu(wc->ex.imm_data),
+			       &imm_type, &imm_payload);
+		if (likely(imm_type == IBTRS_IO_RSP_IMM ||
+			   imm_type == IBTRS_IO_RSP_W_INV_IMM)) {
+			u32 msg_id;
+
+			w_inval = (imm_type == IBTRS_IO_RSP_W_INV_IMM);
+			ibtrs_from_io_rsp_imm(imm_payload, &msg_id, &err);
+			process_io_rsp(sess, msg_id, err, w_inval);
+		} else if (imm_type == IBTRS_HB_MSG_IMM) {
+			WARN_ON(con->c.cid);
+			ibtrs_send_hb_ack(&sess->s);
+		} else if (imm_type == IBTRS_HB_ACK_IMM) {
+			WARN_ON(con->c.cid);
+			sess->s.hb_missed_cnt = 0;
+		} else {
+			ibtrs_wrn(sess, "Unknown IMM type %u\n", imm_type);
+		}
+		if (w_inval)
+			/*
+			 * Post x2 empty WRs: first is for this RDMA with IMM,
+			 * second is for RECV with INV, which happened earlier.
+			 */
+			err = ibtrs_post_recv_empty_x2(&con->c, &io_comp_cqe);
+		else
+			err = ibtrs_post_recv_empty(&con->c, &io_comp_cqe);
+		if (unlikely(err)) {
+			ibtrs_err(sess, "ibtrs_post_recv_empty(): %d\n", err);
+			ibtrs_rdma_error_recovery(con);
+			break;
+		}
+		break;
+	default:
+		ibtrs_wrn(sess, "Unexpected WC type: %d\n", wc->opcode);
+		return;
+	}
+}
+
+static int post_recv_io(struct ibtrs_clt_con *con, size_t q_size)
+{
+	int err, i;
+
+	for (i = 0; i < q_size; i++) {
+		err = ibtrs_post_recv_empty(&con->c, &io_comp_cqe);
+		if (unlikely(err))
+			return err;
+	}
+
+	return 0;
+}
+
+static int post_recv_sess(struct ibtrs_clt_sess *sess)
+{
+	size_t q_size;
+	int err, cid;
+
+	for (cid = 0; cid < sess->s.con_num; cid++) {
+		if (cid == 0)
+			q_size = SERVICE_CON_QUEUE_DEPTH;
+		else
+			q_size = sess->queue_depth;
+
+		/*
+		 * x2 for RDMA read responses + FR key invalidations,
+		 * RDMA writes do not require any FR registrations.
+		 */
+		q_size *= 2;
+
+		err = post_recv_io(to_clt_con(sess->s.con[cid]), q_size);
+		if (unlikely(err)) {
+			ibtrs_err(sess, "post_recv_io(), err: %d\n", err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+struct path_it {
+	int i;
+	struct list_head skip_list;
+	struct ibtrs_clt *clt;
+	struct ibtrs_clt_sess *(*next_path)(struct path_it *it);
+};
+
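+/*
+ * do_each_path()/while_each_path() must always be used as a pair: the
+ * first macro opens a block, takes the RCU read lock and iterates over
+ * the paths of @clt according to the chosen policy, the second one
+ * cleans up the iterator and closes the block again.
+ */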
+#define do_each_path(path, clt, it) {					\
+	path_it_init(it, clt);						\
+	rcu_read_lock();						\
+	for ((it)->i = 0; ((path) = ((it)->next_path)(it)) &&		\
+			  (it)->i < (it)->clt->paths_num;		\
+	     (it)->i++)
+
+#define while_each_path(it)						\
+	path_it_deinit(it);						\
+	rcu_read_unlock();						\
+	}
+
+/**
+ * list_next_or_null_rr_rcu - get next list element in round-robin fashion.
+ * @head:	the head for the list.
+ * @ptr:        the list head to take the next element from.
+ * @type:       the type of the struct this is embedded in.
+ * @memb:       the name of the list_head within the struct.
+ *
+ * Next element returned in round-robin fashion, i.e. head will be skipped,
+ * but if list is observed as empty, NULL will be returned.
+ *
+ * This primitive may safely run concurrently with the _rcu list-mutation
+ * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock().
+ */
+#define list_next_or_null_rr_rcu(head, ptr, type, memb) \
+({ \
+	list_next_or_null_rcu(head, ptr, type, memb) ?: \
+		list_next_or_null_rcu(head, READ_ONCE((ptr)->next), \
+				      type, memb); \
+})
+
+/**
+ * get_next_path_rr() - Returns path in round-robin fashion.
+ *
+ * Related to @MP_POLICY_RR
+ *
+ * Locks:
+ *    rcu_read_lock() must be held.
+ */
+static struct ibtrs_clt_sess *get_next_path_rr(struct path_it *it)
+{
+	struct ibtrs_clt_sess __rcu **ppcpu_path;
+	struct ibtrs_clt_sess *path;
+	struct ibtrs_clt *clt;
+
+	clt = it->clt;
+
+	/*
+	 * Here we use two RCU objects: @paths_list and @pcpu_path
+	 * pointer.  See ibtrs_clt_remove_path_from_arr() for details
+	 * on how that is handled.
+	 */
+
+	ppcpu_path = this_cpu_ptr(clt->pcpu_path);
+	path = rcu_dereference(*ppcpu_path);
+	if (unlikely(!path))
+		path = list_first_or_null_rcu(&clt->paths_list,
+					      typeof(*path), s.entry);
+	else
+		path = list_next_or_null_rr_rcu(&clt->paths_list,
+						&path->s.entry,
+						typeof(*path),
+						s.entry);
+	rcu_assign_pointer(*ppcpu_path, path);
+
+	return path;
+}
+
+/**
+ * get_next_path_min_inflight() - Returns path with minimal inflight count.
+ *
+ * Related to @MP_POLICY_MIN_INFLIGHT
+ *
+ * Locks:
+ *    rcu_read_lock() must be held.
+ */
+static struct ibtrs_clt_sess *get_next_path_min_inflight(struct path_it *it)
+{
+	struct ibtrs_clt_sess *min_path = NULL;
+	struct ibtrs_clt *clt = it->clt;
+	struct ibtrs_clt_sess *sess;
+	int min_inflight = INT_MAX;
+	int inflight;
+
+	list_for_each_entry_rcu(sess, &clt->paths_list, s.entry) {
+		if (unlikely(!list_empty(raw_cpu_ptr(sess->mp_skip_entry))))
+			continue;
+
+		inflight = atomic_read(&sess->stats.inflight);
+
+		if (inflight < min_inflight) {
+			min_inflight = inflight;
+			min_path = sess;
+		}
+	}
+
+	/*
+	 * add the path to the skip list, so that next time we can get
+	 * a different one
+	 */
+	if (min_path)
+		list_add(raw_cpu_ptr(min_path->mp_skip_entry), &it->skip_list);
+
+	return min_path;
+}
+
+static inline void path_it_init(struct path_it *it, struct ibtrs_clt *clt)
+{
+	INIT_LIST_HEAD(&it->skip_list);
+	it->clt = clt;
+	it->i = 0;
+
+	if (clt->mp_policy == MP_POLICY_RR)
+		it->next_path = get_next_path_rr;
+	else
+		it->next_path = get_next_path_min_inflight;
+}
+
+static inline void path_it_deinit(struct path_it *it)
+{
+	struct list_head *skip, *tmp;
+	/*
+	 * The skip_list is used only for the MIN_INFLIGHT policy.
+	 * We need to remove paths from it, so that next IO can insert
+	 * paths (->mp_skip_entry) into a skip_list again.
+	 */
+	list_for_each_safe(skip, tmp, &it->skip_list)
+		list_del_init(skip);
+}
+
+static inline void ibtrs_clt_init_req(struct ibtrs_clt_io_req *req,
+				      struct ibtrs_clt_sess *sess,
+				      ibtrs_conf_fn *conf,
+				      struct ibtrs_tag *tag, void *priv,
+				      const struct kvec *vec, size_t usr_len,
+				      struct scatterlist *sg, size_t sg_cnt,
+				      size_t data_len, int dir)
+{
+	struct iov_iter iter;
+	size_t len;
+
+	req->tag = tag;
+	req->in_use = true;
+	req->usr_len = usr_len;
+	req->data_len = data_len;
+	req->sglist = sg;
+	req->sg_cnt = sg_cnt;
+	req->priv = priv;
+	req->dir = dir;
+	req->con = ibtrs_tag_to_clt_con(sess, tag);
+	req->conf = conf;
+	req->need_inv = false;
+	req->need_inv_comp = false;
+	req->inv_errno = 0;
+
+	iov_iter_kvec(&iter, READ, vec, 1, usr_len);
+	len = _copy_from_iter(req->iu->buf, usr_len, &iter);
+	WARN_ON(len != usr_len);
+
+	reinit_completion(&req->inv_comp);
+	if (sess->stats.enable_rdma_lat)
+		req->start_jiffies = jiffies;
+}
+
+static inline struct ibtrs_clt_io_req *
+ibtrs_clt_get_req(struct ibtrs_clt_sess *sess, ibtrs_conf_fn *conf,
+		  struct ibtrs_tag *tag, void *priv,
+		  const struct kvec *vec, size_t usr_len,
+		  struct scatterlist *sg, size_t sg_cnt,
+		  size_t data_len, int dir)
+{
+	struct ibtrs_clt_io_req *req;
+
+	req = &sess->reqs[tag->mem_id];
+	ibtrs_clt_init_req(req, sess, conf, tag, priv, vec, usr_len,
+			   sg, sg_cnt, data_len, dir);
+	return req;
+}
+
+static inline struct ibtrs_clt_io_req *
+ibtrs_clt_get_copy_req(struct ibtrs_clt_sess *alive_sess,
+		       struct ibtrs_clt_io_req *fail_req)
+{
+	struct ibtrs_clt_io_req *req;
+	struct kvec vec = {
+		.iov_base = fail_req->iu->buf,
+		.iov_len  = fail_req->usr_len
+	};
+
+	req = &alive_sess->reqs[fail_req->tag->mem_id];
+	ibtrs_clt_init_req(req, alive_sess, fail_req->conf, fail_req->tag,
+			   fail_req->priv, &vec, fail_req->usr_len,
+			   fail_req->sglist, fail_req->sg_cnt,
+			   fail_req->data_len, fail_req->dir);
+	return req;
+}
+
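+/*
+ * Resubmit a failed request on any other path which is currently in the
+ * CONNECTED state, reusing the same tag (and thus the same request slot)
+ * on the alive session.
+ */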
+static int ibtrs_clt_failover_req(struct ibtrs_clt *clt,
+				  struct ibtrs_clt_io_req *fail_req)
+{
+	struct ibtrs_clt_sess *alive_sess;
+	struct ibtrs_clt_io_req *req;
+	int err = -ECONNABORTED;
+	struct path_it it;
+
+	do_each_path(alive_sess, clt, &it) {
+		if (unlikely(alive_sess->state != IBTRS_CLT_CONNECTED))
+			continue;
+		req = ibtrs_clt_get_copy_req(alive_sess, fail_req);
+		if (req->dir == DMA_TO_DEVICE)
+			err = ibtrs_clt_write_req(req);
+		else
+			err = ibtrs_clt_read_req(req);
+		if (unlikely(err)) {
+			req->in_use = false;
+			continue;
+		}
+		/* Success path */
+		ibtrs_clt_inc_failover_cnt(&alive_sess->stats);
+		break;
+	} while_each_path(&it);
+
+	return err;
+}
+
+static void fail_all_outstanding_reqs(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt *clt = sess->clt;
+	struct ibtrs_clt_io_req *req;
+	int i, err;
+
+	if (!sess->reqs)
+		return;
+	for (i = 0; i < sess->queue_depth; ++i) {
+		req = &sess->reqs[i];
+		if (!req->in_use)
+			continue;
+
+		/*
+		 * Safely (without notification) complete the failed request.
+		 * After completion this request is still usable and can
+		 * be failed over to another path.
+		 */
+		complete_rdma_req(req, -ECONNABORTED, false, true);
+
+		err = ibtrs_clt_failover_req(clt, req);
+		if (unlikely(err))
+			/* Failover failed, notify anyway */
+			req->conf(req->priv, err);
+	}
+}
+
+static void free_sess_reqs(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt_io_req *req;
+	int i;
+
+	if (!sess->reqs)
+		return;
+	for (i = 0; i < sess->queue_depth; ++i) {
+		req = &sess->reqs[i];
+		if (req->mr)
+			ib_dereg_mr(req->mr);
+		kfree(req->sge);
+		ibtrs_iu_free(req->iu, DMA_TO_DEVICE,
+			      sess->s.dev->ib_dev);
+	}
+	kfree(sess->reqs);
+	sess->reqs = NULL;
+}
+
+static int alloc_sess_reqs(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt_io_req *req;
+	struct ibtrs_clt *clt = sess->clt;
+	int i, err = -ENOMEM;
+
+	sess->reqs = kcalloc(sess->queue_depth, sizeof(*sess->reqs),
+			     GFP_KERNEL);
+	if (unlikely(!sess->reqs))
+		return -ENOMEM;
+
+	for (i = 0; i < sess->queue_depth; ++i) {
+		req = &sess->reqs[i];
+		req->iu = ibtrs_iu_alloc(i, sess->max_hdr_size, GFP_KERNEL,
+					 sess->s.dev->ib_dev, DMA_TO_DEVICE,
+					 ibtrs_clt_rdma_done);
+		if (unlikely(!req->iu))
+			goto out;
+
+		req->sge = kmalloc_array(clt->max_segments + 1,
+					 sizeof(*req->sge), GFP_KERNEL);
+		if (unlikely(!req->sge))
+			goto out;
+
+		req->mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
+				      sess->max_pages_per_mr);
+		if (unlikely(IS_ERR(req->mr))) {
+			err = PTR_ERR(req->mr);
+			req->mr = NULL;
+			pr_err("Failed to alloc sess->max_pages_per_mr %d\n",
+			       sess->max_pages_per_mr);
+			goto out;
+		}
+
+		init_completion(&req->inv_comp);
+	}
+
+	return 0;
+
+out:
+	free_sess_reqs(sess);
+
+	return err;
+}
+
+static int alloc_tags(struct ibtrs_clt *clt)
+{
+	unsigned int chunk_bits;
+	int err, i;
+
+	clt->tags_map = kcalloc(BITS_TO_LONGS(clt->queue_depth), sizeof(long),
+				GFP_KERNEL);
+	if (unlikely(!clt->tags_map)) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+	clt->tags = kcalloc(clt->queue_depth, TAG_SIZE(clt), GFP_KERNEL);
+	if (unlikely(!clt->tags)) {
+		err = -ENOMEM;
+		goto err_map;
+	}
+	chunk_bits = ilog2(clt->queue_depth - 1) + 1;
+	for (i = 0; i < clt->queue_depth; i++) {
+		struct ibtrs_tag *tag;
+
+		tag = GET_TAG(clt, i);
+		tag->mem_id = i;
+		tag->mem_off = i << (MAX_IMM_PAYL_BITS - chunk_bits);
+	}
+
+	return 0;
+
+err_map:
+	kfree(clt->tags_map);
+	clt->tags_map = NULL;
+out_err:
+	return err;
+}
+
+static void free_tags(struct ibtrs_clt *clt)
+{
+	kfree(clt->tags_map);
+	clt->tags_map = NULL;
+	kfree(clt->tags);
+	clt->tags = NULL;
+}
+
+static void query_fast_reg_mode(struct ibtrs_clt_sess *sess)
+{
+	struct ib_device *ib_dev;
+	u64 max_pages_per_mr;
+	int mr_page_shift;
+
+	ib_dev = sess->s.dev->ib_dev;
+
+	/*
+	 * Use the smallest page size supported by the HCA, down to a
+	 * minimum of 4096 bytes. We're unlikely to build large sglists
+	 * out of smaller entries.
+	 */
+	mr_page_shift      = max(12, ffs(ib_dev->attrs.page_size_cap) - 1);
+	max_pages_per_mr   = ib_dev->attrs.max_mr_size;
+	do_div(max_pages_per_mr, (1ull << mr_page_shift));
+	sess->max_pages_per_mr =
+		min3(sess->max_pages_per_mr, (u32)max_pages_per_mr,
+		     ib_dev->attrs.max_fast_reg_page_list_len);
+	sess->max_send_sge = ib_dev->attrs.max_send_sge;
+}
+
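+/*
+ * Client session state machine: only the transitions enumerated in the
+ * nested switches below are allowed.  Must be called with
+ * sess->state_wq.lock held, see the ibtrs_clt_change_state*() wrappers.
+ */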
+static bool __ibtrs_clt_change_state(struct ibtrs_clt_sess *sess,
+				     enum ibtrs_clt_state new_state)
+{
+	enum ibtrs_clt_state old_state;
+	bool changed = false;
+
+	old_state = sess->state;
+	switch (new_state) {
+	case IBTRS_CLT_CONNECTING:
+		switch (old_state) {
+		case IBTRS_CLT_RECONNECTING:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_CLT_RECONNECTING:
+		switch (old_state) {
+		case IBTRS_CLT_CONNECTED:
+		case IBTRS_CLT_CONNECTING_ERR:
+		case IBTRS_CLT_CLOSED:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_CLT_CONNECTED:
+		switch (old_state) {
+		case IBTRS_CLT_CONNECTING:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_CLT_CONNECTING_ERR:
+		switch (old_state) {
+		case IBTRS_CLT_CONNECTING:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_CLT_CLOSING:
+		switch (old_state) {
+		case IBTRS_CLT_CONNECTING:
+		case IBTRS_CLT_CONNECTING_ERR:
+		case IBTRS_CLT_RECONNECTING:
+		case IBTRS_CLT_CONNECTED:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_CLT_CLOSED:
+		switch (old_state) {
+		case IBTRS_CLT_CLOSING:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_CLT_DEAD:
+		switch (old_state) {
+		case IBTRS_CLT_CLOSED:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+	if (changed) {
+		sess->state = new_state;
+		wake_up_locked(&sess->state_wq);
+	}
+
+	return changed;
+}
+
+static bool ibtrs_clt_change_state_from_to(struct ibtrs_clt_sess *sess,
+					   enum ibtrs_clt_state old_state,
+					   enum ibtrs_clt_state new_state)
+{
+	bool changed = false;
+
+	spin_lock_irq(&sess->state_wq.lock);
+	if (sess->state == old_state)
+		changed = __ibtrs_clt_change_state(sess, new_state);
+	spin_unlock_irq(&sess->state_wq.lock);
+
+	return changed;
+}
+
+static bool ibtrs_clt_change_state_get_old(struct ibtrs_clt_sess *sess,
+					   enum ibtrs_clt_state new_state,
+					   enum ibtrs_clt_state *old_state)
+{
+	bool changed;
+
+	spin_lock_irq(&sess->state_wq.lock);
+	*old_state = sess->state;
+	changed = __ibtrs_clt_change_state(sess, new_state);
+	spin_unlock_irq(&sess->state_wq.lock);
+
+	return changed;
+}
+
+static bool ibtrs_clt_change_state(struct ibtrs_clt_sess *sess,
+				   enum ibtrs_clt_state new_state)
+{
+	enum ibtrs_clt_state old_state;
+
+	return ibtrs_clt_change_state_get_old(sess, new_state, &old_state);
+}
+
+static enum ibtrs_clt_state ibtrs_clt_state(struct ibtrs_clt_sess *sess)
+{
+	enum ibtrs_clt_state state;
+
+	spin_lock_irq(&sess->state_wq.lock);
+	state = sess->state;
+	spin_unlock_irq(&sess->state_wq.lock);
+
+	return state;
+}
+
+static void ibtrs_clt_hb_err_handler(struct ibtrs_con *c, int err)
+{
+	struct ibtrs_clt_con *con;
+
+	(void)err;
+	con = container_of(c, typeof(*con), c);
+	ibtrs_rdma_error_recovery(con);
+}
+
+static void ibtrs_clt_init_hb(struct ibtrs_clt_sess *sess)
+{
+	ibtrs_init_hb(&sess->s, &io_comp_cqe,
+		      IBTRS_HB_INTERVAL_MS,
+		      IBTRS_HB_MISSED_MAX,
+		      ibtrs_clt_hb_err_handler,
+		      ibtrs_wq);
+}
+
+static void ibtrs_clt_start_hb(struct ibtrs_clt_sess *sess)
+{
+	ibtrs_start_hb(&sess->s);
+}
+
+static void ibtrs_clt_stop_hb(struct ibtrs_clt_sess *sess)
+{
+	ibtrs_stop_hb(&sess->s);
+}
+
+static void ibtrs_clt_reconnect_work(struct work_struct *work);
+static void ibtrs_clt_close_work(struct work_struct *work);
+
+static struct ibtrs_clt_sess *alloc_sess(struct ibtrs_clt *clt,
+					 const struct ibtrs_addr *path,
+					 size_t con_num, u16 max_segments)
+{
+	struct ibtrs_clt_sess *sess;
+	int err = -ENOMEM;
+	int cpu;
+
+	sess = kzalloc(sizeof(*sess), GFP_KERNEL);
+	if (unlikely(!sess))
+		goto err;
+
+	/* Extra connection for user messages */
+	con_num += 1;
+
+	sess->s.con = kcalloc(con_num, sizeof(*sess->s.con), GFP_KERNEL);
+	if (unlikely(!sess->s.con))
+		goto err_free_sess;
+
+	mutex_init(&sess->init_mutex);
+	uuid_gen(&sess->s.uuid);
+	memcpy(&sess->s.dst_addr, path->dst,
+	       rdma_addr_size((struct sockaddr *)path->dst));
+
+	/*
+	 * rdma_resolve_addr() passes src_addr to cma_bind_addr, which
+	 * checks that the sa_family is non-zero. If the user passed
+	 * src_addr=NULL, sess->src_addr will contain only zeros, which is fine.
+	 */
+	if (path->src)
+		memcpy(&sess->s.src_addr, path->src,
+		       rdma_addr_size((struct sockaddr *)path->src));
+	strlcpy(sess->s.sessname, clt->sessname, sizeof(sess->s.sessname));
+	sess->s.con_num = con_num;
+	sess->clt = clt;
+	sess->max_pages_per_mr = max_segments * BLK_MAX_SEGMENT_SIZE >> 12;
+	init_waitqueue_head(&sess->state_wq);
+	sess->state = IBTRS_CLT_CONNECTING;
+	atomic_set(&sess->connected_cnt, 0);
+	INIT_WORK(&sess->close_work, ibtrs_clt_close_work);
+	INIT_DELAYED_WORK(&sess->reconnect_dwork, ibtrs_clt_reconnect_work);
+	ibtrs_clt_init_hb(sess);
+
+	sess->mp_skip_entry = alloc_percpu(typeof(*sess->mp_skip_entry));
+	if (unlikely(!sess->mp_skip_entry))
+		goto err_free_con;
+
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(per_cpu_ptr(sess->mp_skip_entry, cpu));
+
+	err = ibtrs_clt_init_stats(&sess->stats);
+	if (unlikely(err))
+		goto err_free_percpu;
+
+	return sess;
+
+err_free_percpu:
+	free_percpu(sess->mp_skip_entry);
+err_free_con:
+	kfree(sess->s.con);
+err_free_sess:
+	kfree(sess);
+err:
+	return ERR_PTR(err);
+}
+
+static void free_sess(struct ibtrs_clt_sess *sess)
+{
+	ibtrs_clt_free_stats(&sess->stats);
+	free_percpu(sess->mp_skip_entry);
+	kfree(sess->s.con);
+	kfree(sess->rbufs);
+	kfree(sess);
+}
+
+static int create_con(struct ibtrs_clt_sess *sess, unsigned int cid)
+{
+	struct ibtrs_clt_con *con;
+
+	con = kzalloc(sizeof(*con), GFP_KERNEL);
+	if (unlikely(!con))
+		return -ENOMEM;
+
+	/* Map first two connections to the first CPU */
+	con->cpu  = (cid ? cid - 1 : 0) % nr_cpu_ids;
+	con->c.cid = cid;
+	con->c.sess = &sess->s;
+	atomic_set(&con->io_cnt, 0);
+
+	sess->s.con[cid] = &con->c;
+
+	return 0;
+}
+
+static void destroy_con(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+
+	sess->s.con[con->c.cid] = NULL;
+	kfree(con);
+}
+
+static int create_con_cq_qp(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	u16 wr_queue_size;
+	int err, cq_vector;
+
+	/*
+	 * This function can fail, but destroy_con_cq_qp() should still be
+	 * called, because create_con_cq_qp() runs on the cm event path, so
+	 * the caller/waiter never knows whether we failed before or after
+	 * create_con_cq_qp().  To solve this dilemma without introducing
+	 * additional flags, simply allow destroy_con_cq_qp() to be called
+	 * many times.
+	 */
+
+	if (con->c.cid == 0) {
+		/*
+		 * One completion for each receive and two for each send
+		 * (send request + registration)
+		 * + 2 for drain and heartbeat
+		 * in case qp gets into error state
+		 */
+		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
+		/* We must be the first here */
+		if (WARN_ON(sess->s.dev))
+			return -EINVAL;
+
+		/*
+		 * The whole session uses device from user connection.
+		 * Be careful not to close user connection before ib dev
+		 * is gracefully put.
+		 */
+		sess->s.dev = ibtrs_ib_dev_find_or_add(con->c.cm_id->device,
+						       &dev_pool);
+		if (unlikely(!sess->s.dev)) {
+			ibtrs_wrn(sess,
+				  "ibtrs_ib_dev_find_or_add(): no memory\n");
+			return -ENOMEM;
+		}
+		sess->s.dev_ref = 1;
+		query_fast_reg_mode(sess);
+	} else {
+		/*
+		 * Here we assume that session members are correctly set.
+		 * This is always true if user connection (cid == 0) is
+		 * established first.
+		 */
+		if (WARN_ON(!sess->s.dev))
+			return -EINVAL;
+		if (WARN_ON(!sess->queue_depth))
+			return -EINVAL;
+
+		/* Shared between connections */
+		sess->s.dev_ref++;
+		wr_queue_size =
+			min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
+			      /* QD * (REQ + RSP + FR REGS or INVS) + drain */
+			      sess->queue_depth * 3 + 1);
+	}
+	cq_vector = con->cpu % sess->s.dev->ib_dev->num_comp_vectors;
+	err = ibtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
+				 cq_vector, wr_queue_size, wr_queue_size,
+				 IB_POLL_SOFTIRQ);
+	/*
+	 * In case of error we do not bother to clean previous allocations,
+	 * since destroy_con_cq_qp() must be called.
+	 */
+
+	return err;
+}
+
+static void destroy_con_cq_qp(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+
+	/*
+	 * Be careful here: destroy_con_cq_qp() can be called even
+	 * create_con_cq_qp() failed, see comments there.
+	 */
+
+	ibtrs_cq_qp_destroy(&con->c);
+	if (sess->s.dev_ref && !--sess->s.dev_ref) {
+		ibtrs_ib_dev_put(sess->s.dev);
+		sess->s.dev = NULL;
+	}
+}
+
+static void stop_cm(struct ibtrs_clt_con *con)
+{
+	rdma_disconnect(con->c.cm_id);
+	if (con->c.qp)
+		ib_drain_qp(con->c.qp);
+}
+
+static void destroy_cm(struct ibtrs_clt_con *con)
+{
+	rdma_destroy_id(con->c.cm_id);
+	con->c.cm_id = NULL;
+}
+
+static int create_cm(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct rdma_cm_id *cm_id;
+	int err;
+
+	cm_id = rdma_create_id(&init_net, ibtrs_clt_rdma_cm_handler, con,
+			       sess->s.dst_addr.ss_family == AF_IB ?
+			       RDMA_PS_IB : RDMA_PS_TCP, IB_QPT_RC);
+	if (unlikely(IS_ERR(cm_id))) {
+		err = PTR_ERR(cm_id);
+		ibtrs_err(sess, "Failed to create CM ID, err: %d\n", err);
+
+		return err;
+	}
+	con->c.cm_id = cm_id;
+	con->cm_err = 0;
+	/* allow the port to be reused */
+	err = rdma_set_reuseaddr(cm_id, 1);
+	if (err != 0) {
+		ibtrs_err(sess, "Set address reuse failed, err: %d\n", err);
+		goto destroy_cm;
+	}
+	err = rdma_resolve_addr(cm_id, (struct sockaddr *)&sess->s.src_addr,
+				(struct sockaddr *)&sess->s.dst_addr,
+				IBTRS_CONNECT_TIMEOUT_MS);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "Failed to resolve address, err: %d\n", err);
+		goto destroy_cm;
+	}
+	/*
+	 * Combine connection status and session events. This is needed
+	 * to wait for two possible cases: cm_err has something meaningful,
+	 * or the session state was changed to error by device removal.
+	 */
+	err = wait_event_interruptible_timeout(sess->state_wq,
+			con->cm_err || sess->state != IBTRS_CLT_CONNECTING,
+			msecs_to_jiffies(IBTRS_CONNECT_TIMEOUT_MS));
+	if (unlikely(err == 0 || err == -ERESTARTSYS)) {
+		if (err == 0)
+			err = -ETIMEDOUT;
+		/* Timed out or interrupted */
+		goto errr;
+	}
+	if (unlikely(con->cm_err < 0)) {
+		err = con->cm_err;
+		goto errr;
+	}
+	if (unlikely(sess->state != IBTRS_CLT_CONNECTING)) {
+		/* Device removal */
+		err = -ECONNABORTED;
+		goto errr;
+	}
+
+	return 0;
+
+errr:
+	stop_cm(con);
+	/* It is safe to call destroy even if cq_qp was not initialized */
+	destroy_con_cq_qp(con);
+destroy_cm:
+	destroy_cm(con);
+
+	return err;
+}
+
+static void ibtrs_clt_sess_up(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt *clt = sess->clt;
+	int up;
+
+	/*
+	 * We can fire RECONNECTED event only when all paths were
+	 * connected on ibtrs_clt_open(), then each was disconnected
+	 * and the first one connected again.  That's why this nasty
+	 * game with counter value.
+	 */
+
+	mutex_lock(&clt->paths_ev_mutex);
+	up = ++clt->paths_up;
+	/*
+	 * Here it is safe to access paths num directly since up counter
+	 * is greater than MAX_PATHS_NUM only while ibtrs_clt_open() is
+	 * in progress, thus paths removals are impossible.
+	 */
+	if (up > MAX_PATHS_NUM && up == MAX_PATHS_NUM + clt->paths_num)
+		clt->paths_up = clt->paths_num;
+	else if (up == 1)
+		clt->link_ev(clt->priv, IBTRS_CLT_LINK_EV_RECONNECTED);
+	mutex_unlock(&clt->paths_ev_mutex);
+
+	/* Mark session as established */
+	sess->established = true;
+	sess->reconnect_attempts = 0;
+	sess->stats.reconnects.successful_cnt++;
+}
+
+static void ibtrs_clt_sess_down(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt *clt = sess->clt;
+
+	if (!sess->established)
+		return;
+
+	sess->established = false;
+	mutex_lock(&clt->paths_ev_mutex);
+	WARN_ON(!clt->paths_up);
+	if (--clt->paths_up == 0)
+		clt->link_ev(clt->priv, IBTRS_CLT_LINK_EV_DISCONNECTED);
+	mutex_unlock(&clt->paths_ev_mutex);
+}
+
+static void ibtrs_clt_stop_and_destroy_conns(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt_con *con;
+	unsigned int cid;
+
+	WARN_ON(sess->state == IBTRS_CLT_CONNECTED);
+
+	/*
+	 * Possible race with ibtrs_clt_open(), when DEVICE_REMOVAL comes
+	 * exactly in between.  Start destroying after it finishes.
+	 */
+	mutex_lock(&sess->init_mutex);
+	mutex_unlock(&sess->init_mutex);
+
+	/*
+	 * All IO paths must observe !CONNECTED state before we
+	 * free everything.
+	 */
+	synchronize_rcu();
+
+	ibtrs_clt_stop_hb(sess);
+
+	/*
+	 * The order is utterly crucial: firstly disconnect and complete all
+	 * rdma requests with error (thus set in_use=false for requests),
+	 * then fail outstanding requests checking in_use for each, and
+	 * eventually notify upper layer about session disconnection.
+	 */
+
+	for (cid = 0; cid < sess->s.con_num; cid++) {
+		if (!sess->s.con[cid])
+			break;
+		con = to_clt_con(sess->s.con[cid]);
+		stop_cm(con);
+	}
+	fail_all_outstanding_reqs(sess);
+	free_sess_reqs(sess);
+	ibtrs_clt_sess_down(sess);
+
+	/*
+	 * Wait for graceful shutdown, namely when peer side invokes
+	 * rdma_disconnect(). 'connected_cnt' is decremented only on
+	 * CM events, thus if the other side has crashed and hb has detected
+	 * something is wrong, we will be stuck here for exactly timeout ms,
+	 * since CM does not fire anything.  That is fine, we are not in
+	 * a hurry.
+	 */
+	wait_event_timeout(sess->state_wq, !atomic_read(&sess->connected_cnt),
+			   msecs_to_jiffies(IBTRS_CONNECT_TIMEOUT_MS));
+
+	for (cid = 0; cid < sess->s.con_num; cid++) {
+		if (!sess->s.con[cid])
+			break;
+		con = to_clt_con(sess->s.con[cid]);
+		destroy_con_cq_qp(con);
+		destroy_cm(con);
+		destroy_con(con);
+	}
+}
+
+static inline bool xchg_sessions(struct ibtrs_clt_sess __rcu **rcu_ppcpu_path,
+				 struct ibtrs_clt_sess *sess,
+				 struct ibtrs_clt_sess *next)
+{
+	struct ibtrs_clt_sess **ppcpu_path;
+
+	/* Call cmpxchg() without sparse warnings */
+	ppcpu_path = (typeof(ppcpu_path))rcu_ppcpu_path;
+	return (sess == cmpxchg(ppcpu_path, sess, next));
+}
+
+static void ibtrs_clt_remove_path_from_arr(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt *clt = sess->clt;
+	struct ibtrs_clt_sess *next;
+	bool wait_for_grace = false;
+	int cpu;
+
+	mutex_lock(&clt->paths_mutex);
+	list_del_rcu(&sess->s.entry);
+
+	/* Make sure everybody observes path removal. */
+	synchronize_rcu();
+
+	/*
+	 * At this point nobody sees @sess in the list, but still we have
+	 * dangling pointer @pcpu_path which _can_ point to @sess.  Since
+	 * nobody can observe @sess in the list, we guarantee that IO path
+	 * will not assign @sess to @pcpu_path, i.e. @pcpu_path can be equal
+	 * to @sess, but can never again become @sess.
+	 */
+
+	/*
+	 * Decrement the number of paths only after the grace period, because
+	 * the caller of do_each_path() must first observe the list without
+	 * the path and only then the decremented paths number.
+	 *
+	 * Otherwise there can be the following situation:
+	 *    o Two paths exist and IO is coming.
+	 *    o One path is removed:
+	 *      CPU#0                          CPU#1
+	 *      do_each_path():                ibtrs_clt_remove_path_from_arr():
+	 *          path = get_next_path()
+	 *          ^^^                            list_del_rcu(path)
+	 *          [!CONNECTED path]              clt->paths_num--
+	 *                                              ^^^^^^^^^
+	 *          load clt->paths_num                 from 2 to 1
+	 *                    ^^^^^^^^^
+	 *                    sees 1
+	 *
+	 *      path is observed as !CONNECTED, but do_each_path() loop
+	 *      ends, because expression i < clt->paths_num is false.
+	 */
+	clt->paths_num--;
+
+	/*
+	 * Get @next connection from current @sess which is going to be
+	 * removed.  If @sess is the last element, then @next is NULL.
+	 */
+	next = list_next_or_null_rr_rcu(&clt->paths_list, &sess->s.entry,
+					typeof(*next), s.entry);
+
+	/*
+	 * @pcpu paths can still point to the path which is going to be
+	 * removed, so change the pointer manually.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct ibtrs_clt_sess __rcu **ppcpu_path;
+
+		ppcpu_path = per_cpu_ptr(clt->pcpu_path, cpu);
+		if (rcu_dereference(*ppcpu_path) != sess)
+			/*
+			 * synchronize_rcu() was called just after deleting
+			 * entry from the list, thus IO code path cannot
+			 * change pointer back to the pointer which is going
+			 * to be removed, we are safe here.
+			 */
+			continue;
+
+		/*
+		 * We race with IO code path, which also changes pointer,
+		 * thus we have to be careful not to overwrite it.
+		 */
+		if (xchg_sessions(ppcpu_path, sess, next))
+			/*
+			 * @ppcpu_path was successfully replaced with @next,
+			 * that means that someone could also have picked up
+			 * @sess and be dereferencing it right now, so waiting
+			 * for a grace period is required.
+			 */
+			wait_for_grace = true;
+	}
+	if (wait_for_grace)
+		synchronize_rcu();
+
+	mutex_unlock(&clt->paths_mutex);
+}
+
+static void ibtrs_clt_add_path_to_arr(struct ibtrs_clt_sess *sess,
+				      struct ibtrs_addr *addr)
+{
+	struct ibtrs_clt *clt = sess->clt;
+
+	mutex_lock(&clt->paths_mutex);
+	clt->paths_num++;
+
+	/*
+	 * First increase paths_num, wait for a grace period and only then
+	 * add the path to the list.  Why?  Since we add the path in the
+	 * !CONNECTED state, the explanation is similar to what has been
+	 * written in ibtrs_clt_remove_path_from_arr().
+	 */
+	synchronize_rcu();
+
+	list_add_tail_rcu(&sess->s.entry, &clt->paths_list);
+	mutex_unlock(&clt->paths_mutex);
+}
+
+static void ibtrs_clt_close_work(struct work_struct *work)
+{
+	struct ibtrs_clt_sess *sess;
+
+	sess = container_of(work, struct ibtrs_clt_sess, close_work);
+
+	cancel_delayed_work_sync(&sess->reconnect_dwork);
+	ibtrs_clt_stop_and_destroy_conns(sess);
+	/*
+	 * Sounds stupid, huh?  No, it is not.  Consider this sequence:
+	 *
+	 *   #CPU0                              #CPU1
+	 *   1.  CONNECTED->RECONNECTING
+	 *   2.                                 RECONNECTING->CLOSING
+	 *   3.  queue_work(&reconnect_dwork)
+	 *   4.                                 queue_work(&close_work);
+	 *   5.  reconnect_work();              close_work();
+	 *
+	 * To avoid that case do cancel twice: before and after.
+	 */
+	cancel_delayed_work_sync(&sess->reconnect_dwork);
+	ibtrs_clt_change_state(sess, IBTRS_CLT_CLOSED);
+}
+
+static void ibtrs_clt_close_conns(struct ibtrs_clt_sess *sess, bool wait)
+{
+	if (ibtrs_clt_change_state(sess, IBTRS_CLT_CLOSING))
+		queue_work(ibtrs_wq, &sess->close_work);
+	if (wait)
+		flush_work(&sess->close_work);
+}
+
+static int init_conns(struct ibtrs_clt_sess *sess)
+{
+	unsigned int cid;
+	int err;
+
+	/*
+	 * On every new set of session connections increase the reconnect
+	 * counter to avoid clashes with previous, not yet closed sessions
+	 * on the server side.
+	 */
+	sess->s.recon_cnt++;
+
+	/* Establish all RDMA connections  */
+	for (cid = 0; cid < sess->s.con_num; cid++) {
+		err = create_con(sess, cid);
+		if (unlikely(err))
+			goto destroy;
+
+		err = create_cm(to_clt_con(sess->s.con[cid]));
+		if (unlikely(err)) {
+			destroy_con(to_clt_con(sess->s.con[cid]));
+			goto destroy;
+		}
+	}
+	err = alloc_sess_reqs(sess);
+	if (unlikely(err))
+		goto destroy;
+
+	ibtrs_clt_start_hb(sess);
+
+	return 0;
+
+destroy:
+	while (cid--) {
+		struct ibtrs_clt_con *con = to_clt_con(sess->s.con[cid]);
+
+		stop_cm(con);
+		destroy_con_cq_qp(con);
+		destroy_cm(con);
+		destroy_con(con);
+	}
+	/*
+	 * If we've never taken the async path and got an error, say,
+	 * doing rdma_resolve_addr(), switch to the CONNECTING_ERR state
+	 * manually to keep reconnecting.
+	 */
+	ibtrs_clt_change_state(sess, IBTRS_CLT_CONNECTING_ERR);
+
+	return err;
+}
+
+static int ibtrs_rdma_addr_resolved(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	int err;
+
+	err = create_con_cq_qp(con);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "create_con_cq_qp(), err: %d\n", err);
+		return err;
+	}
+	err = rdma_resolve_route(con->c.cm_id, IBTRS_CONNECT_TIMEOUT_MS);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "Resolving route failed, err: %d\n", err);
+		destroy_con_cq_qp(con);
+	}
+
+	return err;
+}
+
+static int ibtrs_rdma_route_resolved(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_clt *clt = sess->clt;
+	struct ibtrs_msg_conn_req msg;
+	struct rdma_conn_param param;
+
+	int err;
+
+	memset(&param, 0, sizeof(param));
+	param.retry_count = clamp(retry_cnt, MIN_RTR_CNT, MAX_RTR_CNT);
+	param.rnr_retry_count = 7;
+	param.private_data = &msg;
+	param.private_data_len = sizeof(msg);
+
+	/*
+	 * Those two fields are part of struct cma_hdr, which is shared
+	 * with private_data in the AF_IB case, so zero them to avoid
+	 * wrong validation inside cma.c on the receiver side.
+	 */
+	msg.__cma_version = 0;
+	msg.__ip_version = 0;
+	msg.magic = cpu_to_le16(IBTRS_MAGIC);
+	msg.version = cpu_to_le16(IBTRS_PROTO_VER);
+	msg.cid = cpu_to_le16(con->c.cid);
+	msg.cid_num = cpu_to_le16(sess->s.con_num);
+	msg.recon_cnt = cpu_to_le16(sess->s.recon_cnt);
+	uuid_copy(&msg.sess_uuid, &sess->s.uuid);
+	uuid_copy(&msg.paths_uuid, &clt->paths_uuid);
+
+	err = rdma_connect(con->c.cm_id, &param);
+	if (err)
+		ibtrs_err(sess, "rdma_connect(): %d\n", err);
+
+	return err;
+}
+
+static int ibtrs_rdma_conn_established(struct ibtrs_clt_con *con,
+				       struct rdma_cm_event *ev)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_clt *clt = sess->clt;
+	const struct ibtrs_msg_conn_rsp *msg;
+	u16 version, queue_depth;
+	int errno;
+	u8 len;
+
+	msg = ev->param.conn.private_data;
+	len = ev->param.conn.private_data_len;
+	if (unlikely(len < sizeof(*msg))) {
+		ibtrs_err(sess, "Invalid IBTRS connection response\n");
+		return -ECONNRESET;
+	}
+	if (unlikely(le16_to_cpu(msg->magic) != IBTRS_MAGIC)) {
+		ibtrs_err(sess, "Invalid IBTRS magic\n");
+		return -ECONNRESET;
+	}
+	version = le16_to_cpu(msg->version);
+	if (unlikely(version >> 8 != IBTRS_PROTO_VER_MAJOR)) {
+		ibtrs_err(sess, "Unsupported major IBTRS version: %d, expected %d\n",
+			  version >> 8, IBTRS_PROTO_VER_MAJOR);
+		return -ECONNRESET;
+	}
+	errno = le16_to_cpu(msg->errno);
+	if (unlikely(errno)) {
+		ibtrs_err(sess, "Invalid IBTRS message: errno %d\n",
+			  errno);
+		return -ECONNRESET;
+	}
+	if (con->c.cid == 0) {
+		queue_depth = le16_to_cpu(msg->queue_depth);
+
+		if (queue_depth > MAX_SESS_QUEUE_DEPTH) {
+			ibtrs_err(sess, "Invalid IBTRS message: queue=%d\n",
+				  queue_depth);
+			return -ECONNRESET;
+		}
+		if (!sess->rbufs || sess->queue_depth < queue_depth) {
+			kfree(sess->rbufs);
+			sess->rbufs = kcalloc(queue_depth, sizeof(*sess->rbufs),
+					      GFP_KERNEL);
+			if (unlikely(!sess->rbufs)) {
+				ibtrs_err(sess, "Failed to allocate queue_depth=%d\n",
+					  queue_depth);
+				return -ENOMEM;
+			}
+		}
+		sess->queue_depth = queue_depth;
+		sess->max_hdr_size = le32_to_cpu(msg->max_hdr_size);
+		sess->max_io_size = le32_to_cpu(msg->max_io_size);
+		sess->chunk_size = sess->max_io_size + sess->max_hdr_size;
+
+		/*
+		 * Global queue depth and IO size are always the minimum.
+		 * If during a reconnection the server sends us a slightly
+		 * higher value, the client does not care and keeps using
+		 * the cached minimum.
+		 *
+		 * Since we can have several sessions (paths) re-establishing
+		 * connections in parallel, take the lock.
+		 */
+		mutex_lock(&clt->paths_mutex);
+		clt->queue_depth = min_not_zero(sess->queue_depth,
+						clt->queue_depth);
+		clt->max_io_size = min_not_zero(sess->max_io_size,
+						clt->max_io_size);
+		mutex_unlock(&clt->paths_mutex);
+
+		/*
+		 * Cache the hca_port and hca_name for sysfs
+		 */
+		sess->hca_port = con->c.cm_id->port_num;
+		scnprintf(sess->hca_name, sizeof(sess->hca_name),
+			  "%s", sess->s.dev->ib_dev->name);
+		sess->s.src_addr = con->c.cm_id->route.addr.src_addr;
+	}
+
+	return 0;
+}
+
+static int ibtrs_rdma_conn_rejected(struct ibtrs_clt_con *con,
+				    struct rdma_cm_event *ev)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	const struct ibtrs_msg_conn_rsp *msg;
+	const char *rej_msg;
+	int status, errno;
+	u8 data_len;
+
+	status = ev->status;
+	rej_msg = rdma_reject_msg(con->c.cm_id, status);
+	msg = rdma_consumer_reject_data(con->c.cm_id, ev, &data_len);
+
+	if (msg && data_len >= sizeof(*msg)) {
+		errno = (int16_t)le16_to_cpu(msg->errno);
+		if (errno == -EBUSY)
+			ibtrs_err(sess,
+				  "Previous session still exists on the server, please reconnect later\n");
+		else
+			ibtrs_err(sess,
+				  "Connect rejected: status %d (%s), ibtrs errno %d\n",
+				  status, rej_msg, errno);
+	} else {
+		ibtrs_err(sess,
+			  "Connect rejected but with malformed message: status %d (%s)\n",
+			  status, rej_msg);
+	}
+
+	return -ECONNRESET;
+}
+
+static void ibtrs_rdma_error_recovery(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+
+	if (ibtrs_clt_change_state_from_to(sess,
+					   IBTRS_CLT_CONNECTED,
+					   IBTRS_CLT_RECONNECTING)) {
+		/*
+		 * Normal scenario, reconnect if we were successfully connected
+		 */
+		queue_delayed_work(ibtrs_wq, &sess->reconnect_dwork, 0);
+	} else {
+		/*
+		 * An error can happen only while establishing a new
+		 * connection, so notify the waiter with the error state;
+		 * the waiter is responsible for cleaning up the rest and
+		 * reconnecting if needed.
+		 */
+		ibtrs_clt_change_state_from_to(sess,
+					       IBTRS_CLT_CONNECTING,
+					       IBTRS_CLT_CONNECTING_ERR);
+	}
+}
+
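+/*
+ * con->cm_err == 1 marks the connection as successfully established by the
+ * CM; flag_error_on_conn() relies on it to know whether connected_cnt must
+ * be decremented on a later error.
+ */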
+static inline void flag_success_on_conn(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+
+	atomic_inc(&sess->connected_cnt);
+	con->cm_err = 1;
+}
+
+static inline void flag_error_on_conn(struct ibtrs_clt_con *con, int cm_err)
+{
+	if (con->cm_err == 1) {
+		struct ibtrs_clt_sess *sess;
+
+		sess = to_clt_sess(con->c.sess);
+		if (atomic_dec_and_test(&sess->connected_cnt))
+			wake_up(&sess->state_wq);
+	}
+	con->cm_err = cm_err;
+}
+
+static int ibtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
+				     struct rdma_cm_event *ev)
+{
+	struct ibtrs_clt_con *con = cm_id->context;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	int cm_err = 0;
+
+	switch (ev->event) {
+	case RDMA_CM_EVENT_ADDR_RESOLVED:
+		cm_err = ibtrs_rdma_addr_resolved(con);
+		break;
+	case RDMA_CM_EVENT_ROUTE_RESOLVED:
+		cm_err = ibtrs_rdma_route_resolved(con);
+		break;
+	case RDMA_CM_EVENT_ESTABLISHED:
+		con->cm_err = ibtrs_rdma_conn_established(con, ev);
+		if (likely(!con->cm_err)) {
+			/*
+			 * Report success and wake up. Here we abuse state_wq,
+			 * i.e. wake up without state change, but we set cm_err.
+			 */
+			flag_success_on_conn(con);
+			wake_up(&sess->state_wq);
+			return 0;
+		}
+		break;
+	case RDMA_CM_EVENT_REJECTED:
+		cm_err = ibtrs_rdma_conn_rejected(con, ev);
+		break;
+	case RDMA_CM_EVENT_CONNECT_ERROR:
+	case RDMA_CM_EVENT_UNREACHABLE:
+		ibtrs_wrn(sess, "CM error event %d\n", ev->event);
+		cm_err = -ECONNRESET;
+		break;
+	case RDMA_CM_EVENT_ADDR_ERROR:
+	case RDMA_CM_EVENT_ROUTE_ERROR:
+		cm_err = -EHOSTUNREACH;
+		break;
+	case RDMA_CM_EVENT_DISCONNECTED:
+	case RDMA_CM_EVENT_ADDR_CHANGE:
+	case RDMA_CM_EVENT_TIMEWAIT_EXIT:
+		cm_err = -ECONNRESET;
+		break;
+	case RDMA_CM_EVENT_DEVICE_REMOVAL:
+		/*
+		 * Device removal is a special case.  Queue close and return 0.
+		 */
+		ibtrs_clt_close_conns(sess, false);
+		return 0;
+	default:
+		ibtrs_err(sess, "Unexpected RDMA CM event (%d)\n", ev->event);
+		cm_err = -ECONNRESET;
+		break;
+	}
+
+	if (cm_err) {
+		/*
+		 * A CM error makes sense only while establishing a connection;
+		 * in other cases we rely on the normal reconnect procedure.
+		 */
+		flag_error_on_conn(con, cm_err);
+		ibtrs_rdma_error_recovery(con);
+	}
+
+	return 0;
+}
+
+static void ibtrs_clt_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_clt_con *con = cq->cq_context;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_iu *iu;
+
+	iu = container_of(wc->wr_cqe, struct ibtrs_iu, cqe);
+	ibtrs_iu_free(iu, DMA_TO_DEVICE, sess->s.dev->ib_dev);
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "Sess info request send failed: %s\n",
+			  ib_wc_status_msg(wc->status));
+		ibtrs_clt_change_state(sess, IBTRS_CLT_CONNECTING_ERR);
+		return;
+	}
+
+	ibtrs_clt_update_wc_stats(con);
+}
+
+static int process_info_rsp(struct ibtrs_clt_sess *sess,
+			    const struct ibtrs_msg_info_rsp *msg)
+{
+	unsigned int sg_cnt, total_len;
+	int i, sgi;
+
+	sg_cnt = le16_to_cpu(msg->sg_cnt);
+	if (unlikely(!sg_cnt))
+		return -EINVAL;
+	/*
+	 * Check if IB immediate data size is enough to hold the mem_id and
+	 * the offset inside the memory chunk.
+	 */
+	if (unlikely((ilog2(sg_cnt - 1) + 1) +
+		     (ilog2(sess->chunk_size - 1) + 1) >
+		     MAX_IMM_PAYL_BITS)) {
+		ibtrs_err(sess,
+			  "RDMA immediate size (%db) not enough to encode %d buffers of size %dB\n",
+			  MAX_IMM_PAYL_BITS, sg_cnt, sess->chunk_size);
+		return -EINVAL;
+	}
+	if (unlikely(!sg_cnt || (sess->queue_depth % sg_cnt))) {
+		ibtrs_err(sess, "Incorrect sg_cnt %d, queue_depth is not a multiple of it\n",
+			  sg_cnt);
+		return -EINVAL;
+	}
+	total_len = 0;
+	for (sgi = 0, i = 0; sgi < sg_cnt && i < sess->queue_depth; sgi++) {
+		const struct ibtrs_sg_desc *desc = &msg->desc[sgi];
+		u32 len, rkey;
+		u64 addr;
+
+		addr = le64_to_cpu(desc->addr);
+		rkey = le32_to_cpu(desc->key);
+		len  = le32_to_cpu(desc->len);
+
+		total_len += len;
+
+		if (unlikely(!len || (len % sess->chunk_size))) {
+			ibtrs_err(sess, "Incorrect [%d].len %d\n", sgi, len);
+			return -EINVAL;
+		}
+		for ( ; len && i < sess->queue_depth; i++) {
+			sess->rbufs[i].addr = addr;
+			sess->rbufs[i].rkey = rkey;
+
+			len  -= sess->chunk_size;
+			addr += sess->chunk_size;
+		}
+	}
+	/* Sanity check */
+	if (unlikely(sgi != sg_cnt || i != sess->queue_depth)) {
+		ibtrs_err(sess, "Incorrect sg vector, not fully mapped\n");
+		return -EINVAL;
+	}
+	if (unlikely(total_len != sess->chunk_size * sess->queue_depth)) {
+		ibtrs_err(sess, "Incorrect total_len %d\n", total_len);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void ibtrs_clt_info_rsp_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_clt_con *con = cq->cq_context;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_msg_info_rsp *msg;
+	enum ibtrs_clt_state state;
+	struct ibtrs_iu *iu;
+	size_t rx_sz;
+	int err;
+
+	state = IBTRS_CLT_CONNECTING_ERR;
+
+	WARN_ON(con->c.cid);
+	iu = container_of(wc->wr_cqe, struct ibtrs_iu, cqe);
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "Sess info response recv failed: %s\n",
+			  ib_wc_status_msg(wc->status));
+		goto out;
+	}
+	WARN_ON(wc->opcode != IB_WC_RECV);
+
+	if (unlikely(wc->byte_len < sizeof(*msg))) {
+		ibtrs_err(sess, "Sess info response is malformed: size %d\n",
+			  wc->byte_len);
+		goto out;
+	}
+	ib_dma_sync_single_for_cpu(sess->s.dev->ib_dev, iu->dma_addr,
+				   iu->size, DMA_FROM_DEVICE);
+	msg = iu->buf;
+	if (unlikely(le16_to_cpu(msg->type) != IBTRS_MSG_INFO_RSP)) {
+		ibtrs_err(sess, "Sess info response is malformed: type %d\n",
+			  le16_to_cpu(msg->type));
+		goto out;
+	}
+	rx_sz  = sizeof(*msg);
+	rx_sz += sizeof(msg->desc[0]) * le16_to_cpu(msg->sg_cnt);
+	if (unlikely(wc->byte_len < rx_sz)) {
+		ibtrs_err(sess, "Sess info response is malformed: size %d\n",
+			  wc->byte_len);
+		goto out;
+	}
+	err = process_info_rsp(sess, msg);
+	if (unlikely(err))
+		goto out;
+
+	err = post_recv_sess(sess);
+	if (unlikely(err))
+		goto out;
+
+	state = IBTRS_CLT_CONNECTED;
+
+out:
+	ibtrs_clt_update_wc_stats(con);
+	ibtrs_iu_free(iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev);
+	ibtrs_clt_change_state(sess, state);
+}
+
+static int ibtrs_send_sess_info(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt_con *usr_con = to_clt_con(sess->s.con[0]);
+	struct ibtrs_msg_info_req *msg;
+	struct ibtrs_iu *tx_iu, *rx_iu;
+	size_t rx_sz;
+	int err;
+
+	rx_sz  = sizeof(struct ibtrs_msg_info_rsp);
+	rx_sz += sizeof(u64) * MAX_SESS_QUEUE_DEPTH;
+
+	tx_iu = ibtrs_iu_alloc(0, sizeof(struct ibtrs_msg_info_req), GFP_KERNEL,
+			       sess->s.dev->ib_dev, DMA_TO_DEVICE,
+			       ibtrs_clt_info_req_done);
+	rx_iu = ibtrs_iu_alloc(0, rx_sz, GFP_KERNEL, sess->s.dev->ib_dev,
+			       DMA_FROM_DEVICE, ibtrs_clt_info_rsp_done);
+	if (unlikely(!tx_iu || !rx_iu)) {
+		ibtrs_err(sess, "ibtrs_iu_alloc(): no memory\n");
+		err = -ENOMEM;
+		goto out;
+	}
+	/* Prepare for getting info response */
+	err = ibtrs_iu_post_recv(&usr_con->c, rx_iu);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "ibtrs_iu_post_recv(), err: %d\n", err);
+		goto out;
+	}
+	rx_iu = NULL;
+
+	msg = tx_iu->buf;
+	msg->type = cpu_to_le16(IBTRS_MSG_INFO_REQ);
+	memcpy(msg->sessname, sess->s.sessname, sizeof(msg->sessname));
+
+	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, tx_iu->dma_addr,
+				      tx_iu->size, DMA_TO_DEVICE);
+
+	/* Send info request */
+	err = ibtrs_iu_post_send(&usr_con->c, tx_iu, sizeof(*msg), NULL);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "ibtrs_iu_post_send(), err: %d\n", err);
+		goto out;
+	}
+	tx_iu = NULL;
+
+	/* Wait for state change */
+	wait_event_interruptible_timeout(sess->state_wq,
+				sess->state != IBTRS_CLT_CONNECTING,
+				msecs_to_jiffies(IBTRS_CONNECT_TIMEOUT_MS));
+	if (unlikely(sess->state != IBTRS_CLT_CONNECTED)) {
+		if (sess->state == IBTRS_CLT_CONNECTING_ERR)
+			err = -ECONNRESET;
+		else
+			err = -ETIMEDOUT;
+		goto out;
+	}
+
+out:
+	if (tx_iu)
+		ibtrs_iu_free(tx_iu, DMA_TO_DEVICE, sess->s.dev->ib_dev);
+	if (rx_iu)
+		ibtrs_iu_free(rx_iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev);
+	if (unlikely(err))
+		/* If we've never taken the async path, e.g. due to an allocation failure */
+		ibtrs_clt_change_state(sess, IBTRS_CLT_CONNECTING_ERR);
+
+	return err;
+}
+
+/**
+ * init_sess() - establishes all session connections and does the handshake
+ * @sess: client session to initialize
+ *
+ * In case of error a full close or reconnect procedure should be taken,
+ * because the async reconnect or close works may already have been started.
+ */
+static int init_sess(struct ibtrs_clt_sess *sess)
+{
+	int err;
+
+	mutex_lock(&sess->init_mutex);
+	err = init_conns(sess);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "init_conns(), err: %d\n", err);
+		goto out;
+	}
+	err = ibtrs_send_sess_info(sess);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "ibtrs_send_sess_info(), err: %d\n", err);
+		goto out;
+	}
+	ibtrs_clt_sess_up(sess);
+out:
+	mutex_unlock(&sess->init_mutex);
+
+	return err;
+}
+
+static void ibtrs_clt_reconnect_work(struct work_struct *work)
+{
+	struct ibtrs_clt_sess *sess;
+	struct ibtrs_clt *clt;
+	unsigned int delay_ms;
+	int err;
+
+	sess = container_of(to_delayed_work(work), struct ibtrs_clt_sess,
+			    reconnect_dwork);
+	clt = sess->clt;
+
+	if (ibtrs_clt_state(sess) == IBTRS_CLT_CLOSING)
+		/* User requested closing */
+		return;
+
+	if (sess->reconnect_attempts >= clt->max_reconnect_attempts) {
+		/* Close a session completely if max attempts is reached */
+		ibtrs_clt_close_conns(sess, false);
+		return;
+	}
+	sess->reconnect_attempts++;
+
+	/* Stop everything */
+	ibtrs_clt_stop_and_destroy_conns(sess);
+	ibtrs_clt_change_state(sess, IBTRS_CLT_CONNECTING);
+
+	err = init_sess(sess);
+	if (unlikely(err))
+		goto reconnect_again;
+
+	return;
+
+reconnect_again:
+	if (ibtrs_clt_change_state(sess, IBTRS_CLT_RECONNECTING)) {
+		sess->stats.reconnects.fail_cnt++;
+		delay_ms = clt->reconnect_delay_sec * 1000;
+		queue_delayed_work(ibtrs_wq, &sess->reconnect_dwork,
+				   msecs_to_jiffies(delay_ms));
+	}
+}
+
+static void ibtrs_clt_dev_release(struct device *dev)
+{
+	/* Nobody plays with device references, so nop */
+}
+
+static struct ibtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+				   short port, size_t pdu_sz,
+				   void *priv, link_clt_ev_fn *link_ev,
+				   unsigned int max_segments,
+				   unsigned int reconnect_delay_sec,
+				   unsigned int max_reconnect_attempts)
+{
+	struct ibtrs_clt *clt;
+	int err;
+
+	if (unlikely(!paths_num || paths_num > MAX_PATHS_NUM))
+		return ERR_PTR(-EINVAL);
+
+	if (unlikely(strlen(sessname) >= sizeof(clt->sessname)))
+		return ERR_PTR(-EINVAL);
+
+	clt = kzalloc(sizeof(*clt), GFP_KERNEL);
+	if (unlikely(!clt))
+		return ERR_PTR(-ENOMEM);
+
+	clt->pcpu_path = alloc_percpu(typeof(*clt->pcpu_path));
+	if (unlikely(!clt->pcpu_path)) {
+		kfree(clt);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	uuid_gen(&clt->paths_uuid);
+	INIT_LIST_HEAD_RCU(&clt->paths_list);
+	clt->paths_num = paths_num;
+	clt->paths_up = MAX_PATHS_NUM;
+	clt->port = port;
+	clt->pdu_sz = pdu_sz;
+	clt->max_segments = max_segments;
+	clt->reconnect_delay_sec = reconnect_delay_sec;
+	clt->max_reconnect_attempts = max_reconnect_attempts;
+	clt->priv = priv;
+	clt->link_ev = link_ev;
+	clt->mp_policy = MP_POLICY_MIN_INFLIGHT;
+	strlcpy(clt->sessname, sessname, sizeof(clt->sessname));
+	init_waitqueue_head(&clt->tags_wait);
+	mutex_init(&clt->paths_ev_mutex);
+	mutex_init(&clt->paths_mutex);
+
+	clt->dev.class = ibtrs_dev_class;
+	clt->dev.release = ibtrs_clt_dev_release;
+	dev_set_name(&clt->dev, "%s", sessname);
+
+	err = device_register(&clt->dev);
+	if (unlikely(err))
+		goto percpu_free;
+
+	err = ibtrs_clt_create_sysfs_root_folders(clt);
+	if (unlikely(err))
+		goto dev_unregister;
+
+	return clt;
+
+dev_unregister:
+	/* Nobody plays with dev refs, so dev.release() is nop */
+	device_unregister(&clt->dev);
+percpu_free:
+	free_percpu(clt->pcpu_path);
+	kfree(clt);
+
+	return ERR_PTR(err);
+}
+
+static void wait_for_inflight_tags(struct ibtrs_clt *clt)
+{
+	if (clt->tags_map) {
+		size_t sz = clt->queue_depth;
+
+		wait_event(clt->tags_wait,
+			   find_first_bit(clt->tags_map, sz) >= sz);
+	}
+}
+
+static void free_clt(struct ibtrs_clt *clt)
+{
+	ibtrs_clt_destroy_sysfs_root_folders(clt);
+	wait_for_inflight_tags(clt);
+	free_tags(clt);
+	free_percpu(clt->pcpu_path);
+	/* Nobody plays with dev refs, so dev.release() is nop */
+	device_unregister(&clt->dev);
+	kfree(clt);
+}
+
+struct ibtrs_clt *ibtrs_clt_open(void *priv, link_clt_ev_fn *link_ev,
+				 const char *sessname,
+				 const struct ibtrs_addr *paths,
+				 size_t paths_num,
+				 short port,
+				 size_t pdu_sz, u8 reconnect_delay_sec,
+				 u16 max_segments,
+				 s16 max_reconnect_attempts)
+{
+	struct ibtrs_clt_sess *sess, *tmp;
+	struct ibtrs_clt *clt;
+	int err, i;
+
+	clt = alloc_clt(sessname, paths_num, port, pdu_sz, priv, link_ev,
+			max_segments, reconnect_delay_sec,
+			max_reconnect_attempts);
+	if (unlikely(IS_ERR(clt))) {
+		err = PTR_ERR(clt);
+		goto out;
+	}
+	for (i = 0; i < paths_num; i++) {
+		struct ibtrs_clt_sess *sess;
+
+		sess = alloc_sess(clt, &paths[i], nr_cons_per_session,
+				  max_segments);
+		if (unlikely(IS_ERR(sess))) {
+			err = PTR_ERR(sess);
+			ibtrs_err(clt, "alloc_sess(), err: %d\n", err);
+			goto close_all_sess;
+		}
+		list_add_tail_rcu(&sess->s.entry, &clt->paths_list);
+
+		err = init_sess(sess);
+		if (unlikely(err))
+			goto close_all_sess;
+
+		err = ibtrs_clt_create_sess_files(sess);
+		if (unlikely(err))
+			goto close_all_sess;
+	}
+	err = alloc_tags(clt);
+	if (unlikely(err)) {
+		ibtrs_err(clt, "alloc_tags(), err: %d\n", err);
+		goto close_all_sess;
+	}
+	err = ibtrs_clt_create_sysfs_root_files(clt);
+	if (unlikely(err))
+		goto close_all_sess;
+
+	/*
+	 * There is a race if someone decides to completely remove a just
+	 * created path using the sysfs entry.  To avoid the race we use a
+	 * simple 'opened' flag, see ibtrs_clt_remove_path_from_sysfs().
+	 */
+	clt->opened = true;
+
+	/* Do not let module be unloaded if client is alive */
+	__module_get(THIS_MODULE);
+
+	return clt;
+
+close_all_sess:
+	list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) {
+		ibtrs_clt_destroy_sess_files(sess, NULL);
+		ibtrs_clt_close_conns(sess, true);
+		free_sess(sess);
+	}
+	free_clt(clt);
+
+out:
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL(ibtrs_clt_open);
+
+void ibtrs_clt_close(struct ibtrs_clt *clt)
+{
+	struct ibtrs_clt_sess *sess, *tmp;
+
+	/* Firstly forbid sysfs access */
+	ibtrs_clt_destroy_sysfs_root_files(clt);
+	ibtrs_clt_destroy_sysfs_root_folders(clt);
+
+	/* Now it is safe to iterate over all paths without locks */
+	list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) {
+		ibtrs_clt_destroy_sess_files(sess, NULL);
+		ibtrs_clt_close_conns(sess, true);
+		free_sess(sess);
+	}
+	free_clt(clt);
+	module_put(THIS_MODULE);
+}
+EXPORT_SYMBOL(ibtrs_clt_close);
+
+int ibtrs_clt_reconnect_from_sysfs(struct ibtrs_clt_sess *sess)
+{
+	enum ibtrs_clt_state old_state;
+	int err = -EBUSY;
+	bool changed;
+
+	changed = ibtrs_clt_change_state_get_old(sess, IBTRS_CLT_RECONNECTING,
+						 &old_state);
+	if (changed) {
+		sess->reconnect_attempts = 0;
+		queue_delayed_work(ibtrs_wq, &sess->reconnect_dwork, 0);
+	}
+	if (changed || old_state == IBTRS_CLT_RECONNECTING) {
+		/*
+		 * flush_delayed_work() queues pending work for immediate
+		 * execution, so do the flush if we have queued something
+		 * right now or work is pending.
+		 */
+		flush_delayed_work(&sess->reconnect_dwork);
+		err = ibtrs_clt_sess_is_connected(sess) ? 0 : -ENOTCONN;
+	}
+
+	return err;
+}
+
+int ibtrs_clt_disconnect_from_sysfs(struct ibtrs_clt_sess *sess)
+{
+	ibtrs_clt_close_conns(sess, true);
+
+	return 0;
+}
+
+int ibtrs_clt_remove_path_from_sysfs(struct ibtrs_clt_sess *sess,
+				     const struct attribute *sysfs_self)
+{
+	struct ibtrs_clt *clt = sess->clt;
+	enum ibtrs_clt_state old_state;
+	bool changed;
+
+	/*
+	 * That can happen only when userspace tries to remove a path
+	 * very early, before ibtrs_clt_open() has finished.
+	 */
+	if (unlikely(!clt->opened))
+		return -EBUSY;
+
+	/*
+	 * Keep stopping the path until its state changes to DEAD or
+	 * is observed as DEAD:
+	 * 1. State changed to DEAD - we were fast and nobody invoked
+	 *    ibtrs_clt_reconnect(), which could start reconnecting again.
+	 * 2. State observed as DEAD - someone else is removing the path
+	 *    in parallel.
+	 */
+	do {
+		ibtrs_clt_close_conns(sess, true);
+	} while (!(changed = ibtrs_clt_change_state_get_old(sess,
+							    IBTRS_CLT_DEAD,
+							    &old_state)) &&
+		   old_state != IBTRS_CLT_DEAD);
+
+	/*
+	 * If the state was successfully changed to DEAD, free the session.
+	 */
+	if (likely(changed)) {
+		ibtrs_clt_destroy_sess_files(sess, sysfs_self);
+		ibtrs_clt_remove_path_from_arr(sess);
+		free_sess(sess);
+	}
+
+	return 0;
+}
+
+void ibtrs_clt_set_max_reconnect_attempts(struct ibtrs_clt *clt, int value)
+{
+	clt->max_reconnect_attempts = (unsigned int)value;
+}
+
+int ibtrs_clt_get_max_reconnect_attempts(const struct ibtrs_clt *clt)
+{
+	return (int)clt->max_reconnect_attempts;
+}
+
+static int ibtrs_post_rdma_write_sg(struct ibtrs_clt_con *con,
+				    struct ibtrs_clt_io_req *req,
+				    struct ibtrs_rbuf *rbuf,
+				    u32 size, u32 imm)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ib_sge *sge = req->sge;
+	enum ib_send_flags flags;
+	struct scatterlist *sg;
+	size_t num_sge;
+	int i;
+
+	for_each_sg(req->sglist, sg, req->sg_cnt, i) {
+		sge[i].addr   = sg_dma_address(sg);
+		sge[i].length = sg_dma_len(sg);
+		sge[i].lkey   = sess->s.dev->ib_pd->local_dma_lkey;
+	}
+	sge[i].addr   = req->iu->dma_addr;
+	sge[i].length = size;
+	sge[i].lkey   = sess->s.dev->ib_pd->local_dma_lkey;
+
+	num_sge = 1 + req->sg_cnt;
+
+	/*
+	 * From time to time we have to post signalled sends, otherwise
+	 * the send queue will fill up and only a QP reset can help.
+	 */
+	flags = atomic_inc_return(&con->io_cnt) % sess->queue_depth ?
+			0 : IB_SEND_SIGNALED;
+
+	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, req->iu->dma_addr,
+				      size, DMA_TO_DEVICE);
+
+	return ibtrs_iu_post_rdma_write_imm(&con->c, req->iu, sge, num_sge,
+					    rbuf->rkey, rbuf->addr, imm,
+					    flags, NULL);
+}
+
+static int ibtrs_clt_write_req(struct ibtrs_clt_io_req *req)
+{
+	struct ibtrs_clt_con *con = req->con;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_msg_rdma_write *msg;
+
+	struct ibtrs_rbuf *rbuf;
+	int ret, count = 0;
+	u32 imm, buf_id;
+
+	const size_t tsize = sizeof(*msg) + req->data_len + req->usr_len;
+
+	if (unlikely(tsize > sess->chunk_size)) {
+		ibtrs_wrn(sess, "Write request failed, size too big %zu > %d\n",
+			  tsize, sess->chunk_size);
+		return -EMSGSIZE;
+	}
+	if (req->sg_cnt) {
+		count = ib_dma_map_sg(sess->s.dev->ib_dev, req->sglist,
+				      req->sg_cnt, req->dir);
+		if (unlikely(!count)) {
+			ibtrs_wrn(sess, "Write request failed, map failed\n");
+			return -EINVAL;
+		}
+	}
+	/* put ibtrs msg after sg and user message */
+	msg = req->iu->buf + req->usr_len;
+	msg->type = cpu_to_le16(IBTRS_MSG_WRITE);
+	msg->usr_len = cpu_to_le16(req->usr_len);
+
+	/* ibtrs message on server side will be after user data and message */
+	imm = req->tag->mem_off + req->data_len + req->usr_len;
+	imm = ibtrs_to_io_req_imm(imm);
+	buf_id = req->tag->mem_id;
+	req->sg_size = tsize;
+	rbuf = &sess->rbufs[buf_id];
+
+	/*
+	 * Update stats now, because once the request is successfully sent
+	 * it is no longer safe to touch it.
+	 */
+	ibtrs_clt_update_all_stats(req, WRITE);
+
+	ret = ibtrs_post_rdma_write_sg(req->con, req, rbuf,
+				       req->usr_len + sizeof(*msg),
+				       imm);
+	if (unlikely(ret)) {
+		ibtrs_err(sess, "Write request failed: %d\n", ret);
+		ibtrs_clt_decrease_inflight(&sess->stats);
+		if (req->sg_cnt)
+			ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
+					req->sg_cnt, req->dir);
+	}
+
+	return ret;
+}
+
+static int ibtrs_map_sg_fr(struct ibtrs_clt_io_req *req, size_t count)
+{
+	int nr;
+
+	/* Align the MR to a 4K page size to match the block virt boundary */
+	nr = ib_map_mr_sg(req->mr, req->sglist, count, NULL, SZ_4K);
+	if (unlikely(nr < req->sg_cnt)) {
+		if (nr < 0)
+			return nr;
+		return -EINVAL;
+	}
+	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
+
+	return nr;
+}
+
+static int ibtrs_clt_read_req(struct ibtrs_clt_io_req *req)
+{
+	struct ibtrs_clt_con *con = req->con;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_msg_rdma_read *msg;
+	struct ibtrs_ib_dev *dev;
+	struct scatterlist *sg;
+
+	struct ib_reg_wr rwr;
+	struct ib_send_wr *wr = NULL;
+
+	int i, ret, count = 0;
+	u32 imm, buf_id;
+
+	const size_t tsize = sizeof(*msg) + req->data_len + req->usr_len;
+
+	dev = sess->s.dev;
+
+	if (unlikely(tsize > sess->chunk_size)) {
+		ibtrs_wrn(sess, "Read request failed, message size is %zu, bigger than CHUNK_SIZE %d\n",
+			  tsize, sess->chunk_size);
+		return -EMSGSIZE;
+	}
+
+	if (req->sg_cnt) {
+		count = ib_dma_map_sg(dev->ib_dev, req->sglist, req->sg_cnt,
+				      req->dir);
+		if (unlikely(!count)) {
+			ibtrs_wrn(sess, "Read request failed, dma map failed\n");
+			return -EINVAL;
+		}
+	}
+	/* Put our message into req->iu->buf after the user message */
+	msg = req->iu->buf + req->usr_len;
+	msg->type = cpu_to_le16(IBTRS_MSG_READ);
+	msg->usr_len = cpu_to_le16(req->usr_len);
+
+	if (count > noreg_cnt) {
+		ret = ibtrs_map_sg_fr(req, count);
+		if (ret < 0) {
+			ibtrs_err_rl(sess,
+				     "Read request failed, failed to map fast reg. data, err: %d\n",
+				     ret);
+			ib_dma_unmap_sg(dev->ib_dev, req->sglist, req->sg_cnt,
+					req->dir);
+			return ret;
+		}
+		memset(&rwr, 0, sizeof(rwr));
+		rwr.wr.next = NULL;
+		rwr.wr.opcode = IB_WR_REG_MR;
+		rwr.wr.wr_cqe = &fast_reg_cqe;
+		rwr.wr.num_sge = 0;
+		rwr.mr = req->mr;
+		rwr.key = req->mr->rkey;
+		rwr.access = (IB_ACCESS_LOCAL_WRITE |
+			      IB_ACCESS_REMOTE_WRITE);
+		wr = &rwr.wr;
+
+		msg->sg_cnt = cpu_to_le16(1);
+		msg->flags = cpu_to_le16(ibtrs_invalidate_flag());
+
+		msg->desc[0].addr = cpu_to_le64(req->mr->iova);
+		msg->desc[0].key = cpu_to_le32(req->mr->rkey);
+		msg->desc[0].len = cpu_to_le32(req->mr->length);
+
+		/* Further invalidation is required */
+		req->need_inv = !!ibtrs_invalidate_flag();
+
+	} else {
+		msg->sg_cnt = cpu_to_le16(count);
+		msg->flags = 0;
+
+		for_each_sg(req->sglist, sg, req->sg_cnt, i) {
+			msg->desc[i].addr = cpu_to_le64(sg_dma_address(sg));
+			msg->desc[i].key =
+				cpu_to_le32(dev->ib_pd->unsafe_global_rkey);
+			msg->desc[i].len = cpu_to_le32(sg_dma_len(sg));
+		}
+	}
+	/*
+	 * ibtrs message will be after the space reserved for disk data and
+	 * user message
+	 */
+	imm = req->tag->mem_off + req->data_len + req->usr_len;
+	imm = ibtrs_to_io_req_imm(imm);
+	buf_id = req->tag->mem_id;
+
+	req->sg_size  = sizeof(*msg);
+	req->sg_size += le16_to_cpu(msg->sg_cnt) * sizeof(struct ibtrs_sg_desc);
+	req->sg_size += req->usr_len;
+
+	/*
+	 * Update stats now, because once the request is successfully sent
+	 * it is no longer safe to touch it.
+	 */
+	ibtrs_clt_update_all_stats(req, READ);
+
+	ret = ibtrs_post_send_rdma(req->con, req, &sess->rbufs[buf_id],
+				   req->data_len, imm, wr);
+	if (unlikely(ret)) {
+		ibtrs_err(sess, "Read request failed: %d\n", ret);
+		ibtrs_clt_decrease_inflight(&sess->stats);
+		req->need_inv = false;
+		if (req->sg_cnt)
+			ib_dma_unmap_sg(dev->ib_dev, req->sglist,
+					req->sg_cnt, req->dir);
+	}
+
+	return ret;
+}
+
+int ibtrs_clt_request(int dir, ibtrs_conf_fn *conf, struct ibtrs_clt *clt,
+		      struct ibtrs_tag *tag, void *priv, const struct kvec *vec,
+		      size_t nr, size_t data_len, struct scatterlist *sg,
+		      unsigned int sg_cnt)
+{
+	struct ibtrs_clt_io_req *req;
+	struct ibtrs_clt_sess *sess;
+
+	enum dma_data_direction dma_dir;
+	int err = -ECONNABORTED, i;
+	size_t usr_len, hdr_len;
+	struct path_it it;
+
+	/* Get kvec length */
+	for (i = 0, usr_len = 0; i < nr; i++)
+		usr_len += vec[i].iov_len;
+
+	if (dir == READ) {
+		hdr_len = sizeof(struct ibtrs_msg_rdma_read) +
+			  sg_cnt * sizeof(struct ibtrs_sg_desc);
+		dma_dir = DMA_FROM_DEVICE;
+	} else {
+		hdr_len = sizeof(struct ibtrs_msg_rdma_write);
+		dma_dir = DMA_TO_DEVICE;
+	}
+
+	do_each_path(sess, clt, &it) {
+		if (unlikely(sess->state != IBTRS_CLT_CONNECTED))
+			continue;
+
+		if (unlikely(usr_len + hdr_len > sess->max_hdr_size)) {
+			ibtrs_wrn_rl(sess,
+				     "%s request failed, user message size is %zu and header length %zu, but max size is %u\n",
+				     dir == READ ? "Read" : "Write",
+				     usr_len, hdr_len, sess->max_hdr_size);
+			err = -EMSGSIZE;
+			break;
+		}
+		req = ibtrs_clt_get_req(sess, conf, tag, priv, vec, usr_len,
+					sg, sg_cnt, data_len, dma_dir);
+		if (dir == READ)
+			err = ibtrs_clt_read_req(req);
+		else
+			err = ibtrs_clt_write_req(req);
+		if (unlikely(err)) {
+			req->in_use = false;
+			continue;
+		}
+		/* Success path */
+		break;
+	} while_each_path(&it);
+
+	return err;
+}
+EXPORT_SYMBOL(ibtrs_clt_request);
+
+int ibtrs_clt_query(struct ibtrs_clt *clt, struct ibtrs_attrs *attr)
+{
+	if (unlikely(!ibtrs_clt_is_connected(clt)))
+		return -ECOMM;
+
+	attr->queue_depth      = clt->queue_depth;
+	attr->max_io_size      = clt->max_io_size;
+	attr->sess_kobj	       = &clt->dev.kobj;
+	strlcpy(attr->sessname, clt->sessname, sizeof(attr->sessname));
+
+	return 0;
+}
+EXPORT_SYMBOL(ibtrs_clt_query);
+
+int ibtrs_clt_create_path_from_sysfs(struct ibtrs_clt *clt,
+				     struct ibtrs_addr *addr)
+{
+	struct ibtrs_clt_sess *sess;
+	int err;
+
+	sess = alloc_sess(clt, addr, nr_cons_per_session, clt->max_segments);
+	if (unlikely(IS_ERR(sess)))
+		return PTR_ERR(sess);
+
+	/*
+	 * It is totally safe to add a path in CONNECTING state: incoming
+	 * IO will never grab it.  Also it is very important to add the
+	 * path before init, since init fires the LINK_CONNECTED event.
+	 */
+	ibtrs_clt_add_path_to_arr(sess, addr);
+
+	err = init_sess(sess);
+	if (unlikely(err))
+		goto close_sess;
+
+	err = ibtrs_clt_create_sess_files(sess);
+	if (unlikely(err))
+		goto close_sess;
+
+	return 0;
+
+close_sess:
+	ibtrs_clt_remove_path_from_arr(sess);
+	ibtrs_clt_close_conns(sess, true);
+	free_sess(sess);
+
+	return err;
+}
+
+static int check_module_params(void)
+{
+	if (nr_cons_per_session == 0)
+		nr_cons_per_session = min_t(unsigned int, nr_cpu_ids, U16_MAX);
+
+	return 0;
+}
+
+static int ibtrs_clt_ib_dev_init(struct ibtrs_ib_dev *dev)
+{
+	if (!(dev->ib_dev->attrs.device_cap_flags &
+	      IB_DEVICE_MEM_MGT_EXTENSIONS)) {
+		pr_err("Memory registrations not supported.\n");
+		return -ENOTSUPP;
+	}
+
+	return 0;
+}
+
+static const struct ibtrs_ib_dev_pool_ops dev_pool_ops = {
+	.init = ibtrs_clt_ib_dev_init
+};
+
+static int __init ibtrs_client_init(void)
+{
+	int err;
+
+	pr_info("Loading module %s, version %s, proto %s: (retry_cnt: %d, noreg_cnt: %d)\n",
+		KBUILD_MODNAME, IBTRS_VER_STRING, IBTRS_PROTO_VER_STRING,
+		retry_cnt, noreg_cnt);
+
+	ibtrs_ib_dev_pool_init(noreg_cnt ? IB_PD_UNSAFE_GLOBAL_RKEY : 0,
+			       &dev_pool);
+
+	err = check_module_params();
+	if (unlikely(err)) {
+		pr_err("Failed to load module, invalid module parameters, err: %d\n",
+		       err);
+		return err;
+	}
+	ibtrs_dev_class = class_create(THIS_MODULE, "ibtrs-client");
+	if (unlikely(IS_ERR(ibtrs_dev_class))) {
+		pr_err("Failed to create ibtrs-client dev class\n");
+		return PTR_ERR(ibtrs_dev_class);
+	}
+	ibtrs_wq = alloc_workqueue("ibtrs_client_wq", WQ_MEM_RECLAIM, 0);
+	if (unlikely(!ibtrs_wq)) {
+		pr_err("Failed to load module, alloc ibtrs_client_wq failed\n");
+		class_destroy(ibtrs_dev_class);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void __exit ibtrs_client_exit(void)
+{
+	destroy_workqueue(ibtrs_wq);
+	class_destroy(ibtrs_dev_class);
+	ibtrs_ib_dev_pool_deinit(&dev_pool);
+}
+
+module_init(ibtrs_client_init);
+module_exit(ibtrs_client_exit);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 07/25] ibtrs: client: statistics functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (5 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 06/25] ibtrs: client: main functionality Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 23:15   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 08/25] ibtrs: client: sysfs interface functions Jack Wang
                   ` (18 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This introduces a set of functions used on the client side to account
for statistics of RDMA data sent/received, the number of IOs inflight,
latency, CPU migrations, etc.  Almost all statistics are collected
using per-cpu variables.
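
For illustration only (not part of the patch): a minimal sketch of the
per-cpu accounting pattern used here, with a hypothetical counter struct --
lock-free updates on the hot path, aggregation over all possible CPUs when
the value is read back (error handling omitted):

    struct foo_stats {
            u64 cnt;
    };

    /* one instance per possible CPU */
    struct foo_stats __percpu *stats = alloc_percpu(struct foo_stats);

    /* hot path: bump only this CPU's instance, no lock needed */
    this_cpu_ptr(stats)->cnt++;

    /* read path: sum up over all possible CPUs */
    u64 sum = 0;
    int cpu;

    for_each_possible_cpu(cpu)
            sum += per_cpu_ptr(stats, cpu)->cnt;

    free_percpu(stats);

The actual ibtrs_clt_stats_pcpu structure below follows the same scheme,
using atomics only for the few counters that can be updated from a foreign
CPU (e.g. cpu_migr.from).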

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 .../infiniband/ulp/ibtrs/ibtrs-clt-stats.c    | 447 ++++++++++++++++++
 1 file changed, 447 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt-stats.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-clt-stats.c b/drivers/infiniband/ulp/ibtrs/ibtrs-clt-stats.c
new file mode 100644
index 000000000000..fbeb1549aaf4
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-clt-stats.c
@@ -0,0 +1,447 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include "ibtrs-clt.h"
+
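+/*
+ * Map a latency value in milliseconds to an index into the log2-scaled
+ * latency histogram, clamped to [0, LOG_LAT_SZ - 1].
+ */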
+static inline int ibtrs_clt_ms_to_id(unsigned long ms)
+{
+	int id = ms ? ilog2(ms) - MIN_LOG_LAT + 1 : 0;
+
+	return clamp(id, 0, LOG_LAT_SZ - 1);
+}
+
+void ibtrs_clt_update_rdma_lat(struct ibtrs_clt_stats *stats, bool read,
+			       unsigned long ms)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+	int id;
+
+	id = ibtrs_clt_ms_to_id(ms);
+	s = this_cpu_ptr(stats->pcpu_stats);
+	if (read) {
+		s->rdma_lat_distr[id].read++;
+		if (s->rdma_lat_max.read < ms)
+			s->rdma_lat_max.read = ms;
+	} else {
+		s->rdma_lat_distr[id].write++;
+		if (s->rdma_lat_max.write < ms)
+			s->rdma_lat_max.write = ms;
+	}
+}
+
+void ibtrs_clt_decrease_inflight(struct ibtrs_clt_stats *stats)
+{
+	atomic_dec(&stats->inflight);
+}
+
+void ibtrs_clt_update_wc_stats(struct ibtrs_clt_con *con)
+{
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_clt_stats *stats = &sess->stats;
+	struct ibtrs_clt_stats_pcpu *s;
+	int cpu;
+
+	cpu = raw_smp_processor_id();
+	s = this_cpu_ptr(stats->pcpu_stats);
+	s->wc_comp.cnt++;
+	s->wc_comp.total_cnt++;
+	if (unlikely(con->cpu != cpu)) {
+		s->cpu_migr.to++;
+
+		/* Careful here, we overwrite the s pointer */
+		s = per_cpu_ptr(stats->pcpu_stats, con->cpu);
+		atomic_inc(&s->cpu_migr.from);
+	}
+}
+
+void ibtrs_clt_inc_failover_cnt(struct ibtrs_clt_stats *stats)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+
+	s = this_cpu_ptr(stats->pcpu_stats);
+	s->rdma.failover_cnt++;
+}
+
+static inline u32 ibtrs_clt_stats_get_avg_wc_cnt(struct ibtrs_clt_stats *stats)
+{
+	u32 cnt = 0;
+	u64 sum = 0;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct ibtrs_clt_stats_pcpu *s;
+
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		sum += s->wc_comp.total_cnt;
+		cnt += s->wc_comp.cnt;
+	}
+
+	return cnt ? sum / cnt : 0;
+}
+
+int ibtrs_clt_stats_wc_completion_to_str(struct ibtrs_clt_stats *stats,
+					 char *buf, size_t len)
+{
+	return scnprintf(buf, len, "%u\n",
+			 ibtrs_clt_stats_get_avg_wc_cnt(stats));
+}
+
+ssize_t ibtrs_clt_stats_rdma_lat_distr_to_str(struct ibtrs_clt_stats *stats,
+					      char *page, size_t len)
+{
+	struct ibtrs_clt_stats_rdma_lat res[LOG_LAT_SZ];
+	struct ibtrs_clt_stats_rdma_lat max;
+	struct ibtrs_clt_stats_pcpu *s;
+
+	ssize_t cnt = 0;
+	int i, cpu;
+
+	max.write = 0;
+	max.read = 0;
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+
+		if (max.write < s->rdma_lat_max.write)
+			max.write = s->rdma_lat_max.write;
+		if (max.read < s->rdma_lat_max.read)
+			max.read = s->rdma_lat_max.read;
+	}
+	for (i = 0; i < ARRAY_SIZE(res); i++) {
+		res[i].write = 0;
+		res[i].read = 0;
+		for_each_possible_cpu(cpu) {
+			s = per_cpu_ptr(stats->pcpu_stats, cpu);
+
+			res[i].write += s->rdma_lat_distr[i].write;
+			res[i].read += s->rdma_lat_distr[i].read;
+		}
+	}
+
+	for (i = 0; i < ARRAY_SIZE(res) - 1; i++)
+		cnt += scnprintf(page + cnt, len - cnt,
+				 "< %6d ms: %llu %llu\n",
+				 1 << (i + MIN_LOG_LAT), res[i].read,
+				 res[i].write);
+	cnt += scnprintf(page + cnt, len - cnt, ">= %5d ms: %llu %llu\n",
+			 1 << (i - 1 + MIN_LOG_LAT), res[i].read,
+			 res[i].write);
+	cnt += scnprintf(page + cnt, len - cnt, " maximum ms: %llu %llu\n",
+			 max.read, max.write);
+
+	return cnt;
+}
+
+int ibtrs_clt_stats_migration_cnt_to_str(struct ibtrs_clt_stats *stats,
+					 char *buf, size_t len)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+
+	size_t used;
+	int cpu;
+
+	used = scnprintf(buf, len, "    ");
+	for_each_possible_cpu(cpu)
+		used += scnprintf(buf + used, len - used, " CPU%u", cpu);
+
+	used += scnprintf(buf + used, len - used, "\nfrom:");
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		used += scnprintf(buf + used, len - used, " %d",
+				  atomic_read(&s->cpu_migr.from));
+	}
+
+	used += scnprintf(buf + used, len - used, "\nto  :");
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		used += scnprintf(buf + used, len - used, " %d",
+				  s->cpu_migr.to);
+	}
+	used += scnprintf(buf + used, len - used, "\n");
+
+	return used;
+}
+
+int ibtrs_clt_stats_reconnects_to_str(struct ibtrs_clt_stats *stats, char *buf,
+				      size_t len)
+{
+	return scnprintf(buf, len, "%d %d\n",
+			 stats->reconnects.successful_cnt,
+			 stats->reconnects.fail_cnt);
+}
+
+ssize_t ibtrs_clt_stats_rdma_to_str(struct ibtrs_clt_stats *stats,
+				    char *page, size_t len)
+{
+	struct ibtrs_clt_stats_rdma sum;
+	struct ibtrs_clt_stats_rdma *r;
+	int cpu;
+
+	memset(&sum, 0, sizeof(sum));
+
+	for_each_possible_cpu(cpu) {
+		r = &per_cpu_ptr(stats->pcpu_stats, cpu)->rdma;
+
+		sum.dir[READ].cnt	  += r->dir[READ].cnt;
+		sum.dir[READ].size_total  += r->dir[READ].size_total;
+		sum.dir[WRITE].cnt	  += r->dir[WRITE].cnt;
+		sum.dir[WRITE].size_total += r->dir[WRITE].size_total;
+		sum.failover_cnt	  += r->failover_cnt;
+	}
+
+	return scnprintf(page, len, "%llu %llu %llu %llu %u %llu\n",
+			 sum.dir[READ].cnt, sum.dir[READ].size_total,
+			 sum.dir[WRITE].cnt, sum.dir[WRITE].size_total,
+			 atomic_read(&stats->inflight), sum.failover_cnt);
+}
+
+int ibtrs_clt_stats_sg_list_distr_to_str(struct ibtrs_clt_stats *stats,
+					 char *buf, size_t len)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+
+	int i, cpu, cnt;
+
+	cnt = scnprintf(buf, len, "n\\cpu:");
+	for_each_possible_cpu(cpu)
+		cnt += scnprintf(buf + cnt, len - cnt, "%5d", cpu);
+
+	for (i = 0; i < SG_DISTR_SZ; i++) {
+		if (i <= MAX_LIN_SG)
+			cnt += scnprintf(buf + cnt, len - cnt, "\n= %3d:", i);
+		else if (i < SG_DISTR_SZ - 1)
+			cnt += scnprintf(buf + cnt, len - cnt, "\n< %3d:",
+					 1 << (i + MIN_LOG_SG - MAX_LIN_SG));
+		else
+			cnt += scnprintf(buf + cnt, len - cnt, "\n>=%3d:",
+					 1 << (i + MIN_LOG_SG -
+					       MAX_LIN_SG - 1));
+
+		for_each_possible_cpu(cpu) {
+			unsigned int p, p_i, p_f;
+			u64 total, distr;
+
+			s = per_cpu_ptr(stats->pcpu_stats, cpu);
+			total = s->sg_list_total;
+			distr = s->sg_list_distr[i];
+
+			p = total ? distr * 1000 / total : 0;
+			p_i = p / 10;
+			p_f = p % 10;
+
+			if (distr)
+				cnt += scnprintf(buf + cnt, len - cnt,
+						 " %2u.%01u", p_i, p_f);
+			else
+				cnt += scnprintf(buf + cnt, len - cnt, "    0");
+		}
+	}
+
+	cnt += scnprintf(buf + cnt, len - cnt, "\ntotal:");
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		cnt += scnprintf(buf + cnt, len - cnt, " %llu",
+				 s->sg_list_total);
+	}
+	cnt += scnprintf(buf + cnt, len - cnt, "\n");
+
+	return cnt;
+}
+
+ssize_t ibtrs_clt_reset_all_help(struct ibtrs_clt_stats *s,
+				 char *page, size_t len)
+{
+	return scnprintf(page, len, "echo 1 to reset all statistics\n");
+}
+
+int ibtrs_clt_reset_rdma_stats(struct ibtrs_clt_stats *stats, bool enable)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+	int cpu;
+
+	if (unlikely(!enable))
+		return -EINVAL;
+
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		memset(&s->rdma, 0, sizeof(s->rdma));
+	}
+
+	return 0;
+}
+
+int ibtrs_clt_reset_rdma_lat_distr_stats(struct ibtrs_clt_stats *stats,
+					 bool enable)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+	int cpu;
+
+	if (enable) {
+		for_each_possible_cpu(cpu) {
+			s = per_cpu_ptr(stats->pcpu_stats, cpu);
+			memset(&s->rdma_lat_max, 0, sizeof(s->rdma_lat_max));
+			memset(&s->rdma_lat_distr, 0,
+			       sizeof(s->rdma_lat_distr));
+		}
+	}
+	stats->enable_rdma_lat = enable;
+
+	return 0;
+}
+
+int ibtrs_clt_reset_sg_list_distr_stats(struct ibtrs_clt_stats *stats,
+					bool enable)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+	int cpu;
+
+	if (unlikely(!enable))
+		return -EINVAL;
+
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		memset(&s->sg_list_total, 0, sizeof(s->sg_list_total));
+		memset(&s->sg_list_distr, 0, sizeof(s->sg_list_distr));
+	}
+
+	return 0;
+}
+
+int ibtrs_clt_reset_cpu_migr_stats(struct ibtrs_clt_stats *stats, bool enable)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+	int cpu;
+
+	if (unlikely(!enable))
+		return -EINVAL;
+
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		memset(&s->cpu_migr, 0, sizeof(s->cpu_migr));
+	}
+
+	return 0;
+}
+
+int ibtrs_clt_reset_reconnects_stat(struct ibtrs_clt_stats *stats, bool enable)
+{
+	if (unlikely(!enable))
+		return -EINVAL;
+
+	memset(&stats->reconnects, 0, sizeof(stats->reconnects));
+
+	return 0;
+}
+
+int ibtrs_clt_reset_wc_comp_stats(struct ibtrs_clt_stats *stats, bool enable)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+	int cpu;
+
+	if (unlikely(!enable))
+		return -EINVAL;
+
+	for_each_possible_cpu(cpu) {
+		s = per_cpu_ptr(stats->pcpu_stats, cpu);
+		memset(&s->wc_comp, 0, sizeof(s->wc_comp));
+	}
+
+	return 0;
+}
+
+int ibtrs_clt_reset_all_stats(struct ibtrs_clt_stats *s, bool enable)
+{
+	if (enable) {
+		ibtrs_clt_reset_rdma_stats(s, enable);
+		ibtrs_clt_reset_rdma_lat_distr_stats(s, enable);
+		ibtrs_clt_reset_sg_list_distr_stats(s, enable);
+		ibtrs_clt_reset_cpu_migr_stats(s, enable);
+		ibtrs_clt_reset_reconnects_stat(s, enable);
+		ibtrs_clt_reset_wc_comp_stats(s, enable);
+		atomic_set(&s->inflight, 0);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static inline void ibtrs_clt_record_sg_distr(u64 stat[SG_DISTR_SZ], u64 *total,
+					     unsigned int cnt)
+{
+	int i;
+
+	i = cnt > MAX_LIN_SG ? ilog2(cnt) + MAX_LIN_SG - MIN_LOG_SG + 1 : cnt;
+	i = i < SG_DISTR_SZ ? i : SG_DISTR_SZ - 1;
+
+	stat[i]++;
+	(*total)++;
+}
+
+static inline void ibtrs_clt_update_rdma_stats(struct ibtrs_clt_stats *stats,
+					       size_t size, int d)
+{
+	struct ibtrs_clt_stats_pcpu *s;
+
+	s = this_cpu_ptr(stats->pcpu_stats);
+	s->rdma.dir[d].cnt++;
+	s->rdma.dir[d].size_total += size;
+}
+
+void ibtrs_clt_update_all_stats(struct ibtrs_clt_io_req *req, int dir)
+{
+	struct ibtrs_clt_con *con = req->con;
+	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
+	struct ibtrs_clt_stats *stats = &sess->stats;
+	unsigned int len;
+
+	struct ibtrs_clt_stats_pcpu *s;
+
+	s = this_cpu_ptr(stats->pcpu_stats);
+	ibtrs_clt_record_sg_distr(s->sg_list_distr, &s->sg_list_total,
+				  req->sg_cnt);
+	len = req->usr_len + req->data_len;
+	ibtrs_clt_update_rdma_stats(stats, len, dir);
+	atomic_inc(&stats->inflight);
+}
+
+int ibtrs_clt_init_stats(struct ibtrs_clt_stats *stats)
+{
+	stats->enable_rdma_lat = false;
+	stats->pcpu_stats = alloc_percpu(typeof(*stats->pcpu_stats));
+	if (unlikely(!stats->pcpu_stats))
+		return -ENOMEM;
+
+	/*
+	 * successful_cnt will be set to 0 after session
+	 * is established for the first time
+	 */
+	stats->reconnects.successful_cnt = -1;
+
+	return 0;
+}
+
+void ibtrs_clt_free_stats(struct ibtrs_clt_stats *stats)
+{
+	free_percpu(stats->pcpu_stats);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 08/25] ibtrs: client: sysfs interface functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (6 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 07/25] ibtrs: client: statistics functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-06-20 15:03 ` [PATCH v4 09/25] ibtrs: server: private header with server structs and functions Jack Wang
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the sysfs interface to IBTRS sessions on the client side
(a short usage sketch follows the listing):

  /sys/devices/virtual/ibtrs-client/<SESS-NAME>/
    *** IBTRS session created by ibtrs_clt_open() API call
    |
    |- max_reconnect_attempts
    |  *** number of reconnect attempts for session
    |
    |- add_path
    |  *** adds another connection path into IBTRS session
    |
    |- paths/<SRC@DST>/
       *** established paths to server in a session
       |
       |- disconnect
       |  *** disconnect path
       |
       |- reconnect
       |  *** reconnect path
       |
       |- remove_path
       |  *** remove current path
       |
       |- state
       |  *** retrieve current path state
       |
       |- hca_port
       |  *** HCA port number
       |
       |- hca_name
       |  *** HCA name
       |
       |- stats/
          *** current path statistics
          |
	  |- cpu_migration
	  |- rdma
	  |- rdma_lat
	  |- reconnects
	  |- reset_all
	  |- sg_entries
	  |- wc_completions
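
For illustration only (not part of the patch), a hedged userspace sketch of
how these attributes are typically driven; the attribute names come from the
listing above, the helper and the session/path directory are hypothetical:

    /* write "1" to a sysfs attribute, e.g. reconnect/disconnect/remove_path */
    #include <fcntl.h>
    #include <unistd.h>

    static int sysfs_write_one(const char *attr_path)
    {
            int fd = open(attr_path, O_WRONLY);

            if (fd < 0)
                    return -1;
            if (write(fd, "1", 1) != 1) {
                    close(fd);
                    return -1;
            }
            return close(fd);
    }

    /*
     * e.g. sysfs_write_one("/sys/devices/virtual/ibtrs-client/"
     *                      "<SESS-NAME>/paths/<SRC@DST>/reconnect");
     * The read-only attributes (state, hca_name, hca_port, stats/...) are
     * plain text files and can simply be read.
     */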

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 .../infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c    | 514 ++++++++++++++++++
 1 file changed, 514 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c b/drivers/infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c
new file mode 100644
index 000000000000..1f7b6c28e6b4
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c
@@ -0,0 +1,514 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include "ibtrs-pri.h"
+#include "ibtrs-clt.h"
+#include "ibtrs-log.h"
+
+#define MIN_MAX_RECONN_ATT -1
+#define MAX_MAX_RECONN_ATT 9999
+
+static struct kobj_type ktype = {
+	.sysfs_ops = &kobj_sysfs_ops,
+};
+
+static ssize_t max_reconnect_attempts_show(struct device *dev,
+					   struct device_attribute *attr,
+					   char *page)
+{
+	struct ibtrs_clt *clt;
+
+	clt = container_of(dev, struct ibtrs_clt, dev);
+
+	return sprintf(page, "%d\n", ibtrs_clt_get_max_reconnect_attempts(clt));
+}
+
+static ssize_t max_reconnect_attempts_store(struct device *dev,
+					    struct device_attribute *attr,
+					    const char *buf,
+					    size_t count)
+{
+	struct ibtrs_clt *clt;
+	int value;
+	int ret;
+
+	clt = container_of(dev, struct ibtrs_clt, dev);
+
+	ret = kstrtoint(buf, 10, &value);
+	if (unlikely(ret)) {
+		ibtrs_err(clt, "%s: failed to convert string '%s' to int\n",
+			  attr->attr.name, buf);
+		return ret;
+	}
+	if (unlikely(value > MAX_MAX_RECONN_ATT ||
+		     value < MIN_MAX_RECONN_ATT)) {
+		ibtrs_err(clt, "%s: invalid range (provided: '%s', accepted: min: %d, max: %d)\n",
+			  attr->attr.name, buf, MIN_MAX_RECONN_ATT,
+			  MAX_MAX_RECONN_ATT);
+		return -EINVAL;
+	}
+	ibtrs_clt_set_max_reconnect_attempts(clt, value);
+
+	return count;
+}
+
+static DEVICE_ATTR_RW(max_reconnect_attempts);
+
+static ssize_t mpath_policy_show(struct device *dev,
+				 struct device_attribute *attr,
+				 char *page)
+{
+	struct ibtrs_clt *clt;
+
+	clt = container_of(dev, struct ibtrs_clt, dev);
+
+	switch (clt->mp_policy) {
+	case MP_POLICY_RR:
+		return sprintf(page, "round-robin (RR: %d)\n", clt->mp_policy);
+	case MP_POLICY_MIN_INFLIGHT:
+		return sprintf(page, "min-inflight (MI: %d)\n", clt->mp_policy);
+	default:
+		return sprintf(page, "Unknown (%d)\n", clt->mp_policy);
+	}
+}
+
+static ssize_t mpath_policy_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf,
+				  size_t count)
+{
+	struct ibtrs_clt *clt;
+	int value;
+	int ret;
+
+	clt = container_of(dev, struct ibtrs_clt, dev);
+
+	ret = kstrtoint(buf, 10, &value);
+	if (!ret && (value == MP_POLICY_RR ||
+		     value == MP_POLICY_MIN_INFLIGHT)) {
+		clt->mp_policy = value;
+		return count;
+	}
+
+	if (!strncasecmp(buf, "round-robin", 11) ||
+	    !strncasecmp(buf, "rr", 2))
+		clt->mp_policy = MP_POLICY_RR;
+	else if (!strncasecmp(buf, "min-inflight", 12) ||
+		 !strncasecmp(buf, "mi", 2))
+		clt->mp_policy = MP_POLICY_MIN_INFLIGHT;
+	else
+		return -EINVAL;
+
+	return count;
+}
+
+static DEVICE_ATTR_RW(mpath_policy);
+
+static ssize_t add_path_show(struct device *dev,
+			     struct device_attribute *attr, char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo"
+			 " [<source addr>,]<destination addr> > %s\n\n"
+			"*addr ::= [ ip:<ipv4|ipv6> | gid:<gid> ]\n",
+			 attr->attr.name);
+}
+
+static ssize_t add_path_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t count)
+{
+	struct sockaddr_storage srcaddr, dstaddr;
+	struct ibtrs_addr addr = {
+		.src = &srcaddr,
+		.dst = &dstaddr
+	};
+	struct ibtrs_clt *clt;
+	const char *nl;
+	size_t len;
+	int err;
+
+	clt = container_of(dev, struct ibtrs_clt, dev);
+
+	nl = strchr(buf, '\n');
+	if (nl)
+		len = nl - buf;
+	else
+		len = count;
+	err = ibtrs_addr_to_sockaddr(buf, len, clt->port, &addr);
+	if (unlikely(err))
+		return -EINVAL;
+
+	err = ibtrs_clt_create_path_from_sysfs(clt, &addr);
+	if (unlikely(err))
+		return err;
+
+	return count;
+}
+
+static DEVICE_ATTR_RW(add_path);
+
+static ssize_t ibtrs_clt_state_show(struct kobject *kobj,
+				    struct kobj_attribute *attr, char *page)
+{
+	struct ibtrs_clt_sess *sess;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+	if (ibtrs_clt_sess_is_connected(sess))
+		return sprintf(page, "connected\n");
+
+	return sprintf(page, "disconnected\n");
+}
+
+static struct kobj_attribute ibtrs_clt_state_attr =
+	__ATTR(state, 0444, ibtrs_clt_state_show, NULL);
+
+static ssize_t ibtrs_clt_reconnect_show(struct kobject *kobj,
+					struct kobj_attribute *attr,
+					char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo 1 > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibtrs_clt_reconnect_store(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct ibtrs_clt_sess *sess;
+	int ret;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+	if (!sysfs_streq(buf, "1")) {
+		ibtrs_err(sess, "%s: unknown value: '%s'\n",
+			  attr->attr.name, buf);
+		return -EINVAL;
+	}
+	ret = ibtrs_clt_reconnect_from_sysfs(sess);
+	if (unlikely(ret))
+		return ret;
+
+	return count;
+}
+
+static struct kobj_attribute ibtrs_clt_reconnect_attr =
+	__ATTR(reconnect, 0644, ibtrs_clt_reconnect_show,
+	       ibtrs_clt_reconnect_store);
+
+static ssize_t ibtrs_clt_disconnect_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo 1 > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibtrs_clt_disconnect_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	struct ibtrs_clt_sess *sess;
+	int ret;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+	if (!sysfs_streq(buf, "1")) {
+		ibtrs_err(sess, "%s: unknown value: '%s'\n",
+			  attr->attr.name, buf);
+		return -EINVAL;
+	}
+	ret = ibtrs_clt_disconnect_from_sysfs(sess);
+	if (unlikely(ret))
+		return ret;
+
+	return count;
+}
+
+static struct kobj_attribute ibtrs_clt_disconnect_attr =
+	__ATTR(disconnect, 0644, ibtrs_clt_disconnect_show,
+	       ibtrs_clt_disconnect_store);
+
+static ssize_t ibtrs_clt_remove_path_show(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo 1 > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibtrs_clt_remove_path_store(struct kobject *kobj,
+					   struct kobj_attribute *attr,
+					   const char *buf, size_t count)
+{
+	struct ibtrs_clt_sess *sess;
+	int ret;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+	if (!sysfs_streq(buf, "1")) {
+		ibtrs_err(sess, "%s: unknown value: '%s'\n",
+			  attr->attr.name, buf);
+		return -EINVAL;
+	}
+	ret = ibtrs_clt_remove_path_from_sysfs(sess, &attr->attr);
+	if (unlikely(ret))
+		return ret;
+
+	return count;
+}
+
+static struct kobj_attribute ibtrs_clt_remove_path_attr =
+	__ATTR(remove_path, 0644, ibtrs_clt_remove_path_show,
+	       ibtrs_clt_remove_path_store);
+
+STAT_ATTR(struct ibtrs_clt_sess, cpu_migration,
+	  ibtrs_clt_stats_migration_cnt_to_str,
+	  ibtrs_clt_reset_cpu_migr_stats);
+
+STAT_ATTR(struct ibtrs_clt_sess, sg_entries,
+	  ibtrs_clt_stats_sg_list_distr_to_str,
+	  ibtrs_clt_reset_sg_list_distr_stats);
+
+STAT_ATTR(struct ibtrs_clt_sess, reconnects,
+	  ibtrs_clt_stats_reconnects_to_str,
+	  ibtrs_clt_reset_reconnects_stat);
+
+STAT_ATTR(struct ibtrs_clt_sess, rdma_lat,
+	  ibtrs_clt_stats_rdma_lat_distr_to_str,
+	  ibtrs_clt_reset_rdma_lat_distr_stats);
+
+STAT_ATTR(struct ibtrs_clt_sess, wc_completion,
+	  ibtrs_clt_stats_wc_completion_to_str,
+	  ibtrs_clt_reset_wc_comp_stats);
+
+STAT_ATTR(struct ibtrs_clt_sess, rdma,
+	  ibtrs_clt_stats_rdma_to_str,
+	  ibtrs_clt_reset_rdma_stats);
+
+STAT_ATTR(struct ibtrs_clt_sess, reset_all,
+	  ibtrs_clt_reset_all_help,
+	  ibtrs_clt_reset_all_stats);
+
+static struct attribute *ibtrs_clt_stats_attrs[] = {
+	&sg_entries_attr.attr,
+	&cpu_migration_attr.attr,
+	&reconnects_attr.attr,
+	&rdma_lat_attr.attr,
+	&wc_completion_attr.attr,
+	&rdma_attr.attr,
+	&reset_all_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ibtrs_clt_stats_attr_group = {
+	.attrs = ibtrs_clt_stats_attrs,
+};
+
+static int ibtrs_clt_create_stats_files(struct kobject *kobj,
+					struct kobject *kobj_stats)
+{
+	int ret;
+
+	ret = kobject_init_and_add(kobj_stats, &ktype, kobj, "stats");
+	if (ret) {
+		pr_err("Failed to init and add stats kobject, err: %d\n",
+		       ret);
+		return ret;
+	}
+
+	ret = sysfs_create_group(kobj_stats, &ibtrs_clt_stats_attr_group);
+	if (ret) {
+		pr_err("failed to create stats sysfs group, err: %d\n",
+		       ret);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	kobject_del(kobj_stats);
+	kobject_put(kobj_stats);
+
+	return ret;
+}
+
+static ssize_t ibtrs_clt_hca_port_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_clt_sess *sess;
+
+	sess = container_of(kobj, typeof(*sess), kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%u\n", sess->hca_port);
+}
+
+static struct kobj_attribute ibtrs_clt_hca_port_attr =
+	__ATTR(hca_port, 0444, ibtrs_clt_hca_port_show, NULL);
+
+static ssize_t ibtrs_clt_hca_name_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_clt_sess *sess;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n", sess->hca_name);
+}
+
+static struct kobj_attribute ibtrs_clt_hca_name_attr =
+	__ATTR(hca_name, 0444, ibtrs_clt_hca_name_show, NULL);
+
+static ssize_t ibtrs_clt_src_addr_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_clt_sess *sess;
+	int cnt;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+	cnt = sockaddr_to_str((struct sockaddr *)&sess->s.src_addr,
+			      page, PAGE_SIZE);
+	return cnt + scnprintf(page + cnt, PAGE_SIZE - cnt, "\n");
+}
+
+static struct kobj_attribute ibtrs_clt_src_addr_attr =
+	__ATTR(src_addr, 0444, ibtrs_clt_src_addr_show, NULL);
+
+static ssize_t ibtrs_clt_dst_addr_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_clt_sess *sess;
+	int cnt;
+
+	sess = container_of(kobj, struct ibtrs_clt_sess, kobj);
+	cnt = sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr,
+			      page, PAGE_SIZE);
+	return cnt + scnprintf(page + cnt, PAGE_SIZE - cnt, "\n");
+}
+
+static struct kobj_attribute ibtrs_clt_dst_addr_attr =
+	__ATTR(dst_addr, 0444, ibtrs_clt_dst_addr_show, NULL);
+
+static struct attribute *ibtrs_clt_sess_attrs[] = {
+	&ibtrs_clt_hca_name_attr.attr,
+	&ibtrs_clt_hca_port_attr.attr,
+	&ibtrs_clt_src_addr_attr.attr,
+	&ibtrs_clt_dst_addr_attr.attr,
+	&ibtrs_clt_state_attr.attr,
+	&ibtrs_clt_reconnect_attr.attr,
+	&ibtrs_clt_disconnect_attr.attr,
+	&ibtrs_clt_remove_path_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ibtrs_clt_sess_attr_group = {
+	.attrs = ibtrs_clt_sess_attrs,
+};
+
+int ibtrs_clt_create_sess_files(struct ibtrs_clt_sess *sess)
+{
+	struct ibtrs_clt *clt = sess->clt;
+	char str[NAME_MAX];
+	int err, cnt;
+
+	cnt = sockaddr_to_str((struct sockaddr *)&sess->s.src_addr,
+			      str, sizeof(str));
+	cnt += scnprintf(str + cnt, sizeof(str) - cnt, "@");
+	sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr,
+			str + cnt, sizeof(str) - cnt);
+
+	err = kobject_init_and_add(&sess->kobj, &ktype, &clt->kobj_paths,
+				   "%s", str);
+	if (unlikely(err)) {
+		pr_err("kobject_init_and_add: %d\n", err);
+		return err;
+	}
+	err = sysfs_create_group(&sess->kobj, &ibtrs_clt_sess_attr_group);
+	if (unlikely(err)) {
+		pr_err("sysfs_create_group(): %d\n", err);
+		goto put_kobj;
+	}
+	err = ibtrs_clt_create_stats_files(&sess->kobj, &sess->kobj_stats);
+	if (unlikely(err))
+		goto put_kobj;
+
+	return 0;
+
+put_kobj:
+	kobject_del(&sess->kobj);
+	kobject_put(&sess->kobj);
+
+	return err;
+}
+
+void ibtrs_clt_destroy_sess_files(struct ibtrs_clt_sess *sess,
+				  const struct attribute *sysfs_self)
+{
+	if (sess->kobj.state_in_sysfs) {
+		kobject_del(&sess->kobj_stats);
+		kobject_put(&sess->kobj_stats);
+		if (sysfs_self)
+			/* Remove the calling sysfs attribute first to avoid a deadlock */
+			sysfs_remove_file_self(&sess->kobj, sysfs_self);
+		kobject_del(&sess->kobj);
+		kobject_put(&sess->kobj);
+	}
+}
+
+static struct attribute *ibtrs_clt_attrs[] = {
+	&dev_attr_max_reconnect_attempts.attr,
+	&dev_attr_mpath_policy.attr,
+	&dev_attr_add_path.attr,
+	NULL,
+};
+
+static struct attribute_group ibtrs_clt_attr_group = {
+	.attrs = ibtrs_clt_attrs,
+};
+
+int ibtrs_clt_create_sysfs_root_folders(struct ibtrs_clt *clt)
+{
+	return kobject_init_and_add(&clt->kobj_paths, &ktype,
+				    &clt->dev.kobj, "paths");
+}
+
+int ibtrs_clt_create_sysfs_root_files(struct ibtrs_clt *clt)
+{
+	return sysfs_create_group(&clt->dev.kobj, &ibtrs_clt_attr_group);
+}
+
+void ibtrs_clt_destroy_sysfs_root_folders(struct ibtrs_clt *clt)
+{
+	if (clt->kobj_paths.state_in_sysfs) {
+		kobject_del(&clt->kobj_paths);
+		kobject_put(&clt->kobj_paths);
+	}
+}
+
+void ibtrs_clt_destroy_sysfs_root_files(struct ibtrs_clt *clt)
+{
+	sysfs_remove_group(&clt->dev.kobj, &ibtrs_clt_attr_group);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 09/25] ibtrs: server: private header with server structs and functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (7 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 08/25] ibtrs: client: sysfs interface functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 23:21   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 10/25] ibtrs: server: main functionality Jack Wang
                   ` (16 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This header describes the main structs and functions used by the
ibtrs-server module, mainly for accepting IBTRS sessions,
creating/destroying sysfs entries and accounting statistics on the
server side.
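
For orientation, here is a rough sketch of how the RDMA statistics
declared below are meant to be accounted. The real implementation comes
with the statistics patch later in the series; this is only an
illustration of the dir[READ/WRITE] indexing used by the ibtrs-srv.c
call sites:

  /* Illustration only: bump the per-direction counters declared in
   * struct ibtrs_srv_stats_rdma_stats. */
  void ibtrs_srv_update_rdma_stats(struct ibtrs_srv_stats *s, size_t size, int d)
  {
          atomic64_inc(&s->rdma_stats.dir[d].cnt);
          atomic64_add(size, &s->rdma_stats.dir[d].size_total);
  }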

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs-srv.h | 170 +++++++++++++++++++++++
 1 file changed, 170 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv.h

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-srv.h b/drivers/infiniband/ulp/ibtrs/ibtrs-srv.h
new file mode 100644
index 000000000000..6d3b77541d77
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-srv.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBTRS_SRV_H
+#define IBTRS_SRV_H
+
+#include <linux/device.h>
+#include <linux/refcount.h>
+#include "ibtrs-pri.h"
+
+/**
+ * enum ibtrs_srv_state - Server states.
+ */
+enum ibtrs_srv_state {
+	IBTRS_SRV_CONNECTING,
+	IBTRS_SRV_CONNECTED,
+	IBTRS_SRV_CLOSING,
+	IBTRS_SRV_CLOSED,
+};
+
+static inline const char *ibtrs_srv_state_str(enum ibtrs_srv_state state)
+{
+	switch (state) {
+	case IBTRS_SRV_CONNECTING:
+		return "IBTRS_SRV_CONNECTING";
+	case IBTRS_SRV_CONNECTED:
+		return "IBTRS_SRV_CONNECTED";
+	case IBTRS_SRV_CLOSING:
+		return "IBTRS_SRV_CLOSING";
+	case IBTRS_SRV_CLOSED:
+		return "IBTRS_SRV_CLOSED";
+	default:
+		return "UNKNOWN";
+	}
+}
+
+struct ibtrs_stats_wc_comp {
+	atomic64_t	calls;
+	atomic64_t	total_wc_cnt;
+};
+
+struct ibtrs_srv_stats_rdma_stats {
+	struct {
+		atomic64_t	cnt;
+		atomic64_t	size_total;
+	} dir[2];
+};
+
+struct ibtrs_srv_stats {
+	struct ibtrs_srv_stats_rdma_stats	rdma_stats;
+	atomic_t				apm_cnt;
+	struct ibtrs_stats_wc_comp		wc_comp;
+};
+
+struct ibtrs_srv_con {
+	struct ibtrs_con	c;
+	atomic_t		wr_cnt;
+};
+
+struct ibtrs_srv_op {
+	struct ibtrs_srv_con		*con;
+	u32				msg_id;
+	u8				dir;
+	struct ibtrs_msg_rdma_read	*rd_msg;
+	struct ib_rdma_wr		*tx_wr;
+	struct ib_sge			*tx_sg;
+};
+
+struct ibtrs_srv_mr {
+	struct ib_mr	*mr;
+	struct sg_table	sgt;
+};
+
+struct ibtrs_srv_sess {
+	struct ibtrs_sess	s;
+	struct ibtrs_srv	*srv;
+	struct work_struct	close_work;
+	enum ibtrs_srv_state	state;
+	spinlock_t		state_lock;
+	int			cur_cq_vector;
+	struct ibtrs_srv_op	**ops_ids;
+	atomic_t		ids_inflight;
+	wait_queue_head_t	ids_waitq;
+	struct ibtrs_srv_mr	*mrs;
+	unsigned int		mrs_num;
+	dma_addr_t		*dma_addr;
+	bool			established;
+	unsigned int		mem_bits;
+	struct kobject		kobj;
+	struct kobject		kobj_stats;
+	struct ibtrs_srv_stats	stats;
+};
+
+struct ibtrs_srv {
+	struct list_head	paths_list;
+	int			paths_up;
+	struct mutex		paths_ev_mutex;
+	size_t			paths_num;
+	struct mutex		paths_mutex;
+	uuid_t			paths_uuid;
+	refcount_t		refcount;
+	struct ibtrs_srv_ctx	*ctx;
+	struct list_head	ctx_list;
+	void			*priv;
+	size_t			queue_depth;
+	struct page		**chunks;
+	struct device		dev;
+	u32			dev_ref;
+	struct kobject		kobj_paths;
+};
+
+struct ibtrs_srv_ctx {
+	rdma_ev_fn *rdma_ev;
+	link_ev_fn *link_ev;
+	struct rdma_cm_id *cm_id_ip;
+	struct rdma_cm_id *cm_id_ib;
+	struct mutex srv_mutex;
+	struct list_head srv_list;
+};
+
+extern struct class *ibtrs_dev_class;
+
+/* See ibtrs-log.h */
+#define TYPES_TO_SESSNAME(obj)						\
+	LIST(CASE(obj, struct ibtrs_srv_sess *, s.sessname))
+
+void ibtrs_srv_queue_close(struct ibtrs_srv_sess *sess);
+
+/* ibtrs-srv-stats.c */
+
+void ibtrs_srv_update_rdma_stats(struct ibtrs_srv_stats *s, size_t size, int d);
+void ibtrs_srv_update_wc_stats(struct ibtrs_srv_stats *s);
+
+int ibtrs_srv_reset_rdma_stats(struct ibtrs_srv_stats *stats, bool enable);
+ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
+				    char *page, size_t len);
+int ibtrs_srv_reset_wc_completion_stats(struct ibtrs_srv_stats *stats,
+					bool enable);
+int ibtrs_srv_stats_wc_completion_to_str(struct ibtrs_srv_stats *stats,
+					 char *buf, size_t len);
+int ibtrs_srv_reset_all_stats(struct ibtrs_srv_stats *stats, bool enable);
+ssize_t ibtrs_srv_reset_all_help(struct ibtrs_srv_stats *stats,
+				 char *page, size_t len);
+
+/* ibtrs-srv-sysfs.c */
+
+int ibtrs_srv_create_sess_files(struct ibtrs_srv_sess *sess);
+void ibtrs_srv_destroy_sess_files(struct ibtrs_srv_sess *sess);
+
+#endif /* IBTRS_SRV_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 10/25] ibtrs: server: main functionality
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (8 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 09/25] ibtrs: server: private header with server structs and functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 23:49   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 11/25] ibtrs: server: statistics functions Jack Wang
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the main functionality of the ibtrs-server module: it accepts
a set of RDMA connections (a so-called IBTRS session), creates/destroys
sysfs entries associated with the IBTRS session and notifies the upper
layer (the user of the IBTRS API) about RDMA requests or link events.
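
To make the control flow easier to follow, below is a rough sketch of
how a user of this API (e.g. the IBNBD server later in this series) is
driven by ibtrs-srv. The callback signature is only inferred from the
ctx->rdma_ev() call sites in this patch (the real typedef lives in
ibtrs-pri.h), and handle_block_io() is a purely hypothetical helper:

  /* Sketch only: signature inferred from the rdma_ev() call sites. */
  static int my_rdma_ev(struct ibtrs_srv *srv, void *priv,
                        struct ibtrs_srv_op *id, int dir,
                        void *data, size_t datalen,
                        void *usr, size_t usrlen)
  {
          int err;

          /* hypothetical helper: decode the user header, do the block I/O */
          err = handle_block_io(priv, dir, data, datalen, usr, usrlen);
          if (err)
                  return err;  /* ibtrs-srv replies with an error immediate */

          /*
           * Normally invoked from the I/O completion path: ibtrs-srv then
           * either RDMA-writes the read data back to the client or sends
           * an empty immediate message (rdma_write_sg()/send_io_resp_imm()).
           */
          ibtrs_srv_resp_rdma(id, 0);

          return 0;
  }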

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/ibtrs-srv.c | 1998 ++++++++++++++++++++++
 1 file changed, 1998 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-srv.c b/drivers/infiniband/ulp/ibtrs/ibtrs-srv.c
new file mode 100644
index 000000000000..b6d4906d0f09
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-srv.c
@@ -0,0 +1,1998 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <linux/module.h>
+#include <linux/mempool.h>
+
+#include "ibtrs-srv.h"
+#include "ibtrs-log.h"
+
+MODULE_AUTHOR("ibnbd@profitbricks.com");
+MODULE_DESCRIPTION("IBTRS Server");
+MODULE_VERSION(IBTRS_VER_STRING);
+MODULE_LICENSE("GPL");
+
+/* Must be power of 2, see mask from mr->page_size in ib_sg_to_pages() */
+#define DEFAULT_MAX_CHUNK_SIZE (128 << 10)
+#define DEFAULT_SESS_QUEUE_DEPTH 512
+#define MAX_HDR_SIZE PAGE_SIZE
+#define MAX_SG_COUNT ((MAX_HDR_SIZE - sizeof(struct ibtrs_msg_rdma_read)) \
+		      / sizeof(struct ibtrs_sg_desc))
+
+/* We guarantee to serve at least 10 paths */
+#define CHUNK_POOL_SZ 10
+
+static struct ibtrs_ib_dev_pool dev_pool;
+static mempool_t *chunk_pool;
+struct class *ibtrs_dev_class;
+
+static int retry_count = 7;
+static int __read_mostly max_chunk_size = DEFAULT_MAX_CHUNK_SIZE;
+static int __read_mostly sess_queue_depth = DEFAULT_SESS_QUEUE_DEPTH;
+
+module_param_named(max_chunk_size, max_chunk_size, int, 0444);
+MODULE_PARM_DESC(max_chunk_size,
+		 "Max size for each IO request, in bytes"
+		 " (default: " __stringify(DEFAULT_MAX_CHUNK_SIZE_KB) "KB)");
+
+module_param_named(sess_queue_depth, sess_queue_depth, int, 0444);
+MODULE_PARM_DESC(sess_queue_depth,
+		 "Number of buffers for pending I/O requests to allocate"
+		 " per session. Maximum: " __stringify(MAX_SESS_QUEUE_DEPTH)
+		 " (default: " __stringify(DEFAULT_SESS_QUEUE_DEPTH) ")");
+
+static int retry_count_set(const char *val, const struct kernel_param *kp)
+{
+	int err, ival;
+
+	err = kstrtoint(val, 0, &ival);
+	if (err)
+		return err;
+
+	if (ival < MIN_RTR_CNT || ival > MAX_RTR_CNT) {
+		pr_err("Invalid retry count value %d, has to be"
+		       " > %d, < %d\n", ival, MIN_RTR_CNT, MAX_RTR_CNT);
+		return -EINVAL;
+	}
+
+	retry_count = ival;
+	pr_info("QP retry count changed to %d\n", ival);
+
+	return 0;
+}
+
+static const struct kernel_param_ops retry_count_ops = {
+	.set		= retry_count_set,
+	.get		= param_get_int,
+};
+module_param_cb(retry_count, &retry_count_ops, &retry_count, 0644);
+
+MODULE_PARM_DESC(retry_count, "Number of times to send the message if the"
+		 " remote side didn't respond with Ack or Nack (default: 7,"
+		 " min: " __stringify(MIN_RTR_CNT) ", max: "
+		 __stringify(MAX_RTR_CNT) ")");
+
+static char cq_affinity_list[256] = "";
+static cpumask_t cq_affinity_mask = { CPU_BITS_ALL };
+
+static void init_cq_affinity(void)
+{
+	sprintf(cq_affinity_list, "0-%d", nr_cpu_ids - 1);
+}
+
+static int cq_affinity_list_set(const char *val, const struct kernel_param *kp)
+{
+	int ret = 0, len = strlen(val);
+	cpumask_var_t new_value;
+
+	if (!strlen(cq_affinity_list))
+		init_cq_affinity();
+
+	if (len >= sizeof(cq_affinity_list))
+		return -EINVAL;
+	if (!alloc_cpumask_var(&new_value, GFP_KERNEL))
+		return -ENOMEM;
+
+	ret = cpulist_parse(val, new_value);
+	if (ret) {
+		pr_err("Can't set cq_affinity_list \"%s\": %d\n", val,
+		       ret);
+		goto free_cpumask;
+	}
+
+	strlcpy(cq_affinity_list, val, sizeof(cq_affinity_list));
+	*strchrnul(cq_affinity_list, '\n') = '\0';
+	cpumask_copy(&cq_affinity_mask, new_value);
+
+	pr_info("cq_affinity_list changed to %*pbl\n",
+		cpumask_pr_args(&cq_affinity_mask));
+free_cpumask:
+	free_cpumask_var(new_value);
+	return ret;
+}
+
+static struct kparam_string cq_affinity_list_kparam_str = {
+	.maxlen	= sizeof(cq_affinity_list),
+	.string	= cq_affinity_list
+};
+
+static const struct kernel_param_ops cq_affinity_list_ops = {
+	.set	= cq_affinity_list_set,
+	.get	= param_get_string,
+};
+
+module_param_cb(cq_affinity_list, &cq_affinity_list_ops,
+		&cq_affinity_list_kparam_str, 0644);
+MODULE_PARM_DESC(cq_affinity_list, "Sets the list of CPUs to use as cq vectors"
+		 " (default: use all possible CPUs)");
+
+static struct workqueue_struct *ibtrs_wq;
+
+static void close_sess(struct ibtrs_srv_sess *sess);
+
+static inline struct ibtrs_srv_con *to_srv_con(struct ibtrs_con *c)
+{
+	return container_of(c, struct ibtrs_srv_con, c);
+}
+
+static inline struct ibtrs_srv_sess *to_srv_sess(struct ibtrs_sess *s)
+{
+	return container_of(s, struct ibtrs_srv_sess, s);
+}
+
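+/*
+ * Valid state transitions (anything else is ignored and 'false' is
+ * returned): CONNECTING -> CONNECTED, {CONNECTING, CONNECTED} -> CLOSING,
+ * CLOSING -> CLOSED.
+ */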
+static bool __ibtrs_srv_change_state(struct ibtrs_srv_sess *sess,
+				     enum ibtrs_srv_state new_state)
+{
+	enum ibtrs_srv_state old_state;
+	bool changed = false;
+
+	old_state = sess->state;
+	switch (new_state) {
+	case IBTRS_SRV_CONNECTED:
+		switch (old_state) {
+		case IBTRS_SRV_CONNECTING:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_SRV_CLOSING:
+		switch (old_state) {
+		case IBTRS_SRV_CONNECTING:
+		case IBTRS_SRV_CONNECTED:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	case IBTRS_SRV_CLOSED:
+		switch (old_state) {
+		case IBTRS_SRV_CLOSING:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+	if (changed)
+		sess->state = new_state;
+
+	return changed;
+}
+
+static bool ibtrs_srv_change_state_get_old(struct ibtrs_srv_sess *sess,
+					   enum ibtrs_srv_state new_state,
+					   enum ibtrs_srv_state *old_state)
+{
+	bool changed;
+
+	spin_lock_irq(&sess->state_lock);
+	*old_state = sess->state;
+	changed = __ibtrs_srv_change_state(sess, new_state);
+	spin_unlock_irq(&sess->state_lock);
+
+	return changed;
+}
+
+static bool ibtrs_srv_change_state(struct ibtrs_srv_sess *sess,
+				   enum ibtrs_srv_state new_state)
+{
+	enum ibtrs_srv_state old_state;
+
+	return ibtrs_srv_change_state_get_old(sess, new_state, &old_state);
+}
+
+static void free_id(struct ibtrs_srv_op *id)
+{
+	if (!id)
+		return;
+	kfree(id->tx_wr);
+	kfree(id->tx_sg);
+	kfree(id);
+}
+
+static void ibtrs_srv_free_ops_ids(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	int i;
+
+	WARN_ON(atomic_read(&sess->ids_inflight));
+	if (sess->ops_ids) {
+		for (i = 0; i < srv->queue_depth; i++)
+			free_id(sess->ops_ids[i]);
+		kfree(sess->ops_ids);
+		sess->ops_ids = NULL;
+	}
+}
+
+static int ibtrs_srv_alloc_ops_ids(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_srv_op *id;
+	int i;
+
+	sess->ops_ids = kcalloc(srv->queue_depth, sizeof(*sess->ops_ids),
+				GFP_KERNEL);
+	if (unlikely(!sess->ops_ids))
+		goto err;
+
+	for (i = 0; i < srv->queue_depth; ++i) {
+		id = kzalloc(sizeof(*id), GFP_KERNEL);
+		if (unlikely(!id))
+			goto err;
+
+		sess->ops_ids[i] = id;
+		id->tx_wr = kcalloc(MAX_SG_COUNT, sizeof(*id->tx_wr),
+				    GFP_KERNEL);
+		if (unlikely(!id->tx_wr))
+			goto err;
+
+		id->tx_sg = kcalloc(MAX_SG_COUNT, sizeof(*id->tx_sg),
+				    GFP_KERNEL);
+		if (unlikely(!id->tx_sg))
+			goto err;
+	}
+	init_waitqueue_head(&sess->ids_waitq);
+	atomic_set(&sess->ids_inflight, 0);
+
+	return 0;
+
+err:
+	ibtrs_srv_free_ops_ids(sess);
+	return -ENOMEM;
+}
+
+static void ibtrs_srv_get_ops_ids(struct ibtrs_srv_sess *sess)
+{
+	atomic_inc(&sess->ids_inflight);
+}
+
+static void ibtrs_srv_put_ops_ids(struct ibtrs_srv_sess *sess)
+{
+	if (atomic_dec_and_test(&sess->ids_inflight))
+		wake_up(&sess->ids_waitq);
+}
+
+static void ibtrs_srv_wait_ops_ids(struct ibtrs_srv_sess *sess)
+{
+	wait_event(sess->ids_waitq, !atomic_read(&sess->ids_inflight));
+}
+
+static void ibtrs_srv_rdma_done(struct ib_cq *cq, struct ib_wc *wc);
+
+static struct ib_cqe io_comp_cqe = {
+	.done = ibtrs_srv_rdma_done
+};
+
+/**
+ * rdma_write_sg() - respond to a successful READ request with an RDMA write
+ * @id: inflight operation carrying the client's read message and the
+ *      pre-allocated work requests and SG entries used for the transfer
+ */
+static int rdma_write_sg(struct ibtrs_srv_op *id)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(id->con->c.sess);
+	dma_addr_t dma_addr = sess->dma_addr[id->msg_id];
+	struct ibtrs_srv *srv = sess->srv;
+	struct ib_send_wr inv_wr, imm_wr;
+	struct ib_rdma_wr *wr = NULL;
+	const struct ib_send_wr *bad_wr;
+	enum ib_send_flags flags;
+	size_t sg_cnt;
+	int err, i, offset;
+	bool need_inval;
+	u32 rkey = 0;
+
+	sg_cnt = le16_to_cpu(id->rd_msg->sg_cnt);
+	need_inval = le16_to_cpu(id->rd_msg->flags) & IBTRS_MSG_NEED_INVAL_F;
+	if (unlikely(!sg_cnt))
+		return -EINVAL;
+
+	offset = 0;
+	for (i = 0; i < sg_cnt; i++) {
+		struct ib_sge *list;
+
+		wr		= &id->tx_wr[i];
+		list		= &id->tx_sg[i];
+		list->addr	= dma_addr + offset;
+		list->length	= le32_to_cpu(id->rd_msg->desc[i].len);
+
+		/* WR will fail with length error
+		 * if this is 0
+		 */
+		if (unlikely(list->length == 0)) {
+			ibtrs_err(sess, "Invalid RDMA-Write sg list length 0\n");
+			return -EINVAL;
+		}
+
+		list->lkey = sess->s.dev->ib_pd->local_dma_lkey;
+		offset += list->length;
+
+		wr->wr.wr_cqe	= &io_comp_cqe;
+		wr->wr.sg_list	= list;
+		wr->wr.num_sge	= 1;
+		wr->remote_addr	= le64_to_cpu(id->rd_msg->desc[i].addr);
+		wr->rkey	= le32_to_cpu(id->rd_msg->desc[i].key);
+		if (rkey == 0)
+			rkey = wr->rkey;
+		else
+			/* Only one key is actually used */
+			WARN_ON_ONCE(rkey != wr->rkey);
+
+		if (i < (sg_cnt - 1))
+			wr->wr.next = &id->tx_wr[i + 1].wr;
+		else if (need_inval)
+			wr->wr.next = &inv_wr;
+		else
+			wr->wr.next = &imm_wr;
+
+		wr->wr.opcode = IB_WR_RDMA_WRITE;
+		wr->wr.ex.imm_data = 0;
+		wr->wr.send_flags  = 0;
+	}
+	/*
+	 * From time to time we have to post signalled sends,
+	 * or send queue will fill up and only QP reset can help.
+	 */
+	flags = atomic_inc_return(&id->con->wr_cnt) % srv->queue_depth ?
+			0 : IB_SEND_SIGNALED;
+
+	if (need_inval) {
+		inv_wr.next = &imm_wr;
+		inv_wr.wr_cqe = &io_comp_cqe;
+		inv_wr.sg_list = NULL;
+		inv_wr.num_sge = 0;
+		inv_wr.opcode = IB_WR_SEND_WITH_INV;
+		inv_wr.send_flags = 0;
+		inv_wr.ex.invalidate_rkey = rkey;
+	}
+	imm_wr.next = NULL;
+	imm_wr.wr_cqe = &io_comp_cqe;
+	imm_wr.sg_list = NULL;
+	imm_wr.num_sge = 0;
+	imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
+	imm_wr.send_flags = flags;
+	imm_wr.ex.imm_data = cpu_to_be32(ibtrs_to_io_rsp_imm(id->msg_id,
+							     0, need_inval));
+
+	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, dma_addr,
+				      offset, DMA_BIDIRECTIONAL);
+
+	err = ib_post_send(id->con->c.qp, &id->tx_wr[0].wr, &bad_wr);
+	if (unlikely(err))
+		ibtrs_err(sess,
+			  "Posting RDMA-Write-Request to QP failed, err: %d\n",
+			  err);
+
+	return err;
+}
+
+/**
+ * send_io_resp_imm() - response with empty IMM on failed READ/WRITE requests or
+ *                      on successful WRITE request.
+ */
+static int send_io_resp_imm(struct ibtrs_srv_con *con, struct ibtrs_srv_op *id,
+			    int errno)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ib_send_wr inv_wr, *wr = NULL;
+	struct ibtrs_srv *srv = sess->srv;
+	bool need_inval = false;
+	enum ib_send_flags flags;
+	u32 imm;
+	int err;
+
+	if (id->dir == READ) {
+		struct ibtrs_msg_rdma_read *rd_msg = id->rd_msg;
+		size_t sg_cnt;
+
+		need_inval = le16_to_cpu(rd_msg->flags) &
+				IBTRS_MSG_NEED_INVAL_F;
+		sg_cnt = le16_to_cpu(rd_msg->sg_cnt);
+
+		if (need_inval) {
+			if (likely(sg_cnt)) {
+				inv_wr.next = NULL;
+				inv_wr.wr_cqe = &io_comp_cqe;
+				inv_wr.sg_list = NULL;
+				inv_wr.num_sge = 0;
+				inv_wr.opcode = IB_WR_SEND_WITH_INV;
+				inv_wr.send_flags = 0;
+				/* Only one key is actually used */
+				inv_wr.ex.invalidate_rkey =
+					le32_to_cpu(rd_msg->desc[0].key);
+				wr = &inv_wr;
+			} else {
+				WARN_ON_ONCE(1);
+				need_inval = false;
+			}
+		}
+	}
+
+	/*
+	 * From time to time we have to post signalled sends,
+	 * or send queue will fill up and only QP reset can help.
+	 */
+	flags = atomic_inc_return(&con->wr_cnt) % srv->queue_depth ?
+			0 : IB_SEND_SIGNALED;
+	imm = ibtrs_to_io_rsp_imm(id->msg_id, errno, need_inval);
+	err = ibtrs_post_rdma_write_imm_empty(&con->c, &io_comp_cqe, imm,
+					      flags, wr);
+	if (unlikely(err))
+		ibtrs_err_rl(sess, "ib_post_send(), err: %d\n", err);
+
+	return err;
+}
+
+/**
+ * ibtrs_srv_resp_rdma() - finish an inflight request and send the response
+ * @id:     inflight operation to complete
+ * @status: errno to report to the client, 0 on success
+ *
+ * Context: any
+ */
+void ibtrs_srv_resp_rdma(struct ibtrs_srv_op *id, int status)
+{
+	struct ibtrs_srv_con *con;
+	struct ibtrs_srv_sess *sess;
+	int err;
+
+	if (WARN_ON(!id))
+		return;
+
+	con = id->con;
+	sess = to_srv_sess(con->c.sess);
+
+	if (unlikely(sess->state != IBTRS_SRV_CONNECTED)) {
+		ibtrs_err_rl(sess,
+			     "Sending I/O response failed, session is disconnected, sess state %s\n",
+			     ibtrs_srv_state_str(sess->state));
+		goto out;
+	}
+	if (status || id->dir == WRITE || !id->rd_msg->sg_cnt)
+		err = send_io_resp_imm(con, id, status);
+	else
+		err = rdma_write_sg(id);
+	if (unlikely(err)) {
+		ibtrs_err_rl(sess, "IO response failed: %d\n", err);
+		close_sess(sess);
+	}
+out:
+	ibtrs_srv_put_ops_ids(sess);
+}
+EXPORT_SYMBOL(ibtrs_srv_resp_rdma);
+
+void ibtrs_srv_set_sess_priv(struct ibtrs_srv *srv, void *priv)
+{
+	srv->priv = priv;
+}
+EXPORT_SYMBOL(ibtrs_srv_set_sess_priv);
+
+static void unmap_cont_bufs(struct ibtrs_srv_sess *sess)
+{
+	int i;
+
+	for (i = 0; i < sess->mrs_num; i++) {
+		struct ibtrs_srv_mr *srv_mr;
+
+		srv_mr = &sess->mrs[i];
+		ib_dereg_mr(srv_mr->mr);
+		ib_dma_unmap_sg(sess->s.dev->ib_dev, srv_mr->sgt.sgl,
+				srv_mr->sgt.nents, DMA_BIDIRECTIONAL);
+		sg_free_table(&srv_mr->sgt);
+	}
+	kfree(sess->mrs);
+}
+
+static int map_cont_bufs(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	int i, mri, err, mrs_num;
+	unsigned int chunk_bits;
+	int chunks_per_mr;
+
+	/*
+	 * Here we map queue_depth chunks to MR.  Firstly we have to
+	 * figure out how many chunks can we map per MR.
+	 */
+
+	chunks_per_mr = sess->s.dev->ib_dev->attrs.max_fast_reg_page_list_len;
+	mrs_num = DIV_ROUND_UP(srv->queue_depth, chunks_per_mr);
+	chunks_per_mr = DIV_ROUND_UP(srv->queue_depth, mrs_num);
+
+	sess->mrs = kcalloc(mrs_num, sizeof(*sess->mrs), GFP_KERNEL);
+	if (unlikely(!sess->mrs))
+		return -ENOMEM;
+
+	sess->mrs_num = mrs_num;
+
+	for (mri = 0; mri < mrs_num; mri++) {
+		struct ibtrs_srv_mr *srv_mr = &sess->mrs[mri];
+		struct sg_table *sgt = &srv_mr->sgt;
+		struct scatterlist *s;
+		struct ib_mr *mr;
+		int nr, chunks;
+
+		chunks = chunks_per_mr * mri;
+		chunks_per_mr = min_t(int, chunks_per_mr,
+				      srv->queue_depth - chunks);
+
+		err = sg_alloc_table(sgt, chunks_per_mr, GFP_KERNEL);
+		if (unlikely(err))
+			goto err;
+
+		for_each_sg(sgt->sgl, s, chunks_per_mr, i)
+			sg_set_page(s, srv->chunks[chunks + i],
+				    max_chunk_size, 0);
+
+		nr = ib_dma_map_sg(sess->s.dev->ib_dev, sgt->sgl,
+				   sgt->nents, DMA_BIDIRECTIONAL);
+		if (unlikely(nr < sgt->nents)) {
+			err = nr < 0 ? nr : -EINVAL;
+			goto free_sg;
+		}
+		mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
+				 sgt->nents);
+		if (unlikely(IS_ERR(mr))) {
+			err = PTR_ERR(mr);
+			goto unmap_sg;
+		}
+		nr = ib_map_mr_sg(mr, sgt->sgl, sgt->nents,
+				  NULL, max_chunk_size);
+		if (unlikely(nr < sgt->nents)) {
+			err = nr < 0 ? nr : -EINVAL;
+			goto dereg_mr;
+		}
+
+		/* Eventually dma addr for each chunk can be cached */
+		for_each_sg(sgt->sgl, s, sgt->orig_nents, i)
+			sess->dma_addr[chunks + i] = sg_dma_address(s);
+
+		ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+
+		srv_mr->mr = mr;
+
+		continue;
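+		/*
+		 * The error labels below are entered via the gotos above:
+		 * jumping into the loop body first releases whatever the
+		 * current iteration managed to set up, then while (mri--)
+		 * unwinds all fully initialized previous iterations.
+		 */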
+err:
+		while (mri--) {
+			srv_mr = &sess->mrs[mri];
+			sgt = &srv_mr->sgt;
+			mr = srv_mr->mr;
+dereg_mr:
+			ib_dereg_mr(mr);
+unmap_sg:
+			ib_dma_unmap_sg(sess->s.dev->ib_dev, sgt->sgl,
+					sgt->nents, DMA_BIDIRECTIONAL);
+free_sg:
+			sg_free_table(sgt);
+		}
+		kfree(sess->mrs);
+
+		return err;
+	}
+
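+	/*
+	 * The IO immediate payload is split into a chunk index and an
+	 * offset within the chunk: the upper chunk_bits carry the msg_id,
+	 * the lower mem_bits carry the offset.  With the default
+	 * queue_depth of 512, chunk_bits = ilog2(511) + 1 = 9, so nine
+	 * bits select the chunk and the remaining bits address bytes
+	 * inside it (see the decoding in ibtrs_srv_rdma_done()).
+	 */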
+	chunk_bits = ilog2(srv->queue_depth - 1) + 1;
+	sess->mem_bits = (MAX_IMM_PAYL_BITS - chunk_bits);
+
+	return 0;
+}
+
+static void ibtrs_srv_hb_err_handler(struct ibtrs_con *c, int err)
+{
+	(void)err;
+	close_sess(to_srv_sess(c->sess));
+}
+
+static void ibtrs_srv_init_hb(struct ibtrs_srv_sess *sess)
+{
+	ibtrs_init_hb(&sess->s, &io_comp_cqe,
+		      IBTRS_HB_INTERVAL_MS,
+		      IBTRS_HB_MISSED_MAX,
+		      ibtrs_srv_hb_err_handler,
+		      ibtrs_wq);
+}
+
+static void ibtrs_srv_start_hb(struct ibtrs_srv_sess *sess)
+{
+	ibtrs_start_hb(&sess->s);
+}
+
+static void ibtrs_srv_stop_hb(struct ibtrs_srv_sess *sess)
+{
+	ibtrs_stop_hb(&sess->s);
+}
+
+static void ibtrs_srv_info_rsp_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_srv_con *con = cq->cq_context;
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_iu *iu;
+
+	iu = container_of(wc->wr_cqe, struct ibtrs_iu, cqe);
+	ibtrs_iu_free(iu, DMA_TO_DEVICE, sess->s.dev->ib_dev);
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "Sess info response send failed: %s\n",
+			  ib_wc_status_msg(wc->status));
+		close_sess(sess);
+		return;
+	}
+	WARN_ON(wc->opcode != IB_WC_SEND);
+	ibtrs_srv_update_wc_stats(&sess->stats);
+}
+
+static void ibtrs_srv_sess_up(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_srv_ctx *ctx = srv->ctx;
+	int up;
+
+	mutex_lock(&srv->paths_ev_mutex);
+	up = ++srv->paths_up;
+	if (up == 1)
+		ctx->link_ev(srv, IBTRS_SRV_LINK_EV_CONNECTED, NULL);
+	mutex_unlock(&srv->paths_ev_mutex);
+
+	/* Mark session as established */
+	sess->established = true;
+}
+
+static void ibtrs_srv_sess_down(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_srv_ctx *ctx = srv->ctx;
+
+	if (!sess->established)
+		return;
+
+	sess->established = false;
+	mutex_lock(&srv->paths_ev_mutex);
+	WARN_ON(!srv->paths_up);
+	if (--srv->paths_up == 0)
+		ctx->link_ev(srv, IBTRS_SRV_LINK_EV_DISCONNECTED, srv->priv);
+	mutex_unlock(&srv->paths_ev_mutex);
+}
+
+static void ibtrs_srv_reg_mr_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_srv_con *con = cq->cq_context;
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "REG MR failed: %s\n",
+			  ib_wc_status_msg(wc->status));
+		close_sess(sess);
+		return;
+	}
+}
+
+static struct ib_cqe local_reg_cqe = {
+	.done = ibtrs_srv_reg_mr_done
+};
+
+static int post_recv_sess(struct ibtrs_srv_sess *sess);
+
+static int process_info_req(struct ibtrs_srv_con *con,
+			    struct ibtrs_msg_info_req *msg)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ib_send_wr *reg_wr = NULL;
+	struct ibtrs_msg_info_rsp *rsp;
+	struct ibtrs_iu *tx_iu;
+	struct ib_reg_wr *rwr;
+	int mri, err;
+	size_t tx_sz;
+
+	err = post_recv_sess(sess);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "post_recv_sess(), err: %d\n", err);
+		return err;
+	}
+	rwr = kcalloc(sess->mrs_num, sizeof(*rwr), GFP_KERNEL);
+	if (unlikely(!rwr)) {
+		ibtrs_err(sess, "No memory\n");
+		return -ENOMEM;
+	}
+	memcpy(sess->s.sessname, msg->sessname, sizeof(sess->s.sessname));
+
+	tx_sz  = sizeof(*rsp);
+	tx_sz += sizeof(rsp->desc[0]) * sess->mrs_num;
+	tx_iu = ibtrs_iu_alloc(0, tx_sz, GFP_KERNEL, sess->s.dev->ib_dev,
+			       DMA_TO_DEVICE, ibtrs_srv_info_rsp_done);
+	if (unlikely(!tx_iu)) {
+		ibtrs_err(sess, "ibtrs_iu_alloc(), err: %d\n", -ENOMEM);
+		err = -ENOMEM;
+		goto rwr_free;
+	}
+
+	rsp = tx_iu->buf;
+	rsp->type = cpu_to_le16(IBTRS_MSG_INFO_RSP);
+	rsp->sg_cnt = cpu_to_le16(sess->mrs_num);
+
+	for (mri = 0; mri < sess->mrs_num; mri++) {
+		struct ib_mr *mr = sess->mrs[mri].mr;
+
+		rsp->desc[mri].addr = cpu_to_le64(mr->iova);
+		rsp->desc[mri].key  = cpu_to_le32(mr->rkey);
+		rsp->desc[mri].len  = cpu_to_le32(mr->length);
+
+		/*
+		 * Fill in reg MR request and chain them *backwards*:
+		 * reg_wr ends up pointing at the last element, which is
+		 * the head of the whole chain handed to
+		 * ibtrs_iu_post_send() below.
+		 */
+		rwr[mri].wr.next = mri ? &rwr[mri - 1].wr : NULL;
+		rwr[mri].wr.opcode = IB_WR_REG_MR;
+		rwr[mri].wr.wr_cqe = &local_reg_cqe;
+		rwr[mri].wr.num_sge = 0;
+		rwr[mri].wr.send_flags = 0;
+		rwr[mri].mr = mr;
+		rwr[mri].key = mr->rkey;
+		rwr[mri].access = (IB_ACCESS_LOCAL_WRITE |
+				   IB_ACCESS_REMOTE_WRITE);
+		reg_wr = &rwr[mri].wr;
+	}
+
+	err = ibtrs_srv_create_sess_files(sess);
+	if (unlikely(err))
+		goto iu_free;
+
+	ibtrs_srv_change_state(sess, IBTRS_SRV_CONNECTED);
+	ibtrs_srv_start_hb(sess);
+
+	/*
+	 * We do not account number of established connections at the current
+	 * moment, we rely on the client, which should send info request when
+	 * all connections are successfully established.  Thus, simply notify
+	 * listener with a proper event if we are the first path.
+	 */
+	ibtrs_srv_sess_up(sess);
+
+	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, tx_iu->dma_addr,
+				      tx_iu->size, DMA_TO_DEVICE);
+
+	/* Send info response */
+	err = ibtrs_iu_post_send(&con->c, tx_iu, tx_sz, reg_wr);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "ibtrs_iu_post_send(), err: %d\n", err);
+iu_free:
+		ibtrs_iu_free(tx_iu, DMA_TO_DEVICE, sess->s.dev->ib_dev);
+	}
+rwr_free:
+	kfree(rwr);
+
+	return err;
+}
+
+static void ibtrs_srv_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_srv_con *con = cq->cq_context;
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_msg_info_req *msg;
+	struct ibtrs_iu *iu;
+	int err;
+
+	WARN_ON(con->c.cid);
+
+	iu = container_of(wc->wr_cqe, struct ibtrs_iu, cqe);
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		ibtrs_err(sess, "Sess info request receive failed: %s\n",
+			  ib_wc_status_msg(wc->status));
+		goto close;
+	}
+	WARN_ON(wc->opcode != IB_WC_RECV);
+
+	if (unlikely(wc->byte_len < sizeof(*msg))) {
+		ibtrs_err(sess, "Sess info request is malformed: size %d\n",
+			  wc->byte_len);
+		goto close;
+	}
+	ib_dma_sync_single_for_cpu(sess->s.dev->ib_dev, iu->dma_addr,
+				   iu->size, DMA_FROM_DEVICE);
+	msg = iu->buf;
+	if (unlikely(le16_to_cpu(msg->type) != IBTRS_MSG_INFO_REQ)) {
+		ibtrs_err(sess, "Sess info request is malformed: type %d\n",
+			  le16_to_cpu(msg->type));
+		goto close;
+	}
+	err = process_info_req(con, msg);
+	if (unlikely(err))
+		goto close;
+
+out:
+	ibtrs_iu_free(iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev);
+	return;
+close:
+	close_sess(sess);
+	goto out;
+}
+
+static int post_recv_info_req(struct ibtrs_srv_con *con)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_iu *rx_iu;
+	int err;
+
+	rx_iu = ibtrs_iu_alloc(0, sizeof(struct ibtrs_msg_info_req),
+			       GFP_KERNEL, sess->s.dev->ib_dev,
+			       DMA_FROM_DEVICE, ibtrs_srv_info_req_done);
+	if (unlikely(!rx_iu)) {
+		ibtrs_err(sess, "ibtrs_iu_alloc(): no memory\n");
+		return -ENOMEM;
+	}
+	/* Prepare for getting info response */
+	err = ibtrs_iu_post_recv(&con->c, rx_iu);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "ibtrs_iu_post_recv(), err: %d\n", err);
+		ibtrs_iu_free(rx_iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev);
+		return err;
+	}
+
+	return 0;
+}
+
+static int post_recv_io(struct ibtrs_srv_con *con, size_t q_size)
+{
+	int i, err;
+
+	for (i = 0; i < q_size; i++) {
+		err = ibtrs_post_recv_empty(&con->c, &io_comp_cqe);
+		if (unlikely(err))
+			return err;
+	}
+
+	return 0;
+}
+
+static int post_recv_sess(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	size_t q_size;
+	int err, cid;
+
+	for (cid = 0; cid < sess->s.con_num; cid++) {
+		if (cid == 0)
+			q_size = SERVICE_CON_QUEUE_DEPTH;
+		else
+			q_size = srv->queue_depth;
+
+		err = post_recv_io(to_srv_con(sess->s.con[cid]), q_size);
+		if (unlikely(err)) {
+			ibtrs_err(sess, "post_recv_io(), err: %d\n", err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static void process_read(struct ibtrs_srv_con *con,
+			 struct ibtrs_msg_rdma_read *msg,
+			 u32 buf_id, u32 off)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_srv_ctx *ctx = srv->ctx;
+	struct ibtrs_srv_op *id;
+
+	size_t usr_len, data_len;
+	void *data;
+	int ret;
+
+	if (unlikely(sess->state != IBTRS_SRV_CONNECTED)) {
+		ibtrs_err_rl(sess,
+			     "Processing read request failed, session is disconnected, sess state %s\n",
+			     ibtrs_srv_state_str(sess->state));
+		return;
+	}
+	ibtrs_srv_get_ops_ids(sess);
+	ibtrs_srv_update_rdma_stats(&sess->stats, off, READ);
+	id = sess->ops_ids[buf_id];
+	id->con		= con;
+	id->dir		= READ;
+	id->msg_id	= buf_id;
+	id->rd_msg	= msg;
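+	/*
+	 * Each chunk received from the client carries the data payload
+	 * followed by the user message: 'off' is the total length written
+	 * by the client, the trailing usr_len bytes are the user header
+	 * and the leading off - usr_len bytes are the data.
+	 */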
+	usr_len = le16_to_cpu(msg->usr_len);
+	data_len = off - usr_len;
+	data = page_address(srv->chunks[buf_id]);
+	ret = ctx->rdma_ev(srv, srv->priv, id, READ, data, data_len,
+			   data + data_len, usr_len);
+
+	if (unlikely(ret)) {
+		ibtrs_err_rl(sess, "Processing read request failed, user "
+			     "module cb reported for msg_id %d, err: %d\n",
+			     buf_id, ret);
+		goto send_err_msg;
+	}
+
+	return;
+
+send_err_msg:
+	ret = send_io_resp_imm(con, id, ret);
+	if (ret < 0) {
+		ibtrs_err_rl(sess, "Sending err msg for failed RDMA-Write-Req"
+			     " failed, msg_id %d, err: %d\n", buf_id, ret);
+		close_sess(sess);
+	}
+	ibtrs_srv_put_ops_ids(sess);
+}
+
+static void process_write(struct ibtrs_srv_con *con,
+			  struct ibtrs_msg_rdma_write *req,
+			  u32 buf_id, u32 off)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_srv_ctx *ctx = srv->ctx;
+	struct ibtrs_srv_op *id;
+
+	size_t data_len, usr_len;
+	void *data;
+	int ret;
+
+	if (unlikely(sess->state != IBTRS_SRV_CONNECTED)) {
+		ibtrs_err_rl(sess,
+			     "Processing write request failed, session is disconnected, sess state %s\n",
+			     ibtrs_srv_state_str(sess->state));
+		return;
+	}
+	ibtrs_srv_get_ops_ids(sess);
+	ibtrs_srv_update_rdma_stats(&sess->stats, off, WRITE);
+	id = sess->ops_ids[buf_id];
+	id->con    = con;
+	id->dir    = WRITE;
+	id->msg_id = buf_id;
+
+	usr_len = le16_to_cpu(req->usr_len);
+	data_len = off - usr_len;
+	data = page_address(srv->chunks[buf_id]);
+	ret = ctx->rdma_ev(srv, srv->priv, id, WRITE, data, data_len,
+			   data + data_len, usr_len);
+	if (unlikely(ret)) {
+		ibtrs_err_rl(sess, "Processing write request failed, user"
+			     " module callback reports err: %d\n", ret);
+		goto send_err_msg;
+	}
+
+	return;
+
+send_err_msg:
+	ret = send_io_resp_imm(con, id, ret);
+	if (ret < 0) {
+		ibtrs_err_rl(sess, "Processing write request failed, sending"
+			     " I/O response failed, msg_id %d, err: %d\n",
+			     buf_id, ret);
+		close_sess(sess);
+	}
+	ibtrs_srv_put_ops_ids(sess);
+}
+
+static void process_io_req(struct ibtrs_srv_con *con, void *msg,
+			   u32 id, u32 off)
+{
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_msg_rdma_hdr *hdr;
+	unsigned int type;
+
+	ib_dma_sync_single_for_cpu(sess->s.dev->ib_dev, sess->dma_addr[id],
+				   max_chunk_size, DMA_BIDIRECTIONAL);
+	hdr = msg;
+	type = le16_to_cpu(hdr->type);
+
+	switch (type) {
+	case IBTRS_MSG_WRITE:
+		process_write(con, msg, id, off);
+		break;
+	case IBTRS_MSG_READ:
+		process_read(con, msg, id, off);
+		break;
+	default:
+		ibtrs_err(sess, "Processing I/O request failed, "
+			  "unknown message type received: 0x%02x\n", type);
+		goto err;
+	}
+
+	return;
+
+err:
+	close_sess(sess);
+}
+
+static void ibtrs_srv_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ibtrs_srv_con *con = cq->cq_context;
+	struct ibtrs_srv_sess *sess = to_srv_sess(con->c.sess);
+	struct ibtrs_srv *srv = sess->srv;
+	u32 imm_type, imm_payload;
+	int err;
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
+		if (wc->status != IB_WC_WR_FLUSH_ERR) {
+			ibtrs_err(sess, "%s (wr_cqe: %p,"
+				  " type: %d, vendor_err: 0x%x, len: %u)\n",
+				  ib_wc_status_msg(wc->status), wc->wr_cqe,
+				  wc->opcode, wc->vendor_err, wc->byte_len);
+			close_sess(sess);
+		}
+		return;
+	}
+	ibtrs_srv_update_wc_stats(&sess->stats);
+
+	switch (wc->opcode) {
+	case IB_WC_RDMA_WRITE:
+		/*
+		 * post_send() RDMA write completions of IO reqs (read/write)
+		 * and hb
+		 */
+		break;
+	case IB_WC_RECV_RDMA_WITH_IMM:
+		/*
+		 * post_recv() RDMA write completions of IO reqs (read/write)
+		 * and hb
+		 */
+		if (WARN_ON(wc->wr_cqe != &io_comp_cqe))
+			return;
+		err = ibtrs_post_recv_empty(&con->c, &io_comp_cqe);
+		if (unlikely(err)) {
+			ibtrs_err(sess, "ibtrs_post_recv(), err: %d\n", err);
+			close_sess(sess);
+			break;
+		}
+		ibtrs_from_imm(be32_to_cpu(wc->ex.imm_data),
+			       &imm_type, &imm_payload);
+		if (likely(imm_type == IBTRS_IO_REQ_IMM)) {
+			u32 msg_id, off;
+			void *data;
+
+			msg_id = imm_payload >> sess->mem_bits;
+			off = imm_payload & ((1 << sess->mem_bits) - 1);
+			if (unlikely(msg_id >= srv->queue_depth ||
+				     off >= max_chunk_size)) {
+				ibtrs_err(sess, "Wrong msg_id %u, off %u\n",
+					  msg_id, off);
+				close_sess(sess);
+				return;
+			}
+			data = page_address(srv->chunks[msg_id]) + off;
+			process_io_req(con, data, msg_id, off);
+		} else if (imm_type == IBTRS_HB_MSG_IMM) {
+			WARN_ON(con->c.cid);
+			ibtrs_send_hb_ack(&sess->s);
+		} else if (imm_type == IBTRS_HB_ACK_IMM) {
+			WARN_ON(con->c.cid);
+			sess->s.hb_missed_cnt = 0;
+		} else {
+			ibtrs_wrn(sess, "Unknown IMM type %u\n", imm_type);
+		}
+		break;
+	default:
+		ibtrs_wrn(sess, "Unexpected WC type: %d\n", wc->opcode);
+		return;
+	}
+}
+
+int ibtrs_srv_get_sess_name(struct ibtrs_srv *srv, char *sessname, size_t len)
+{
+	struct ibtrs_srv_sess *sess;
+	int err = -ENOTCONN;
+
+	mutex_lock(&srv->paths_mutex);
+	list_for_each_entry(sess, &srv->paths_list, s.entry) {
+		if (sess->state != IBTRS_SRV_CONNECTED)
+			continue;
+		memcpy(sessname, sess->s.sessname,
+		       min_t(size_t, sizeof(sess->s.sessname), len));
+		err = 0;
+		break;
+	}
+	mutex_unlock(&srv->paths_mutex);
+
+	return err;
+}
+EXPORT_SYMBOL(ibtrs_srv_get_sess_name);
+
+int ibtrs_srv_get_queue_depth(struct ibtrs_srv *srv)
+{
+	return srv->queue_depth;
+}
+EXPORT_SYMBOL(ibtrs_srv_get_queue_depth);
+
+static int find_next_bit_ring(struct ibtrs_srv_sess *sess)
+{
+	struct ib_device *ib_dev = sess->s.dev->ib_dev;
+	int v;
+
+	v = cpumask_next(sess->cur_cq_vector, &cq_affinity_mask);
+	if (v >= nr_cpu_ids || v >= ib_dev->num_comp_vectors)
+		v = cpumask_first(&cq_affinity_mask);
+	return v;
+}
+
+static int ibtrs_srv_get_next_cq_vector(struct ibtrs_srv_sess *sess)
+{
+	sess->cur_cq_vector = find_next_bit_ring(sess);
+
+	return sess->cur_cq_vector;
+}
+
+static struct ibtrs_srv *__alloc_srv(struct ibtrs_srv_ctx *ctx,
+				     const uuid_t *paths_uuid)
+{
+	struct ibtrs_srv *srv;
+	int i;
+
+	srv = kzalloc(sizeof(*srv), GFP_KERNEL);
+	if  (unlikely(!srv))
+		return NULL;
+
+	refcount_set(&srv->refcount, 1);
+	INIT_LIST_HEAD(&srv->paths_list);
+	mutex_init(&srv->paths_mutex);
+	mutex_init(&srv->paths_ev_mutex);
+	uuid_copy(&srv->paths_uuid, paths_uuid);
+	srv->queue_depth = sess_queue_depth;
+	srv->ctx = ctx;
+
+	srv->chunks = kcalloc(srv->queue_depth, sizeof(*srv->chunks),
+			      GFP_KERNEL);
+	if (unlikely(!srv->chunks))
+		goto err_free_srv;
+
+	for (i = 0; i < srv->queue_depth; i++) {
+		srv->chunks[i] = mempool_alloc(chunk_pool, GFP_KERNEL);
+		if (unlikely(!srv->chunks[i])) {
+			pr_err("mempool_alloc() failed\n");
+			goto err_free_chunks;
+		}
+	}
+	list_add(&srv->ctx_list, &ctx->srv_list);
+
+	return srv;
+
+err_free_chunks:
+	while (i--)
+		mempool_free(srv->chunks[i], chunk_pool);
+	kfree(srv->chunks);
+
+err_free_srv:
+	kfree(srv);
+
+	return NULL;
+}
+
+static void free_srv(struct ibtrs_srv *srv)
+{
+	int i;
+
+	WARN_ON(refcount_read(&srv->refcount));
+	for (i = 0; i < srv->queue_depth; i++)
+		mempool_free(srv->chunks[i], chunk_pool);
+	kfree(srv->chunks);
+	kfree(srv);
+}
+
+static inline struct ibtrs_srv *__find_srv_and_get(struct ibtrs_srv_ctx *ctx,
+						   const uuid_t *paths_uuid)
+{
+	struct ibtrs_srv *srv;
+
+	list_for_each_entry(srv, &ctx->srv_list, ctx_list) {
+		if (uuid_equal(&srv->paths_uuid, paths_uuid) &&
+		    refcount_inc_not_zero(&srv->refcount))
+			return srv;
+	}
+
+	return NULL;
+}
+
+static struct ibtrs_srv *get_or_create_srv(struct ibtrs_srv_ctx *ctx,
+					   const uuid_t *paths_uuid)
+{
+	struct ibtrs_srv *srv;
+
+	mutex_lock(&ctx->srv_mutex);
+	srv = __find_srv_and_get(ctx, paths_uuid);
+	if (!srv)
+		srv = __alloc_srv(ctx, paths_uuid);
+	mutex_unlock(&ctx->srv_mutex);
+
+	return srv;
+}
+
+static void put_srv(struct ibtrs_srv *srv)
+{
+	if (refcount_dec_and_test(&srv->refcount)) {
+		struct ibtrs_srv_ctx *ctx = srv->ctx;
+
+		WARN_ON(srv->dev.kobj.state_in_sysfs);
+		WARN_ON(srv->kobj_paths.state_in_sysfs);
+
+		mutex_lock(&ctx->srv_mutex);
+		list_del(&srv->ctx_list);
+		mutex_unlock(&ctx->srv_mutex);
+		free_srv(srv);
+	}
+}
+
+static void __add_path_to_srv(struct ibtrs_srv *srv,
+			      struct ibtrs_srv_sess *sess)
+{
+	list_add_tail(&sess->s.entry, &srv->paths_list);
+	srv->paths_num++;
+	WARN_ON(srv->paths_num >= MAX_PATHS_NUM);
+}
+
+static void del_path_from_srv(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+
+	if (WARN_ON(!srv))
+		return;
+
+	mutex_lock(&srv->paths_mutex);
+	list_del(&sess->s.entry);
+	WARN_ON(!srv->paths_num);
+	srv->paths_num--;
+	mutex_unlock(&srv->paths_mutex);
+}
+
+static inline bool __is_path_w_addr_exists(struct ibtrs_srv *srv,
+					   struct rdma_addr *addr)
+{
+	struct ibtrs_srv_sess *sess;
+
+	list_for_each_entry(sess, &srv->paths_list, s.entry)
+		if (!sockaddr_cmp((struct sockaddr *)&sess->s.dst_addr,
+				  (struct sockaddr *)&addr->dst_addr) &&
+		    !sockaddr_cmp((struct sockaddr *)&sess->s.src_addr,
+				  (struct sockaddr *)&addr->src_addr))
+			return true;
+
+	return false;
+}
+
+static void ibtrs_srv_close_work(struct work_struct *work)
+{
+	struct ibtrs_srv_sess *sess;
+	struct ibtrs_srv_ctx *ctx;
+	struct ibtrs_srv_con *con;
+	int i;
+
+	sess = container_of(work, typeof(*sess), close_work);
+	ctx = sess->srv->ctx;
+
+	ibtrs_srv_destroy_sess_files(sess);
+	ibtrs_srv_stop_hb(sess);
+
+	for (i = 0; i < sess->s.con_num; i++) {
+		if (!sess->s.con[i])
+			continue;
+		con = to_srv_con(sess->s.con[i]);
+		rdma_disconnect(con->c.cm_id);
+		ib_drain_qp(con->c.qp);
+	}
+	/* Wait for all inflights */
+	ibtrs_srv_wait_ops_ids(sess);
+
+	/* Notify upper layer if we are the last path */
+	ibtrs_srv_sess_down(sess);
+
+	unmap_cont_bufs(sess);
+	ibtrs_srv_free_ops_ids(sess);
+
+	for (i = 0; i < sess->s.con_num; i++) {
+		if (!sess->s.con[i])
+			continue;
+		con = to_srv_con(sess->s.con[i]);
+		ibtrs_cq_qp_destroy(&con->c);
+		rdma_destroy_id(con->c.cm_id);
+		kfree(con);
+	}
+	ibtrs_ib_dev_put(sess->s.dev);
+
+	del_path_from_srv(sess);
+	put_srv(sess->srv);
+	sess->srv = NULL;
+	ibtrs_srv_change_state(sess, IBTRS_SRV_CLOSED);
+
+	kfree(sess->dma_addr);
+	kfree(sess->s.con);
+	kfree(sess);
+}
+
+static int ibtrs_rdma_do_accept(struct ibtrs_srv_sess *sess,
+				struct rdma_cm_id *cm_id)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_msg_conn_rsp msg;
+	struct rdma_conn_param param;
+	int err;
+
+	memset(&param, 0, sizeof(param));
+	param.retry_count = retry_count;
+	param.rnr_retry_count = 7;
+	param.private_data = &msg;
+	param.private_data_len = sizeof(msg);
+
+	memset(&msg, 0, sizeof(msg));
+	msg.magic = cpu_to_le16(IBTRS_MAGIC);
+	msg.version = cpu_to_le16(IBTRS_PROTO_VER);
+	msg.errno = 0;
+	msg.queue_depth = cpu_to_le16(srv->queue_depth);
+	msg.max_io_size = cpu_to_le32(max_chunk_size - MAX_HDR_SIZE);
+	msg.max_hdr_size = cpu_to_le32(MAX_HDR_SIZE);
+
+	err = rdma_accept(cm_id, &param);
+	if (err)
+		pr_err("rdma_accept(), err: %d\n", err);
+
+	return err;
+}
+
+static int ibtrs_rdma_do_reject(struct rdma_cm_id *cm_id, int errno)
+{
+	struct ibtrs_msg_conn_rsp msg;
+	int err;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.magic = cpu_to_le16(IBTRS_MAGIC);
+	msg.version = cpu_to_le16(IBTRS_PROTO_VER);
+	msg.errno = cpu_to_le16(errno);
+
+	err = rdma_reject(cm_id, &msg, sizeof(msg));
+	if (err)
+		pr_err("rdma_reject(), err: %d\n", err);
+
+	/* Bounce errno back */
+	return errno;
+}
+
+static struct ibtrs_srv_sess *
+__find_sess(struct ibtrs_srv *srv, const uuid_t *sess_uuid)
+{
+	struct ibtrs_srv_sess *sess;
+
+	list_for_each_entry(sess, &srv->paths_list, s.entry) {
+		if (uuid_equal(&sess->s.uuid, sess_uuid))
+			return sess;
+	}
+
+	return NULL;
+}
+
+static int create_con(struct ibtrs_srv_sess *sess,
+		      struct rdma_cm_id *cm_id,
+		      unsigned int cid)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	struct ibtrs_srv_con *con;
+
+	u16 cq_size, wr_queue_size;
+	int err, cq_vector;
+
+	con = kzalloc(sizeof(*con), GFP_KERNEL);
+	if (unlikely(!con)) {
+		ibtrs_err(sess, "kzalloc() failed\n");
+		err = -ENOMEM;
+		goto err;
+	}
+
+	con->c.cm_id = cm_id;
+	con->c.sess = &sess->s;
+	con->c.cid = cid;
+	atomic_set(&con->wr_cnt, 0);
+
+	if (con->c.cid == 0) {
+		/*
+		 * All receive and all send (each requiring invalidate)
+		 * + 2 for drain and heartbeat
+		 */
+		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
+		cq_size = wr_queue_size;
+	} else {
+		/*
+		 * If we have all receive requests posted and
+		 * all write requests posted and each read request
+		 * requires an invalidate request + drain
+		 * and qp gets into error state.
+		 */
+		cq_size = srv->queue_depth * 3 + 1;
+		/*
+		 * In theory we might have queue_depth * 32
+		 * outstanding requests if an unsafe global key is used
+		 * and we have queue_depth read requests each consisting
+		 * of 32 different addresses. div 3 for mlx5.
+		 */
+		wr_queue_size = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
+	}
+
+	cq_vector = ibtrs_srv_get_next_cq_vector(sess);
+
+	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
+	err = ibtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
+				 wr_queue_size, IB_POLL_WORKQUEUE);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "ibtrs_cq_qp_create(), err: %d\n", err);
+		goto free_con;
+	}
+	if (con->c.cid == 0) {
+		err = post_recv_info_req(con);
+		if (unlikely(err))
+			goto free_cqqp;
+	}
+	WARN_ON(sess->s.con[cid]);
+	sess->s.con[cid] = &con->c;
+
+	/*
+	 * Change context from server to current connection.  The other
+	 * way is to use cm_id->qp->qp_context, which does not work on OFED.
+	 */
+	cm_id->context = &con->c;
+
+	return 0;
+
+free_cqqp:
+	ibtrs_cq_qp_destroy(&con->c);
+free_con:
+	kfree(con);
+
+err:
+	return err;
+}
+
+static struct ibtrs_srv_sess *__alloc_sess(struct ibtrs_srv *srv,
+					   struct rdma_cm_id *cm_id,
+					   unsigned int con_num,
+					   unsigned int recon_cnt,
+					   const uuid_t *uuid)
+{
+	struct ibtrs_srv_sess *sess;
+	int err = -ENOMEM;
+
+	if (unlikely(srv->paths_num >= MAX_PATHS_NUM)) {
+		err = -ECONNRESET;
+		goto err;
+	}
+	if (unlikely(__is_path_w_addr_exists(srv, &cm_id->route.addr))) {
+		err = -EEXIST;
+		goto err;
+	}
+	sess = kzalloc(sizeof(*sess), GFP_KERNEL);
+	if (unlikely(!sess))
+		goto err;
+
+	sess->dma_addr = kcalloc(srv->queue_depth, sizeof(*sess->dma_addr),
+				 GFP_KERNEL);
+	if (unlikely(!sess->dma_addr))
+		goto err_free_sess;
+
+	sess->s.con = kcalloc(con_num, sizeof(*sess->s.con), GFP_KERNEL);
+	if (unlikely(!sess->s.con))
+		goto err_free_dma_addr;
+
+	sess->state = IBTRS_SRV_CONNECTING;
+	sess->srv = srv;
+	sess->cur_cq_vector = -1;
+	sess->s.dst_addr = cm_id->route.addr.dst_addr;
+	sess->s.src_addr = cm_id->route.addr.src_addr;
+	sess->s.con_num = con_num;
+	sess->s.recon_cnt = recon_cnt;
+	uuid_copy(&sess->s.uuid, uuid);
+	spin_lock_init(&sess->state_lock);
+	INIT_WORK(&sess->close_work, ibtrs_srv_close_work);
+	ibtrs_srv_init_hb(sess);
+
+	sess->s.dev = ibtrs_ib_dev_find_or_add(cm_id->device, &dev_pool);
+	if (unlikely(!sess->s.dev)) {
+		err = -ENOMEM;
+		ibtrs_wrn(sess, "Failed to alloc ibtrs_device\n");
+		goto err_free_con;
+	}
+	err = map_cont_bufs(sess);
+	if (unlikely(err))
+		goto err_put_dev;
+
+	err = ibtrs_srv_alloc_ops_ids(sess);
+	if (unlikely(err))
+		goto err_unmap_bufs;
+
+	__add_path_to_srv(srv, sess);
+
+	return sess;
+
+err_unmap_bufs:
+	unmap_cont_bufs(sess);
+err_put_dev:
+	ibtrs_ib_dev_put(sess->s.dev);
+err_free_con:
+	kfree(sess->s.con);
+err_free_dma_addr:
+	kfree(sess->dma_addr);
+err_free_sess:
+	kfree(sess);
+
+err:
+	return ERR_PTR(err);
+}
+
+static int ibtrs_rdma_connect(struct rdma_cm_id *cm_id,
+			      const struct ibtrs_msg_conn_req *msg,
+			      size_t len)
+{
+	struct ibtrs_srv_ctx *ctx = cm_id->context;
+	struct ibtrs_srv_sess *sess;
+	struct ibtrs_srv *srv;
+
+	u16 version, con_num, cid;
+	u16 recon_cnt;
+	int err;
+
+	if (unlikely(len < sizeof(*msg))) {
+		pr_err("Invalid IBTRS connection request\n");
+		goto reject_w_econnreset;
+	}
+	if (unlikely(le16_to_cpu(msg->magic) != IBTRS_MAGIC)) {
+		pr_err("Invalid IBTRS magic\n");
+		goto reject_w_econnreset;
+	}
+	version = le16_to_cpu(msg->version);
+	if (unlikely(version >> 8 != IBTRS_PROTO_VER_MAJOR)) {
+		pr_err("Unsupported major IBTRS version: %d, expected %d\n",
+		       version >> 8, IBTRS_PROTO_VER_MAJOR);
+		goto reject_w_econnreset;
+	}
+	con_num = le16_to_cpu(msg->cid_num);
+	if (unlikely(con_num > 4096)) {
+		/* Sanity check */
+		pr_err("Too many connections requested: %d\n", con_num);
+		goto reject_w_econnreset;
+	}
+	cid = le16_to_cpu(msg->cid);
+	if (unlikely(cid >= con_num)) {
+		/* Sanity check */
+		pr_err("Incorrect cid: %d >= %d\n", cid, con_num);
+		goto reject_w_econnreset;
+	}
+	recon_cnt = le16_to_cpu(msg->recon_cnt);
+	srv = get_or_create_srv(ctx, &msg->paths_uuid);
+	if (unlikely(!srv)) {
+		err = -ENOMEM;
+		goto reject_w_err;
+	}
+	mutex_lock(&srv->paths_mutex);
+	sess = __find_sess(srv, &msg->sess_uuid);
+	if (sess) {
+		/* Session already holds a reference */
+		put_srv(srv);
+
+		if (unlikely(sess->s.recon_cnt != recon_cnt)) {
+			ibtrs_err(sess, "Reconnect detected %d != %d, but "
+				  "previous session is still alive, reconnect "
+				  "later\n", sess->s.recon_cnt, recon_cnt);
+			mutex_unlock(&srv->paths_mutex);
+			goto reject_w_ebusy;
+		}
+		if (unlikely(sess->state != IBTRS_SRV_CONNECTING)) {
+			ibtrs_err(sess, "Session in wrong state: %s\n",
+				  ibtrs_srv_state_str(sess->state));
+			mutex_unlock(&srv->paths_mutex);
+			goto reject_w_econnreset;
+		}
+		/*
+		 * Sanity checks
+		 */
+		if (unlikely(con_num != sess->s.con_num ||
+			     cid >= sess->s.con_num)) {
+			ibtrs_err(sess, "Incorrect request: %d, %d\n",
+				  cid, con_num);
+			mutex_unlock(&srv->paths_mutex);
+			goto reject_w_econnreset;
+		}
+		if (unlikely(sess->s.con[cid])) {
+			ibtrs_err(sess, "Connection already exists: %d\n",
+				  cid);
+			mutex_unlock(&srv->paths_mutex);
+			goto reject_w_econnreset;
+		}
+	} else {
+		sess = __alloc_sess(srv, cm_id, con_num, recon_cnt,
+				    &msg->sess_uuid);
+		if (unlikely(IS_ERR(sess))) {
+			mutex_unlock(&srv->paths_mutex);
+			put_srv(srv);
+			err = PTR_ERR(sess);
+			goto reject_w_err;
+		}
+	}
+	err = create_con(sess, cm_id, cid);
+	if (unlikely(err)) {
+		(void)ibtrs_rdma_do_reject(cm_id, err);
+		/*
+		 * Since session has other connections we follow normal way
+		 * through workqueue, but still return an error to tell cma.c
+		 * to call rdma_destroy_id() for current connection.
+		 */
+		goto close_and_return_err;
+	}
+	err = ibtrs_rdma_do_accept(sess, cm_id);
+	if (unlikely(err)) {
+		(void)ibtrs_rdma_do_reject(cm_id, err);
+		/*
+		 * Since current connection was successfully added to the
+		 * session we follow normal way through workqueue to close the
+		 * session, thus return 0 to tell cma.c we call
+		 * rdma_destroy_id() ourselves.
+		 */
+		err = 0;
+		goto close_and_return_err;
+	}
+	mutex_unlock(&srv->paths_mutex);
+
+	return 0;
+
+reject_w_err:
+	return ibtrs_rdma_do_reject(cm_id, err);
+
+reject_w_econnreset:
+	return ibtrs_rdma_do_reject(cm_id, -ECONNRESET);
+
+reject_w_ebusy:
+	return ibtrs_rdma_do_reject(cm_id, -EBUSY);
+
+close_and_return_err:
+	close_sess(sess);
+	mutex_unlock(&srv->paths_mutex);
+
+	return err;
+}
+
+static int ibtrs_srv_rdma_cm_handler(struct rdma_cm_id *cm_id,
+				     struct rdma_cm_event *ev)
+{
+	struct ibtrs_srv_sess *sess = NULL;
+
+	if (ev->event != RDMA_CM_EVENT_CONNECT_REQUEST) {
+		struct ibtrs_con *c = cm_id->context;
+
+		sess = to_srv_sess(c->sess);
+	}
+
+	switch (ev->event) {
+	case RDMA_CM_EVENT_CONNECT_REQUEST:
+		/*
+		 * In case of error cma.c will destroy cm_id,
+		 * see cma_process_remove()
+		 */
+		return ibtrs_rdma_connect(cm_id, ev->param.conn.private_data,
+					  ev->param.conn.private_data_len);
+	case RDMA_CM_EVENT_ESTABLISHED:
+		/* Nothing here */
+		break;
+	case RDMA_CM_EVENT_REJECTED:
+	case RDMA_CM_EVENT_CONNECT_ERROR:
+	case RDMA_CM_EVENT_UNREACHABLE:
+		ibtrs_err(sess, "CM error (CM event: %s, err: %d)\n",
+			  rdma_event_msg(ev->event), ev->status);
+		close_sess(sess);
+		break;
+	case RDMA_CM_EVENT_DISCONNECTED:
+	case RDMA_CM_EVENT_ADDR_CHANGE:
+	case RDMA_CM_EVENT_TIMEWAIT_EXIT:
+		close_sess(sess);
+		break;
+	case RDMA_CM_EVENT_DEVICE_REMOVAL:
+		close_sess(sess);
+		break;
+	default:
+		pr_err("Ignoring unexpected CM event %s, err %d\n",
+		       rdma_event_msg(ev->event), ev->status);
+		break;
+	}
+
+	return 0;
+}
+
+static struct rdma_cm_id *ibtrs_srv_cm_init(struct ibtrs_srv_ctx *ctx,
+					    struct sockaddr *addr,
+					    enum rdma_ucm_port_space ps)
+{
+	struct rdma_cm_id *cm_id;
+	int ret;
+
+	cm_id = rdma_create_id(&init_net, ibtrs_srv_rdma_cm_handler,
+			       ctx, ps, IB_QPT_RC);
+	if (IS_ERR(cm_id)) {
+		ret = PTR_ERR(cm_id);
+		pr_err("Creating id for RDMA connection failed, err: %d\n",
+		       ret);
+		goto err_out;
+	}
+	ret = rdma_bind_addr(cm_id, addr);
+	if (ret) {
+		pr_err("Binding RDMA address failed, err: %d\n", ret);
+		goto err_cm;
+	}
+	ret = rdma_listen(cm_id, 64);
+	if (ret) {
+		pr_err("Listening on RDMA connection failed, err: %d\n",
+		       ret);
+		goto err_cm;
+	}
+
+	return cm_id;
+
+err_cm:
+	rdma_destroy_id(cm_id);
+err_out:
+
+	return ERR_PTR(ret);
+}
+
+static int ibtrs_srv_rdma_init(struct ibtrs_srv_ctx *ctx, unsigned int port)
+{
+	struct sockaddr_in6 sin = {
+		.sin6_family	= AF_INET6,
+		.sin6_addr	= IN6ADDR_ANY_INIT,
+		.sin6_port	= htons(port),
+	};
+	struct sockaddr_ib sib = {
+		.sib_family			= AF_IB,
+		.sib_addr.sib_subnet_prefix	= 0ULL,
+		.sib_addr.sib_interface_id	= 0ULL,
+		.sib_sid	= cpu_to_be64(RDMA_IB_IP_PS_IB | port),
+		.sib_sid_mask	= cpu_to_be64(0xffffffffffffffffULL),
+		.sib_pkey	= cpu_to_be16(0xffff),
+	};
+	struct rdma_cm_id *cm_ip, *cm_ib;
+	int ret;
+
+	/*
+	 * We accept both IPoIB and IB connections, so we need to keep
+	 * two cm id's, one for each socket type and port space.
+	 * If the cm initialization of one of the id's fails, we abort
+	 * everything.
+	 */
+	cm_ip = ibtrs_srv_cm_init(ctx, (struct sockaddr *)&sin, RDMA_PS_TCP);
+	if (unlikely(IS_ERR(cm_ip)))
+		return PTR_ERR(cm_ip);
+
+	cm_ib = ibtrs_srv_cm_init(ctx, (struct sockaddr *)&sib, RDMA_PS_IB);
+	if (unlikely(IS_ERR(cm_ib))) {
+		ret = PTR_ERR(cm_ib);
+		goto free_cm_ip;
+	}
+
+	ctx->cm_id_ip = cm_ip;
+	ctx->cm_id_ib = cm_ib;
+
+	return 0;
+
+free_cm_ip:
+	rdma_destroy_id(cm_ip);
+
+	return ret;
+}
+
+static struct ibtrs_srv_ctx *alloc_srv_ctx(rdma_ev_fn *rdma_ev,
+					   link_ev_fn *link_ev)
+{
+	struct ibtrs_srv_ctx *ctx;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return NULL;
+
+	ctx->rdma_ev = rdma_ev;
+	ctx->link_ev = link_ev;
+	mutex_init(&ctx->srv_mutex);
+	INIT_LIST_HEAD(&ctx->srv_list);
+
+	return ctx;
+}
+
+static void free_srv_ctx(struct ibtrs_srv_ctx *ctx)
+{
+	WARN_ON(!list_empty(&ctx->srv_list));
+	kfree(ctx);
+}
+
+struct ibtrs_srv_ctx *ibtrs_srv_open(rdma_ev_fn *rdma_ev, link_ev_fn *link_ev,
+				     unsigned int port)
+{
+	struct ibtrs_srv_ctx *ctx;
+	int err;
+
+	ctx = alloc_srv_ctx(rdma_ev, link_ev);
+	if (unlikely(!ctx))
+		return ERR_PTR(-ENOMEM);
+
+	err = ibtrs_srv_rdma_init(ctx, port);
+	if (unlikely(err)) {
+		free_srv_ctx(ctx);
+		return ERR_PTR(err);
+	}
+	/* Do not let module be unloaded if server context is alive */
+	__module_get(THIS_MODULE);
+
+	return ctx;
+}
+EXPORT_SYMBOL(ibtrs_srv_open);
+
+void ibtrs_srv_queue_close(struct ibtrs_srv_sess *sess)
+{
+	close_sess(sess);
+}
+
+static void close_sess(struct ibtrs_srv_sess *sess)
+{
+	enum ibtrs_srv_state old_state;
+
+	if (ibtrs_srv_change_state_get_old(sess, IBTRS_SRV_CLOSING,
+					   &old_state))
+		queue_work(ibtrs_wq, &sess->close_work);
+	WARN_ON(sess->state != IBTRS_SRV_CLOSING);
+}
+
+static void close_sessions(struct ibtrs_srv *srv)
+{
+	struct ibtrs_srv_sess *sess;
+
+	mutex_lock(&srv->paths_mutex);
+	list_for_each_entry(sess, &srv->paths_list, s.entry)
+		close_sess(sess);
+	mutex_unlock(&srv->paths_mutex);
+}
+
+static void close_ctx(struct ibtrs_srv_ctx *ctx)
+{
+	struct ibtrs_srv *srv;
+
+	mutex_lock(&ctx->srv_mutex);
+	list_for_each_entry(srv, &ctx->srv_list, ctx_list)
+		close_sessions(srv);
+	mutex_unlock(&ctx->srv_mutex);
+	flush_workqueue(ibtrs_wq);
+}
+
+void ibtrs_srv_close(struct ibtrs_srv_ctx *ctx)
+{
+	rdma_destroy_id(ctx->cm_id_ip);
+	rdma_destroy_id(ctx->cm_id_ib);
+	close_ctx(ctx);
+	free_srv_ctx(ctx);
+	module_put(THIS_MODULE);
+}
+EXPORT_SYMBOL(ibtrs_srv_close);
+
+static int check_module_params(void)
+{
+	if (sess_queue_depth < 1 || sess_queue_depth > MAX_SESS_QUEUE_DEPTH) {
+		pr_err("Invalid sess_queue_depth value %d, has to be"
+		       " >= %d, <= %d.\n",
+		       sess_queue_depth, 1, MAX_SESS_QUEUE_DEPTH);
+		return -EINVAL;
+	}
+	if (max_chunk_size < 4096 || !is_power_of_2(max_chunk_size)) {
+		pr_err("Invalid max_chunk_size value %d, has to be"
+		       " >= %d and should be power of two.\n",
+		       max_chunk_size, 4096);
+		return -EINVAL;
+	}
+
+	/*
+	 * Check if IB immediate data size is enough to hold the mem_id and the
+	 * offset inside the memory chunk
+	 */
+	if ((ilog2(sess_queue_depth - 1) + 1) +
+	    (ilog2(max_chunk_size - 1) + 1) > MAX_IMM_PAYL_BITS) {
+		pr_err("RDMA immediate size (%db) not enough to encode "
+		       "%d buffers of size %dB. Reduce 'sess_queue_depth' "
+		       "or 'max_chunk_size' parameters.\n", MAX_IMM_PAYL_BITS,
+		       sess_queue_depth, max_chunk_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init ibtrs_server_init(void)
+{
+	int err;
+
+	if (!strlen(cq_affinity_list))
+		init_cq_affinity();
+
+	pr_info("Loading module %s, version %s, proto %s: "
+		"(retry_count: %d, cq_affinity_list: %s, "
+		"max_chunk_size: %d (pure IO %ld, headers %ld) , "
+		"sess_queue_depth: %d)\n",
+		KBUILD_MODNAME, IBTRS_VER_STRING, IBTRS_PROTO_VER_STRING,
+		retry_count, cq_affinity_list, max_chunk_size,
+		max_chunk_size - MAX_HDR_SIZE, MAX_HDR_SIZE,
+		sess_queue_depth);
+
+	ibtrs_ib_dev_pool_init(0, &dev_pool);
+
+	err = check_module_params();
+	if (err) {
+		pr_err("Failed to load module, invalid module parameters,"
+		       " err: %d\n", err);
+		return err;
+	}
+	chunk_pool = mempool_create_page_pool(sess_queue_depth * CHUNK_POOL_SZ,
+					      get_order(max_chunk_size));
+	if (unlikely(!chunk_pool)) {
+		pr_err("Failed to preallocate pool of chunks\n");
+		return -ENOMEM;
+	}
+	ibtrs_dev_class = class_create(THIS_MODULE, "ibtrs-server");
+	if (unlikely(IS_ERR(ibtrs_dev_class))) {
+		pr_err("Failed to create ibtrs-server dev class\n");
+		err = PTR_ERR(ibtrs_dev_class);
+		goto out_chunk_pool;
+	}
+	ibtrs_wq = alloc_workqueue("ibtrs_server_wq", WQ_MEM_RECLAIM, 0);
+	if (unlikely(!ibtrs_wq)) {
+		pr_err("Failed to load module, alloc ibtrs_server_wq failed\n");
+		err = -ENOMEM;
+		goto out_dev_class;
+	}
+
+	return 0;
+
+out_dev_class:
+	class_destroy(ibtrs_dev_class);
+out_chunk_pool:
+	mempool_destroy(chunk_pool);
+
+	return err;
+}
+
+static void __exit ibtrs_server_exit(void)
+{
+	destroy_workqueue(ibtrs_wq);
+	class_destroy(ibtrs_dev_class);
+	mempool_destroy(chunk_pool);
+	ibtrs_ib_dev_pool_deinit(&dev_pool);
+}
+
+module_init(ibtrs_server_init);
+module_exit(ibtrs_server_exit);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 11/25] ibtrs: server: statistics functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (9 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 10/25] ibtrs: server: main functionality Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-23 23:56   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 12/25] ibtrs: server: sysfs interface functions Jack Wang
                   ` (14 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This introduces a set of functions used on the server side to account
statistics of RDMA data sent/received.
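
A minimal sketch of how these helpers are intended to be called from the
server side IO and completion paths (the call sites below are hypothetical
and simplified, they are not part of this patch):

	/* hypothetical call site: account one RDMA transfer of 'len' bytes */
	static void ibtrs_srv_example_io_done(struct ibtrs_srv_sess *sess,
					      size_t len, int dir)
	{
		ibtrs_srv_update_rdma_stats(&sess->stats, len, dir);
	}

	/* hypothetical call site: one more work completion was processed */
	static void ibtrs_srv_example_cq_done(struct ibtrs_srv_sess *sess)
	{
		ibtrs_srv_update_wc_stats(&sess->stats);
	}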

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 .../infiniband/ulp/ibtrs/ibtrs-srv-stats.c    | 103 ++++++++++++++++++
 1 file changed, 103 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv-stats.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-srv-stats.c b/drivers/infiniband/ulp/ibtrs/ibtrs-srv-stats.c
new file mode 100644
index 000000000000..47f8d6d2d88d
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-srv-stats.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include "ibtrs-srv.h"
+
+void ibtrs_srv_update_rdma_stats(struct ibtrs_srv_stats *s,
+				 size_t size, int d)
+{
+	atomic64_inc(&s->rdma_stats.dir[d].cnt);
+	atomic64_add(size, &s->rdma_stats.dir[d].size_total);
+}
+
+void ibtrs_srv_update_wc_stats(struct ibtrs_srv_stats *s)
+{
+	atomic64_inc(&s->wc_comp.calls);
+	atomic64_inc(&s->wc_comp.total_wc_cnt);
+}
+
+int ibtrs_srv_reset_rdma_stats(struct ibtrs_srv_stats *stats, bool enable)
+{
+	if (enable) {
+		struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
+
+		memset(r, 0, sizeof(*r));
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
+				    char *page, size_t len)
+{
+	struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
+	struct ibtrs_srv_sess *sess;
+
+	sess = container_of(stats, typeof(*sess), stats);
+
+	return scnprintf(page, len, "%lld %lld %lld %lld %u\n",
+			 (s64)atomic64_read(&r->dir[READ].cnt),
+			 (s64)atomic64_read(&r->dir[READ].size_total),
+			 (s64)atomic64_read(&r->dir[WRITE].cnt),
+			 (s64)atomic64_read(&r->dir[WRITE].size_total),
+			 atomic_read(&sess->ids_inflight));
+}
+
+int ibtrs_srv_reset_wc_completion_stats(struct ibtrs_srv_stats *stats,
+					bool enable)
+{
+	if (enable) {
+		memset(&stats->wc_comp, 0, sizeof(stats->wc_comp));
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+int ibtrs_srv_stats_wc_completion_to_str(struct ibtrs_srv_stats *stats,
+					 char *buf, size_t len)
+{
+	return snprintf(buf, len, "%lld %lld\n",
+			(s64)atomic64_read(&stats->wc_comp.total_wc_cnt),
+			(s64)atomic64_read(&stats->wc_comp.calls));
+}
+
+ssize_t ibtrs_srv_reset_all_help(struct ibtrs_srv_stats *stats,
+				 char *page, size_t len)
+{
+	return scnprintf(page, PAGE_SIZE, "echo 1 to reset all statistics\n");
+}
+
+int ibtrs_srv_reset_all_stats(struct ibtrs_srv_stats *stats, bool enable)
+{
+	if (enable) {
+		ibtrs_srv_reset_wc_completion_stats(stats, enable);
+		ibtrs_srv_reset_rdma_stats(stats, enable);
+		return 0;
+	}
+
+	return -EINVAL;
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 12/25] ibtrs: server: sysfs interface functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (10 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 11/25] ibtrs: server: statistics functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-24  0:00   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 13/25] ibtrs: include client and server modules into kernel compilation Jack Wang
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the sysfs interface to IBTRS sessions on server side:

  /sys/devices/virtual/ibtrs-server/<SESS-NAME>/
    *** IBTRS session accepted from a client peer
    |
    |- paths/<SRC@DST>/
       *** established paths from a client in a session
       |
       |- disconnect
       |  *** disconnect path
       |
       |- hca_name
       |  *** HCA name
       |
       |- hca_port
       |  *** HCA port
       |
       |- stats/
          *** current path statistics
          |
	  |- rdma
	  |- reset_all
	  |- wc_completions
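
For example (hypothetical session and path names), the per-path statistics
can be read and a path can be forcibly disconnected from the server side
with:

  cat /sys/devices/virtual/ibtrs-server/clt01/paths/1.2.3.4@5.6.7.8/stats/rdma
  echo 1 > /sys/devices/virtual/ibtrs-server/clt01/paths/1.2.3.4@5.6.7.8/disconnect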

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 .../infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c    | 303 ++++++++++++++++++
 1 file changed, 303 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c

diff --git a/drivers/infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c b/drivers/infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c
new file mode 100644
index 000000000000..c48c368c1906
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c
@@ -0,0 +1,303 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Transport Layer
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include "ibtrs-pri.h"
+#include "ibtrs-srv.h"
+#include "ibtrs-log.h"
+
+static struct kobj_type ktype = {
+	.sysfs_ops	= &kobj_sysfs_ops,
+};
+
+static ssize_t ibtrs_srv_disconnect_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo 1 > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibtrs_srv_disconnect_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	struct ibtrs_srv_sess *sess;
+	char str[MAXHOSTNAMELEN];
+
+	sess = container_of(kobj, struct ibtrs_srv_sess, kobj);
+	if (!sysfs_streq(buf, "1")) {
+		ibtrs_err(sess, "%s: invalid value: '%s'\n",
+			  attr->attr.name, buf);
+		return -EINVAL;
+	}
+
+	sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr, str, sizeof(str));
+
+	ibtrs_info(sess, "disconnect for path %s requested\n", str);
+	ibtrs_srv_queue_close(sess);
+
+	return count;
+}
+
+static struct kobj_attribute ibtrs_srv_disconnect_attr =
+	__ATTR(disconnect, 0644,
+	       ibtrs_srv_disconnect_show, ibtrs_srv_disconnect_store);
+
+static ssize_t ibtrs_srv_hca_port_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_srv_sess *sess;
+	struct ibtrs_con *usr_con;
+
+	sess = container_of(kobj, typeof(*sess), kobj);
+	usr_con = sess->s.con[0];
+
+	return scnprintf(page, PAGE_SIZE, "%u\n",
+			 usr_con->cm_id->port_num);
+}
+
+static struct kobj_attribute ibtrs_srv_hca_port_attr =
+	__ATTR(hca_port, 0444, ibtrs_srv_hca_port_show, NULL);
+
+static ssize_t ibtrs_srv_hca_name_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_srv_sess *sess;
+
+	sess = container_of(kobj, struct ibtrs_srv_sess, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n",
+			 sess->s.dev->ib_dev->name);
+}
+
+static struct kobj_attribute ibtrs_srv_hca_name_attr =
+	__ATTR(hca_name, 0444, ibtrs_srv_hca_name_show, NULL);
+
+static ssize_t ibtrs_srv_src_addr_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_srv_sess *sess;
+	int cnt;
+
+	sess = container_of(kobj, struct ibtrs_srv_sess, kobj);
+	cnt = sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr,
+			      page, PAGE_SIZE);
+	return cnt + scnprintf(page + cnt, PAGE_SIZE - cnt, "\n");
+}
+
+static struct kobj_attribute ibtrs_srv_src_addr_attr =
+	__ATTR(src_addr, 0444, ibtrs_srv_src_addr_show, NULL);
+
+static ssize_t ibtrs_srv_dst_addr_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibtrs_srv_sess *sess;
+	int cnt;
+
+	sess = container_of(kobj, struct ibtrs_srv_sess, kobj);
+	cnt = sockaddr_to_str((struct sockaddr *)&sess->s.src_addr,
+			      page, PAGE_SIZE);
+	return cnt + scnprintf(page + cnt, PAGE_SIZE - cnt, "\n");
+}
+
+static struct kobj_attribute ibtrs_srv_dst_addr_attr =
+	__ATTR(dst_addr, 0444, ibtrs_srv_dst_addr_show, NULL);
+
+static struct attribute *ibtrs_srv_sess_attrs[] = {
+	&ibtrs_srv_hca_name_attr.attr,
+	&ibtrs_srv_hca_port_attr.attr,
+	&ibtrs_srv_src_addr_attr.attr,
+	&ibtrs_srv_dst_addr_attr.attr,
+	&ibtrs_srv_disconnect_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ibtrs_srv_sess_attr_group = {
+	.attrs = ibtrs_srv_sess_attrs,
+};
+
+STAT_ATTR(struct ibtrs_srv_sess, rdma,
+	  ibtrs_srv_stats_rdma_to_str,
+	  ibtrs_srv_reset_rdma_stats);
+
+STAT_ATTR(struct ibtrs_srv_sess, wc_completion,
+	  ibtrs_srv_stats_wc_completion_to_str,
+	  ibtrs_srv_reset_wc_completion_stats);
+
+STAT_ATTR(struct ibtrs_srv_sess, reset_all,
+	  ibtrs_srv_reset_all_help,
+	  ibtrs_srv_reset_all_stats);
+
+static struct attribute *ibtrs_srv_stats_attrs[] = {
+	&rdma_attr.attr,
+	&wc_completion_attr.attr,
+	&reset_all_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ibtrs_srv_stats_attr_group = {
+	.attrs = ibtrs_srv_stats_attrs,
+};
+
+static void ibtrs_srv_dev_release(struct device *dev)
+{
+	/* Nobody plays with device references, so nop */
+}
+
+static int ibtrs_srv_create_once_sysfs_root_folders(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	int err = 0;
+
+	mutex_lock(&srv->paths_mutex);
+	if (srv->dev_ref++) {
+		/*
+		 * Just increase device reference.  We can't use get_device()
+		 * because we need to unregister device when ref goes to 0,
+		 * not just to put it.
+		 */
+		goto unlock;
+	}
+	srv->dev.class = ibtrs_dev_class;
+	srv->dev.release = ibtrs_srv_dev_release;
+	dev_set_name(&srv->dev, "%s", sess->s.sessname);
+
+	err = device_register(&srv->dev);
+	if (unlikely(err)) {
+		pr_err("device_register(): %d\n", err);
+		goto unlock;
+	}
+	err = kobject_init_and_add(&srv->kobj_paths, &ktype,
+				   &srv->dev.kobj, "paths");
+	if (unlikely(err)) {
+		pr_err("kobject_init_and_add(): %d\n", err);
+		device_unregister(&srv->dev);
+		goto unlock;
+	}
+unlock:
+	mutex_unlock(&srv->paths_mutex);
+
+	return err;
+}
+
+static void
+ibtrs_srv_destroy_once_sysfs_root_folders(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+
+	mutex_lock(&srv->paths_mutex);
+	if (!--srv->dev_ref) {
+		kobject_del(&srv->kobj_paths);
+		kobject_put(&srv->kobj_paths);
+		device_unregister(&srv->dev);
+	}
+	mutex_unlock(&srv->paths_mutex);
+}
+
+static int ibtrs_srv_create_stats_files(struct ibtrs_srv_sess *sess)
+{
+	int err;
+
+	err = kobject_init_and_add(&sess->kobj_stats, &ktype,
+				   &sess->kobj, "stats");
+	if (unlikely(err)) {
+		ibtrs_err(sess, "kobject_init_and_add(): %d\n", err);
+		return err;
+	}
+	err = sysfs_create_group(&sess->kobj_stats,
+				 &ibtrs_srv_stats_attr_group);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "sysfs_create_group(): %d\n", err);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	kobject_del(&sess->kobj_stats);
+	kobject_put(&sess->kobj_stats);
+
+	return err;
+}
+
+int ibtrs_srv_create_sess_files(struct ibtrs_srv_sess *sess)
+{
+	struct ibtrs_srv *srv = sess->srv;
+	char str[NAME_MAX];
+	int err, cnt;
+
+	cnt = sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr,
+			      str, sizeof(str));
+	cnt += scnprintf(str + cnt, sizeof(str) - cnt, "@");
+	sockaddr_to_str((struct sockaddr *)&sess->s.src_addr,
+			str + cnt, sizeof(str) - cnt);
+
+	err = ibtrs_srv_create_once_sysfs_root_folders(sess);
+	if (unlikely(err))
+		return err;
+
+	err = kobject_init_and_add(&sess->kobj, &ktype, &srv->kobj_paths,
+				   "%s", str);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "kobject_init_and_add(): %d\n", err);
+		goto destroy_root;
+	}
+	err = sysfs_create_group(&sess->kobj, &ibtrs_srv_sess_attr_group);
+	if (unlikely(err)) {
+		ibtrs_err(sess, "sysfs_create_group(): %d\n", err);
+		goto put_kobj;
+	}
+	err = ibtrs_srv_create_stats_files(sess);
+	if (unlikely(err))
+		goto remove_group;
+
+	return 0;
+
+remove_group:
+	sysfs_remove_group(&sess->kobj, &ibtrs_srv_sess_attr_group);
+put_kobj:
+	kobject_del(&sess->kobj);
+	kobject_put(&sess->kobj);
+destroy_root:
+	ibtrs_srv_destroy_once_sysfs_root_folders(sess);
+
+	return err;
+}
+
+void ibtrs_srv_destroy_sess_files(struct ibtrs_srv_sess *sess)
+{
+	if (sess->kobj.state_in_sysfs) {
+		kobject_del(&sess->kobj_stats);
+		kobject_put(&sess->kobj_stats);
+		kobject_del(&sess->kobj);
+		kobject_put(&sess->kobj);
+
+		ibtrs_srv_destroy_once_sysfs_root_folders(sess);
+	}
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 13/25] ibtrs: include client and server modules into kernel compilation
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (11 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 12/25] ibtrs: server: sysfs interface functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-06-20 15:03 ` [PATCH v4 14/25] ibtrs: a bit of documentation Jack Wang
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

Add the IBTRS Makefile and Kconfig, and also add the corresponding lines
to the upper layer infiniband/ulp files.
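
With these in place both modules can, for instance, be built as loadable
modules with a .config fragment like the following (hypothetical
configuration, the symbols themselves are introduced by this patch):

  CONFIG_INFINIBAND_IBTRS=m
  CONFIG_INFINIBAND_IBTRS_CLIENT=m
  CONFIG_INFINIBAND_IBTRS_SERVER=m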

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/Kconfig            |  1 +
 drivers/infiniband/ulp/Makefile       |  1 +
 drivers/infiniband/ulp/ibtrs/Kconfig  | 22 ++++++++++++++++++++++
 drivers/infiniband/ulp/ibtrs/Makefile | 15 +++++++++++++++
 4 files changed, 39 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/Kconfig
 create mode 100644 drivers/infiniband/ulp/ibtrs/Makefile

diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index 8ba41cbf1869..1a271ade9997 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -117,6 +117,7 @@ source "drivers/infiniband/ulp/srpt/Kconfig"
 
 source "drivers/infiniband/ulp/iser/Kconfig"
 source "drivers/infiniband/ulp/isert/Kconfig"
+source "drivers/infiniband/ulp/ibtrs/Kconfig"
 
 source "drivers/infiniband/ulp/opa_vnic/Kconfig"
 
diff --git a/drivers/infiniband/ulp/Makefile b/drivers/infiniband/ulp/Makefile
index 437813c7b481..1c4f10dc8d49 100644
--- a/drivers/infiniband/ulp/Makefile
+++ b/drivers/infiniband/ulp/Makefile
@@ -5,3 +5,4 @@ obj-$(CONFIG_INFINIBAND_SRPT)		+= srpt/
 obj-$(CONFIG_INFINIBAND_ISER)		+= iser/
 obj-$(CONFIG_INFINIBAND_ISERT)		+= isert/
 obj-$(CONFIG_INFINIBAND_OPA_VNIC)	+= opa_vnic/
+obj-$(CONFIG_INFINIBAND_IBTRS)		+= ibtrs/
diff --git a/drivers/infiniband/ulp/ibtrs/Kconfig b/drivers/infiniband/ulp/ibtrs/Kconfig
new file mode 100644
index 000000000000..1f30c88783e6
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/Kconfig
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+config INFINIBAND_IBTRS
+	tristate
+	depends on INFINIBAND_ADDR_TRANS
+
+config INFINIBAND_IBTRS_CLIENT
+	tristate "IBTRS client module"
+	depends on INFINIBAND_ADDR_TRANS
+	select INFINIBAND_IBTRS
+	help
+	  IBTRS client allows for simplified data transfer and connection
+	  establishment over RDMA (InfiniBand, RoCE, iWarp). Uses BIO-like
+	  READ/WRITE semantics and provides multipath capabilities.
+
+config INFINIBAND_IBTRS_SERVER
+	tristate "IBTRS server module"
+	depends on INFINIBAND_ADDR_TRANS
+	select INFINIBAND_IBTRS
+	help
+	  IBTRS server module processing connection and IO requests received
+	  from the IBTRS client module.
diff --git a/drivers/infiniband/ulp/ibtrs/Makefile b/drivers/infiniband/ulp/ibtrs/Makefile
new file mode 100644
index 000000000000..d2e6cce8f94f
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/Makefile
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+ibtrs-client-y := ibtrs-clt.o \
+		  ibtrs-clt-stats.o \
+		  ibtrs-clt-sysfs.o
+
+ibtrs-server-y := ibtrs-srv.o \
+		  ibtrs-srv-stats.o \
+		  ibtrs-srv-sysfs.o
+
+ibtrs-core-y := ibtrs.o
+
+obj-$(CONFIG_INFINIBAND_IBTRS)        += ibtrs-core.o
+obj-$(CONFIG_INFINIBAND_IBTRS_CLIENT) += ibtrs-client.o
+obj-$(CONFIG_INFINIBAND_IBTRS_SERVER) += ibtrs-server.o
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 14/25] ibtrs: a bit of documentation
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (12 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 13/25] ibtrs: include client and server modules into kernel compilation Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-06-20 15:03 ` [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers Jack Wang
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

README with description of major sysfs entries.

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/ibtrs/README | 385 ++++++++++++++++++++++++++++
 1 file changed, 385 insertions(+)
 create mode 100644 drivers/infiniband/ulp/ibtrs/README

diff --git a/drivers/infiniband/ulp/ibtrs/README b/drivers/infiniband/ulp/ibtrs/README
new file mode 100644
index 000000000000..86d5cf836097
--- /dev/null
+++ b/drivers/infiniband/ulp/ibtrs/README
@@ -0,0 +1,385 @@
+****************************
+InfiniBand Transport (IBTRS)
+****************************
+
+IBTRS (InfiniBand Transport) is a reliable high speed transport library
+which provides support for establishing an optimal number of connections
+between client and server machines using RDMA (InfiniBand, RoCE, iWarp)
+transport. It is optimized to transfer (read/write) IO blocks.
+
+In its core interface it follows the BIO semantics of providing the
+possibility to either write data from an sg list to the remote side
+or to request ("read") data transfer from the remote side into a given
+sg list.
+
+IBTRS provides I/O fail-over and load-balancing capabilities by using
+multipath I/O (see "add_path" and "mp_policy" configuration entries).
+
+IBTRS is used by the IBNBD (InfiniBand Network Block Device) modules.
+
+======================
+Client Sysfs Interface
+======================
+
+This chapter describes only the most important files of the sysfs interface
+on the client side.
+
+Entries under /sys/devices/virtual/ibtrs-client/
+================================================
+
+When a user of the IBTRS API creates a new session, a directory entry with
+the name of that session is created.
+
+Entries under /sys/devices/virtual/ibtrs-client/<session-name>/
+===============================================================
+
+add_path (RW)
+-------------
+
+Adds a new path (connection) to an existing session. Expected format is the
+following:
+
+  <[source addr,]destination addr>
+
+  *addr ::= [ ip:<ipv4|ipv6> | gid:<gid> ]
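+
+For example, a new path from a hypothetical source address 1.2.3.4 to a
+hypothetical destination address 1.2.3.5 could be added with:
+
+  echo "ip:1.2.3.4,ip:1.2.3.5" > add_path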
+
+max_reconnect_attempts (RW)
+---------------------------
+
+Maximum number of reconnect attempts the client should make before giving up
+after the connection breaks unexpectedly.
+
+mp_policy (RW)
+--------------
+
+Multipath policy specifies which path should be selected on each IO:
+
+   round-robin (0):
+       select a path in a per-CPU round-robin manner.
+
+   min-inflight (1):
+       select the path with the minimum number of inflight IOs.
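+
+For example, the min-inflight policy can be selected with:
+
+  echo 1 > mp_policy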
+
+Entries under /sys/devices/virtual/ibtrs-client/<session-name>/paths/
+=====================================================================
+
+
+Each path belonging to a given session is listed here by its source and
+destination address. When a new path is added to a session by writing to
+the "add_path" entry, a directory <src@dst> is created.
+
+Entries under /sys/devices/virtual/ibtrs-client/<session-name>/paths/<src@dst>/
+===============================================================================
+
+state (R)
+---------
+
+Contains "connected" if the session is connected to the peer and fully
+functional.  Otherwise the file contains "disconnected".
+
+reconnect (RW)
+--------------
+
+Write "1" to the file in order to reconnect the path.
+Operation is blocking and returns 0 if reconnect was successful.
+
+disconnect (RW)
+---------------
+
+Write "1" to the file in order to disconnect the path.
+Operation blocks until IBTRS path is disconnected.
+
+remove_path (RW)
+----------------
+
+Write "1" to the file in order to disconnected and remove the path
+from the session.  Operation blocks until the path is disconnected
+and removed from the session.
+
+hca_name (R)
+------------
+
+Contains the name of the HCA the connection is established on.
+
+hca_port (R)
+------------
+
+Contains the port number of the active port the traffic is going through.
+
+src_addr (R)
+------------
+
+Contains the source address of the path.
+
+dst_addr (R)
+------------
+
+Contains the destination address of the path.
+
+
+Entries under /sys/devices/virtual/ibtrs-client/<session-name>/paths/<src@dst>/stats/
+=====================================================================================
+
+Write "0" to any file in that directory to reset corresponding statistics.
+
+reset_all (RW)
+--------------
+
+Read will return usage help, write 0 will clear all the statistics.
+
+sg_entries (RW)
+---------------
+
+Data to be transferred via RDMA is passed to IBTRS as a scatter-gather
+list. A scatter-gather list can contain multiple entries.
+Scatter-gather lists with fewer entries require less processing power
+and can therefore be transferred faster. The file sg_entries outputs a
+per-CPU distribution table of the number of entries in the
+scatter-gather lists that were passed to the IBTRS API function
+ibtrs_clt_request (READ or WRITE).
+
+cpu_migration (RW)
+------------------
+
+IBTRS expects that each HCA IRQ is pinned to a separate CPU. If that is
+not the case, an I/O response could be processed on a different CPU than
+the one where the request was originally submitted.  This file shows
+how many interrupts were generated on an unexpected CPU.
+"from:" is the CPU on which the IRQ was expected, but not generated.
+"to:" is the CPU on which the IRQ was generated, but not expected.
+
+reconnects (RW)
+---------------
+
+Contains 2 unsigned int values: the first records the number of successful
+reconnects in the path lifetime, the second the number of failed reconnects
+in the path lifetime.
+
+rdma_lat (RW)
+-------------
+
+Latency distribution of IBTRS requests.
+The format is:
+   1 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+   2 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+   4 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+   8 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+  16 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+  ...
+  65536 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+  >= 65536 ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+  maximum ms: <CNT-LAT-READ> <CNT-LAT-WRITE>
+
+wc_completion (RW)
+------------------
+
+Contains 2 unsigned int values: the first records the max number of work
+requests processed in work_completion in the session lifetime, the second
+records the average number of work requests processed in work_completion
+in the session lifetime.
+
+rdma (RW)
+---------
+
+Contains statistics regarding rdma operations and inflight operations.
+The output consists of 6 values:
+
+<read-count> <read-total-size> <write-count> <write-total-size> \
+<inflights> <failovered>
+
+======================
+Server Sysfs Interface
+======================
+
+Entries under /sys/devices/virtual/ibtrs-server/
+================================================
+
+When a user of the IBTRS API creates a new session on the client side, a
+directory entry with the name of that session is created here.
+
+Entries under /sys/devices/virtual/ibtrs-server/<session-name>/paths/
+=====================================================================
+
+When a new path is created by writing to the "add_path" entry on the client
+side, a directory entry named <source address>@<destination address> is
+created on the server.
+
+Entries under /sys/devices/virtual/ibtrs-server/<session-name>/paths/<src@dst>/
+===============================================================================
+
+disconnect (RW)
+---------------
+
+When "1" is written to the file, the IBTRS session is being disconnected.
+Operations is non-blocking and returns control immediately to the caller.
+
+hca_name (R)
+------------
+
+Contains the name of the HCA the connection is established on.
+
+hca_port (R)
+------------
+
+Contains the port number of the active port the traffic is going through.
+
+src_addr (R)
+------------
+
+Contains the source address of the path.
+
+dst_addr (R)
+------------
+
+Contains the destination address of the path.
+
+Entries under /sys/devices/virtual/ibtrs-server/<session-name>/paths/<src@dst>/stats/
+=====================================================================================
+
+When "0" is written to a file in this directory, the corresponding counters
+will be reset.
+
+reset_all (RW)
+--------------
+
+Read will return usage help, write 0 will clear all the statistics
+counters.
+
+rdma (RW)
+---------
+
+Contains statistics regarding rdma operations and inflight operations.
+The output consists of 5 values:
+
+<read-count> <read-total-size> <write-count> <write-total-size> <inflights>
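+
+For example (hypothetical counter values), a path that has served 100 reads
+totalling 409600 bytes and 50 writes totalling 204800 bytes, with 2 IOs
+currently inflight, would show:
+
+  100 409600 50 204800 2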
+
+wc_completion (RW)
+------------------
+
+Contains 3 values: the first one (int) records the max number of work
+requests processed in work_completion in the session lifetime, the second
+one (long int) records the total number of work requests processed in
+work_completion in the session lifetime, and the 3rd one (long int) records
+the total number of calls to the cq completion handler. Dividing the 2nd
+number by the 3rd gives the average number of completions processed in the
+completion handler.
+
+==================
+Transport protocol
+==================
+
+Overview
+--------
+An established connection between a client and a server is called an ibtrs
+session. A session is associated with a set of memory chunks reserved on the
+server side for a given client for rdma transfer. A session
+consists of multiple paths, each representing a separate physical link
+between client and server. Those are used for load balancing and failover.
+Each path consists of as many connections (QPs) as there are cpus on
+the client.
+
+When processing an incoming rdma write or read request the ibtrs client uses
+memory chunks reserved for it on the server side. Their number, size and
+addresses need to be exchanged between client and server during the
+connection establishment phase. Apart from the memory related information the
+client needs to inform the server about the session name and identify each
+path and connection individually.
+
+On an established session the client sends write or read messages to the
+server. The server uses the immediate field to tell the client which request
+is being acknowledged and to report an errno. The client uses the immediate
+field to tell the server which of the memory chunks has been accessed and at
+which offset the message can be found.
+
+Connection establishment
+------------------------
+
+1. Client starts establishing connections belonging to a path of a session one
+by one via attaching IBTRS_MSG_CON_REQ messages to the rdma_connect requests.
+Those include uuid of the session and uuid of the path to be
+established. They are used by the server to find a persisting session/path or
+to create a new one when necessary. The message also contains the protocol
+version and magic for compatibility, total number of connections per session
+(as many as cpus on the client), the id of the current connection and
+the reconnect counter, which is used to resolve the situations where
+client is trying to reconnect a path, while server is still destroying the old
+one.
+
+2. Server accepts the connection requests one by one and attaches
+IBTRS_MSG_CONN_RSP messages to the rdma_accept. Apart from magic and
+protocol version, the messages include error code, queue depth supported by
+the server (number of memory chunks which are going to be allocated for that
+session) and the maximum size of one io.
+
+3. After all connections of a path are established client sends to server the
+IBTRS_MSG_INFO_REQ message, containing the name of the session. This message
+requests the address information from the server.
+
+4. Server replies to the session info request message with IBTRS_MSG_INFO_RSP,
+which contains the addresses and keys of the RDMA buffers allocated for that
+session.
+
+5. Session becomes connected after all paths to be established are connected
+(i.e. steps 1-4 finished for all paths requested for a session)
+
+6. Server and client exchange periodically heartbeat messages (empty rdma
+messages with an immediate field) which are used to detect a crash on remote
+side or network outage in an absence of IO.
+
+7. On any RDMA related error or in the case of a heartbeat timeout, the
+corresponding path is disconnected, all the inflight IO are failed over to a
+healthy path, if any, and the reconnect mechanism is triggered.
+
+CLT                                     SRV
+*for each connection belonging to a path and for each path:
+IBTRS_MSG_CON_REQ  ------------------->
+                   <------------------- IBTRS_MSG_CON_RSP
+...
+*after all connections are established:
+IBTRS_MSG_INFO_REQ ------------------->
+                   <------------------- IBTRS_MSG_INFO_RSP
+*heartbeat is started from both sides:
+                   -------------------> [IBTRS_HB_MSG_IMM]
+[IBTRS_HB_MSG_ACK] <-------------------
+[IBTRS_HB_MSG_IMM] <-------------------
+                   -------------------> [IBTRS_HB_MSG_ACK]
+
+IO path
+-------
+
+* Write *
+
+1. When processing a write request client selects one of the memory chunks
+on the server side and rdma writes there the user data, user header and the
+IBTRS_MSG_RDMA_WRITE message. Apart from the type (write), the message only
+contains size of the user header. The client tells the server which chunk has
+been accessed and at what offset the IBTRS_MSG_RDMA_WRITE can be found by
+using the IMM field.
+
+2. When confirming a write request server sends an "empty" rdma message with
+an immediate field. The 32 bit field is used to specify the outstanding
+inflight IO and the error code.
+
+CLT                                                          SRV
+usr_data + usr_hdr + ibtrs_msg_rdma_write -----------------> [IBTRS_IO_REQ_IMM]
+[IBTRS_IO_RSP_IMM]                        <----------------- (id + errno)
+
+* Read *
+
+1. When processing a read request client selects one of the memory chunks
+on the server side and rdma writes there the user header and the
+IBTRS_MSG_RDMA_READ message. This message contains the type (read), size of
+the user header, flags (specifying if memory invalidation is necessary) and the
+list of addresses along with keys for the data to be read into.
+
+2. When confirming a read request server transfers the requested data first,
+attaches an invalidation message if requested and finally an "empty" rdma
+message with an immediate field. The 32 bit field is used to specify the
+outstanding inflight IO and the error code.
+
+CLT                                           SRV
+usr_hdr + ibtrs_msg_rdma_read --------------> [IBTRS_IO_REQ_IMM]
+[IBTRS_IO_RSP_IMM]            <-------------- usr_data + (id + errno)
+or in case client requested invalidation:
+[IBTRS_IO_RSP_IMM_W_INV]      <-------------- usr_data + (INV) + (id + errno)
+
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (13 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 14/25] ibtrs: a bit of documentation Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-13 22:10   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 16/25] ibnbd: client: private header with client structs and functions Jack Wang
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

These are common private headers with IBNBD protocol structures,
logging, sysfs and other helper functions, which are used on
both client and server sides.
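
As a minimal usage sketch (hypothetical snippet, not part of this patch),
this is one way the op/flag helpers defined in ibnbd-proto.h can be
combined:

	u32 flags = IBNBD_OP_WRITE | IBNBD_F_FUA;

	if (ibnbd_flags_supported(flags)) {
		u32 op  = ibnbd_op(flags);           /* IBNBD_OP_WRITE */
		u32 mod = ibnbd_flags(flags);        /* IBNBD_F_FUA */
		u32 bio = ibnbd_to_bio_flags(flags); /* REQ_OP_WRITE | REQ_FUA */
	}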

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-log.h   |  59 +++++
 drivers/block/ibnbd/ibnbd-proto.h | 378 ++++++++++++++++++++++++++++++
 2 files changed, 437 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-log.h
 create mode 100644 drivers/block/ibnbd/ibnbd-proto.h

diff --git a/drivers/block/ibnbd/ibnbd-log.h b/drivers/block/ibnbd/ibnbd-log.h
new file mode 100644
index 000000000000..7a7ac3908564
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-log.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBNBD_LOG_H
+#define IBNBD_LOG_H
+
+#include "ibnbd-clt.h"
+#include "ibnbd-srv.h"
+
+void unknown_type(void);
+
+#define ibnbd_log(fn, dev, fmt, ...) ({				\
+	__builtin_choose_expr(						\
+		__builtin_types_compatible_p(				\
+			typeof(dev), struct ibnbd_clt_dev *),		\
+		fn("<%s@%s> " fmt, (dev)->pathname,			\
+		(dev)->sess->sessname,					\
+		   ##__VA_ARGS__),					\
+		__builtin_choose_expr(					\
+			__builtin_types_compatible_p(typeof(dev),	\
+					struct ibnbd_srv_sess_dev *),	\
+			fn("<%s@%s>: " fmt, (dev)->pathname,		\
+			   (dev)->sess->sessname, ##__VA_ARGS__),	\
+			unknown_type()));				\
+})
+
+#define ibnbd_err(dev, fmt, ...)	\
+	ibnbd_log(pr_err, dev, fmt, ##__VA_ARGS__)
+#define ibnbd_err_rl(dev, fmt, ...)	\
+	ibnbd_log(pr_err_ratelimited, dev, fmt, ##__VA_ARGS__)
+#define ibnbd_wrn(dev, fmt, ...)	\
+	ibnbd_log(pr_warn, dev, fmt, ##__VA_ARGS__)
+#define ibnbd_wrn_rl(dev, fmt, ...) \
+	ibnbd_log(pr_warn_ratelimited, dev, fmt, ##__VA_ARGS__)
+#define ibnbd_info(dev, fmt, ...) \
+	ibnbd_log(pr_info, dev, fmt, ##__VA_ARGS__)
+#define ibnbd_info_rl(dev, fmt, ...) \
+	ibnbd_log(pr_info_ratelimited, dev, fmt, ##__VA_ARGS__)
+
+#endif /* IBNBD_LOG_H */
diff --git a/drivers/block/ibnbd/ibnbd-proto.h b/drivers/block/ibnbd/ibnbd-proto.h
new file mode 100644
index 000000000000..e5a0a539447b
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-proto.h
@@ -0,0 +1,378 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBNBD_PROTO_H
+#define IBNBD_PROTO_H
+
+#include <linux/types.h>
+#include <linux/blkdev.h>
+#include <linux/limits.h>
+#include <linux/inet.h>
+#include <linux/in.h>
+#include <linux/in6.h>
+#include <rdma/ib.h>
+
+#define IBNBD_PROTO_VER_MAJOR 2
+#define IBNBD_PROTO_VER_MINOR 0
+
+#define IBNBD_PROTO_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
+			       __stringify(IBNBD_PROTO_VER_MINOR)
+
+#ifndef IBNBD_VER_STRING
+#define IBNBD_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
+			 __stringify(IBNBD_PROTO_VER_MINOR)
+#endif
+
+/* TODO: should be configurable */
+#define IBTRS_PORT 1234
+
+/**
+ * enum ibnbd_msg_types - IBNBD message types
+ * @IBNBD_MSG_SESS_INFO:	initial session info from client to server
+ * @IBNBD_MSG_SESS_INFO_RSP:	initial session info from server to client
+ * @IBNBD_MSG_OPEN:		open (map) device request
+ * @IBNBD_MSG_OPEN_RSP:		response to an @IBNBD_MSG_OPEN
+ * @IBNBD_MSG_IO:		block IO request operation
+ * @IBNBD_MSG_CLOSE:		close (unmap) device request
+ */
+enum ibnbd_msg_type {
+	IBNBD_MSG_SESS_INFO,
+	IBNBD_MSG_SESS_INFO_RSP,
+	IBNBD_MSG_OPEN,
+	IBNBD_MSG_OPEN_RSP,
+	IBNBD_MSG_IO,
+	IBNBD_MSG_CLOSE,
+};
+
+/**
+ * struct ibnbd_msg_hdr - header of IBNBD messages
+ * @type:	Message type, valid values see: enum ibnbd_msg_types
+ */
+struct ibnbd_msg_hdr {
+	__le16		type;
+	__le16		__padding;
+};
+
+enum ibnbd_access_mode {
+	IBNBD_ACCESS_RO,
+	IBNBD_ACCESS_RW,
+	IBNBD_ACCESS_MIGRATION,
+};
+
+#define _IBNBD_FILEIO  0
+#define _IBNBD_BLOCKIO 1
+#define _IBNBD_AUTOIO  2
+
+enum ibnbd_io_mode {
+	IBNBD_FILEIO = _IBNBD_FILEIO,
+	IBNBD_BLOCKIO = _IBNBD_BLOCKIO,
+	IBNBD_AUTOIO = _IBNBD_AUTOIO,
+};
+
+/**
+ * struct ibnbd_msg_sess_info - initial session info from client to server
+ * @hdr:		message header
+ * @ver:		IBNBD protocol version
+ */
+struct ibnbd_msg_sess_info {
+	struct ibnbd_msg_hdr hdr;
+	u8		ver;
+	u8		reserved[31];
+};
+
+/**
+ * struct ibnbd_msg_sess_info_rsp - initial session info from server to client
+ * @hdr:		message header
+ * @ver:		IBNBD protocol version
+ */
+struct ibnbd_msg_sess_info_rsp {
+	struct ibnbd_msg_hdr hdr;
+	u8		ver;
+	u8		reserved[31];
+};
+
+/**
+ * struct ibnbd_msg_open - request to open a remote device.
+ * @hdr:		message header
+ * @access_mode:	the mode to open remote device, valid values see:
+ *			enum ibnbd_access_mode
+ * @io_mode:		Open volume on server as block device or as file
+ * @dev_name:		device path on remote side
+ */
+struct ibnbd_msg_open {
+	struct ibnbd_msg_hdr hdr;
+	u8		access_mode;
+	u8		io_mode;
+	s8		dev_name[NAME_MAX];
+	u8		__padding[3];
+};
+
+/**
+ * struct ibnbd_msg_close - request to close a remote device.
+ * @hdr:	message header
+ * @device_id:	device_id on server side to identify the device
+ */
+struct ibnbd_msg_close {
+	struct ibnbd_msg_hdr hdr;
+	__le32		device_id;
+};
+
+/**
+ * struct ibnbd_msg_open_rsp - response message to IBNBD_MSG_OPEN
+ * @hdr:		message header
+ * @device_id:		device_id on server side to identify the device
+ * @nsectors:		number of sectors
+ * @max_hw_sectors:	max hardware sectors in the usual 512b unit
+ * @max_write_same_sectors: max sectors for WRITE SAME in the 512b unit
+ * @max_discard_sectors: max. sectors that can be discarded at once
+ * @discard_granularity: size of the internal discard allocation unit
+ * @discard_alignment: offset from internal allocation assignment
+ * @physical_block_size: physical block size the device supports
+ * @logical_block_size: logical block size the device supports
+ * @max_segments:	max segments the hardware supports in one transfer
+ * @secure_discard:	supports secure discard
+ * @rotational:		is a rotational disc?
+ * @io_mode:		io_mode the device is opened with
+ */
+struct ibnbd_msg_open_rsp {
+	struct ibnbd_msg_hdr	hdr;
+	__le32			device_id;
+	__le64			nsectors;
+	__le32			max_hw_sectors;
+	__le32			max_write_same_sectors;
+	__le32			max_discard_sectors;
+	__le32			discard_granularity;
+	__le32			discard_alignment;
+	__le16			physical_block_size;
+	__le16			logical_block_size;
+	__le16			max_segments;
+	__le16			secure_discard;
+	u8			rotational;
+	u8			io_mode;
+	u8			__padding[10];
+};
+
+/**
+ * struct ibnbd_msg_io_old - message for I/O read/write for
+ *			     ver < IBNBD_PROTO_VER_MAJOR
+ * This structure is there only to know the size of the "old" message format
+ * @hdr:	message header
+ * @device_id:	device_id on server side to find the right device
+ * @sector:	bi_sector attribute from struct bio
+ * @rw:		bitmask, valid values are defined in enum ibnbd_io_flags
+ * @bi_size:    number of bytes for I/O read/write
+ */
+struct ibnbd_msg_io_old {
+	struct ibnbd_msg_hdr hdr;
+	__le32		device_id;
+	__le64		sector;
+	__le32		rw;
+	__le32		bi_size;
+};
+
+/**
+ * struct ibnbd_msg_io - message for I/O read/write
+ * @hdr:	message header
+ * @device_id:	device_id on server side to find the right device
+ * @sector:	bi_sector attribute from struct bio
+ * @rw:		bitmask, valid values are defined in enum ibnbd_io_flags
+ * @bi_size:    number of bytes for I/O read/write
+ * @prio:       priority
+ */
+struct ibnbd_msg_io {
+	struct ibnbd_msg_hdr hdr;
+	__le32		device_id;
+	__le64		sector;
+	__le32		rw;
+	__le32		bi_size;
+	__le16		prio;
+};
+
+#define IBNBD_OP_BITS  8
+#define IBNBD_OP_MASK  ((1 << IBNBD_OP_BITS) - 1)
+
+/**
+ * enum ibnbd_io_flags - IBNBD request types from rq_flag_bits
+ * @IBNBD_OP_READ:	     read sectors from the device
+ * @IBNBD_OP_WRITE:	     write sectors to the device
+ * @IBNBD_OP_FLUSH:	     flush the volatile write cache
+ * @IBNBD_OP_DISCARD:        discard sectors
+ * @IBNBD_OP_SECURE_ERASE:   securely erase sectors
+ * @IBNBD_OP_WRITE_SAME:     write the same sectors many times
+ *
+ * @IBNBD_F_SYNC:	     request is sync (sync write or read)
+ * @IBNBD_F_FUA:             forced unit access
+ */
+enum ibnbd_io_flags {
+
+	/* Operations */
+
+	IBNBD_OP_READ		= 0,
+	IBNBD_OP_WRITE		= 1,
+	IBNBD_OP_FLUSH		= 2,
+	IBNBD_OP_DISCARD	= 3,
+	IBNBD_OP_SECURE_ERASE	= 4,
+	IBNBD_OP_WRITE_SAME	= 5,
+
+	IBNBD_OP_LAST,
+
+	/* Flags */
+
+	IBNBD_F_SYNC  = 1<<(IBNBD_OP_BITS + 0),
+	IBNBD_F_FUA   = 1<<(IBNBD_OP_BITS + 1),
+
+	IBNBD_F_ALL   = (IBNBD_F_SYNC | IBNBD_F_FUA)
+
+};
+
+static inline u32 ibnbd_op(u32 flags)
+{
+	return (flags & IBNBD_OP_MASK);
+}
+
+static inline u32 ibnbd_flags(u32 flags)
+{
+	return (flags & ~IBNBD_OP_MASK);
+}
+
+static inline bool ibnbd_flags_supported(u32 flags)
+{
+	u32 op;
+
+	op = ibnbd_op(flags);
+	flags = ibnbd_flags(flags);
+
+	if (op >= IBNBD_OP_LAST)
+		return false;
+	if (flags & ~IBNBD_F_ALL)
+		return false;
+
+	return true;
+}
+
+static inline u32 ibnbd_to_bio_flags(u32 ibnbd_flags)
+{
+	u32 bio_flags;
+
+	switch (ibnbd_op(ibnbd_flags)) {
+	case IBNBD_OP_READ:
+		bio_flags = REQ_OP_READ;
+		break;
+	case IBNBD_OP_WRITE:
+		bio_flags = REQ_OP_WRITE;
+		break;
+	case IBNBD_OP_FLUSH:
+		bio_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
+		break;
+	case IBNBD_OP_DISCARD:
+		bio_flags = REQ_OP_DISCARD;
+		break;
+	case IBNBD_OP_SECURE_ERASE:
+		bio_flags = REQ_OP_SECURE_ERASE;
+		break;
+	case IBNBD_OP_WRITE_SAME:
+		bio_flags = REQ_OP_WRITE_SAME;
+		break;
+	default:
+		WARN(1, "Unknown IBNBD type: %d (flags %d)\n",
+		     ibnbd_op(ibnbd_flags), ibnbd_flags);
+		bio_flags = 0;
+	}
+
+	if (ibnbd_flags & IBNBD_F_SYNC)
+		bio_flags |= REQ_SYNC;
+
+	if (ibnbd_flags & IBNBD_F_FUA)
+		bio_flags |= REQ_FUA;
+
+	return bio_flags;
+}
+
+static inline u32 rq_to_ibnbd_flags(struct request *rq)
+{
+	u32 ibnbd_flags;
+
+	switch (req_op(rq)) {
+	case REQ_OP_READ:
+		ibnbd_flags = IBNBD_OP_READ;
+		break;
+	case REQ_OP_WRITE:
+		ibnbd_flags = IBNBD_OP_WRITE;
+		break;
+	case REQ_OP_DISCARD:
+		ibnbd_flags = IBNBD_OP_DISCARD;
+		break;
+	case REQ_OP_SECURE_ERASE:
+		ibnbd_flags = IBNBD_OP_SECURE_ERASE;
+		break;
+	case REQ_OP_WRITE_SAME:
+		ibnbd_flags = IBNBD_OP_WRITE_SAME;
+		break;
+	case REQ_OP_FLUSH:
+		ibnbd_flags = IBNBD_OP_FLUSH;
+		break;
+	default:
+		WARN(1, "Unknown request type %d (flags %llu)\n",
+		     req_op(rq), (unsigned long long)rq->cmd_flags);
+		ibnbd_flags = 0;
+	}
+
+	if (op_is_sync(rq->cmd_flags))
+		ibnbd_flags |= IBNBD_F_SYNC;
+
+	if (op_is_flush(rq->cmd_flags))
+		ibnbd_flags |= IBNBD_F_FUA;
+
+	return ibnbd_flags;
+}
+
+static inline const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
+{
+	switch (mode) {
+	case IBNBD_FILEIO:
+		return "fileio";
+	case IBNBD_BLOCKIO:
+		return "blockio";
+	case IBNBD_AUTOIO:
+		return "autoio";
+	default:
+		return "unknown";
+	}
+}
+
+static inline const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
+{
+	switch (mode) {
+	case IBNBD_ACCESS_RO:
+		return "ro";
+	case IBNBD_ACCESS_RW:
+		return "rw";
+	case IBNBD_ACCESS_MIGRATION:
+		return "migration";
+	default:
+		return "unknown";
+	}
+}
+
+#endif /* IBNBD_PROTO_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 16/25] ibnbd: client: private header with client structs and functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (14 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-13 22:25   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 17/25] ibnbd: client: main functionality Jack Wang
                   ` (9 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This header declares the main structs and functions used by the ibnbd-client
module, mainly for managing IBNBD sessions and mapped block devices, and for
creating and destroying sysfs entries.
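
For illustration, a minimal sketch of a caller of the mapping API declared
here is shown below; the session name, the server-side device path and the
origin of the paths array are assumptions, and error handling is abbreviated:

  /* Hypothetical caller, not part of this patch. */
  static int example_map_unmap(struct ibtrs_addr *paths, size_t path_cnt)
  {
          struct ibnbd_clt_dev *dev;

          dev = ibnbd_clt_map_device("mysession", paths, path_cnt,
                                     "/dev/ram0", IBNBD_ACCESS_RW,
                                     IBNBD_BLOCKIO);
          if (IS_ERR(dev))
                  return PTR_ERR(dev);

          /* The device is now reachable as a local /dev/ibnbd<N>. */

          /* force == false: unmapping fails if the device is still open. */
          return ibnbd_clt_unmap_device(dev, false, NULL);
  }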

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-clt.h | 166 ++++++++++++++++++++++++++++++++
 1 file changed, 166 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-clt.h

diff --git a/drivers/block/ibnbd/ibnbd-clt.h b/drivers/block/ibnbd/ibnbd-clt.h
new file mode 100644
index 000000000000..005becfb110f
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-clt.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBNBD_CLT_H
+#define IBNBD_CLT_H
+
+#include <linux/wait.h>
+#include <linux/in.h>
+#include <linux/inet.h>
+#include <linux/blk-mq.h>
+#include <linux/refcount.h>
+
+#include "ibtrs.h"
+#include "ibnbd-proto.h"
+#include "ibnbd-log.h"
+
+#define BMAX_SEGMENTS 29
+#define RECONNECT_DELAY 30
+#define MAX_RECONNECTS -1
+
+enum ibnbd_clt_dev_state {
+	DEV_STATE_INIT,
+	DEV_STATE_MAPPED,
+	DEV_STATE_MAPPED_DISCONNECTED,
+	DEV_STATE_UNMAPPED,
+};
+
+struct ibnbd_iu_comp {
+	wait_queue_head_t wait;
+	int errno;
+};
+
+struct ibnbd_iu {
+	union {
+		struct request *rq; /* for block io */
+		void *buf; /* for user messages */
+	};
+	struct ibtrs_tag	*tag;
+	union {
+		/* use to send msg associated with a dev */
+		struct ibnbd_clt_dev *dev;
+		/* use to send msg associated with a sess */
+		struct ibnbd_clt_session *sess;
+	};
+	blk_status_t		status;
+	struct scatterlist	sglist[BMAX_SEGMENTS];
+	struct work_struct	work;
+	int			errno;
+	struct ibnbd_iu_comp	comp;
+	atomic_t		refcount;
+};
+
+struct ibnbd_cpu_qlist {
+	struct list_head	requeue_list;
+	spinlock_t		requeue_lock;
+	unsigned int		cpu;
+};
+
+struct ibnbd_clt_session {
+	struct list_head        list;
+	struct ibtrs_clt        *ibtrs;
+	wait_queue_head_t       ibtrs_waitq;
+	bool                    ibtrs_ready;
+	struct ibnbd_cpu_qlist	__percpu
+				*cpu_queues;
+	DECLARE_BITMAP(cpu_queues_bm, NR_CPUS);
+	int	__percpu	*cpu_rr; /* per-cpu var for CPU round-robin */
+	atomic_t		busy;
+	int			queue_depth;
+	u32			max_io_size;
+	struct blk_mq_tag_set	tag_set;
+	struct mutex		lock; /* protects state and devs_list */
+	struct list_head        devs_list; /* list of struct ibnbd_clt_dev */
+	refcount_t		refcount;
+	char			sessname[NAME_MAX];
+	u8			ver; /* protocol version */
+};
+
+/*
+ * Submission queues.
+ */
+struct ibnbd_queue {
+	struct list_head	requeue_list;
+	unsigned long		in_list;
+	struct ibnbd_clt_dev	*dev;
+	struct blk_mq_hw_ctx	*hctx;
+};
+
+struct ibnbd_clt_dev {
+	struct ibnbd_clt_session	*sess;
+	struct request_queue	*queue;
+	struct ibnbd_queue	*hw_queues;
+	u32			device_id;
+	/* local IDA index - used to track minor number allocations. */
+	u32			clt_device_id;
+	struct mutex		lock;
+	enum ibnbd_clt_dev_state	dev_state;
+	enum ibnbd_io_mode	io_mode; /* user requested */
+	enum ibnbd_io_mode	remote_io_mode; /* server really used */
+	char			pathname[NAME_MAX];
+	enum ibnbd_access_mode	access_mode;
+	bool			read_only;
+	bool			rotational;
+	u32			max_hw_sectors;
+	u32			max_write_same_sectors;
+	u32			max_discard_sectors;
+	u32			discard_granularity;
+	u32			discard_alignment;
+	u16			secure_discard;
+	u16			physical_block_size;
+	u16			logical_block_size;
+	u16			max_segments;
+	size_t			nsectors;
+	u64			size;		/* device size in bytes */
+	struct list_head        list;
+	struct gendisk		*gd;
+	struct kobject		kobj;
+	char			blk_symlink_name[NAME_MAX];
+	refcount_t		refcount;
+	struct work_struct	unmap_on_rmmod_work;
+};
+
+/* ibnbd-clt.c */
+
+struct ibnbd_clt_dev *ibnbd_clt_map_device(const char *sessname,
+					   struct ibtrs_addr *paths,
+					   size_t path_cnt,
+					   const char *pathname,
+					   enum ibnbd_access_mode access_mode,
+					   enum ibnbd_io_mode io_mode);
+int ibnbd_clt_unmap_device(struct ibnbd_clt_dev *dev, bool force,
+			   const struct attribute *sysfs_self);
+
+int ibnbd_clt_remap_device(struct ibnbd_clt_dev *dev);
+int ibnbd_clt_resize_disk(struct ibnbd_clt_dev *dev, size_t newsize);
+
+/* ibnbd-clt-sysfs.c */
+
+int ibnbd_clt_create_sysfs_files(void);
+
+void ibnbd_clt_destroy_sysfs_files(void);
+void ibnbd_clt_destroy_default_group(void);
+
+void ibnbd_clt_remove_dev_symlink(struct ibnbd_clt_dev *dev);
+
+#endif /* IBNBD_CLT_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 17/25] ibnbd: client: main functionality
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (15 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 16/25] ibnbd: client: private header with client structs and functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-13 23:46   ` Bart Van Assche
  2019-09-14  0:00   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 18/25] ibnbd: client: sysfs interface functions Jack Wang
                   ` (8 subsequent siblings)
  25 siblings, 2 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the main functionality of the ibnbd-client module: it provides an
interface to map a remote device as a local block device /dev/ibnbd<N>
and feeds IBTRS with IO requests.
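
The request fast path below boils down to roughly the following simplified
sketch (condensed from ibnbd_queue_rq(); the real code additionally handles
requeueing, discard mapping and the optional softirq completion mode):

  static blk_status_t sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
                                      const struct blk_mq_queue_data *bd)
  {
          struct request *rq = bd->rq;
          struct ibnbd_clt_dev *dev = rq->rq_disk->private_data;
          struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);

          /* Reserve an IBTRS tag; if none are left, back off and retry. */
          iu->tag = ibnbd_get_tag(dev->sess, IBTRS_IO_CON, IBTRS_TAG_NOWAIT);
          if (!iu->tag)
                  return BLK_STS_RESOURCE;

          blk_mq_start_request(rq);
          /* Translate rq into an ibnbd_msg_io and hand it to IBTRS. */
          if (ibnbd_client_xfer_request(dev, rq, iu)) {
                  ibnbd_put_tag(dev->sess, iu->tag);
                  return BLK_STS_IOERR;
          }
          /* msg_io_conf() completes rq and releases the tag later. */
          return BLK_STS_OK;
  }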

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-clt.c | 1832 +++++++++++++++++++++++++++++++
 1 file changed, 1832 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-clt.c

diff --git a/drivers/block/ibnbd/ibnbd-clt.c b/drivers/block/ibnbd/ibnbd-clt.c
new file mode 100644
index 000000000000..b5675fac0a60
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-clt.c
@@ -0,0 +1,1832 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <linux/module.h>
+#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+#include <linux/scatterlist.h>
+#include <linux/idr.h>
+
+#include "ibnbd-clt.h"
+
+MODULE_AUTHOR("ibnbd@profitbricks.com");
+MODULE_DESCRIPTION("InfiniBand Network Block Device Client");
+MODULE_VERSION(IBNBD_VER_STRING);
+MODULE_LICENSE("GPL");
+
+/*
+ * This is for closing devices when unloading the module:
+ * we might be closing a lot (>256) of devices in parallel
+ * and it is better not to use the system_wq.
+ */
+static struct workqueue_struct *unload_wq;
+static int ibnbd_client_major;
+static DEFINE_IDA(index_ida);
+static DEFINE_MUTEX(ida_lock);
+static DEFINE_MUTEX(sess_lock);
+static LIST_HEAD(sess_list);
+
+static bool softirq_enable;
+module_param(softirq_enable, bool, 0444);
+MODULE_PARM_DESC(softirq_enable, "finish request in softirq_fn."
+		 " (default: 0)");
+/*
+ * Maximum number of partitions an instance can have.
+ * 6 bits = 64 minors = 63 partitions (one minor is used for the device itself)
+ */
+#define IBNBD_PART_BITS		6
+#define KERNEL_SECTOR_SIZE      512
+
+static inline bool ibnbd_clt_get_sess(struct ibnbd_clt_session *sess)
+{
+	return refcount_inc_not_zero(&sess->refcount);
+}
+
+static void free_sess(struct ibnbd_clt_session *sess);
+
+static void ibnbd_clt_put_sess(struct ibnbd_clt_session *sess)
+{
+	might_sleep();
+
+	if (refcount_dec_and_test(&sess->refcount))
+		free_sess(sess);
+}
+
+static inline bool ibnbd_clt_dev_is_mapped(struct ibnbd_clt_dev *dev)
+{
+	return dev->dev_state == DEV_STATE_MAPPED;
+}
+
+static void ibnbd_clt_put_dev(struct ibnbd_clt_dev *dev)
+{
+	might_sleep();
+
+	if (refcount_dec_and_test(&dev->refcount)) {
+		mutex_lock(&ida_lock);
+		ida_simple_remove(&index_ida, dev->clt_device_id);
+		mutex_unlock(&ida_lock);
+		kfree(dev->hw_queues);
+		ibnbd_clt_put_sess(dev->sess);
+		kfree(dev);
+	}
+}
+
+static inline bool ibnbd_clt_get_dev(struct ibnbd_clt_dev *dev)
+{
+	return refcount_inc_not_zero(&dev->refcount);
+}
+
+static int ibnbd_clt_set_dev_attr(struct ibnbd_clt_dev *dev,
+				  const struct ibnbd_msg_open_rsp *rsp)
+{
+	struct ibnbd_clt_session *sess = dev->sess;
+
+	if (unlikely(!rsp->logical_block_size))
+		return -EINVAL;
+
+	dev->device_id		    = le32_to_cpu(rsp->device_id);
+	dev->nsectors		    = le64_to_cpu(rsp->nsectors);
+	dev->logical_block_size	    = le16_to_cpu(rsp->logical_block_size);
+	dev->physical_block_size    = le16_to_cpu(rsp->physical_block_size);
+	dev->max_write_same_sectors = le32_to_cpu(rsp->max_write_same_sectors);
+	dev->max_discard_sectors    = le32_to_cpu(rsp->max_discard_sectors);
+	dev->discard_granularity    = le32_to_cpu(rsp->discard_granularity);
+	dev->discard_alignment	    = le32_to_cpu(rsp->discard_alignment);
+	dev->secure_discard	    = le16_to_cpu(rsp->secure_discard);
+	dev->rotational		    = rsp->rotational;
+	dev->remote_io_mode	    = rsp->io_mode;
+
+	dev->max_hw_sectors = sess->max_io_size / dev->logical_block_size;
+	dev->max_segments = BMAX_SEGMENTS;
+
+	if (dev->remote_io_mode == IBNBD_BLOCKIO) {
+		dev->max_hw_sectors = min_t(u32, dev->max_hw_sectors,
+					    le32_to_cpu(rsp->max_hw_sectors));
+		dev->max_segments = min_t(u16, dev->max_segments,
+					  le16_to_cpu(rsp->max_segments));
+	}
+
+	return 0;
+}
+
+static int ibnbd_clt_revalidate_disk(struct ibnbd_clt_dev *dev,
+				     size_t new_nsectors)
+{
+	int err = 0;
+
+	ibnbd_info(dev, "Device size changed from %zu to %zu sectors\n",
+		   dev->nsectors, new_nsectors);
+	dev->nsectors = new_nsectors;
+	set_capacity(dev->gd,
+		     dev->nsectors * (dev->logical_block_size /
+				      KERNEL_SECTOR_SIZE));
+	err = revalidate_disk(dev->gd);
+	if (err)
+		ibnbd_err(dev, "Failed to change device size from"
+			  " %zu to %zu, err: %d\n", dev->nsectors,
+			  new_nsectors, err);
+	return err;
+}
+
+static int process_msg_open_rsp(struct ibnbd_clt_dev *dev,
+				struct ibnbd_msg_open_rsp *rsp)
+{
+	int err = 0;
+
+	mutex_lock(&dev->lock);
+	if (dev->dev_state == DEV_STATE_UNMAPPED) {
+		ibnbd_info(dev, "Ignoring Open-Response message from server for "
+			   "unmapped device\n");
+		err = -ENOENT;
+		goto out;
+	}
+	if (dev->dev_state == DEV_STATE_MAPPED_DISCONNECTED) {
+		u64 nsectors = le64_to_cpu(rsp->nsectors);
+
+		/*
+		 * If the device was remapped and the size changed in the
+		 * meantime we need to revalidate it
+		 */
+		if (dev->nsectors != nsectors)
+			ibnbd_clt_revalidate_disk(dev, nsectors);
+		ibnbd_info(dev, "Device online, device remapped successfully\n");
+	}
+	err = ibnbd_clt_set_dev_attr(dev, rsp);
+	if (unlikely(err))
+		goto out;
+	dev->dev_state = DEV_STATE_MAPPED;
+
+out:
+	mutex_unlock(&dev->lock);
+
+	return err;
+}
+
+int ibnbd_clt_resize_disk(struct ibnbd_clt_dev *dev, size_t newsize)
+{
+	int ret = 0;
+
+	mutex_lock(&dev->lock);
+	if (dev->dev_state != DEV_STATE_MAPPED) {
+		pr_err("Failed to set new size of the device, "
+		       "device is not opened\n");
+		ret = -ENOENT;
+		goto out;
+	}
+	ret = ibnbd_clt_revalidate_disk(dev, newsize);
+
+out:
+	mutex_unlock(&dev->lock);
+
+	return ret;
+}
+
+static inline void ibnbd_clt_dev_requeue(struct ibnbd_queue *q)
+{
+	if (WARN_ON(!q->hctx))
+		return;
+
+	/* We can come here from interrupt, thus async=true */
+	blk_mq_run_hw_queue(q->hctx, true);
+}
+
+enum {
+	IBNBD_DELAY_10ms   = 10,
+	IBNBD_DELAY_IFBUSY = -1,
+};
+
+/**
+ * ibnbd_get_cpu_qlist() - finds a list with HW queues to be requeued
+ *
+ * Description:
+ *     Each CPU has a list of HW queues, which need to be requeued.  If a list
+ *     is not empty, it is marked with a bit.  This function finds the first
+ *     set bit in the bitmap and returns the corresponding CPU list.
+ */
+static struct ibnbd_cpu_qlist *
+ibnbd_get_cpu_qlist(struct ibnbd_clt_session *sess, int cpu)
+{
+	int bit;
+
+	/* First half */
+	bit = find_next_bit(sess->cpu_queues_bm, nr_cpu_ids, cpu);
+	if (bit < nr_cpu_ids) {
+		return per_cpu_ptr(sess->cpu_queues, bit);
+	} else if (cpu != 0) {
+		/* Second half */
+		bit = find_next_bit(sess->cpu_queues_bm, cpu, 0);
+		if (bit < cpu)
+			return per_cpu_ptr(sess->cpu_queues, bit);
+	}
+
+	return NULL;
+}
+
+static inline int nxt_cpu(int cpu)
+{
+	return (cpu + 1) % nr_cpu_ids;
+}
+
+/**
+ * ibnbd_requeue_if_needed() - requeue if CPU queue is marked as non empty
+ *
+ * Description:
+ *     Each CPU has its own list of HW queues, which should be requeued.
+ *     The function finds such a list, takes the list lock, picks up
+ *     the first HW queue out of the list and requeues it.
+ *
+ * Return:
+ *     True if the queue was requeued, false otherwise.
+ *
+ * Context:
+ *     Does not matter.
+ */
+static inline bool ibnbd_requeue_if_needed(struct ibnbd_clt_session *sess)
+{
+	struct ibnbd_queue *q = NULL;
+	struct ibnbd_cpu_qlist *cpu_q;
+	unsigned long flags;
+	int *cpup;
+
+	/*
+	 * To keep fairness and not to let other queues starve we always
+	 * try to wake up someone else in round-robin manner.  That of course
+	 * increases latency but queues always have a chance to be executed.
+	 */
+	cpup = get_cpu_ptr(sess->cpu_rr);
+	for (cpu_q = ibnbd_get_cpu_qlist(sess, nxt_cpu(*cpup)); cpu_q;
+	     cpu_q = ibnbd_get_cpu_qlist(sess, nxt_cpu(cpu_q->cpu))) {
+		if (!spin_trylock_irqsave(&cpu_q->requeue_lock, flags))
+			continue;
+		if (likely(test_bit(cpu_q->cpu, sess->cpu_queues_bm))) {
+			q = list_first_entry_or_null(&cpu_q->requeue_list,
+						     typeof(*q), requeue_list);
+			if (WARN_ON(!q))
+				goto clear_bit;
+			list_del_init(&q->requeue_list);
+			clear_bit_unlock(0, &q->in_list);
+
+			if (list_empty(&cpu_q->requeue_list)) {
+				/* Clear bit if nothing is left */
+clear_bit:
+				clear_bit(cpu_q->cpu, sess->cpu_queues_bm);
+			}
+		}
+		spin_unlock_irqrestore(&cpu_q->requeue_lock, flags);
+
+		if (q)
+			break;
+	}
+
+	/*
+	 * Save the CPU from whose list a queue is going to be requeued in
+	 * the per-cpu var.  Just incrementing it doesn't work because
+	 * ibnbd_get_cpu_qlist() will always return the first CPU with
+	 * something on the queue list when the value stored in the var is
+	 * greater than the last CPU with something on the list.
+	 */
+	if (cpu_q)
+		*cpup = cpu_q->cpu;
+	put_cpu_var(sess->cpu_rr);
+
+	if (q)
+		ibnbd_clt_dev_requeue(q);
+
+	return !!q;
+}
+
+/**
+ * ibnbd_requeue_all_if_idle() - requeue all queues left in the list if
+ *     session is idling (there are no requests in-flight).
+ *
+ * Description:
+ *     This function tries to rerun all stopped queues if there are no
+ *     requests in-flight anymore.  This function tries to solve an obvious
+ *     problem, when the number of tags is less than the number of stopped,
+ *     sleeping queues (hctxs).  If the last tag, which has just been put,
+ *     does not wake up all remaining queues (hctxs), IO requests hang forever.
+ *
+ *     That can happen when all tags, say N of them, have been exhausted
+ *     from one CPU, and we have many block devices per session, say M.
+ *     Each block device has its own queue (hctx) for each CPU, so eventually
+ *     we can put that number of queues (hctxs) to sleep: M x nr_cpu_ids.
+ *     If number of tags N < M x nr_cpu_ids finally we will get an IO hang.
+ *
+ *     To avoid this hang, the last caller of ibnbd_put_tag() (i.e. the one
+ *     who observes sess->busy == 0) must wake up all remaining queues.
+ *
+ * Context:
+ *     Does not matter.
+ */
+static inline void ibnbd_requeue_all_if_idle(struct ibnbd_clt_session *sess)
+{
+	bool requeued;
+
+	do {
+		requeued = ibnbd_requeue_if_needed(sess);
+	} while (atomic_read(&sess->busy) == 0 && requeued);
+}
+
+static struct ibtrs_tag *ibnbd_get_tag(struct ibnbd_clt_session *sess,
+				       enum ibtrs_clt_con_type con_type,
+				       int wait)
+{
+	struct ibtrs_tag *tag;
+
+	tag = ibtrs_clt_get_tag(sess->ibtrs, con_type,
+				wait ? IBTRS_TAG_WAIT : IBTRS_TAG_NOWAIT);
+	if (likely(tag))
+		/* We have a subtle rare case here, when all tags can be
+		 * consumed before the busy counter is increased.  This is safe,
+		 * because the loser will get NULL as a tag, observe a zero busy
+		 * counter and immediately restart the queue itself.
+		 */
+		atomic_inc(&sess->busy);
+
+	return tag;
+}
+
+static void ibnbd_put_tag(struct ibnbd_clt_session *sess, struct ibtrs_tag *tag)
+{
+	ibtrs_clt_put_tag(sess->ibtrs, tag);
+	atomic_dec(&sess->busy);
+	/* Paired with ibnbd_clt_dev_add_to_requeue().  Decrement first
+	 * and then check queue bits.
+	 */
+	smp_mb__after_atomic();
+	ibnbd_requeue_all_if_idle(sess);
+}
+
+static struct ibnbd_iu *ibnbd_get_iu(struct ibnbd_clt_session *sess,
+				     enum ibtrs_clt_con_type con_type,
+				     int wait)
+{
+	struct ibnbd_iu *iu;
+	struct ibtrs_tag *tag;
+
+	tag = ibnbd_get_tag(sess, con_type,
+			    wait ? IBTRS_TAG_WAIT : IBTRS_TAG_NOWAIT);
+	if (unlikely(!tag))
+		return NULL;
+	iu = ibtrs_tag_to_pdu(tag);
+	iu->tag = tag; /* yes, ibtrs_tag_from_pdu() can be nice here,
+			* but also we have to think about MQ mode
+			*/
+	/*
+	 * The 1st reference is dropped after finishing sending a "user" message,
+	 * the 2nd reference is dropped after the confirmation with the response
+	 * is returned.
+	 * 1st and 2nd can happen in any order, so the ibnbd_iu should be
+	 * released (the ibtrs_tag returned to ibtrs) only after both
+	 * are finished.
+	 */
+	atomic_set(&iu->refcount, 2);
+	init_waitqueue_head(&iu->comp.wait);
+	iu->comp.errno = INT_MAX;
+
+	return iu;
+}
+
+static void ibnbd_put_iu(struct ibnbd_clt_session *sess, struct ibnbd_iu *iu)
+{
+	if (atomic_dec_and_test(&iu->refcount))
+		ibnbd_put_tag(sess, iu->tag);
+}
+
+static void ibnbd_softirq_done_fn(struct request *rq)
+{
+	struct ibnbd_clt_dev *dev	= rq->rq_disk->private_data;
+	struct ibnbd_clt_session *sess	= dev->sess;
+	struct ibnbd_iu *iu;
+
+	iu = blk_mq_rq_to_pdu(rq);
+	ibnbd_put_tag(sess, iu->tag);
+	blk_mq_end_request(rq, iu->status);
+}
+
+static void msg_io_conf(void *priv, int errno)
+{
+	struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
+	struct ibnbd_clt_dev *dev = iu->dev;
+	struct request *rq = iu->rq;
+
+	iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
+
+	if (softirq_enable) {
+		blk_mq_complete_request(rq);
+	} else {
+		ibnbd_put_tag(dev->sess, iu->tag);
+		blk_mq_end_request(rq, iu->status);
+	}
+
+	if (errno)
+		ibnbd_info_rl(dev, "%s I/O failed with err: %d\n",
+			      rq_data_dir(rq) == READ ? "read" : "write",
+			      errno);
+}
+
+static void wake_up_iu_comp(struct ibnbd_iu *iu, int errno)
+{
+	iu->comp.errno = errno;
+	wake_up(&iu->comp.wait);
+}
+
+static void msg_conf(void *priv, int errno)
+{
+	struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
+
+	iu->errno = errno;
+	schedule_work(&iu->work);
+}
+
+enum {
+	NO_WAIT = 0,
+	WAIT    = 1
+};
+
+static int send_usr_msg(struct ibtrs_clt *ibtrs, int dir,
+			struct ibnbd_iu *iu, struct kvec *vec, size_t nr,
+			size_t len, struct scatterlist *sg, unsigned int sg_len,
+			void (*conf)(struct work_struct *work),
+			int *errno, bool wait)
+{
+	int err;
+
+	INIT_WORK(&iu->work, conf);
+	err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
+				iu, vec, nr, len, sg, sg_len);
+	if (!err && wait) {
+		wait_event(iu->comp.wait, iu->comp.errno != INT_MAX);
+		*errno = iu->comp.errno;
+	} else {
+		*errno = 0;
+	}
+
+	return err;
+}
+
+static void msg_close_conf(struct work_struct *work)
+{
+	struct ibnbd_iu *iu = container_of(work, struct ibnbd_iu, work);
+	struct ibnbd_clt_dev *dev = iu->dev;
+
+	wake_up_iu_comp(iu, iu->errno);
+	ibnbd_put_iu(dev->sess, iu);
+	ibnbd_clt_put_dev(dev);
+}
+
+static int send_msg_close(struct ibnbd_clt_dev *dev, u32 device_id, bool wait)
+{
+	struct ibnbd_clt_session *sess = dev->sess;
+	struct ibnbd_msg_close msg;
+	struct ibnbd_iu *iu;
+	struct kvec vec = {
+		.iov_base = &msg,
+		.iov_len  = sizeof(msg)
+	};
+	int err, errno;
+
+	iu = ibnbd_get_iu(sess, IBTRS_USR_CON, IBTRS_TAG_WAIT);
+	if (unlikely(!iu))
+		return -ENOMEM;
+
+	iu->buf = NULL;
+	iu->dev = dev;
+
+	sg_mark_end(&iu->sglist[0]);
+
+	msg.hdr.type	= cpu_to_le16(IBNBD_MSG_CLOSE);
+	msg.device_id	= cpu_to_le32(device_id);
+
+	WARN_ON(!ibnbd_clt_get_dev(dev));
+	err = send_usr_msg(sess->ibtrs, WRITE, iu, &vec, 1, 0, NULL, 0,
+			   msg_close_conf, &errno, wait);
+	if (unlikely(err)) {
+		ibnbd_clt_put_dev(dev);
+		ibnbd_put_iu(sess, iu);
+	} else {
+		err = errno;
+	}
+
+	ibnbd_put_iu(sess, iu);
+	return err;
+}
+
+static void msg_open_conf(struct work_struct *work)
+{
+	struct ibnbd_iu *iu = container_of(work, struct ibnbd_iu, work);
+	struct ibnbd_msg_open_rsp *rsp = iu->buf;
+	struct ibnbd_clt_dev *dev = iu->dev;
+	int errno = iu->errno;
+
+	if (errno) {
+		ibnbd_err(dev, "Opening failed, server responded: %d\n", errno);
+	} else {
+		errno = process_msg_open_rsp(dev, rsp);
+		if (unlikely(errno)) {
+			u32 device_id = le32_to_cpu(rsp->device_id);
+			/*
+			 * If the server thinks it's fine, but we fail to process
+			 * the response, be nice and send a close to the server.
+			 */
+			(void)send_msg_close(dev, device_id, NO_WAIT);
+		}
+	}
+	kfree(rsp);
+	wake_up_iu_comp(iu, errno);
+	ibnbd_put_iu(dev->sess, iu);
+	ibnbd_clt_put_dev(dev);
+}
+
+static void msg_sess_info_conf(struct work_struct *work)
+{
+	struct ibnbd_iu *iu = container_of(work, struct ibnbd_iu, work);
+	struct ibnbd_msg_sess_info_rsp *rsp = iu->buf;
+	struct ibnbd_clt_session *sess = iu->sess;
+
+	if (likely(!iu->errno))
+		sess->ver = min_t(u8, rsp->ver, IBNBD_PROTO_VER_MAJOR);
+
+	kfree(rsp);
+	wake_up_iu_comp(iu, iu->errno);
+	ibnbd_put_iu(sess, iu);
+	ibnbd_clt_put_sess(sess);
+}
+
+static int send_msg_open(struct ibnbd_clt_dev *dev, bool wait)
+{
+	struct ibnbd_clt_session *sess = dev->sess;
+	struct ibnbd_msg_open_rsp *rsp;
+	struct ibnbd_msg_open msg;
+	struct ibnbd_iu *iu;
+	struct kvec vec = {
+		.iov_base = &msg,
+		.iov_len  = sizeof(msg)
+	};
+	int err, errno;
+
+	rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
+	if (unlikely(!rsp))
+		return -ENOMEM;
+
+	iu = ibnbd_get_iu(sess, IBTRS_USR_CON, IBTRS_TAG_WAIT);
+	if (unlikely(!iu)) {
+		kfree(rsp);
+		return -ENOMEM;
+	}
+
+	iu->buf = rsp;
+	iu->dev = dev;
+
+	sg_init_one(iu->sglist, rsp, sizeof(*rsp));
+
+	msg.hdr.type	= cpu_to_le16(IBNBD_MSG_OPEN);
+	msg.access_mode	= dev->access_mode;
+	msg.io_mode	= dev->io_mode;
+	strlcpy(msg.dev_name, dev->pathname, sizeof(msg.dev_name));
+
+	WARN_ON(!ibnbd_clt_get_dev(dev));
+	err = send_usr_msg(sess->ibtrs, READ, iu,
+			   &vec, 1, sizeof(*rsp), iu->sglist, 1,
+			   msg_open_conf, &errno, wait);
+	if (unlikely(err)) {
+		ibnbd_clt_put_dev(dev);
+		ibnbd_put_iu(sess, iu);
+		kfree(rsp);
+	} else {
+		err = errno;
+	}
+
+	ibnbd_put_iu(sess, iu);
+	return err;
+}
+
+static int send_msg_sess_info(struct ibnbd_clt_session *sess, bool wait)
+{
+	struct ibnbd_msg_sess_info_rsp *rsp;
+	struct ibnbd_msg_sess_info msg;
+	struct ibnbd_iu *iu;
+	struct kvec vec = {
+		.iov_base = &msg,
+		.iov_len  = sizeof(msg)
+	};
+	int err, errno;
+
+	rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
+	if (unlikely(!rsp))
+		return -ENOMEM;
+
+	iu = ibnbd_get_iu(sess, IBTRS_USR_CON, IBTRS_TAG_WAIT);
+	if (unlikely(!iu)) {
+		kfree(rsp);
+		return -ENOMEM;
+	}
+
+	iu->buf = rsp;
+	iu->sess = sess;
+
+	sg_init_one(iu->sglist, rsp, sizeof(*rsp));
+
+	msg.hdr.type = cpu_to_le16(IBNBD_MSG_SESS_INFO);
+	msg.ver      = IBNBD_PROTO_VER_MAJOR;
+
+	if (unlikely(!ibnbd_clt_get_sess(sess))) {
+		/*
+		 * That can happen only when IBTRS has re-established the
+		 * connection and link_ev() is called, but the session is almost
+		 * dead, the last reference on the session is put and the caller
+		 * is waiting for IBTRS to close everything.
+		 */
+		err = -ENODEV;
+		goto put_iu;
+	}
+	err = send_usr_msg(sess->ibtrs, READ, iu,
+			   &vec, 1, sizeof(*rsp), iu->sglist, 1,
+			   msg_sess_info_conf, &errno, wait);
+	if (unlikely(err)) {
+		ibnbd_clt_put_sess(sess);
+put_iu:
+		ibnbd_put_iu(sess, iu);
+		kfree(rsp);
+	} else {
+		err = errno;
+	}
+
+	ibnbd_put_iu(sess, iu);
+	return err;
+}
+
+static void set_dev_states_to_disconnected(struct ibnbd_clt_session *sess)
+{
+	struct ibnbd_clt_dev *dev;
+
+	mutex_lock(&sess->lock);
+	list_for_each_entry(dev, &sess->devs_list, list) {
+		ibnbd_err(dev, "Device disconnected.\n");
+
+		mutex_lock(&dev->lock);
+		if (dev->dev_state == DEV_STATE_MAPPED)
+			dev->dev_state = DEV_STATE_MAPPED_DISCONNECTED;
+		mutex_unlock(&dev->lock);
+	}
+	mutex_unlock(&sess->lock);
+}
+
+static void remap_devs(struct ibnbd_clt_session *sess)
+{
+	struct ibnbd_clt_dev *dev;
+	struct ibtrs_attrs attrs;
+	int err;
+
+	/*
+	 * Careful here: we are called from IBTRS link event directly,
+	 * thus we can't send any IBTRS request and wait for response
+	 * or IBTRS will not be able to complete request with failure
+	 * if something goes wrong (failing of outstanding requests
+	 * happens exactly from the context where we are blocking now).
+	 *
+	 * So to avoid deadlocks each usr message sent from here must
+	 * be asynchronous.
+	 */
+
+	err = send_msg_sess_info(sess, NO_WAIT);
+	if (unlikely(err)) {
+		pr_err("send_msg_sess_info(\"%s\"): %d\n", sess->sessname, err);
+		return;
+	}
+
+	ibtrs_clt_query(sess->ibtrs, &attrs);
+	mutex_lock(&sess->lock);
+	sess->max_io_size = attrs.max_io_size;
+
+	list_for_each_entry(dev, &sess->devs_list, list) {
+		bool skip;
+
+		mutex_lock(&dev->lock);
+		skip = (dev->dev_state == DEV_STATE_INIT);
+		mutex_unlock(&dev->lock);
+		if (skip)
+			/*
+			 * When the device is establishing the connection for the
+			 * first time - do not remap, it will be closed soon.
+			 */
+			continue;
+
+		ibnbd_info(dev, "session reconnected, remapping device\n");
+		err = send_msg_open(dev, NO_WAIT);
+		if (unlikely(err)) {
+			ibnbd_err(dev, "send_msg_open(): %d\n", err);
+			break;
+		}
+	}
+	mutex_unlock(&sess->lock);
+}
+
+static void ibnbd_clt_link_ev(void *priv, enum ibtrs_clt_link_ev ev)
+{
+	struct ibnbd_clt_session *sess = priv;
+
+	switch (ev) {
+	case IBTRS_CLT_LINK_EV_DISCONNECTED:
+		set_dev_states_to_disconnected(sess);
+		break;
+	case IBTRS_CLT_LINK_EV_RECONNECTED:
+		remap_devs(sess);
+		break;
+	default:
+		pr_err("Unknown session event received (%d), session: %s\n",
+		       ev, sess->sessname);
+	}
+}
+
+static void ibnbd_init_cpu_qlists(struct ibnbd_cpu_qlist __percpu *cpu_queues)
+{
+	unsigned int cpu;
+	struct ibnbd_cpu_qlist *cpu_q;
+
+	for_each_possible_cpu(cpu) {
+		cpu_q = per_cpu_ptr(cpu_queues, cpu);
+
+		cpu_q->cpu = cpu;
+		INIT_LIST_HEAD(&cpu_q->requeue_list);
+		spin_lock_init(&cpu_q->requeue_lock);
+	}
+}
+
+static struct blk_mq_ops ibnbd_mq_ops;
+static int setup_mq_tags(struct ibnbd_clt_session *sess)
+{
+	struct blk_mq_tag_set *tags = &sess->tag_set;
+
+	memset(tags, 0, sizeof(*tags));
+	tags->ops		= &ibnbd_mq_ops;
+	tags->queue_depth	= sess->queue_depth;
+	tags->numa_node		= NUMA_NO_NODE;
+	tags->flags		= BLK_MQ_F_SHOULD_MERGE |
+				  BLK_MQ_F_TAG_SHARED;
+	tags->cmd_size		= sizeof(struct ibnbd_iu);
+	tags->nr_hw_queues	= num_online_cpus();
+
+	return blk_mq_alloc_tag_set(tags);
+}
+
+static void destroy_mq_tags(struct ibnbd_clt_session *sess)
+{
+	if (sess->tag_set.tags)
+		blk_mq_free_tag_set(&sess->tag_set);
+}
+
+static inline void wake_up_ibtrs_waiters(struct ibnbd_clt_session *sess)
+{
+	/* paired with rmb() in wait_for_ibtrs_connection() */
+	smp_wmb();
+	sess->ibtrs_ready = true;
+	wake_up_all(&sess->ibtrs_waitq);
+}
+
+static void close_ibtrs(struct ibnbd_clt_session *sess)
+{
+	might_sleep();
+
+	if (!IS_ERR_OR_NULL(sess->ibtrs)) {
+		ibtrs_clt_close(sess->ibtrs);
+		sess->ibtrs = NULL;
+		wake_up_ibtrs_waiters(sess);
+	}
+}
+
+static void free_sess(struct ibnbd_clt_session *sess)
+{
+	WARN_ON(!list_empty(&sess->devs_list));
+
+	might_sleep();
+
+	close_ibtrs(sess);
+	destroy_mq_tags(sess);
+	if (!list_empty(&sess->list)) {
+		mutex_lock(&sess_lock);
+		list_del(&sess->list);
+		mutex_unlock(&sess_lock);
+	}
+	free_percpu(sess->cpu_queues);
+	free_percpu(sess->cpu_rr);
+	kfree(sess);
+}
+
+static struct ibnbd_clt_session *alloc_sess(const char *sessname)
+{
+	struct ibnbd_clt_session *sess;
+	int err, cpu;
+
+	sess = kzalloc_node(sizeof(*sess), GFP_KERNEL, NUMA_NO_NODE);
+	if (unlikely(!sess)) {
+		pr_err("Failed to create session %s,"
+		       " allocating session struct failed\n", sessname);
+		return ERR_PTR(-ENOMEM);
+	}
+	strlcpy(sess->sessname, sessname, sizeof(sess->sessname));
+	atomic_set(&sess->busy, 0);
+	mutex_init(&sess->lock);
+	INIT_LIST_HEAD(&sess->devs_list);
+	INIT_LIST_HEAD(&sess->list);
+	bitmap_zero(sess->cpu_queues_bm, NR_CPUS);
+	init_waitqueue_head(&sess->ibtrs_waitq);
+	refcount_set(&sess->refcount, 1);
+
+	sess->cpu_queues = alloc_percpu(struct ibnbd_cpu_qlist);
+	if (unlikely(!sess->cpu_queues)) {
+		pr_err("Failed to create session to %s,"
+		       " alloc of percpu var (cpu_queues) failed\n", sessname);
+		err = -ENOMEM;
+		goto err;
+	}
+	ibnbd_init_cpu_qlists(sess->cpu_queues);
+
+	/*
+	 * This is a simple percpu variable which stores CPU indices, which are
+	 * incremented on each access.  We need that for the sake of fairness,
+	 * to wake up queues in a round-robin manner.
+	 */
+	sess->cpu_rr = alloc_percpu(int);
+	if (unlikely(!sess->cpu_rr)) {
+		pr_err("Failed to create session %s,"
+		       " alloc of percpu var (cpu_rr) failed\n", sessname);
+		err = -ENOMEM;
+		goto err;
+	}
+	for_each_possible_cpu(cpu)
+		*per_cpu_ptr(sess->cpu_rr, cpu) = cpu;
+
+	return sess;
+
+err:
+	free_sess(sess);
+
+	return ERR_PTR(err);
+}
+
+static int wait_for_ibtrs_connection(struct ibnbd_clt_session *sess)
+{
+	wait_event(sess->ibtrs_waitq, sess->ibtrs_ready);
+	/* paired with wmb() in wake_up_ibtrs_waiters() */
+	smp_rmb();
+	if (unlikely(IS_ERR_OR_NULL(sess->ibtrs)))
+		return -ECONNRESET;
+
+	return 0;
+}
+
+static void wait_for_ibtrs_disconnection(struct ibnbd_clt_session *sess)
+__releases(&sess_lock)
+__acquires(&sess_lock)
+{
+	DEFINE_WAIT_FUNC(wait, autoremove_wake_function);
+
+	prepare_to_wait(&sess->ibtrs_waitq, &wait, TASK_UNINTERRUPTIBLE);
+	if (IS_ERR_OR_NULL(sess->ibtrs)) {
+		finish_wait(&sess->ibtrs_waitq, &wait);
+		return;
+	}
+	mutex_unlock(&sess_lock);
+	/* After the unlock the session can be freed, so be careful */
+	schedule();
+	mutex_lock(&sess_lock);
+}
+
+static struct ibnbd_clt_session *__find_and_get_sess(const char *sessname)
+__releases(&sess_lock)
+__acquires(&sess_lock)
+{
+	struct ibnbd_clt_session *sess;
+	int err;
+
+again:
+	list_for_each_entry(sess, &sess_list, list) {
+		if (strcmp(sessname, sess->sessname))
+			continue;
+
+		if (unlikely(sess->ibtrs_ready && IS_ERR_OR_NULL(sess->ibtrs)))
+			/*
+			 * No IBTRS connection, session is dying.
+			 */
+			continue;
+
+		if (likely(ibnbd_clt_get_sess(sess))) {
+			/*
+			 * Alive session is found, wait for IBTRS connection.
+			 */
+			mutex_unlock(&sess_lock);
+			err = wait_for_ibtrs_connection(sess);
+			if (unlikely(err))
+				ibnbd_clt_put_sess(sess);
+			mutex_lock(&sess_lock);
+
+			if (unlikely(err))
+				/* Session is dying, repeat the loop */
+				goto again;
+
+			return sess;
+		}
+		/*
+		 * Ref is 0, session is dying, wait for IBTRS disconnect
+		 * in order to avoid session name clashes.
+		 */
+		wait_for_ibtrs_disconnection(sess);
+		/*
+		 * IBTRS is disconnected and soon session will be freed,
+		 * so repeat a loop.
+		 */
+		goto again;
+	}
+
+	return NULL;
+}
+
+static struct ibnbd_clt_session *find_and_get_sess(const char *sessname)
+{
+	struct ibnbd_clt_session *sess;
+
+	mutex_lock(&sess_lock);
+	sess = __find_and_get_sess(sessname);
+	mutex_unlock(&sess_lock);
+
+	return sess;
+}
+
+static struct ibnbd_clt_session *
+find_and_get_or_insert_sess(struct ibnbd_clt_session *sess)
+{
+	struct ibnbd_clt_session *found;
+
+	mutex_lock(&sess_lock);
+	found = __find_and_get_sess(sess->sessname);
+	if (!found)
+		list_add(&sess->list, &sess_list);
+	mutex_unlock(&sess_lock);
+
+	return found;
+}
+
+static struct ibnbd_clt_session *
+find_and_get_or_create_sess(const char *sessname,
+			    const struct ibtrs_addr *paths,
+			    size_t path_cnt)
+{
+	struct ibnbd_clt_session *sess, *found;
+	struct ibtrs_attrs attrs;
+	int err;
+
+	sess = find_and_get_sess(sessname);
+	if (sess)
+		return sess;
+
+	sess = alloc_sess(sessname);
+	if (unlikely(IS_ERR(sess)))
+		return sess;
+
+	found = find_and_get_or_insert_sess(sess);
+	if (unlikely(found)) {
+		free_sess(sess);
+
+		return found;
+	}
+	/*
+	 * Nothing was found, establish ibtrs connection and proceed further.
+	 */
+	sess->ibtrs = ibtrs_clt_open(sess, ibnbd_clt_link_ev, sessname,
+				     paths, path_cnt, IBTRS_PORT,
+				     sizeof(struct ibnbd_iu),
+				     RECONNECT_DELAY, BMAX_SEGMENTS,
+				     MAX_RECONNECTS);
+	if (unlikely(IS_ERR(sess->ibtrs))) {
+		err = PTR_ERR(sess->ibtrs);
+		goto wake_up_and_put;
+	}
+	ibtrs_clt_query(sess->ibtrs, &attrs);
+	sess->max_io_size = attrs.max_io_size;
+	sess->queue_depth = attrs.queue_depth;
+
+	err = setup_mq_tags(sess);
+	if (unlikely(err))
+		goto close_ibtrs;
+
+	err = send_msg_sess_info(sess, WAIT);
+	if (unlikely(err))
+		goto close_ibtrs;
+
+	wake_up_ibtrs_waiters(sess);
+
+	return sess;
+
+close_ibtrs:
+	close_ibtrs(sess);
+put_sess:
+	ibnbd_clt_put_sess(sess);
+
+	return ERR_PTR(err);
+
+wake_up_and_put:
+	wake_up_ibtrs_waiters(sess);
+	goto put_sess;
+}
+
+static int ibnbd_client_open(struct block_device *block_device, fmode_t mode)
+{
+	struct ibnbd_clt_dev *dev = block_device->bd_disk->private_data;
+
+	if (dev->read_only && (mode & FMODE_WRITE))
+		return -EPERM;
+
+	if (dev->dev_state == DEV_STATE_UNMAPPED ||
+	    !ibnbd_clt_get_dev(dev))
+		return -EIO;
+
+	return 0;
+}
+
+static void ibnbd_client_release(struct gendisk *gen, fmode_t mode)
+{
+	struct ibnbd_clt_dev *dev = gen->private_data;
+
+	ibnbd_clt_put_dev(dev);
+}
+
+static int ibnbd_client_getgeo(struct block_device *block_device,
+			       struct hd_geometry *geo)
+{
+	u64 size;
+	struct ibnbd_clt_dev *dev;
+
+	dev = block_device->bd_disk->private_data;
+	size = dev->size * (dev->logical_block_size / KERNEL_SECTOR_SIZE);
+	geo->cylinders	= (size & ~0x3f) >> 6;	/* size/64 */
+	geo->heads	= 4;
+	geo->sectors	= 16;
+	geo->start	= 0;
+
+	return 0;
+}
+
+static const struct block_device_operations ibnbd_client_ops = {
+	.owner		= THIS_MODULE,
+	.open		= ibnbd_client_open,
+	.release	= ibnbd_client_release,
+	.getgeo		= ibnbd_client_getgeo
+};
+
+static size_t ibnbd_clt_get_sg_size(struct scatterlist *sglist, u32 len)
+{
+	struct scatterlist *sg;
+	size_t tsize = 0;
+	int i;
+
+	for_each_sg(sglist, sg, len, i)
+		tsize += sg->length;
+	return tsize;
+}
+
+/*
+ * Get ioprio of the current task
+ */
+static short ibnbd_current_ioprio(void)
+{
+	struct task_struct *tsp = current;
+	unsigned short prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
+
+	if (likely(tsp->io_context))
+		prio = tsp->io_context->ioprio;
+	return prio;
+}
+
+static short ibnbd_ioprio_best(unsigned short prio1, unsigned short prio2)
+{
+	if (!ioprio_valid(prio1)) {
+		if (!ioprio_valid(prio2))
+			return IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
+		else
+			return prio2;
+	}
+	if (!ioprio_valid(prio2))
+		return prio1;
+
+	return min(prio1, prio2);
+}
+
+static int ibnbd_client_xfer_request(struct ibnbd_clt_dev *dev,
+				     struct request *rq,
+				     struct ibnbd_iu *iu)
+{
+	struct ibtrs_clt *ibtrs = dev->sess->ibtrs;
+	struct ibtrs_tag *tag = iu->tag;
+	struct ibnbd_msg_io msg;
+	unsigned int sg_cnt = 0;
+	struct kvec vec;
+	size_t size;
+	int err;
+
+	iu->rq		= rq;
+	iu->dev		= dev;
+	msg.sector	= cpu_to_le64(blk_rq_pos(rq));
+	msg.bi_size	= cpu_to_le32(blk_rq_bytes(rq));
+	msg.rw		= cpu_to_le32(rq_to_ibnbd_flags(rq));
+	msg.prio	= cpu_to_le16(ibnbd_ioprio_best(
+						req_get_ioprio(rq),
+						ibnbd_current_ioprio()));
+
+	/*
+	 * We only support discards with a single segment for now.
+	 * See queue limits.
+	 */
+	if (req_op(rq) != REQ_OP_DISCARD)
+		sg_cnt = blk_rq_map_sg(dev->queue, rq, iu->sglist);
+
+	if (sg_cnt == 0)
+		/* Do not forget to mark the end */
+		sg_mark_end(&iu->sglist[0]);
+
+	msg.hdr.type	= cpu_to_le16(IBNBD_MSG_IO);
+	msg.device_id	= cpu_to_le32(dev->device_id);
+
+	vec = (struct kvec) {
+		.iov_base = &msg,
+		.iov_len  = dev->sess->ver < IBNBD_PROTO_VER_MAJOR ?
+				sizeof(struct ibnbd_msg_io_old) : sizeof(msg)
+	};
+	size = ibnbd_clt_get_sg_size(iu->sglist, sg_cnt);
+	err = ibtrs_clt_request(rq_data_dir(rq), msg_io_conf, ibtrs, tag,
+				iu, &vec, 1, size, iu->sglist, sg_cnt);
+	if (unlikely(err)) {
+		ibnbd_err_rl(dev, "IBTRS failed to transfer IO, err: %d\n",
+			     err);
+		return err;
+	}
+
+	return 0;
+}
+
+/**
+ * ibnbd_clt_dev_add_to_requeue() - add device to requeue if session is busy
+ *
+ * Description:
+ *     If the session is busy, that means someone will requeue us when
+ *     resources are freed.  If the session is not doing anything, the device
+ *     is not added to the list and %false is returned.
+ */
+static inline bool ibnbd_clt_dev_add_to_requeue(struct ibnbd_clt_dev *dev,
+						struct ibnbd_queue *q)
+{
+	struct ibnbd_clt_session *sess = dev->sess;
+	struct ibnbd_cpu_qlist *cpu_q;
+	unsigned long flags;
+	bool added = true;
+	bool need_set;
+
+	cpu_q = get_cpu_ptr(sess->cpu_queues);
+	spin_lock_irqsave(&cpu_q->requeue_lock, flags);
+
+	if (likely(!test_and_set_bit_lock(0, &q->in_list))) {
+		if (WARN_ON(!list_empty(&q->requeue_list)))
+			goto unlock;
+
+		need_set = !test_bit(cpu_q->cpu, sess->cpu_queues_bm);
+		if (need_set) {
+			set_bit(cpu_q->cpu, sess->cpu_queues_bm);
+			/* Paired with ibnbd_put_tag().	 Set a bit first
+			 * and then observe the busy counter.
+			 */
+			smp_mb__before_atomic();
+		}
+		if (likely(atomic_read(&sess->busy))) {
+			list_add_tail(&q->requeue_list, &cpu_q->requeue_list);
+		} else {
+			/* Very unlikely, but possible: busy counter was
+			 * observed as zero.  Drop all bits and return
+			 * false to restart the queue by ourselves.
+			 */
+			if (need_set)
+				clear_bit(cpu_q->cpu, sess->cpu_queues_bm);
+			clear_bit_unlock(0, &q->in_list);
+			added = false;
+		}
+	}
+unlock:
+	spin_unlock_irqrestore(&cpu_q->requeue_lock, flags);
+	put_cpu_ptr(sess->cpu_queues);
+
+	return added;
+}
+
+static void ibnbd_clt_dev_kick_mq_queue(struct ibnbd_clt_dev *dev,
+					struct blk_mq_hw_ctx *hctx,
+					int delay)
+{
+	struct ibnbd_queue *q = hctx->driver_data;
+
+	if (delay != IBNBD_DELAY_IFBUSY)
+		blk_mq_delay_run_hw_queue(hctx, delay);
+	else if (unlikely(!ibnbd_clt_dev_add_to_requeue(dev, q)))
+		/*
+		 * If session is not busy we have to restart
+		 * the queue ourselves.
+		 */
+		blk_mq_delay_run_hw_queue(hctx, IBNBD_DELAY_10ms);
+}
+
+static blk_status_t ibnbd_queue_rq(struct blk_mq_hw_ctx *hctx,
+				   const struct blk_mq_queue_data *bd)
+{
+	struct request *rq = bd->rq;
+	struct ibnbd_clt_dev *dev = rq->rq_disk->private_data;
+	struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);
+	int err;
+
+	if (unlikely(!ibnbd_clt_dev_is_mapped(dev)))
+		return BLK_STS_IOERR;
+
+	iu->tag = ibnbd_get_tag(dev->sess, IBTRS_IO_CON, IBTRS_TAG_NOWAIT);
+	if (unlikely(!iu->tag)) {
+		ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_IFBUSY);
+		return BLK_STS_RESOURCE;
+	}
+
+	blk_mq_start_request(rq);
+	err = ibnbd_client_xfer_request(dev, rq, iu);
+	if (likely(err == 0))
+		return BLK_STS_OK;
+	if (unlikely(err == -EAGAIN || err == -ENOMEM)) {
+		ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_10ms);
+		ibnbd_put_tag(dev->sess, iu->tag);
+		return BLK_STS_RESOURCE;
+	}
+
+	ibnbd_put_tag(dev->sess, iu->tag);
+	return BLK_STS_IOERR;
+}
+
+static int ibnbd_init_request(struct blk_mq_tag_set *set, struct request *rq,
+			      unsigned int hctx_idx, unsigned int numa_node)
+{
+	struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);
+
+	sg_init_table(iu->sglist, BMAX_SEGMENTS);
+	return 0;
+}
+
+static inline void ibnbd_init_hw_queue(struct ibnbd_clt_dev *dev,
+				       struct ibnbd_queue *q,
+				       struct blk_mq_hw_ctx *hctx)
+{
+	INIT_LIST_HEAD(&q->requeue_list);
+	q->dev  = dev;
+	q->hctx = hctx;
+}
+
+static void ibnbd_init_mq_hw_queues(struct ibnbd_clt_dev *dev)
+{
+	int i;
+	struct blk_mq_hw_ctx *hctx;
+	struct ibnbd_queue *q;
+
+	queue_for_each_hw_ctx(dev->queue, hctx, i) {
+		q = &dev->hw_queues[i];
+		ibnbd_init_hw_queue(dev, q, hctx);
+		hctx->driver_data = q;
+	}
+}
+
+static struct blk_mq_ops ibnbd_mq_ops = {
+	.queue_rq	= ibnbd_queue_rq,
+	.init_request	= ibnbd_init_request,
+	.complete	= ibnbd_softirq_done_fn,
+};
+
+static int index_to_minor(int index)
+{
+	return index << IBNBD_PART_BITS;
+}
+
+static int minor_to_index(int minor)
+{
+	return minor >> IBNBD_PART_BITS;
+}
+
+static int setup_mq_dev(struct ibnbd_clt_dev *dev)
+{
+	dev->queue = blk_mq_init_queue(&dev->sess->tag_set);
+	if (IS_ERR(dev->queue)) {
+		ibnbd_err(dev,
+			  "Initializing multiqueue queue failed, err: %ld\n",
+			  PTR_ERR(dev->queue));
+		return PTR_ERR(dev->queue);
+	}
+	ibnbd_init_mq_hw_queues(dev);
+	return 0;
+}
+
+static void setup_request_queue(struct ibnbd_clt_dev *dev)
+{
+	blk_queue_logical_block_size(dev->queue, dev->logical_block_size);
+	blk_queue_physical_block_size(dev->queue, dev->physical_block_size);
+	blk_queue_max_hw_sectors(dev->queue, dev->max_hw_sectors);
+	blk_queue_max_write_same_sectors(dev->queue,
+					 dev->max_write_same_sectors);
+
+	/*
+	 * we don't support discards to "discontiguous" segments
+	 * in one request
+	 */
+	blk_queue_max_discard_segments(dev->queue, 1);
+
+	blk_queue_max_discard_sectors(dev->queue, dev->max_discard_sectors);
+	dev->queue->limits.discard_granularity	= dev->discard_granularity;
+	dev->queue->limits.discard_alignment	= dev->discard_alignment;
+	if (dev->max_discard_sectors)
+		blk_queue_flag_set(QUEUE_FLAG_DISCARD, dev->queue);
+	if (dev->secure_discard)
+		blk_queue_flag_set(QUEUE_FLAG_SECERASE, dev->queue);
+
+	blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
+	blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
+	blk_queue_max_segments(dev->queue, dev->max_segments);
+	blk_queue_io_opt(dev->queue, dev->sess->max_io_size);
+	blk_queue_virt_boundary(dev->queue, 4095);
+	blk_queue_write_cache(dev->queue, true, true);
+	dev->queue->queuedata = dev;
+}
+
+static void ibnbd_clt_setup_gen_disk(struct ibnbd_clt_dev *dev, int idx)
+{
+	dev->gd->major		= ibnbd_client_major;
+	dev->gd->first_minor	= index_to_minor(idx);
+	dev->gd->fops		= &ibnbd_client_ops;
+	dev->gd->queue		= dev->queue;
+	dev->gd->private_data	= dev;
+	snprintf(dev->gd->disk_name, sizeof(dev->gd->disk_name), "ibnbd%d",
+		 idx);
+	pr_debug("disk_name=%s, capacity=%zu\n",
+		 dev->gd->disk_name,
+		 dev->nsectors * (dev->logical_block_size / KERNEL_SECTOR_SIZE)
+		 );
+
+	set_capacity(dev->gd, dev->nsectors * (dev->logical_block_size /
+					       KERNEL_SECTOR_SIZE));
+
+	if (dev->access_mode == IBNBD_ACCESS_RO) {
+		dev->read_only = true;
+		set_disk_ro(dev->gd, true);
+	} else {
+		dev->read_only = false;
+	}
+
+	if (!dev->rotational)
+		blk_queue_flag_set(QUEUE_FLAG_NONROT, dev->queue);
+}
+
+static void ibnbd_clt_add_gen_disk(struct ibnbd_clt_dev *dev)
+{
+	add_disk(dev->gd);
+}
+
+static int ibnbd_client_setup_device(struct ibnbd_clt_session *sess,
+				     struct ibnbd_clt_dev *dev, int idx)
+{
+	int err;
+
+	dev->size = dev->nsectors * dev->logical_block_size;
+
+	err = setup_mq_dev(dev);
+	if (err)
+		return err;
+
+	setup_request_queue(dev);
+
+	dev->gd = alloc_disk_node(1 << IBNBD_PART_BITS,	NUMA_NO_NODE);
+	if (!dev->gd) {
+		ibnbd_err(dev, "Failed to allocate disk node\n");
+		blk_cleanup_queue(dev->queue);
+		return -ENOMEM;
+	}
+
+	ibnbd_clt_setup_gen_disk(dev, idx);
+
+	return 0;
+}
+
+static struct ibnbd_clt_dev *init_dev(struct ibnbd_clt_session *sess,
+				      enum ibnbd_access_mode access_mode,
+				      enum ibnbd_io_mode io_mode,
+				      const char *pathname)
+{
+	struct ibnbd_clt_dev *dev;
+	int ret;
+
+	dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, NUMA_NO_NODE);
+	if (!dev)
+		return ERR_PTR(-ENOMEM);
+
+	dev->hw_queues = kcalloc(nr_cpu_ids, sizeof(*dev->hw_queues),
+				 GFP_KERNEL);
+	if (unlikely(!dev->hw_queues)) {
+		pr_err("Failed to initialize device '%s' from session"
+		       " %s, allocating hw_queues failed\n", pathname,
+		       sess->sessname);
+		ret = -ENOMEM;
+		goto out_alloc;
+	}
+
+	mutex_lock(&ida_lock);
+	ret = ida_simple_get(&index_ida, 0, minor_to_index(1 << MINORBITS),
+			     GFP_KERNEL);
+	mutex_unlock(&ida_lock);
+	if (ret < 0) {
+		pr_err("Failed to initialize device '%s' from session %s,"
+		       " allocating ida failed, err: %d\n", pathname,
+		       sess->sessname, ret);
+		goto out_queues;
+	}
+	dev->clt_device_id	= ret;
+	dev->sess		= sess;
+	dev->access_mode	= access_mode;
+	dev->io_mode		= io_mode;
+	strlcpy(dev->pathname, pathname, sizeof(dev->pathname));
+	mutex_init(&dev->lock);
+	refcount_set(&dev->refcount, 1);
+	dev->dev_state = DEV_STATE_INIT;
+
+	/*
+	 * We are called from a sysfs entry here, thus clt-sysfs is
+	 * responsible for making sure the session does not disappear.
+	 */
+	WARN_ON(!ibnbd_clt_get_sess(sess));
+
+	return dev;
+
+out_queues:
+	kfree(dev->hw_queues);
+out_alloc:
+	kfree(dev);
+	return ERR_PTR(ret);
+}
+
+static bool __exists_dev(const char *pathname)
+{
+	struct ibnbd_clt_session *sess;
+	struct ibnbd_clt_dev *dev;
+	bool found = false;
+
+	list_for_each_entry(sess, &sess_list, list) {
+		mutex_lock(&sess->lock);
+		list_for_each_entry(dev, &sess->devs_list, list) {
+			if (!strncmp(dev->pathname, pathname,
+				     sizeof(dev->pathname))) {
+				found = true;
+				break;
+			}
+		}
+		mutex_unlock(&sess->lock);
+		if (found)
+			break;
+	}
+
+	return found;
+}
+
+static bool exists_devpath(const char *pathname)
+{
+	bool found;
+
+	mutex_lock(&sess_lock);
+	found = __exists_dev(pathname);
+	mutex_unlock(&sess_lock);
+
+	return found;
+}
+
+static bool insert_dev_if_not_exists_devpath(const char *pathname,
+					     struct ibnbd_clt_session *sess,
+					     struct ibnbd_clt_dev *dev)
+{
+	bool found;
+
+	mutex_lock(&sess_lock);
+	found = __exists_dev(pathname);
+	if (!found) {
+		mutex_lock(&sess->lock);
+		list_add_tail(&dev->list, &sess->devs_list);
+		mutex_unlock(&sess->lock);
+	}
+	mutex_unlock(&sess_lock);
+
+	return found;
+}
+
+static void delete_dev(struct ibnbd_clt_dev *dev)
+{
+	struct ibnbd_clt_session *sess = dev->sess;
+
+	mutex_lock(&sess->lock);
+	list_del(&dev->list);
+	mutex_unlock(&sess->lock);
+}
+
+struct ibnbd_clt_dev *ibnbd_clt_map_device(const char *sessname,
+					   struct ibtrs_addr *paths,
+					   size_t path_cnt,
+					   const char *pathname,
+					   enum ibnbd_access_mode access_mode,
+					   enum ibnbd_io_mode io_mode)
+{
+	struct ibnbd_clt_session *sess;
+	struct ibnbd_clt_dev *dev;
+	int ret;
+
+	if (unlikely(exists_devpath(pathname)))
+		return ERR_PTR(-EEXIST);
+
+	sess = find_and_get_or_create_sess(sessname, paths, path_cnt);
+	if (unlikely(IS_ERR(sess)))
+		return ERR_CAST(sess);
+
+	dev = init_dev(sess, access_mode, io_mode, pathname);
+	if (unlikely(IS_ERR(dev))) {
+		pr_err("map_device: failed to map device '%s' from session %s,"
+		       " can't initialize device, err: %ld\n", pathname,
+		       sess->sessname, PTR_ERR(dev));
+		ret = PTR_ERR(dev);
+		goto put_sess;
+	}
+	if (unlikely(insert_dev_if_not_exists_devpath(pathname, sess, dev))) {
+		ret = -EEXIST;
+		goto put_dev;
+	}
+	ret = send_msg_open(dev, WAIT);
+	if (unlikely(ret)) {
+		ibnbd_err(dev, "map_device: failed, can't open remote device,"
+			  " err: %d\n", ret);
+		goto del_dev;
+	}
+	mutex_lock(&dev->lock);
+	pr_debug("Opened remote device: session=%s, path='%s'\n",
+		 sess->sessname, pathname);
+	ret = ibnbd_client_setup_device(sess, dev, dev->clt_device_id);
+	if (ret) {
+		ibnbd_err(dev, "map_device: Failed to configure device, err: %d\n",
+			  ret);
+		mutex_unlock(&dev->lock);
+		goto del_dev;
+	}
+
+	ibnbd_info(dev, "map_device: Device mapped as %s (nsectors: %zu,"
+		   " logical_block_size: %d, physical_block_size: %d,"
+		   " max_write_same_sectors: %d, max_discard_sectors: %d,"
+		   " discard_granularity: %d, discard_alignment: %d, "
+		   "secure_discard: %d, max_segments: %d, max_hw_sectors: %d, "
+		   "rotational: %d)\n",
+		   dev->gd->disk_name, dev->nsectors, dev->logical_block_size,
+		   dev->physical_block_size, dev->max_write_same_sectors,
+		   dev->max_discard_sectors, dev->discard_granularity,
+		   dev->discard_alignment, dev->secure_discard,
+		   dev->max_segments, dev->max_hw_sectors, dev->rotational);
+
+	mutex_unlock(&dev->lock);
+
+	ibnbd_clt_add_gen_disk(dev);
+	ibnbd_clt_put_sess(sess);
+
+	return dev;
+
+del_dev:
+	delete_dev(dev);
+put_dev:
+	ibnbd_clt_put_dev(dev);
+put_sess:
+	ibnbd_clt_put_sess(sess);
+
+	return ERR_PTR(ret);
+}
+
+static void destroy_gen_disk(struct ibnbd_clt_dev *dev)
+{
+	del_gendisk(dev->gd);
+	/*
+	 * Before marking queue as dying (blk_cleanup_queue() does that)
+	 * we have to be sure that everything in-flight has gone.
+	 * Blink with freeze/unfreeze.
+	 */
+	blk_mq_freeze_queue(dev->queue);
+	blk_mq_unfreeze_queue(dev->queue);
+	blk_cleanup_queue(dev->queue);
+	put_disk(dev->gd);
+}
+
+static void destroy_sysfs(struct ibnbd_clt_dev *dev,
+			  const struct attribute *sysfs_self)
+{
+	ibnbd_clt_remove_dev_symlink(dev);
+	if (dev->kobj.state_initialized) {
+		if (sysfs_self)
+			/* To avoid a deadlock, remove the self entry first */
+			sysfs_remove_file_self(&dev->kobj, sysfs_self);
+		kobject_del(&dev->kobj);
+		kobject_put(&dev->kobj);
+	}
+}
+
+int ibnbd_clt_unmap_device(struct ibnbd_clt_dev *dev, bool force,
+			   const struct attribute *sysfs_self)
+{
+	struct ibnbd_clt_session *sess = dev->sess;
+	int refcount, ret = 0;
+	bool was_mapped;
+
+	mutex_lock(&dev->lock);
+	if (dev->dev_state == DEV_STATE_UNMAPPED) {
+		ibnbd_info(dev, "Device is already being unmapped\n");
+		ret = -EALREADY;
+		goto err;
+	}
+	refcount = refcount_read(&dev->refcount);
+	if (!force && refcount > 1) {
+		ibnbd_err(dev, "Closing device failed, device is in use,"
+			  " (%d device users)\n", refcount - 1);
+		ret = -EBUSY;
+		goto err;
+	}
+	was_mapped = (dev->dev_state == DEV_STATE_MAPPED);
+	dev->dev_state = DEV_STATE_UNMAPPED;
+	mutex_unlock(&dev->lock);
+
+	delete_dev(dev);
+	destroy_sysfs(dev, sysfs_self);
+	destroy_gen_disk(dev);
+	if (was_mapped && sess->ibtrs)
+		send_msg_close(dev, dev->device_id, WAIT);
+
+	ibnbd_info(dev, "Device is unmapped\n");
+
+	/* Likely last reference put */
+	ibnbd_clt_put_dev(dev);
+
+	/*
+	 * At this point the device and the session may already be gone!
+	 */
+
+	return 0;
+err:
+	mutex_unlock(&dev->lock);
+
+	return ret;
+}
+
+int ibnbd_clt_remap_device(struct ibnbd_clt_dev *dev)
+{
+	int err;
+
+	mutex_lock(&dev->lock);
+	if (likely(dev->dev_state == DEV_STATE_MAPPED_DISCONNECTED))
+		err = 0;
+	else if (dev->dev_state == DEV_STATE_UNMAPPED)
+		err = -ENODEV;
+	else if (dev->dev_state == DEV_STATE_MAPPED)
+		err = -EALREADY;
+	else
+		err = -EBUSY;
+	mutex_unlock(&dev->lock);
+	if (likely(!err)) {
+		ibnbd_info(dev, "Remapping device.\n");
+		err = send_msg_open(dev, WAIT);
+		if (unlikely(err))
+			ibnbd_err(dev, "remap_device: %d\n", err);
+	}
+
+	return err;
+}
+
+static void unmap_device_work(struct work_struct *work)
+{
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(work, typeof(*dev), unmap_on_rmmod_work);
+	ibnbd_clt_unmap_device(dev, true, NULL);
+}
+
+static void ibnbd_destroy_sessions(void)
+{
+	struct ibnbd_clt_session *sess, *sn;
+	struct ibnbd_clt_dev *dev, *tn;
+
+	/* Firstly forbid access through sysfs interface */
+	ibnbd_clt_destroy_default_group();
+	ibnbd_clt_destroy_sysfs_files();
+
+	/*
+	 * At this point there is no concurrent access to the sessions
+	 * list and the devices list:
+	 *   1. New sessions or devices can't be created - session sysfs files
+	 *      are removed.
+	 *   2. Device or session can't be removed - module reference is taken
+	 *      into account in unmap device sysfs callback.
+	 *   3. No IO requests inflight - each file open of block_dev increases
+	 *      module reference in get_disk().
+	 *
+	 * But there can still be user requests in flight, which are sent by
+	 * asynchronous send_msg_*() functions, thus before unmapping devices
+	 * IBTRS session must be explicitly closed.
+	 */
+
+	list_for_each_entry_safe(sess, sn, &sess_list, list) {
+		WARN_ON(!ibnbd_clt_get_sess(sess));
+		close_ibtrs(sess);
+		list_for_each_entry_safe(dev, tn, &sess->devs_list, list) {
+			/*
+			 * Here unmap happens in parallel for only one reason:
+			 * blk_cleanup_queue() takes around half a second, so
+			 * on a huge number of devices the whole module unload
+			 * procedure takes minutes.
+			 */
+			INIT_WORK(&dev->unmap_on_rmmod_work, unmap_device_work);
+			queue_work(unload_wq, &dev->unmap_on_rmmod_work);
+		}
+		ibnbd_clt_put_sess(sess);
+	}
+	/* Wait for all scheduled unmap works */
+	flush_workqueue(unload_wq);
+	WARN_ON(!list_empty(&sess_list));
+}
+
+static int __init ibnbd_client_init(void)
+{
+	int err;
+
+	pr_info("Loading module %s, version %s, proto %s: "
+		"(softirq_enable: %d)\n", KBUILD_MODNAME,
+		IBNBD_VER_STRING, IBNBD_PROTO_VER_STRING,
+		softirq_enable);
+
+	ibnbd_client_major = register_blkdev(ibnbd_client_major, "ibnbd");
+	if (ibnbd_client_major <= 0) {
+		pr_err("Failed to load module,"
+		       " block device registration failed\n");
+		err = -EBUSY;
+		goto out;
+	}
+
+	err = ibnbd_clt_create_sysfs_files();
+	if (err) {
+		pr_err("Failed to load module,"
+		       " creating sysfs device files failed, err: %d\n",
+		       err);
+		goto out_unregister_blk;
+	}
+
+	unload_wq = alloc_workqueue("ibnbd_unload_wq", WQ_MEM_RECLAIM, 0);
+	if (!unload_wq) {
+		pr_err("Failed to load module, alloc ibnbd_unload_wq failed\n");
+		err = -ENOMEM;
+		goto out_destroy_sysfs_files;
+	}
+
+	return 0;
+
+out_destroy_sysfs_files:
+	ibnbd_clt_destroy_sysfs_files();
+out_unregister_blk:
+	unregister_blkdev(ibnbd_client_major, "ibnbd");
+out:
+	return err;
+}
+
+static void __exit ibnbd_client_exit(void)
+{
+	pr_info("Unloading module\n");
+	ibnbd_destroy_sessions();
+	unregister_blkdev(ibnbd_client_major, "ibnbd");
+	ida_destroy(&index_ida);
+	destroy_workqueue(unload_wq);
+	pr_info("Module unloaded\n");
+}
+
+module_init(ibnbd_client_init);
+module_exit(ibnbd_client_exit);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 18/25] ibnbd: client: sysfs interface functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (16 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 17/25] ibnbd: client: main functionality Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-18 16:28   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 19/25] ibnbd: server: private header with server structs and functions Jack Wang
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the sysfs interface to IBNBD block devices on the client side:

  /sys/devices/virtual/ibnbd-client/ctl/
    |- map_device
    |  *** maps remote device
    |
    |- devices/
       *** all mapped devices

  /sys/block/ibnbd<N>/ibnbd_client/
    |- unmap_device
    |  *** unmaps device
    |
    |- remap_device
    |  *** re-opens the device on the server after a disconnect
    |
    |- resize
    |  *** sets a new device size (in sectors)
    |
    |- state
    |  *** device state
    |
    |- session
    |  *** session name
    |
    |- mapping_path
    |  *** path of the dev that was mapped on server
    |
    |- access_mode
    |  *** access mode the device was mapped with
    |
    |- io_mode
       *** io mode of the device on the server side

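A rough usage sketch (the session name, address and device path below are
made-up examples; the exact option syntax is defined by the map_device
usage string in the code):

  echo sessname=sess1 path=ip:192.168.122.10 \
       device_path=/dev/ram0 access_mode=rw io_mode=blockio \
       > /sys/devices/virtual/ibnbd-client/ctl/map_device

  echo normal > /sys/block/ibnbd0/ibnbd_client/unmap_device
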
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-clt-sysfs.c | 691 ++++++++++++++++++++++++++
 1 file changed, 691 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-clt-sysfs.c

diff --git a/drivers/block/ibnbd/ibnbd-clt-sysfs.c b/drivers/block/ibnbd/ibnbd-clt-sysfs.c
new file mode 100644
index 000000000000..4fd365e68e0f
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-clt-sysfs.c
@@ -0,0 +1,691 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Swapnil Ingle <swapnil.ingle@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <linux/types.h>
+#include <linux/ctype.h>
+#include <linux/parser.h>
+#include <linux/module.h>
+#include <linux/in6.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/device.h>
+#include <rdma/ib.h>
+#include <rdma/rdma_cm.h>
+
+#include "ibnbd-clt.h"
+
+static struct device *ibnbd_dev;
+static struct class *ibnbd_dev_class;
+static struct kobject *ibnbd_devs_kobj;
+
+enum {
+	IBNBD_OPT_ERR		= 0,
+	IBNBD_OPT_PATH		= 1 << 0,
+	IBNBD_OPT_DEV_PATH	= 1 << 1,
+	IBNBD_OPT_ACCESS_MODE	= 1 << 3,
+	IBNBD_OPT_IO_MODE	= 1 << 5,
+	IBNBD_OPT_SESSNAME	= 1 << 6,
+};
+
+static unsigned int ibnbd_opt_mandatory[] = {
+	IBNBD_OPT_PATH,
+	IBNBD_OPT_DEV_PATH,
+	IBNBD_OPT_SESSNAME,
+};
+
+static const match_table_t ibnbd_opt_tokens = {
+	{	IBNBD_OPT_PATH,		"path=%s"		},
+	{	IBNBD_OPT_DEV_PATH,	"device_path=%s"	},
+	{	IBNBD_OPT_ACCESS_MODE,	"access_mode=%s"	},
+	{	IBNBD_OPT_IO_MODE,	"io_mode=%s"		},
+	{	IBNBD_OPT_SESSNAME,	"sessname=%s"		},
+	{	IBNBD_OPT_ERR,		NULL			},
+};
+
+/* Remove all newline characters from the string */
+static void strip(char *s)
+{
+	char *p = s;
+
+	while (*s != '\0') {
+		if (*s != '\n')
+			*p++ = *s++;
+		else
+			++s;
+	}
+	*p = '\0';
+}
+
+static int ibnbd_clt_parse_map_options(const char *buf,
+				       char *sessname,
+				       struct ibtrs_addr *paths,
+				       size_t *path_cnt,
+				       size_t max_path_cnt,
+				       char *pathname,
+				       enum ibnbd_access_mode *access_mode,
+				       enum ibnbd_io_mode *io_mode)
+{
+	char *options, *sep_opt;
+	char *p;
+	substring_t args[MAX_OPT_ARGS];
+	int opt_mask = 0;
+	int token;
+	int ret = -EINVAL;
+	int i;
+	int p_cnt = 0;
+
+	options = kstrdup(buf, GFP_KERNEL);
+	if (!options)
+		return -ENOMEM;
+
+	sep_opt = strstrip(options);
+	strip(sep_opt);
+	while ((p = strsep(&sep_opt, " ")) != NULL) {
+		if (!*p)
+			continue;
+
+		token = match_token(p, ibnbd_opt_tokens, args);
+		opt_mask |= token;
+
+		switch (token) {
+		case IBNBD_OPT_SESSNAME:
+			p = match_strdup(args);
+			if (!p) {
+				ret = -ENOMEM;
+				goto out;
+			}
+			if (strlen(p) > NAME_MAX) {
+				pr_err("map_device: sessname too long\n");
+				ret = -EINVAL;
+				kfree(p);
+				goto out;
+			}
+			strlcpy(sessname, p, NAME_MAX);
+			kfree(p);
+			break;
+
+		case IBNBD_OPT_PATH:
+			if (p_cnt >= max_path_cnt) {
+				pr_err("map_device: too many (> %zu) paths "
+				       "provided\n", max_path_cnt);
+				ret = -ENOMEM;
+				goto out;
+			}
+			p = match_strdup(args);
+			if (!p) {
+				ret = -ENOMEM;
+				goto out;
+			}
+
+			ret = ibtrs_addr_to_sockaddr(p, strlen(p), IBTRS_PORT,
+						     &paths[p_cnt]);
+			if (ret) {
+				pr_err("Can't parse path %s: %d\n", p, ret);
+				kfree(p);
+				goto out;
+			}
+
+			p_cnt++;
+
+			kfree(p);
+			break;
+
+		case IBNBD_OPT_DEV_PATH:
+			p = match_strdup(args);
+			if (!p) {
+				ret = -ENOMEM;
+				goto out;
+			}
+			if (strlen(p) > NAME_MAX) {
+				pr_err("map_device: Device path too long\n");
+				ret = -EINVAL;
+				kfree(p);
+				goto out;
+			}
+			strlcpy(pathname, p, NAME_MAX);
+			kfree(p);
+			break;
+
+		case IBNBD_OPT_ACCESS_MODE:
+			p = match_strdup(args);
+			if (!p) {
+				ret = -ENOMEM;
+				goto out;
+			}
+
+			if (!strcmp(p, "ro")) {
+				*access_mode = IBNBD_ACCESS_RO;
+			} else if (!strcmp(p, "rw")) {
+				*access_mode = IBNBD_ACCESS_RW;
+			} else if (!strcmp(p, "migration")) {
+				*access_mode = IBNBD_ACCESS_MIGRATION;
+			} else {
+				pr_err("map_device: Invalid access_mode:"
+				       " '%s'\n", p);
+				ret = -EINVAL;
+				kfree(p);
+				goto out;
+			}
+
+			kfree(p);
+			break;
+
+		case IBNBD_OPT_IO_MODE:
+			p = match_strdup(args);
+			if (!p) {
+				ret = -ENOMEM;
+				goto out;
+			}
+			if (!strcmp(p, "blockio")) {
+				*io_mode = IBNBD_BLOCKIO;
+			} else if (!strcmp(p, "fileio")) {
+				*io_mode = IBNBD_FILEIO;
+			} else {
+				pr_err("map_device: Invalid io_mode: '%s'.\n",
+				       p);
+				ret = -EINVAL;
+				kfree(p);
+				goto out;
+			}
+			kfree(p);
+			break;
+
+		default:
+			pr_err("map_device: Unknown parameter or missing value"
+			       " '%s'\n", p);
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+
+	for (i = 0; i < ARRAY_SIZE(ibnbd_opt_mandatory); i++) {
+		if ((opt_mask & ibnbd_opt_mandatory[i])) {
+			ret = 0;
+		} else {
+			pr_err("map_device: Parameters missing\n");
+			ret = -EINVAL;
+			break;
+		}
+	}
+
+out:
+	*path_cnt = p_cnt;
+	kfree(options);
+	return ret;
+}
+
+static ssize_t ibnbd_clt_state_show(struct kobject *kobj,
+				    struct kobj_attribute *attr, char *page)
+{
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	switch (dev->dev_state) {
+	case (DEV_STATE_INIT):
+		return scnprintf(page, PAGE_SIZE, "init\n");
+	case (DEV_STATE_MAPPED):
+		/* TODO fix cli tool before changing to proper state */
+		return scnprintf(page, PAGE_SIZE, "open\n");
+	case (DEV_STATE_MAPPED_DISCONNECTED):
+		/* TODO fix cli tool before changing to proper state */
+		return scnprintf(page, PAGE_SIZE, "closed\n");
+	case (DEV_STATE_UNMAPPED):
+		return scnprintf(page, PAGE_SIZE, "unmapped\n");
+	default:
+		return scnprintf(page, PAGE_SIZE, "unknown\n");
+	}
+}
+
+static struct kobj_attribute ibnbd_clt_state_attr =
+	__ATTR(state, 0444, ibnbd_clt_state_show, NULL);
+
+static ssize_t ibnbd_clt_mapping_path_show(struct kobject *kobj,
+					   struct kobj_attribute *attr,
+					   char *page)
+{
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n", dev->pathname);
+}
+
+static struct kobj_attribute ibnbd_clt_mapping_path_attr =
+	__ATTR(mapping_path, 0444, ibnbd_clt_mapping_path_show, NULL);
+
+static ssize_t ibnbd_clt_io_mode_show(struct kobject *kobj,
+				      struct kobj_attribute *attr, char *page)
+{
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n",
+			 ibnbd_io_mode_str(dev->remote_io_mode));
+}
+
+static struct kobj_attribute ibnbd_clt_io_mode =
+	__ATTR(io_mode, 0444, ibnbd_clt_io_mode_show, NULL);
+
+static ssize_t ibnbd_clt_access_mode_show(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  char *page)
+{
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n",
+			 ibnbd_access_mode_str(dev->access_mode));
+}
+
+static struct kobj_attribute ibnbd_clt_access_mode =
+	__ATTR(access_mode, 0444, ibnbd_clt_access_mode_show, NULL);
+
+static ssize_t ibnbd_clt_unmap_dev_show(struct kobject *kobj,
+					struct kobj_attribute *attr, char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo <normal|force> > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibnbd_clt_unmap_dev_store(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct ibnbd_clt_dev *dev;
+	char *opt, *options;
+	bool force;
+	int err;
+
+	opt = kstrdup(buf, GFP_KERNEL);
+	if (!opt)
+		return -ENOMEM;
+
+	options = strstrip(opt);
+	strip(options);
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	if (sysfs_streq(options, "normal")) {
+		force = false;
+	} else if (sysfs_streq(options, "force")) {
+		force = true;
+	} else {
+		ibnbd_err(dev, "unmap_device: Invalid value: %s\n", options);
+		err = -EINVAL;
+		goto out;
+	}
+
+	ibnbd_info(dev, "Unmapping device, option: %s.\n",
+		   force ? "force" : "normal");
+
+	/*
+	 * We take an explicit module reference for only one reason: to avoid
+	 * racing with the lockless ibnbd_destroy_sessions().
+	 */
+	if (!try_module_get(THIS_MODULE)) {
+		err = -ENODEV;
+		goto out;
+	}
+	err = ibnbd_clt_unmap_device(dev, force, &attr->attr);
+	if (unlikely(err)) {
+		if (unlikely(err != -EALREADY))
+			ibnbd_err(dev, "unmap_device: %d\n", err);
+		goto module_put;
+	}
+
+	/*
+	 * At this point the device may already be gone.
+	 */
+
+	err = count;
+
+module_put:
+	module_put(THIS_MODULE);
+out:
+	kfree(opt);
+
+	return err;
+}
+
+static struct kobj_attribute ibnbd_clt_unmap_device_attr =
+	__ATTR(unmap_device, 0644, ibnbd_clt_unmap_dev_show,
+	       ibnbd_clt_unmap_dev_store);
+
+static ssize_t ibnbd_clt_resize_dev_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *page)
+{
+	return scnprintf(page, PAGE_SIZE,
+			 "Usage: echo <new size in sectors> > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibnbd_clt_resize_dev_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	int ret;
+	unsigned long sectors;
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	ret = kstrtoul(buf, 0, &sectors);
+	if (ret)
+		return ret;
+
+	ret = ibnbd_clt_resize_disk(dev, (size_t)sectors);
+	if (ret)
+		return ret;
+
+	return count;
+}
+
+static struct kobj_attribute ibnbd_clt_resize_dev_attr =
+	__ATTR(resize, 0644, ibnbd_clt_resize_dev_show,
+	       ibnbd_clt_resize_dev_store);
+
+static ssize_t ibnbd_clt_remap_dev_show(struct kobject *kobj,
+					struct kobj_attribute *attr, char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo <1> > %s\n",
+			 attr->attr.name);
+}
+
+static ssize_t ibnbd_clt_remap_dev_store(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct ibnbd_clt_dev *dev;
+	char *opt, *options;
+	int err;
+
+	opt = kstrdup(buf, GFP_KERNEL);
+	if (!opt)
+		return -ENOMEM;
+
+	options = strstrip(opt);
+	strip(options);
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+	if (!sysfs_streq(options, "1")) {
+		ibnbd_err(dev, "remap_device: Invalid value: %s\n", options);
+		err = -EINVAL;
+		goto out;
+	}
+	err = ibnbd_clt_remap_device(dev);
+	if (likely(!err))
+		err = count;
+
+out:
+	kfree(opt);
+
+	return err;
+}
+
+static struct kobj_attribute ibnbd_clt_remap_device_attr =
+	__ATTR(remap_device, 0644, ibnbd_clt_remap_dev_show,
+	       ibnbd_clt_remap_dev_store);
+
+static ssize_t ibnbd_clt_session_show(struct kobject *kobj,
+				      struct kobj_attribute *attr,
+				      char *page)
+{
+	struct ibnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n", dev->sess->sessname);
+}
+
+static struct kobj_attribute ibnbd_clt_session_attr =
+	__ATTR(session, 0444, ibnbd_clt_session_show, NULL);
+
+static struct attribute *ibnbd_dev_attrs[] = {
+	&ibnbd_clt_unmap_device_attr.attr,
+	&ibnbd_clt_resize_dev_attr.attr,
+	&ibnbd_clt_remap_device_attr.attr,
+	&ibnbd_clt_mapping_path_attr.attr,
+	&ibnbd_clt_state_attr.attr,
+	&ibnbd_clt_session_attr.attr,
+	&ibnbd_clt_io_mode.attr,
+	&ibnbd_clt_access_mode.attr,
+	NULL,
+};
+
+void ibnbd_clt_remove_dev_symlink(struct ibnbd_clt_dev *dev)
+{
+	/*
+	 * The module_is_live() check is crucial and helps to avoid an
+	 * annoying sysfs warning raised in sysfs_remove_link() when the whole
+	 * sysfs path has just been removed, see ibnbd_destroy_sessions().
+	 */
+	if (strlen(dev->blk_symlink_name) && module_is_live(THIS_MODULE))
+		sysfs_remove_link(ibnbd_devs_kobj, dev->blk_symlink_name);
+}
+
+static struct kobj_type ibnbd_dev_ktype = {
+	.sysfs_ops      = &kobj_sysfs_ops,
+	.default_attrs  = ibnbd_dev_attrs,
+};
+
+static int ibnbd_clt_add_dev_kobj(struct ibnbd_clt_dev *dev)
+{
+	int ret;
+	struct kobject *gd_kobj = &disk_to_dev(dev->gd)->kobj;
+
+	ret = kobject_init_and_add(&dev->kobj, &ibnbd_dev_ktype, gd_kobj, "%s",
+				   "ibnbd");
+	if (ret)
+		ibnbd_err(dev, "Failed to create device sysfs dir, err: %d\n",
+			  ret);
+
+	return ret;
+}
+
+static ssize_t ibnbd_clt_map_device_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "Usage: echo \""
+			 "sessname=<name of the ibtrs session>"
+			 " path=<[srcaddr,]dstaddr>"
+			 " [path=<[srcaddr,]dstaddr>]"
+			 " device_path=<full path on remote side>"
+			 " [access_mode=<ro|rw|migration>]"
+			 " [io_mode=<fileio|blockio>]\" > %s\n\n"
+			 "addr ::= [ ip:<ipv4> | ip:<ipv6> | gid:<gid> ]\n",
+			 attr->attr.name);
+}
+
+static int ibnbd_clt_get_path_name(struct ibnbd_clt_dev *dev, char *buf,
+				   size_t len)
+{
+	int ret;
+	char pathname[NAME_MAX], *s;
+
+	strlcpy(pathname, dev->pathname, sizeof(pathname));
+	while ((s = strchr(pathname, '/')))
+		s[0] = '!';
+
+	ret = snprintf(buf, len, "%s", pathname);
+	if (ret >= len)
+		return -ENAMETOOLONG;
+
+	return 0;
+}
+
+static int ibnbd_clt_add_dev_symlink(struct ibnbd_clt_dev *dev)
+{
+	struct kobject *gd_kobj = &disk_to_dev(dev->gd)->kobj;
+	int ret;
+
+	ret = ibnbd_clt_get_path_name(dev, dev->blk_symlink_name,
+				      sizeof(dev->blk_symlink_name));
+	if (ret) {
+		ibnbd_err(dev, "Failed to get /sys/block symlink path, err: %d\n",
+			  ret);
+		goto out_err;
+	}
+
+	ret = sysfs_create_link(ibnbd_devs_kobj, gd_kobj,
+				dev->blk_symlink_name);
+	if (ret) {
+		ibnbd_err(dev, "Creating /sys/block symlink failed, err: %d\n",
+			  ret);
+		goto out_err;
+	}
+
+	return 0;
+
+out_err:
+	dev->blk_symlink_name[0] = '\0';
+	return ret;
+}
+
+static ssize_t ibnbd_clt_map_device_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	struct ibnbd_clt_dev *dev;
+	int ret;
+	char pathname[NAME_MAX];
+	char sessname[NAME_MAX];
+	enum ibnbd_access_mode access_mode = IBNBD_ACCESS_RW;
+	enum ibnbd_io_mode io_mode = IBNBD_AUTOIO;
+
+	struct sockaddr_storage *addrs;
+	struct ibtrs_addr paths[6];
+	size_t path_cnt;
+
+	addrs = kcalloc(ARRAY_SIZE(paths) * 2, sizeof(*addrs), GFP_KERNEL);
+	if (!addrs)
+		return -ENOMEM;
+
+	for (path_cnt = 0; path_cnt < ARRAY_SIZE(paths); path_cnt++) {
+		paths[path_cnt].src = &addrs[path_cnt * 2];
+		paths[path_cnt].dst = &addrs[path_cnt * 2 + 1];
+	}
+
+	ret = ibnbd_clt_parse_map_options(buf, sessname, paths,
+					  &path_cnt, ARRAY_SIZE(paths),
+					  pathname, &access_mode, &io_mode);
+	if (ret)
+		goto out;
+
+	pr_info("Mapping device %s on session %s, (access_mode: %s, "
+		"io_mode: %s)\n", pathname, sessname,
+		ibnbd_access_mode_str(access_mode), ibnbd_io_mode_str(io_mode));
+
+	dev = ibnbd_clt_map_device(sessname, paths, path_cnt, pathname,
+				   access_mode, io_mode);
+	if (unlikely(IS_ERR(dev))) {
+		ret = PTR_ERR(dev);
+		goto out;
+	}
+
+	ret = ibnbd_clt_add_dev_kobj(dev);
+	if (unlikely(ret))
+		goto unmap_dev;
+
+	ret = ibnbd_clt_add_dev_symlink(dev);
+	if (ret)
+		goto unmap_dev;
+
+	kfree(addrs);
+	return count;
+
+unmap_dev:
+	ibnbd_clt_unmap_device(dev, true, NULL);
+out:
+	kfree(addrs);
+	return ret;
+}
+
+static struct kobj_attribute ibnbd_clt_map_device_attr =
+	__ATTR(map_device, 0644,
+	       ibnbd_clt_map_device_show, ibnbd_clt_map_device_store);
+
+static struct attribute *default_attrs[] = {
+	&ibnbd_clt_map_device_attr.attr,
+	NULL,
+};
+
+static struct attribute_group default_attr_group = {
+	.attrs = default_attrs,
+};
+
+int ibnbd_clt_create_sysfs_files(void)
+{
+	int err;
+
+	ibnbd_dev_class = class_create(THIS_MODULE, "ibnbd-client");
+	if (unlikely(IS_ERR(ibnbd_dev_class)))
+		return PTR_ERR(ibnbd_dev_class);
+
+	ibnbd_dev = device_create(ibnbd_dev_class, NULL,
+				  MKDEV(0, 0), NULL, "ctl");
+	if (unlikely(IS_ERR(ibnbd_dev))) {
+		err = PTR_ERR(ibnbd_dev);
+		goto cls_destroy;
+	}
+	ibnbd_devs_kobj = kobject_create_and_add("devices", &ibnbd_dev->kobj);
+	if (unlikely(!ibnbd_devs_kobj)) {
+		err = -ENOMEM;
+		goto dev_destroy;
+	}
+	err = sysfs_create_group(&ibnbd_dev->kobj, &default_attr_group);
+	if (unlikely(err))
+		goto put_devs_kobj;
+
+	return 0;
+
+put_devs_kobj:
+	kobject_del(ibnbd_devs_kobj);
+	kobject_put(ibnbd_devs_kobj);
+dev_destroy:
+	device_destroy(ibnbd_dev_class, MKDEV(0, 0));
+cls_destroy:
+	class_destroy(ibnbd_dev_class);
+
+	return err;
+}
+
+void ibnbd_clt_destroy_default_group(void)
+{
+	sysfs_remove_group(&ibnbd_dev->kobj, &default_attr_group);
+}
+
+void ibnbd_clt_destroy_sysfs_files(void)
+{
+	kobject_del(ibnbd_devs_kobj);
+	kobject_put(ibnbd_devs_kobj);
+	device_destroy(ibnbd_dev_class, MKDEV(0, 0));
+	class_destroy(ibnbd_dev_class);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 19/25] ibnbd: server: private header with server structs and functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (17 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 18/25] ibnbd: client: sysfs interface functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-06-20 15:03 ` [PATCH v4 20/25] ibnbd: server: main functionality Jack Wang
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This header describes the main structs and functions used by the
ibnbd-server module, namely the structs for managing sessions from
different clients and the mapped (opened) devices.

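For example, when two client sessions map the same exported device there
is a single ibnbd_srv_dev for that device, one ibnbd_srv_session per
client connection, and one ibnbd_srv_sess_dev per (session, device) pair,
linked both into the session's sess_dev_list and into the device's
sess_dev_list.
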
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-srv.h | 94 +++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-srv.h

diff --git a/drivers/block/ibnbd/ibnbd-srv.h b/drivers/block/ibnbd/ibnbd-srv.h
new file mode 100644
index 000000000000..6e46c3e97bf4
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-srv.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBNBD_SRV_H
+#define IBNBD_SRV_H
+
+#include <linux/types.h>
+#include <linux/idr.h>
+#include <linux/kref.h>
+
+#include "ibtrs.h"
+#include "ibnbd-proto.h"
+#include "ibnbd-log.h"
+
+struct ibnbd_srv_session {
+	/* Entry inside global sess_list */
+	struct list_head        list;
+	struct ibtrs_srv	*ibtrs;
+	char			sessname[NAME_MAX];
+	int			queue_depth;
+	struct bio_set		sess_bio_set;
+
+	rwlock_t                index_lock ____cacheline_aligned;
+	struct idr              index_idr;
+	/* List of struct ibnbd_srv_sess_dev */
+	struct list_head        sess_dev_list;
+	struct mutex		lock;
+	u8			ver;
+};
+
+struct ibnbd_srv_dev {
+	/* Entry inside global dev_list */
+	struct list_head                list;
+	struct kobject                  dev_kobj;
+	struct kobject                  dev_sessions_kobj;
+	struct kref                     kref;
+	char				id[NAME_MAX];
+	/* List of ibnbd_srv_sess_dev structs */
+	struct list_head		sess_dev_list;
+	struct mutex			lock;
+	int				open_write_cnt;
+	enum ibnbd_io_mode		mode;
+};
+
+/* Structure which binds a session and a mapped device (one per open) */
+struct ibnbd_srv_sess_dev {
+	/* Entry inside ibnbd_srv_dev struct */
+	struct list_head		dev_list;
+	/* Entry inside ibnbd_srv_session struct */
+	struct list_head		sess_list;
+	struct ibnbd_dev		*ibnbd_dev;
+	struct ibnbd_srv_session        *sess;
+	struct ibnbd_srv_dev		*dev;
+	struct kobject                  kobj;
+	struct completion		*sysfs_release_compl;
+	u32                             device_id;
+	fmode_t                         open_flags;
+	struct kref			kref;
+	struct completion               *destroy_comp;
+	char				pathname[NAME_MAX];
+	enum ibnbd_access_mode		access_mode;
+};
+
+/* ibnbd-srv-sysfs.c */
+
+int ibnbd_srv_create_dev_sysfs(struct ibnbd_srv_dev *dev,
+			       struct block_device *bdev,
+			       const char *dir_name);
+void ibnbd_srv_destroy_dev_sysfs(struct ibnbd_srv_dev *dev);
+int ibnbd_srv_create_dev_session_sysfs(struct ibnbd_srv_sess_dev *sess_dev);
+void ibnbd_srv_destroy_dev_session_sysfs(struct ibnbd_srv_sess_dev *sess_dev);
+int ibnbd_srv_create_sysfs_files(void);
+void ibnbd_srv_destroy_sysfs_files(void);
+
+#endif /* IBNBD_SRV_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 20/25] ibnbd: server: main functionality
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (18 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 19/25] ibnbd: server: private header with server structs and functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-18 17:41   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev Jack Wang
                   ` (5 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the main functionality of the ibnbd-server module: it handles
IBTRS events and IBNBD protocol requests, such as map (open) or unmap
(close) of a device.  The server side is also responsible for processing
incoming IBTRS IO requests and forwarding them to the locally mapped
devices.

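As a sketch of how the server locates the device to open (the module name
and paths below are illustrative; the behaviour follows
dev_search_path_set() and ibnbd_srv_get_full_path() in this patch):

  modprobe ibnbd_server dev_search_path='/ibnbd-devs/%SESSNAME%'

An open request from session "clt1" for device_path "vol0" is then
resolved to /ibnbd-devs/clt1/vol0 on the server; without %SESSNAME% the
requested device_path is simply appended to dev_search_path.
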
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-srv.c | 945 ++++++++++++++++++++++++++++++++
 1 file changed, 945 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-srv.c

diff --git a/drivers/block/ibnbd/ibnbd-srv.c b/drivers/block/ibnbd/ibnbd-srv.c
new file mode 100644
index 000000000000..07b4ae49fda2
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-srv.c
@@ -0,0 +1,945 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <linux/module.h>
+#include <linux/blkdev.h>
+
+#include "ibnbd-srv.h"
+#include "ibnbd-srv-dev.h"
+
+MODULE_AUTHOR("ibnbd@profitbricks.com");
+MODULE_VERSION(IBNBD_VER_STRING);
+MODULE_DESCRIPTION("InfiniBand Network Block Device Server");
+MODULE_LICENSE("GPL");
+
+#define DEFAULT_DEV_SEARCH_PATH "/"
+
+static char dev_search_path[PATH_MAX] = DEFAULT_DEV_SEARCH_PATH;
+
+static int dev_search_path_set(const char *val, const struct kernel_param *kp)
+{
+	char *dup;
+
+	if (strlen(val) >= sizeof(dev_search_path))
+		return -EINVAL;
+
+	dup = kstrdup(val, GFP_KERNEL);
+	if (!dup)
+		return -ENOMEM;
+
+	if (strlen(dup) && dup[strlen(dup) - 1] == '\n')
+		dup[strlen(dup) - 1] = '\0';
+
+	strlcpy(dev_search_path, dup, sizeof(dev_search_path));
+
+	kfree(dup);
+	pr_info("dev_search_path changed to '%s'\n", dev_search_path);
+
+	return 0;
+}
+
+static struct kparam_string dev_search_path_kparam_str = {
+	.maxlen	= sizeof(dev_search_path),
+	.string	= dev_search_path
+};
+
+static const struct kernel_param_ops dev_search_path_ops = {
+	.set	= dev_search_path_set,
+	.get	= param_get_string,
+};
+
+module_param_cb(dev_search_path, &dev_search_path_ops,
+		&dev_search_path_kparam_str, 0444);
+MODULE_PARM_DESC(dev_search_path, "Sets the dev_search_path."
+		 " When a device is mapped this path is prepended to the"
+		 " device path from the map device operation.  If %SESSNAME%"
+		 " is specified in a path, then device will be searched in a"
+		 " session namespace."
+		 " (default: " DEFAULT_DEV_SEARCH_PATH ")");
+
+static int def_io_mode = IBNBD_BLOCKIO;
+
+static int def_io_mode_set(const char *val, const struct kernel_param *kp)
+{
+	int io_mode, rc;
+
+	rc = kstrtoint(val, 0, &io_mode);
+	if (unlikely(rc))
+		return rc;
+
+	switch (io_mode) {
+	case IBNBD_FILEIO:
+	case IBNBD_BLOCKIO:
+		def_io_mode = io_mode;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+static const struct kernel_param_ops def_io_mode_ops = {
+	.set	= def_io_mode_set,
+	.get	= param_get_int,
+};
+module_param_cb(def_io_mode, &def_io_mode_ops, &def_io_mode, 0444);
+MODULE_PARM_DESC(def_io_mode, "By default, export devices in"
+		 " blockio(" __stringify(_IBNBD_BLOCKIO) ") or"
+		 " fileio(" __stringify(_IBNBD_FILEIO) ") mode."
+		 " (default: " __stringify(_IBNBD_BLOCKIO) " (blockio))");
+
+static DEFINE_MUTEX(sess_lock);
+static DEFINE_SPINLOCK(dev_lock);
+
+static LIST_HEAD(sess_list);
+static LIST_HEAD(dev_list);
+
+struct ibnbd_io_private {
+	struct ibtrs_srv_op		*id;
+	struct ibnbd_srv_sess_dev	*sess_dev;
+};
+
+static void ibnbd_sess_dev_release(struct kref *kref)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	sess_dev = container_of(kref, struct ibnbd_srv_sess_dev, kref);
+	complete(sess_dev->destroy_comp);
+}
+
+static inline void ibnbd_put_sess_dev(struct ibnbd_srv_sess_dev *sess_dev)
+{
+	kref_put(&sess_dev->kref, ibnbd_sess_dev_release);
+}
+
+static void ibnbd_endio(void *priv, int error)
+{
+	struct ibnbd_io_private *ibnbd_priv = priv;
+	struct ibnbd_srv_sess_dev *sess_dev = ibnbd_priv->sess_dev;
+
+	ibnbd_put_sess_dev(sess_dev);
+
+	ibtrs_srv_resp_rdma(ibnbd_priv->id, error);
+
+	kfree(priv);
+}
+
+static struct ibnbd_srv_sess_dev *
+ibnbd_get_sess_dev(int dev_id, struct ibnbd_srv_session *srv_sess)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+	int ret = 0;
+
+	read_lock(&srv_sess->index_lock);
+	sess_dev = idr_find(&srv_sess->index_idr, dev_id);
+	if (likely(sess_dev))
+		ret = kref_get_unless_zero(&sess_dev->kref);
+	read_unlock(&srv_sess->index_lock);
+
+	if (unlikely(!sess_dev || !ret))
+		return ERR_PTR(-ENXIO);
+
+	return sess_dev;
+}
+
+static int process_rdma(struct ibtrs_srv *sess,
+			struct ibnbd_srv_session *srv_sess,
+			struct ibtrs_srv_op *id, void *data, u32 datalen,
+			const void *usr, size_t usrlen)
+{
+	const struct ibnbd_msg_io *msg = usr;
+	struct ibnbd_io_private *priv;
+	struct ibnbd_srv_sess_dev *sess_dev;
+	u32 dev_id;
+	int err;
+
+	priv = kmalloc(sizeof(*priv), GFP_KERNEL);
+	if (unlikely(!priv))
+		return -ENOMEM;
+
+	dev_id = le32_to_cpu(msg->device_id);
+
+	sess_dev = ibnbd_get_sess_dev(dev_id, srv_sess);
+	if (unlikely(IS_ERR(sess_dev))) {
+		pr_err_ratelimited("Got I/O request on session %s for "
+				   "unknown device id %d\n",
+				   srv_sess->sessname, dev_id);
+		err = -ENOTCONN;
+		goto err;
+	}
+
+	priv->sess_dev = sess_dev;
+	priv->id = id;
+
+	err = ibnbd_dev_submit_io(sess_dev->ibnbd_dev, le64_to_cpu(msg->sector),
+				  data, datalen, le32_to_cpu(msg->bi_size),
+				  le32_to_cpu(msg->rw),
+				  srv_sess->ver < IBNBD_PROTO_VER_MAJOR ||
+				  usrlen < sizeof(*msg) ?
+				  0 : le16_to_cpu(msg->prio), priv);
+	if (unlikely(err)) {
+		ibnbd_err(sess_dev,
+			  "Submitting I/O to device failed, err: %d\n", err);
+		goto sess_dev_put;
+	}
+
+	return 0;
+
+sess_dev_put:
+	ibnbd_put_sess_dev(sess_dev);
+err:
+	kfree(priv);
+	return err;
+}
+
+static void destroy_device(struct ibnbd_srv_dev *dev)
+{
+	WARN(!list_empty(&dev->sess_dev_list),
+	     "Device %s is being destroyed but still in use!\n",
+	     dev->id);
+
+	spin_lock(&dev_lock);
+	list_del(&dev->list);
+	spin_unlock(&dev_lock);
+
+	if (dev->dev_kobj.state_in_sysfs)
+		/*
+		 * Destroy the kobj only if it was really created.
+		 * The following call must be synchronous, because
+		 * we free the memory right afterwards.
+		 */
+		ibnbd_srv_destroy_dev_sysfs(dev);
+
+	kfree(dev);
+}
+
+static void destroy_device_cb(struct kref *kref)
+{
+	struct ibnbd_srv_dev *dev;
+
+	dev = container_of(kref, struct ibnbd_srv_dev, kref);
+
+	destroy_device(dev);
+}
+
+static void ibnbd_put_srv_dev(struct ibnbd_srv_dev *dev)
+{
+	kref_put(&dev->kref, destroy_device_cb);
+}
+
+static void ibnbd_destroy_sess_dev(struct ibnbd_srv_sess_dev *sess_dev)
+{
+	DECLARE_COMPLETION_ONSTACK(dc);
+
+	write_lock(&sess_dev->sess->index_lock);
+	idr_remove(&sess_dev->sess->index_idr, sess_dev->device_id);
+	write_unlock(&sess_dev->sess->index_lock);
+
+	sess_dev->destroy_comp = &dc;
+	ibnbd_put_sess_dev(sess_dev);
+	wait_for_completion(&dc);
+
+	ibnbd_dev_close(sess_dev->ibnbd_dev);
+	list_del(&sess_dev->sess_list);
+	mutex_lock(&sess_dev->dev->lock);
+	list_del(&sess_dev->dev_list);
+	if (sess_dev->open_flags & FMODE_WRITE)
+		sess_dev->dev->open_write_cnt--;
+	mutex_unlock(&sess_dev->dev->lock);
+
+	ibnbd_put_srv_dev(sess_dev->dev);
+
+	ibnbd_info(sess_dev, "Device closed\n");
+	kfree(sess_dev);
+}
+
+static void destroy_sess(struct ibnbd_srv_session *srv_sess)
+{
+	struct ibnbd_srv_sess_dev *sess_dev, *tmp;
+
+	if (list_empty(&srv_sess->sess_dev_list))
+		goto out;
+
+	mutex_lock(&srv_sess->lock);
+	list_for_each_entry_safe(sess_dev, tmp, &srv_sess->sess_dev_list,
+				 sess_list) {
+		ibnbd_srv_destroy_dev_session_sysfs(sess_dev);
+		ibnbd_destroy_sess_dev(sess_dev);
+	}
+	mutex_unlock(&srv_sess->lock);
+
+out:
+	idr_destroy(&srv_sess->index_idr);
+	bioset_exit(&srv_sess->sess_bio_set);
+
+	pr_info("IBTRS Session %s disconnected\n", srv_sess->sessname);
+
+	mutex_lock(&sess_lock);
+	list_del(&srv_sess->list);
+	mutex_unlock(&sess_lock);
+
+	kfree(srv_sess);
+}
+
+static int create_sess(struct ibtrs_srv *ibtrs)
+{
+	struct ibnbd_srv_session *srv_sess;
+	char sessname[NAME_MAX];
+	int err;
+
+	err = ibtrs_srv_get_sess_name(ibtrs, sessname, sizeof(sessname));
+	if (unlikely(err)) {
+		pr_err("ibtrs_srv_get_sess_name(%s): %d\n", sessname, err);
+
+		return err;
+	}
+	srv_sess = kzalloc(sizeof(*srv_sess), GFP_KERNEL);
+	if (!srv_sess)
+		return -ENOMEM;
+	srv_sess->queue_depth = ibtrs_srv_get_queue_depth(ibtrs);
+
+	err = bioset_init(&srv_sess->sess_bio_set, srv_sess->queue_depth, 0,
+			  BIOSET_NEED_BVECS);
+	if (err) {
+		pr_err("Allocating srv_session for session %s failed\n",
+		       sessname);
+		kfree(srv_sess);
+		return err;
+	}
+
+	idr_init(&srv_sess->index_idr);
+	rwlock_init(&srv_sess->index_lock);
+	INIT_LIST_HEAD(&srv_sess->sess_dev_list);
+	mutex_init(&srv_sess->lock);
+	mutex_lock(&sess_lock);
+	list_add(&srv_sess->list, &sess_list);
+	mutex_unlock(&sess_lock);
+
+	srv_sess->ibtrs = ibtrs;
+	strlcpy(srv_sess->sessname, sessname, sizeof(srv_sess->sessname));
+
+	ibtrs_srv_set_sess_priv(ibtrs, srv_sess);
+
+	return 0;
+}
+
+static int ibnbd_srv_link_ev(struct ibtrs_srv *ibtrs,
+			     enum ibtrs_srv_link_ev ev, void *priv)
+{
+	struct ibnbd_srv_session *srv_sess = priv;
+
+	switch (ev) {
+	case IBTRS_SRV_LINK_EV_CONNECTED:
+		return create_sess(ibtrs);
+
+	case IBTRS_SRV_LINK_EV_DISCONNECTED:
+		if (WARN_ON(!srv_sess))
+			return -EINVAL;
+
+		destroy_sess(srv_sess);
+		return 0;
+
+	default:
+		pr_warn("Received unknown IBTRS session event %d from session"
+			" %s\n", ev, srv_sess->sessname);
+		return -EINVAL;
+	}
+}
+
+static int process_msg_close(struct ibtrs_srv *ibtrs,
+			     struct ibnbd_srv_session *srv_sess,
+			     void *data, size_t datalen, const void *usr,
+			     size_t usrlen)
+{
+	const struct ibnbd_msg_close *close_msg = usr;
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	sess_dev = ibnbd_get_sess_dev(le32_to_cpu(close_msg->device_id),
+				      srv_sess);
+	if (unlikely(IS_ERR(sess_dev)))
+		return 0;
+
+	ibnbd_srv_destroy_dev_session_sysfs(sess_dev);
+	ibnbd_put_sess_dev(sess_dev);
+	mutex_lock(&srv_sess->lock);
+	ibnbd_destroy_sess_dev(sess_dev);
+	mutex_unlock(&srv_sess->lock);
+	return 0;
+}
+
+static int process_msg_open(struct ibtrs_srv *ibtrs,
+			    struct ibnbd_srv_session *srv_sess,
+			    const void *msg, size_t len,
+			    void *data, size_t datalen);
+
+static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
+				 struct ibnbd_srv_session *srv_sess,
+				 const void *msg, size_t len,
+				 void *data, size_t datalen);
+
+static int ibnbd_srv_rdma_ev(struct ibtrs_srv *ibtrs, void *priv,
+			     struct ibtrs_srv_op *id, int dir,
+			     void *data, size_t datalen, const void *usr,
+			     size_t usrlen)
+{
+	struct ibnbd_srv_session *srv_sess = priv;
+	const struct ibnbd_msg_hdr *hdr = usr;
+	int ret = 0;
+	u16 type;
+
+	if (unlikely(WARN_ON(!srv_sess)))
+		return -ENODEV;
+
+	type = le16_to_cpu(hdr->type);
+
+	switch (type) {
+	case IBNBD_MSG_IO:
+		return process_rdma(ibtrs, srv_sess, id, data, datalen, usr,
+				    usrlen);
+	case IBNBD_MSG_CLOSE:
+		ret = process_msg_close(ibtrs, srv_sess, data, datalen,
+					usr, usrlen);
+		break;
+	case IBNBD_MSG_OPEN:
+		ret = process_msg_open(ibtrs, srv_sess, usr, usrlen,
+				       data, datalen);
+		break;
+	case IBNBD_MSG_SESS_INFO:
+		ret = process_msg_sess_info(ibtrs, srv_sess, usr, usrlen,
+					    data, datalen);
+		break;
+	default:
+		pr_warn("Received unexpected message type %d with dir %d from"
+			" session %s\n", type, dir, srv_sess->sessname);
+		return -EINVAL;
+	}
+
+	ibtrs_srv_resp_rdma(id, ret);
+	return 0;
+}
+
+static struct ibnbd_srv_sess_dev
+*ibnbd_sess_dev_alloc(struct ibnbd_srv_session *srv_sess)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+	int error;
+
+	sess_dev = kzalloc(sizeof(*sess_dev), GFP_KERNEL);
+	if (!sess_dev)
+		return ERR_PTR(-ENOMEM);
+
+	idr_preload(GFP_KERNEL);
+	write_lock(&srv_sess->index_lock);
+
+	error = idr_alloc(&srv_sess->index_idr, sess_dev, 0, -1, GFP_NOWAIT);
+	if (error < 0) {
+		pr_warn("Allocating idr failed, err: %d\n", error);
+		goto out_unlock;
+	}
+
+	sess_dev->device_id = error;
+	error = 0;
+
+out_unlock:
+	write_unlock(&srv_sess->index_lock);
+	idr_preload_end();
+	if (error) {
+		kfree(sess_dev);
+		return ERR_PTR(error);
+	}
+
+	return sess_dev;
+}
+
+static struct ibnbd_srv_dev *ibnbd_srv_init_srv_dev(const char *id,
+						    enum ibnbd_io_mode mode)
+{
+	struct ibnbd_srv_dev *dev;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return ERR_PTR(-ENOMEM);
+
+	strlcpy(dev->id, id, sizeof(dev->id));
+	dev->mode = mode;
+	kref_init(&dev->kref);
+	INIT_LIST_HEAD(&dev->sess_dev_list);
+	mutex_init(&dev->lock);
+
+	return dev;
+}
+
+static struct ibnbd_srv_dev *
+ibnbd_srv_find_or_add_srv_dev(struct ibnbd_srv_dev *new_dev)
+{
+	struct ibnbd_srv_dev *dev;
+
+	spin_lock(&dev_lock);
+	list_for_each_entry(dev, &dev_list, list) {
+		if (!strncmp(dev->id, new_dev->id, sizeof(dev->id))) {
+			if (!kref_get_unless_zero(&dev->kref))
+				/*
+				 * We lost the race, device is almost dead.
+				 *  Continue traversing to find a valid one.
+				 */
+				continue;
+			spin_unlock(&dev_lock);
+			return dev;
+		}
+	}
+	list_add(&new_dev->list, &dev_list);
+	spin_unlock(&dev_lock);
+
+	return new_dev;
+}
+
+static int ibnbd_srv_check_update_open_perm(struct ibnbd_srv_dev *srv_dev,
+					    struct ibnbd_srv_session *srv_sess,
+					    enum ibnbd_io_mode io_mode,
+					    enum ibnbd_access_mode access_mode)
+{
+	int ret = -EPERM;
+
+	mutex_lock(&srv_dev->lock);
+
+	if (srv_dev->mode != io_mode) {
+		pr_err("Mapping device '%s' for session %s in %s mode forbidden,"
+		       " device is already mapped from other client(s) in"
+		       " %s mode\n", srv_dev->id, srv_sess->sessname,
+		       ibnbd_io_mode_str(io_mode),
+		       ibnbd_io_mode_str(srv_dev->mode));
+		goto out;
+	}
+
+	switch (access_mode) {
+	case IBNBD_ACCESS_RO:
+		ret = 0;
+		break;
+	case IBNBD_ACCESS_RW:
+		if (srv_dev->open_write_cnt == 0) {
+			srv_dev->open_write_cnt++;
+			ret = 0;
+		} else {
+			pr_err("Mapping device '%s' for session %s with"
+			       " RW permissions failed. Device already opened"
+			       " as 'RW' by %d client(s) in %s mode.\n",
+			       srv_dev->id, srv_sess->sessname,
+			       srv_dev->open_write_cnt,
+			       ibnbd_io_mode_str(srv_dev->mode));
+		}
+		break;
+	case IBNBD_ACCESS_MIGRATION:
+		if (srv_dev->open_write_cnt < 2) {
+			srv_dev->open_write_cnt++;
+			ret = 0;
+		} else {
+			pr_err("Mapping device '%s' for session %s with"
+			       " migration permissions failed. Device already"
+			       " opened as 'RW' by %d client(s) in %s mode.\n",
+			       srv_dev->id, srv_sess->sessname,
+			       srv_dev->open_write_cnt,
+			       ibnbd_io_mode_str(srv_dev->mode));
+		}
+		break;
+	default:
+		pr_err("Received mapping request for device '%s' on session %s"
+		       " with invalid access mode: %d\n", srv_dev->id,
+		       srv_sess->sessname, access_mode);
+		ret = -EINVAL;
+	}
+
+out:
+	mutex_unlock(&srv_dev->lock);
+
+	return ret;
+}
+
+static struct ibnbd_srv_dev *
+ibnbd_srv_get_or_create_srv_dev(struct ibnbd_dev *ibnbd_dev,
+				struct ibnbd_srv_session *srv_sess,
+				enum ibnbd_io_mode io_mode,
+				enum ibnbd_access_mode access_mode)
+{
+	int ret;
+	struct ibnbd_srv_dev *new_dev, *dev;
+
+	new_dev = ibnbd_srv_init_srv_dev(ibnbd_dev->name, io_mode);
+	if (IS_ERR(new_dev))
+		return new_dev;
+
+	dev = ibnbd_srv_find_or_add_srv_dev(new_dev);
+	if (dev != new_dev)
+		kfree(new_dev);
+
+	ret = ibnbd_srv_check_update_open_perm(dev, srv_sess, io_mode,
+					       access_mode);
+	if (ret) {
+		ibnbd_put_srv_dev(dev);
+		return ERR_PTR(ret);
+	}
+
+	return dev;
+}
+
+static void ibnbd_srv_fill_msg_open_rsp(struct ibnbd_msg_open_rsp *rsp,
+					struct ibnbd_srv_sess_dev *sess_dev)
+{
+	struct ibnbd_dev *ibnbd_dev = sess_dev->ibnbd_dev;
+
+	rsp->hdr.type = cpu_to_le16(IBNBD_MSG_OPEN_RSP);
+	rsp->device_id =
+		cpu_to_le32(sess_dev->device_id);
+	rsp->nsectors =
+		cpu_to_le64(get_capacity(ibnbd_dev->bdev->bd_disk));
+	rsp->logical_block_size	=
+		cpu_to_le16(ibnbd_dev_get_logical_bsize(ibnbd_dev));
+	rsp->physical_block_size =
+		cpu_to_le16(ibnbd_dev_get_phys_bsize(ibnbd_dev));
+	rsp->max_segments =
+		cpu_to_le16(ibnbd_dev_get_max_segs(ibnbd_dev));
+	rsp->max_hw_sectors =
+		cpu_to_le32(ibnbd_dev_get_max_hw_sects(ibnbd_dev));
+	rsp->max_write_same_sectors =
+		cpu_to_le32(ibnbd_dev_get_max_write_same_sects(ibnbd_dev));
+	rsp->max_discard_sectors =
+		cpu_to_le32(ibnbd_dev_get_max_discard_sects(ibnbd_dev));
+	rsp->discard_granularity =
+		cpu_to_le32(ibnbd_dev_get_discard_granularity(ibnbd_dev));
+	rsp->discard_alignment =
+		cpu_to_le32(ibnbd_dev_get_discard_alignment(ibnbd_dev));
+	rsp->secure_discard =
+		cpu_to_le16(ibnbd_dev_get_secure_discard(ibnbd_dev));
+	rsp->rotational =
+		!blk_queue_nonrot(bdev_get_queue(ibnbd_dev->bdev));
+	rsp->io_mode =
+		ibnbd_dev->mode;
+}
+
+static struct ibnbd_srv_sess_dev *
+ibnbd_srv_create_set_sess_dev(struct ibnbd_srv_session *srv_sess,
+			      const struct ibnbd_msg_open *open_msg,
+			      struct ibnbd_dev *ibnbd_dev, fmode_t open_flags,
+			      struct ibnbd_srv_dev *srv_dev)
+{
+	struct ibnbd_srv_sess_dev *sdev = ibnbd_sess_dev_alloc(srv_sess);
+
+	if (IS_ERR(sdev))
+		return sdev;
+
+	kref_init(&sdev->kref);
+
+	strlcpy(sdev->pathname, open_msg->dev_name, sizeof(sdev->pathname));
+
+	sdev->ibnbd_dev		= ibnbd_dev;
+	sdev->sess		= srv_sess;
+	sdev->dev		= srv_dev;
+	sdev->open_flags	= open_flags;
+	sdev->access_mode	= open_msg->access_mode;
+
+	return sdev;
+}
+
+static char *ibnbd_srv_get_full_path(struct ibnbd_srv_session *srv_sess,
+				     const char *dev_name)
+{
+	char *full_path;
+	char *a, *b;
+
+	full_path = kmalloc(PATH_MAX, GFP_KERNEL);
+	if (!full_path)
+		return ERR_PTR(-ENOMEM);
+
+	/*
+	 * Replace %SESSNAME% with a real session name in order to
+	 * create device namespace.
+	 */
+	a = strnstr(dev_search_path, "%SESSNAME%", sizeof(dev_search_path));
+	if (a) {
+		int len = a - dev_search_path;
+
+		len = snprintf(full_path, PATH_MAX, "%.*s/%s/%s", len,
+			       dev_search_path, srv_sess->sessname, dev_name);
+		if (len >= PATH_MAX) {
+			pr_err("Path too long: %s, %s, %s\n",
+			       dev_search_path, srv_sess->sessname, dev_name);
+			kfree(full_path);
+			return ERR_PTR(-EINVAL);
+		}
+	} else {
+		snprintf(full_path, PATH_MAX, "%s/%s",
+			 dev_search_path, dev_name);
+	}
+
+	/* eliminate duplicated slashes */
+	a = strchr(full_path, '/');
+	b = a;
+	while (*b != '\0') {
+		if (*b == '/' && *a == '/') {
+			b++;
+		} else {
+			a++;
+			*a = *b;
+			b++;
+		}
+	}
+	a++;
+	*a = '\0';
+
+	return full_path;
+}
+
+static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
+				 struct ibnbd_srv_session *srv_sess,
+				 const void *msg, size_t len,
+				 void *data, size_t datalen)
+{
+	const struct ibnbd_msg_sess_info *sess_info_msg = msg;
+	struct ibnbd_msg_sess_info_rsp *rsp = data;
+
+	srv_sess->ver = min_t(u8, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
+	pr_debug("Session %s using protocol version %d (client version: %d,"
+		 " server version: %d)\n", srv_sess->sessname,
+		 srv_sess->ver, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
+
+	rsp->hdr.type = cpu_to_le16(IBNBD_MSG_SESS_INFO_RSP);
+	rsp->ver = srv_sess->ver;
+
+	return 0;
+}
+
+/**
+ * find_srv_sess_dev() - check whether a device is already opened by a session
+ * @srv_sess: server session to look the device up in
+ * @dev_name: path of the device as requested by the client
+ *
+ * Return: the ibnbd_srv_sess_dev if srv_sess has already opened dev_name,
+ * NULL if the session didn't open the device yet.
+ */
+static struct ibnbd_srv_sess_dev *
+find_srv_sess_dev(struct ibnbd_srv_session *srv_sess, const char *dev_name)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	if (list_empty(&srv_sess->sess_dev_list))
+		return NULL;
+
+	list_for_each_entry(sess_dev, &srv_sess->sess_dev_list, sess_list)
+		if (!strcmp(sess_dev->pathname, dev_name))
+			return sess_dev;
+
+	return NULL;
+}
+
+static int process_msg_open(struct ibtrs_srv *ibtrs,
+			    struct ibnbd_srv_session *srv_sess,
+			    const void *msg, size_t len,
+			    void *data, size_t datalen)
+{
+	int ret;
+	struct ibnbd_srv_dev *srv_dev;
+	struct ibnbd_srv_sess_dev *srv_sess_dev;
+	const struct ibnbd_msg_open *open_msg = msg;
+	fmode_t open_flags;
+	char *full_path;
+	struct ibnbd_dev *ibnbd_dev;
+	enum ibnbd_io_mode io_mode;
+	struct ibnbd_msg_open_rsp *rsp = data;
+
+	pr_debug("Open message received: session='%s' path='%s' access_mode=%d"
+		 " io_mode=%d\n", srv_sess->sessname, open_msg->dev_name,
+		 open_msg->access_mode, open_msg->io_mode);
+	open_flags = FMODE_READ;
+	if (open_msg->access_mode != IBNBD_ACCESS_RO)
+		open_flags |= FMODE_WRITE;
+
+	mutex_lock(&srv_sess->lock);
+
+	srv_sess_dev = find_srv_sess_dev(srv_sess, open_msg->dev_name);
+	if (srv_sess_dev)
+		goto fill_response;
+
+	if ((strlen(dev_search_path) + strlen(open_msg->dev_name))
+	    >= PATH_MAX) {
+		pr_err("Opening device for session %s failed, device path too"
+		       " long. '%s/%s' is longer than PATH_MAX (%d)\n",
+		       srv_sess->sessname, dev_search_path, open_msg->dev_name,
+		       PATH_MAX);
+		ret = -EINVAL;
+		goto reject;
+	}
+	full_path = ibnbd_srv_get_full_path(srv_sess, open_msg->dev_name);
+	if (IS_ERR(full_path)) {
+		ret = PTR_ERR(full_path);
+		pr_err("Opening device '%s' for client %s failed,"
+		       " failed to get device full path, err: %d\n",
+		       open_msg->dev_name, srv_sess->sessname, ret);
+		goto reject;
+	}
+
+	if (open_msg->io_mode == IBNBD_BLOCKIO)
+		io_mode = IBNBD_BLOCKIO;
+	else if (open_msg->io_mode == IBNBD_FILEIO)
+		io_mode = IBNBD_FILEIO;
+	else
+		io_mode = def_io_mode;
+
+	ibnbd_dev = ibnbd_dev_open(full_path, open_flags, io_mode,
+				   &srv_sess->sess_bio_set, ibnbd_endio);
+	if (IS_ERR(ibnbd_dev)) {
+		pr_err("Opening device '%s' on session %s failed,"
+		       " failed to open the block device, err: %ld\n",
+		       full_path, srv_sess->sessname, PTR_ERR(ibnbd_dev));
+		ret = PTR_ERR(ibnbd_dev);
+		goto free_path;
+	}
+
+	srv_dev = ibnbd_srv_get_or_create_srv_dev(ibnbd_dev, srv_sess, io_mode,
+						  open_msg->access_mode);
+	if (IS_ERR(srv_dev)) {
+		pr_err("Opening device '%s' on session %s failed,"
+		       " creating srv_dev failed, err: %ld\n",
+		       full_path, srv_sess->sessname, PTR_ERR(srv_dev));
+		ret = PTR_ERR(srv_dev);
+		goto ibnbd_dev_close;
+	}
+
+	srv_sess_dev = ibnbd_srv_create_set_sess_dev(srv_sess, open_msg,
+						     ibnbd_dev, open_flags,
+						     srv_dev);
+	if (IS_ERR(srv_sess_dev)) {
+		pr_err("Opening device '%s' on session %s failed,"
+		       " creating sess_dev failed, err: %ld\n",
+		       full_path, srv_sess->sessname, PTR_ERR(srv_sess_dev));
+		ret = PTR_ERR(srv_sess_dev);
+		goto srv_dev_put;
+	}
+
+	/*
+	 * Create the srv_dev sysfs files if they haven't been created yet.
+	 * The creation is delayed until we are sure the device can actually
+	 * be opened.
+	 */
+	mutex_lock(&srv_dev->lock);
+	if (!srv_dev->dev_kobj.state_in_sysfs) {
+		ret = ibnbd_srv_create_dev_sysfs(srv_dev, ibnbd_dev->bdev,
+						 ibnbd_dev->name);
+		if (ret) {
+			mutex_unlock(&srv_dev->lock);
+			ibnbd_err(srv_sess_dev, "Opening device failed, failed to"
+				  " create device sysfs files, err: %d\n",
+				  ret);
+			goto free_srv_sess_dev;
+		}
+	}
+
+	ret = ibnbd_srv_create_dev_session_sysfs(srv_sess_dev);
+	if (ret) {
+		mutex_unlock(&srv_dev->lock);
+		ibnbd_err(srv_sess_dev, "Opening device failed, failed to create"
+			  " dev client sysfs files, err: %d\n", ret);
+		goto free_srv_sess_dev;
+	}
+
+	list_add(&srv_sess_dev->dev_list, &srv_dev->sess_dev_list);
+	mutex_unlock(&srv_dev->lock);
+
+	list_add(&srv_sess_dev->sess_list, &srv_sess->sess_dev_list);
+
+	ibnbd_info(srv_sess_dev, "Opened device '%s' in %s mode\n",
+		   srv_dev->id, ibnbd_io_mode_str(io_mode));
+
+	kfree(full_path);
+
+fill_response:
+	ibnbd_srv_fill_msg_open_rsp(rsp, srv_sess_dev);
+	mutex_unlock(&srv_sess->lock);
+	return 0;
+
+free_srv_sess_dev:
+	write_lock(&srv_sess->index_lock);
+	idr_remove(&srv_sess->index_idr, srv_sess_dev->device_id);
+	write_unlock(&srv_sess->index_lock);
+	kfree(srv_sess_dev);
+srv_dev_put:
+	if (open_msg->access_mode != IBNBD_ACCESS_RO) {
+		mutex_lock(&srv_dev->lock);
+		srv_dev->open_write_cnt--;
+		mutex_unlock(&srv_dev->lock);
+	}
+	ibnbd_put_srv_dev(srv_dev);
+ibnbd_dev_close:
+	ibnbd_dev_close(ibnbd_dev);
+free_path:
+	kfree(full_path);
+reject:
+	mutex_unlock(&srv_sess->lock);
+	return ret;
+}
+
+static struct ibtrs_srv_ctx *ibtrs_ctx;
+
+static int __init ibnbd_srv_init_module(void)
+{
+	int err;
+
+	pr_info("Loading module %s, version %s, proto %s\n",
+		KBUILD_MODNAME, IBNBD_VER_STRING, IBNBD_PROTO_VER_STRING);
+
+	ibtrs_ctx = ibtrs_srv_open(ibnbd_srv_rdma_ev, ibnbd_srv_link_ev,
+				   IBTRS_PORT);
+	if (unlikely(IS_ERR(ibtrs_ctx))) {
+		err = PTR_ERR(ibtrs_ctx);
+		pr_err("ibtrs_srv_open(), err: %d\n", err);
+		goto out;
+	}
+	err = ibnbd_dev_init();
+	if (err) {
+		pr_err("ibnbd_dev_init(), err: %d\n", err);
+		goto srv_close;
+	}
+
+	err = ibnbd_srv_create_sysfs_files();
+	if (err) {
+		pr_err("ibnbd_srv_create_sysfs_files(), err: %d\n", err);
+		goto dev_destroy;
+	}
+
+	return 0;
+
+dev_destroy:
+	ibnbd_dev_destroy();
+srv_close:
+	ibtrs_srv_close(ibtrs_ctx);
+out:
+
+	return err;
+}
+
+static void __exit ibnbd_srv_cleanup_module(void)
+{
+	ibtrs_srv_close(ibtrs_ctx);
+	WARN_ON(!list_empty(&sess_list));
+	ibnbd_srv_destroy_sysfs_files();
+	ibnbd_dev_destroy();
+	pr_info("Module unloaded\n");
+}
+
+module_init(ibnbd_srv_init_module);
+module_exit(ibnbd_srv_cleanup_module);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (19 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 20/25] ibnbd: server: main functionality Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-18 21:46   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 22/25] ibnbd: server: sysfs interface functions Jack Wang
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This provides helper functions for submitting IO to either a file or a
block device.

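For example, the server (ibnbd-srv.c) opens a device with ibnbd_dev_open()
while processing an open message and submits each incoming request with
ibnbd_dev_submit_io(). In blockio mode the helpers map the RDMA buffer
into a bio (ibnbd_bio_map_kern()) and call submit_bio(); in fileio mode
the request is deferred to the ibnbd_server_fileio_wq workqueue and served
via the VFS. Completion is reported back through the io_cb callback passed
to ibnbd_dev_open() (ibnbd_endio() on the server side).
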
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-srv-dev.c | 408 ++++++++++++++++++++++++++++
 drivers/block/ibnbd/ibnbd-srv-dev.h | 143 ++++++++++
 2 files changed, 551 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-srv-dev.c
 create mode 100644 drivers/block/ibnbd/ibnbd-srv-dev.h

diff --git a/drivers/block/ibnbd/ibnbd-srv-dev.c b/drivers/block/ibnbd/ibnbd-srv-dev.c
new file mode 100644
index 000000000000..5c1a518638b2
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-srv-dev.c
@@ -0,0 +1,408 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include "ibnbd-srv-dev.h"
+#include "ibnbd-log.h"
+
+#define IBNBD_DEV_MAX_FILEIO_ACTIVE_WORKERS 0
+
+struct ibnbd_dev_file_io_work {
+	struct ibnbd_dev	*dev;
+	void			*priv;
+
+	sector_t		sector;
+	void			*data;
+	size_t			len;
+	size_t			bi_size;
+	enum ibnbd_io_flags	flags;
+
+	struct work_struct	work;
+};
+
+struct ibnbd_dev_blk_io {
+	struct ibnbd_dev *dev;
+	void		 *priv;
+};
+
+static struct workqueue_struct *fileio_wq;
+
+int ibnbd_dev_init(void)
+{
+	fileio_wq = alloc_workqueue("%s", WQ_UNBOUND,
+				    IBNBD_DEV_MAX_FILEIO_ACTIVE_WORKERS,
+				    "ibnbd_server_fileio_wq");
+	if (!fileio_wq)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void ibnbd_dev_destroy(void)
+{
+	destroy_workqueue(fileio_wq);
+}
+
+static inline struct block_device *ibnbd_dev_open_bdev(const char *path,
+						       fmode_t flags)
+{
+	return blkdev_get_by_path(path, flags, THIS_MODULE);
+}
+
+static int ibnbd_dev_blk_open(struct ibnbd_dev *dev, const char *path,
+			      fmode_t flags)
+{
+	dev->bdev = ibnbd_dev_open_bdev(path, flags);
+	return PTR_ERR_OR_ZERO(dev->bdev);
+}
+
+static int ibnbd_dev_vfs_open(struct ibnbd_dev *dev, const char *path,
+			      fmode_t flags)
+{
+	int oflags = O_DSYNC; /* enable write-through */
+
+	if (flags & FMODE_WRITE)
+		oflags |= O_RDWR;
+	else if (flags & FMODE_READ)
+		oflags |= O_RDONLY;
+	else
+		return -EINVAL;
+
+	dev->file = filp_open(path, oflags, 0);
+	return PTR_ERR_OR_ZERO(dev->file);
+}
+
+struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
+				 enum ibnbd_io_mode mode, struct bio_set *bs,
+				 ibnbd_dev_io_fn io_cb)
+{
+	struct ibnbd_dev *dev;
+	int ret;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return ERR_PTR(-ENOMEM);
+
+	if (mode == IBNBD_BLOCKIO) {
+		dev->blk_open_flags = flags;
+		ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
+		if (ret)
+			goto err;
+	} else if (mode == IBNBD_FILEIO) {
+		dev->blk_open_flags = FMODE_READ;
+		ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
+		if (ret)
+			goto err;
+
+		ret = ibnbd_dev_vfs_open(dev, path, flags);
+		if (ret)
+			goto blk_put;
+	} else {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	dev->blk_open_flags	= flags;
+	dev->mode		= mode;
+	dev->io_cb		= io_cb;
+	bdevname(dev->bdev, dev->name);
+	dev->ibd_bio_set	= bs;
+
+	return dev;
+
+blk_put:
+	blkdev_put(dev->bdev, dev->blk_open_flags);
+err:
+	kfree(dev);
+	return ERR_PTR(ret);
+}
+
+void ibnbd_dev_close(struct ibnbd_dev *dev)
+{
+	flush_workqueue(fileio_wq);
+	blkdev_put(dev->bdev, dev->blk_open_flags);
+	if (dev->mode == IBNBD_FILEIO)
+		filp_close(dev->file, dev->file);
+	kfree(dev);
+}
+
+static void ibnbd_dev_bi_end_io(struct bio *bio)
+{
+	struct ibnbd_dev_blk_io *io = bio->bi_private;
+
+	io->dev->io_cb(io->priv, blk_status_to_errno(bio->bi_status));
+	bio_put(bio);
+	kfree(io);
+}
+
+static void bio_map_kern_endio(struct bio *bio)
+{
+	bio_put(bio);
+}
+
+/**
+ *	ibnbd_bio_map_kern	-	map kernel address into bio
+ *	@q: the struct request_queue for the bio
+ *	@data: pointer to buffer to map
+ *	@bs: bio_set to use.
+ *	@len: length in bytes
+ *	@gfp_mask: allocation flags for bio allocation
+ *
+ *	Map the kernel address into a bio suitable for io to a block
+ *	device. Returns an error pointer in case of error.
+ */
+static struct bio *ibnbd_bio_map_kern(struct request_queue *q, void *data,
+				      struct bio_set *bs,
+				      unsigned int len, gfp_t gfp_mask)
+{
+	unsigned long kaddr = (unsigned long)data;
+	unsigned long end = (kaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	unsigned long start = kaddr >> PAGE_SHIFT;
+	const int nr_pages = end - start;
+	int offset, i;
+	struct bio *bio;
+
+	bio = bio_alloc_bioset(gfp_mask, nr_pages, bs);
+	if (!bio)
+		return ERR_PTR(-ENOMEM);
+
+	offset = offset_in_page(kaddr);
+	for (i = 0; i < nr_pages; i++) {
+		unsigned int bytes = PAGE_SIZE - offset;
+
+		if (len <= 0)
+			break;
+
+		if (bytes > len)
+			bytes = len;
+
+		if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
+				    offset) < bytes) {
+			/* we don't support partial mappings */
+			bio_put(bio);
+			return ERR_PTR(-EINVAL);
+		}
+
+		data += bytes;
+		len -= bytes;
+		offset = 0;
+	}
+
+	bio->bi_end_io = bio_map_kern_endio;
+	return bio;
+}
+
+static int ibnbd_dev_blk_submit_io(struct ibnbd_dev *dev, sector_t sector,
+				   void *data, size_t len, u32 bi_size,
+				   enum ibnbd_io_flags flags, short prio,
+				   void *priv)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+	struct ibnbd_dev_blk_io *io;
+	struct bio *bio;
+
+	/* check if the buffer is suitable for bdev */
+	if (unlikely(WARN_ON(!blk_rq_aligned(q, (unsigned long)data, len))))
+		return -EINVAL;
+
+	/* Generate bio with pages pointing to the rdma buffer */
+	bio = ibnbd_bio_map_kern(q, data, dev->ibd_bio_set, len, GFP_KERNEL);
+	if (unlikely(IS_ERR(bio)))
+		return PTR_ERR(bio);
+
+	io = kmalloc(sizeof(*io), GFP_KERNEL);
+	if (unlikely(!io)) {
+		bio_put(bio);
+		return -ENOMEM;
+	}
+
+	io->dev		= dev;
+	io->priv	= priv;
+
+	bio->bi_end_io		= ibnbd_dev_bi_end_io;
+	bio->bi_private		= io;
+	bio->bi_opf		= ibnbd_to_bio_flags(flags);
+	bio->bi_iter.bi_sector	= sector;
+	bio->bi_iter.bi_size	= bi_size;
+	bio_set_prio(bio, prio);
+	bio_set_dev(bio, dev->bdev);
+
+	submit_bio(bio);
+
+	return 0;
+}
+
+static int ibnbd_dev_file_handle_flush(struct ibnbd_dev_file_io_work *w,
+				       loff_t start)
+{
+	int ret;
+	loff_t end;
+	int len = w->bi_size;
+
+	if (len)
+		end = start + len - 1;
+	else
+		end = LLONG_MAX;
+
+	ret = vfs_fsync_range(w->dev->file, start, end, 1);
+	if (unlikely(ret))
+		pr_info_ratelimited("I/O FLUSH failed on %s, vfs_sync err: %d\n",
+				    w->dev->name, ret);
+	return ret;
+}
+
+static int ibnbd_dev_file_handle_fua(struct ibnbd_dev_file_io_work *w,
+				     loff_t start)
+{
+	int ret;
+	loff_t end;
+	int len = w->bi_size;
+
+	if (len)
+		end = start + len - 1;
+	else
+		end = LLONG_MAX;
+
+	ret = vfs_fsync_range(w->dev->file, start, end, 1);
+	if (unlikely(ret))
+		pr_info_ratelimited("I/O FUA failed on %s, vfs_sync err: %d\n",
+				    w->dev->name, ret);
+	return ret;
+}
+
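+/*
+ * WRITE_SAME is emulated in fileio mode: the first w->len bytes received
+ * from the client are replicated over the whole w->bi_size buffer below,
+ * the buffer is then written out by the regular fileio write path.
+ */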
+static int ibnbd_dev_file_handle_write_same(struct ibnbd_dev_file_io_work *w)
+{
+	int i;
+
+	if (unlikely(WARN_ON(w->bi_size % w->len)))
+		return -EINVAL;
+
+	for (i = 1; i < w->bi_size / w->len; i++)
+		memcpy(w->data + i * w->len, w->data, w->len);
+
+	return 0;
+}
+
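+/*
+ * Runs in the context of fileio_wq: handles FLUSH and WRITE_SAME
+ * emulation, performs the actual kernel_read()/kernel_write() and
+ * completes the request via dev->io_cb().
+ */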
+static void ibnbd_dev_file_submit_io_worker(struct work_struct *w)
+{
+	struct ibnbd_dev_file_io_work *dev_work;
+	struct file *f;
+	int ret, len;
+	loff_t off;
+
+	dev_work = container_of(w, struct ibnbd_dev_file_io_work, work);
+	off = dev_work->sector * ibnbd_dev_get_logical_bsize(dev_work->dev);
+	f = dev_work->dev->file;
+	len = dev_work->bi_size;
+
+	if (ibnbd_op(dev_work->flags) == IBNBD_OP_FLUSH) {
+		ret = ibnbd_dev_file_handle_flush(dev_work, off);
+		if (unlikely(ret))
+			goto out;
+	}
+
+	if (ibnbd_op(dev_work->flags) == IBNBD_OP_WRITE_SAME) {
+		ret = ibnbd_dev_file_handle_write_same(dev_work);
+		if (unlikely(ret))
+			goto out;
+	}
+
+	/* TODO Implement support for DIRECT */
+	if (dev_work->bi_size) {
+		loff_t off_tmp = off;
+
+		if (ibnbd_op(dev_work->flags) == IBNBD_OP_WRITE)
+			ret = kernel_write(f, dev_work->data, dev_work->bi_size,
+					   &off_tmp);
+		else
+			ret = kernel_read(f, dev_work->data, dev_work->bi_size,
+					  &off_tmp);
+
+		if (unlikely(ret < 0)) {
+			goto out;
+		} else if (unlikely(ret != dev_work->bi_size)) {
+			/* TODO implement support for partial completions */
+			ret = -EIO;
+			goto out;
+		} else {
+			ret = 0;
+		}
+	}
+
+	if (dev_work->flags & IBNBD_F_FUA)
+		ret = ibnbd_dev_file_handle_fua(dev_work, off);
+out:
+	dev_work->dev->io_cb(dev_work->priv, ret);
+	kfree(dev_work);
+}
+
+static int ibnbd_dev_file_submit_io(struct ibnbd_dev *dev, sector_t sector,
+				    void *data, size_t len, size_t bi_size,
+				    enum ibnbd_io_flags flags, void *priv)
+{
+	struct ibnbd_dev_file_io_work *w;
+
+	if (!ibnbd_flags_supported(flags)) {
+		pr_info_ratelimited("Unsupported I/O flags: 0x%x on device %s\n",
+				    flags, dev->name);
+		return -ENOTSUPP;
+	}
+
+	w = kmalloc(sizeof(*w), GFP_KERNEL);
+	if (!w)
+		return -ENOMEM;
+
+	w->dev		= dev;
+	w->priv		= priv;
+	w->sector	= sector;
+	w->data		= data;
+	w->len		= len;
+	w->bi_size	= bi_size;
+	w->flags	= flags;
+	INIT_WORK(&w->work, ibnbd_dev_file_submit_io_worker);
+
+	if (unlikely(!queue_work(fileio_wq, &w->work))) {
+		kfree(w);
+		return -EEXIST;
+	}
+
+	return 0;
+}
+
+int ibnbd_dev_submit_io(struct ibnbd_dev *dev, sector_t sector, void *data,
+			size_t len, u32 bi_size, enum ibnbd_io_flags flags,
+			short prio, void *priv)
+{
+	if (dev->mode == IBNBD_FILEIO)
+		return ibnbd_dev_file_submit_io(dev, sector, data, len, bi_size,
+						flags, priv);
+	else if (dev->mode == IBNBD_BLOCKIO)
+		return ibnbd_dev_blk_submit_io(dev, sector, data, len, bi_size,
+					       flags, prio, priv);
+
+	pr_warn("Submitting I/O to %s failed, dev->mode contains invalid value: '%d', memory corrupted?\n",
+		dev->name, dev->mode);
+
+	return -EINVAL;
+}
diff --git a/drivers/block/ibnbd/ibnbd-srv-dev.h b/drivers/block/ibnbd/ibnbd-srv-dev.h
new file mode 100644
index 000000000000..131746e38a9d
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-srv-dev.h
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#ifndef IBNBD_SRV_DEV_H
+#define IBNBD_SRV_DEV_H
+
+#include <linux/fs.h>
+#include "ibnbd-proto.h"
+
+typedef void ibnbd_dev_io_fn(void *priv, int error);
+
+struct ibnbd_dev {
+	struct block_device	*bdev;
+	struct bio_set		*ibd_bio_set;
+	struct file		*file;
+	fmode_t			blk_open_flags;
+	enum ibnbd_io_mode	mode;
+	char			name[BDEVNAME_SIZE];
+	ibnbd_dev_io_fn		*io_cb;
+};
+
+/** ibnbd_dev_init() - Initialize ibnbd_dev
+ *
+ * This function initializes the ibnbd-dev component.
+ * It has to be called once before ibnbd_dev_open() is used.
+ */
+int ibnbd_dev_init(void);
+
+/** ibnbd_dev_destroy() - Destroy ibnbd_dev
+ *
+ * This function destroys the ibnbd-dev component.
+ * It has to be called after the last device has been closed.
+ */
+void ibnbd_dev_destroy(void);
+
+/**
+ * ibnbd_dev_open() - Open a device
+ * @path:	path to the device
+ * @flags:	open flags
+ * @mode:	open via VFS or block layer
+ * @bs:		bio_set to use during block I/O
+ * @io_cb:	is called when I/O is finished
+ */
+struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
+				 enum ibnbd_io_mode mode, struct bio_set *bs,
+				 ibnbd_dev_io_fn io_cb);
+
+/**
+ * ibnbd_dev_close() - Close a device
+ */
+void ibnbd_dev_close(struct ibnbd_dev *dev);
+
+static inline int ibnbd_dev_get_logical_bsize(const struct ibnbd_dev *dev)
+{
+	return bdev_logical_block_size(dev->bdev);
+}
+
+static inline int ibnbd_dev_get_phys_bsize(const struct ibnbd_dev *dev)
+{
+	return bdev_physical_block_size(dev->bdev);
+}
+
+static inline int ibnbd_dev_get_max_segs(const struct ibnbd_dev *dev)
+{
+	return queue_max_segments(bdev_get_queue(dev->bdev));
+}
+
+static inline int ibnbd_dev_get_max_hw_sects(const struct ibnbd_dev *dev)
+{
+	return queue_max_hw_sectors(bdev_get_queue(dev->bdev));
+}
+
+static inline int
+ibnbd_dev_get_max_write_same_sects(const struct ibnbd_dev *dev)
+{
+	return bdev_write_same(dev->bdev);
+}
+
+static inline int ibnbd_dev_get_secure_discard(const struct ibnbd_dev *dev)
+{
+	if (dev->mode == IBNBD_BLOCKIO)
+		return blk_queue_secure_erase(bdev_get_queue(dev->bdev));
+	return 0;
+}
+
+static inline int ibnbd_dev_get_max_discard_sects(const struct ibnbd_dev *dev)
+{
+	if (!blk_queue_discard(bdev_get_queue(dev->bdev)))
+		return 0;
+
+	if (dev->mode == IBNBD_BLOCKIO)
+		return blk_queue_get_max_sectors(bdev_get_queue(dev->bdev),
+						 REQ_OP_DISCARD);
+	return 0;
+}
+
+static inline int ibnbd_dev_get_discard_granularity(const struct ibnbd_dev *dev)
+{
+	if (dev->mode == IBNBD_BLOCKIO)
+		return bdev_get_queue(dev->bdev)->limits.discard_granularity;
+	return 0;
+}
+
+static inline int ibnbd_dev_get_discard_alignment(const struct ibnbd_dev *dev)
+{
+	if (dev->mode == IBNBD_BLOCKIO)
+		return bdev_get_queue(dev->bdev)->limits.discard_alignment;
+	return 0;
+}
+
+/**
+ * ibnbd_dev_submit_io() - Submit an I/O to the disk
+ * @dev:	device to which the I/O is submitted
+ * @sector:	address to read/write data to
+ * @data:	I/O data to write or buffer to read I/O data into
+ * @len:	length of @data
+ * @bi_size:	amount of data that will be read/written
+ * @flags:	IBNBD I/O flags (enum ibnbd_io_flags)
+ * @prio:	IO priority
+ * @priv:	private data passed to @io_fn
+ */
+int ibnbd_dev_submit_io(struct ibnbd_dev *dev, sector_t sector, void *data,
+			size_t len, u32 bi_size, enum ibnbd_io_flags flags,
+			short prio, void *priv);
+
+#endif /* IBNBD_SRV_DEV_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 22/25] ibnbd: server: sysfs interface functions
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (20 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-06-20 15:03 ` [PATCH v4 23/25] ibnbd: include client and server modules into kernel compilation Jack Wang
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

This is the sysfs interface to IBNBD mapped devices on server side:

  /sys/devices/virtual/ibnbd-server/ctl/devices/<device_name>/
    |- block_dev
    |  *** link pointing to the corresponding block device sysfs entry
    |
    |- sessions/<session-name>/
    |  *** sessions directory
       |
       |- read_only
       |  *** whether the device is mapped read-only
       |
       |- mapping_path
          *** relative device path provided by the client during mapping
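
  For example, reading the attributes of a mapped device could look like
  this (device and session names are just placeholders):

    # cat /sys/devices/virtual/ibnbd-server/ctl/devices/<device_name>/sessions/<session-name>/read_only
    0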

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/ibnbd-srv-sysfs.c | 270 ++++++++++++++++++++++++++
 1 file changed, 270 insertions(+)
 create mode 100644 drivers/block/ibnbd/ibnbd-srv-sysfs.c

diff --git a/drivers/block/ibnbd/ibnbd-srv-sysfs.c b/drivers/block/ibnbd/ibnbd-srv-sysfs.c
new file mode 100644
index 000000000000..2b40514950ed
--- /dev/null
+++ b/drivers/block/ibnbd/ibnbd-srv-sysfs.c
@@ -0,0 +1,270 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * InfiniBand Network Block Driver
+ *
+ * Copyright (c) 2014 - 2017 ProfitBricks GmbH. All rights reserved.
+ * Authors: Fabian Holler <mail@fholler.de>
+ *          Jack Wang <jinpu.wang@profitbricks.com>
+ *          Kleber Souza <kleber.souza@profitbricks.com>
+ *          Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Milind Dumbare <Milind.dumbare@gmail.com>
+ *
+ * Copyright (c) 2017 - 2018 ProfitBricks GmbH. All rights reserved.
+ * Authors: Danil Kipnis <danil.kipnis@profitbricks.com>
+ *          Roman Penyaev <roman.penyaev@profitbricks.com>
+ *
+ * Copyright (c) 2018 - 2019 1&1 IONOS Cloud GmbH. All rights reserved.
+ * Authors: Roman Penyaev <roman.penyaev@profitbricks.com>
+ *          Jack Wang <jinpu.wang@cloud.ionos.com>
+ *          Danil Kipnis <danil.kipnis@cloud.ionos.com>
+ */
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
+
+#include <uapi/linux/limits.h>
+#include <linux/kobject.h>
+#include <linux/sysfs.h>
+#include <linux/stat.h>
+#include <linux/genhd.h>
+#include <linux/list.h>
+#include <linux/moduleparam.h>
+#include <linux/device.h>
+
+#include "ibnbd-srv.h"
+
+static struct device *ibnbd_dev;
+static struct class *ibnbd_dev_class;
+static struct kobject *ibnbd_devs_kobj;
+
+static ssize_t ibnbd_srv_dev_mode_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibnbd_srv_dev *srv_dev;
+
+	srv_dev = container_of(kobj, struct ibnbd_srv_dev, dev_kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n",
+			 ibnbd_io_mode_str(srv_dev->mode));
+}
+
+static struct kobj_attribute ibnbd_srv_dev_mode_attr =
+	__ATTR(io_mode, 0444, ibnbd_srv_dev_mode_show, NULL);
+
+static struct attribute *ibnbd_srv_default_dev_attrs[] = {
+	&ibnbd_srv_dev_mode_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ibnbd_srv_default_dev_attr_group = {
+	.attrs = ibnbd_srv_default_dev_attrs,
+};
+
+static struct kobj_type ktype = {
+	.sysfs_ops	= &kobj_sysfs_ops,
+};
+
+int ibnbd_srv_create_dev_sysfs(struct ibnbd_srv_dev *dev,
+			       struct block_device *bdev,
+			       const char *dir_name)
+{
+	struct kobject *bdev_kobj;
+	int ret;
+
+	ret = kobject_init_and_add(&dev->dev_kobj, &ktype,
+				   ibnbd_devs_kobj, dir_name);
+	if (ret)
+		return ret;
+
+	ret = kobject_init_and_add(&dev->dev_sessions_kobj,
+				   &ktype,
+				   &dev->dev_kobj, "sessions");
+	if (ret)
+		goto err;
+
+	ret = sysfs_create_group(&dev->dev_kobj,
+				 &ibnbd_srv_default_dev_attr_group);
+	if (ret)
+		goto err2;
+
+	bdev_kobj = &disk_to_dev(bdev->bd_disk)->kobj;
+	ret = sysfs_create_link(&dev->dev_kobj, bdev_kobj, "block_dev");
+	if (ret)
+		goto err3;
+
+	return 0;
+
+err3:
+	sysfs_remove_group(&dev->dev_kobj,
+			   &ibnbd_srv_default_dev_attr_group);
+err2:
+	kobject_del(&dev->dev_sessions_kobj);
+	kobject_put(&dev->dev_sessions_kobj);
+err:
+	kobject_del(&dev->dev_kobj);
+	kobject_put(&dev->dev_kobj);
+	return ret;
+}
+
+void ibnbd_srv_destroy_dev_sysfs(struct ibnbd_srv_dev *dev)
+{
+	sysfs_remove_link(&dev->dev_kobj, "block_dev");
+	sysfs_remove_group(&dev->dev_kobj, &ibnbd_srv_default_dev_attr_group);
+	kobject_del(&dev->dev_sessions_kobj);
+	kobject_put(&dev->dev_sessions_kobj);
+	kobject_del(&dev->dev_kobj);
+	kobject_put(&dev->dev_kobj);
+}
+
+static ssize_t ibnbd_srv_dev_session_ro_show(struct kobject *kobj,
+					     struct kobj_attribute *attr,
+					     char *page)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	sess_dev = container_of(kobj, struct ibnbd_srv_sess_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n",
+			 (sess_dev->open_flags & FMODE_WRITE) ? "0" : "1");
+}
+
+static struct kobj_attribute ibnbd_srv_dev_session_ro_attr =
+	__ATTR(read_only, 0444,
+	       ibnbd_srv_dev_session_ro_show,
+	       NULL);
+
+static ssize_t
+ibnbd_srv_dev_session_access_mode_show(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       char *page)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	sess_dev = container_of(kobj, struct ibnbd_srv_sess_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n",
+			 ibnbd_access_mode_str(sess_dev->access_mode));
+}
+
+static struct kobj_attribute ibnbd_srv_dev_session_access_mode_attr =
+	__ATTR(access_mode, 0444,
+	       ibnbd_srv_dev_session_access_mode_show,
+	       NULL);
+
+static ssize_t
+ibnbd_srv_dev_session_mapping_path_show(struct kobject *kobj,
+					struct kobj_attribute *attr, char *page)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	sess_dev = container_of(kobj, struct ibnbd_srv_sess_dev, kobj);
+
+	return scnprintf(page, PAGE_SIZE, "%s\n", sess_dev->pathname);
+}
+
+static struct kobj_attribute ibnbd_srv_dev_session_mapping_path_attr =
+	__ATTR(mapping_path, 0444,
+	       ibnbd_srv_dev_session_mapping_path_show,
+	       NULL);
+
+static struct attribute *ibnbd_srv_default_dev_sessions_attrs[] = {
+	&ibnbd_srv_dev_session_access_mode_attr.attr,
+	&ibnbd_srv_dev_session_ro_attr.attr,
+	&ibnbd_srv_dev_session_mapping_path_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ibnbd_srv_default_dev_session_attr_group = {
+	.attrs = ibnbd_srv_default_dev_sessions_attrs,
+};
+
+void ibnbd_srv_destroy_dev_session_sysfs(struct ibnbd_srv_sess_dev *sess_dev)
+{
+	DECLARE_COMPLETION_ONSTACK(sysfs_compl);
+
+	sysfs_remove_group(&sess_dev->kobj,
+			   &ibnbd_srv_default_dev_session_attr_group);
+
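+	/*
+	 * Wait for the release callback of the kobject before returning, so
+	 * that the caller can safely free the sess_dev afterwards.
+	 */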
+	sess_dev->sysfs_release_compl = &sysfs_compl;
+	kobject_del(&sess_dev->kobj);
+	kobject_put(&sess_dev->kobj);
+	wait_for_completion(&sysfs_compl);
+}
+
+static void ibnbd_srv_sess_dev_release(struct kobject *kobj)
+{
+	struct ibnbd_srv_sess_dev *sess_dev;
+
+	sess_dev = container_of(kobj, struct ibnbd_srv_sess_dev, kobj);
+	if (sess_dev->sysfs_release_compl)
+		complete_all(sess_dev->sysfs_release_compl);
+}
+
+static struct kobj_type ibnbd_srv_sess_dev_ktype = {
+	.sysfs_ops	= &kobj_sysfs_ops,
+	.release	= ibnbd_srv_sess_dev_release,
+};
+
+int ibnbd_srv_create_dev_session_sysfs(struct ibnbd_srv_sess_dev *sess_dev)
+{
+	int ret;
+
+	ret = kobject_init_and_add(&sess_dev->kobj, &ibnbd_srv_sess_dev_ktype,
+				   &sess_dev->dev->dev_sessions_kobj, "%s",
+				   sess_dev->sess->sessname);
+	if (ret)
+		return ret;
+
+	ret = sysfs_create_group(&sess_dev->kobj,
+				 &ibnbd_srv_default_dev_session_attr_group);
+	if (ret)
+		goto err;
+
+	return 0;
+
+err:
+	kobject_del(&sess_dev->kobj);
+	kobject_put(&sess_dev->kobj);
+
+	return ret;
+}
+
+int ibnbd_srv_create_sysfs_files(void)
+{
+	int err;
+
+	ibnbd_dev_class = class_create(THIS_MODULE, "ibnbd-server");
+	if (unlikely(IS_ERR(ibnbd_dev_class)))
+		return PTR_ERR(ibnbd_dev_class);
+
+	ibnbd_dev = device_create(ibnbd_dev_class, NULL,
+				  MKDEV(0, 0), NULL, "ctl");
+	if (unlikely(IS_ERR(ibnbd_dev))) {
+		err = PTR_ERR(ibnbd_dev);
+		goto cls_destroy;
+	}
+	ibnbd_devs_kobj = kobject_create_and_add("devices", &ibnbd_dev->kobj);
+	if (unlikely(!ibnbd_devs_kobj)) {
+		err = -ENOMEM;
+		goto dev_destroy;
+	}
+
+	return 0;
+
+dev_destroy:
+	device_destroy(ibnbd_dev_class, MKDEV(0, 0));
+cls_destroy:
+	class_destroy(ibnbd_dev_class);
+
+	return err;
+}
+
+void ibnbd_srv_destroy_sysfs_files(void)
+{
+	kobject_del(ibnbd_devs_kobj);
+	kobject_put(ibnbd_devs_kobj);
+	device_destroy(ibnbd_dev_class, MKDEV(0, 0));
+	class_destroy(ibnbd_dev_class);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 23/25] ibnbd: include client and server modules into kernel compilation
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (21 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 22/25] ibnbd: server: sysfs interface functions Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-06-20 15:03 ` [PATCH v4 24/25] ibnbd: a bit of documentation Jack Wang
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

Add IBNBD Makefile, Kconfig and also corresponding lines into upper
block layer files.
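
With these options in place, the modules can be enabled in a kernel
config for example as (illustrative):

  CONFIG_BLK_DEV_IBNBD_CLIENT=m
  CONFIG_BLK_DEV_IBNBD_SERVER=m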

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/Kconfig        |  2 ++
 drivers/block/Makefile       |  1 +
 drivers/block/ibnbd/Kconfig  | 24 ++++++++++++++++++++++++
 drivers/block/ibnbd/Makefile | 13 +++++++++++++
 4 files changed, 40 insertions(+)
 create mode 100644 drivers/block/ibnbd/Kconfig
 create mode 100644 drivers/block/ibnbd/Makefile

diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index 20bb4bfa4be6..9904c030d488 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -468,4 +468,6 @@ config BLK_DEV_RSXX
 	  To compile this driver as a module, choose M here: the
 	  module will be called rsxx.
 
+source "drivers/block/ibnbd/Kconfig"
+
 endif # BLK_DEV
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index a53cc1e3a2d3..bde0b015e07a 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -36,6 +36,7 @@ obj-$(CONFIG_BLK_DEV_PCIESSD_MTIP32XX)	+= mtip32xx/
 
 obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
 obj-$(CONFIG_ZRAM) += zram/
+obj-$(CONFIG_BLK_DEV_IBNBD)	+= ibnbd/
 
 obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
 null_blk-objs	:= null_blk_main.o
diff --git a/drivers/block/ibnbd/Kconfig b/drivers/block/ibnbd/Kconfig
new file mode 100644
index 000000000000..936a91c8392e
--- /dev/null
+++ b/drivers/block/ibnbd/Kconfig
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+config BLK_DEV_IBNBD
+	bool
+
+config BLK_DEV_IBNBD_CLIENT
+	tristate "Network block device driver on top of IBTRS transport"
+	depends on INFINIBAND_IBTRS_CLIENT
+	select BLK_DEV_IBNBD
+	help
+	  IBNBD client allows for mapping of remote block devices over the
+	  IBTRS protocol from a target system where the IBNBD server is running.
+
+	  If unsure, say N.
+
+config BLK_DEV_IBNBD_SERVER
+	tristate "Network block device over RDMA Infiniband server support"
+	depends on INFINIBAND_IBTRS_SERVER
+	select BLK_DEV_IBNBD
+	help
+	  IBNBD server allows for exporting local block devices to a remote
+	  client over the IBTRS protocol.
+
+	  If unsure, say N.
diff --git a/drivers/block/ibnbd/Makefile b/drivers/block/ibnbd/Makefile
new file mode 100644
index 000000000000..4eb817a16ed2
--- /dev/null
+++ b/drivers/block/ibnbd/Makefile
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+ccflags-y := -Idrivers/infiniband/ulp/ibtrs
+
+ibnbd-client-y := ibnbd-clt.o \
+		  ibnbd-clt-sysfs.o
+
+ibnbd-server-y := ibnbd-srv.o \
+		  ibnbd-srv-dev.o \
+		  ibnbd-srv-sysfs.o
+
+obj-$(CONFIG_BLK_DEV_IBNBD_CLIENT) += ibnbd-client.o
+obj-$(CONFIG_BLK_DEV_IBNBD_SERVER) += ibnbd-server.o
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 24/25] ibnbd: a bit of documentation
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (22 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 23/25] ibnbd: include client and server modules into kernel compilation Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-09-13 23:58   ` Bart Van Assche
  2019-06-20 15:03 ` [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules Jack Wang
  2019-07-09  9:55 ` [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Danil Kipnis
  25 siblings, 1 reply; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

README with description of major sysfs entries.

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/ibnbd/README | 315 +++++++++++++++++++++++++++++++++++++
 1 file changed, 315 insertions(+)
 create mode 100644 drivers/block/ibnbd/README

diff --git a/drivers/block/ibnbd/README b/drivers/block/ibnbd/README
new file mode 100644
index 000000000000..ce9293c290b9
--- /dev/null
+++ b/drivers/block/ibnbd/README
@@ -0,0 +1,315 @@
+***************************************
+Infiniband Network Block Device (IBNBD)
+***************************************
+
+Introduction
+------------
+
+IBNBD (InfiniBand Network Block Device) is a pair of kernel modules
+(client and server) that allow for remote access of a block device on
+the server over IBTRS protocol using the RDMA (InfiniBand, RoCE, iWarp)
+transport. After being mapped, the remote block devices can be accessed
+on the client side as local block devices.
+
+I/O is transferred between client and server by the IBTRS transport
+modules. The administration of IBNBD and IBTRS modules is done via
+sysfs entries.
+
+Requirements
+------------
+
+  IBTRS kernel modules
+
+Quick Start
+-----------
+
+Server side:
+  # modprobe ibnbd_server
+
+Client side:
+  # modprobe ibnbd_client
+  # echo "sessname=blya path=ip:10.50.100.66 device_path=/dev/ram0" > \
+            /sys/devices/virtual/ibnbd-client/ctl/map_device
+
+  Where "sessname=" is a session name, a string to identify the session
+  on the client and on the server side; "path=" is a destination IP
+  address or a pair of source and destination IP addresses, separated by
+  a comma.  Multiple "path=" options can be specified in order to use
+  multipath (see the IBTRS description for details); "device_path=" is
+  the block device to be mapped from the server side. After the session
+  to the server machine is established, the mapped device will appear on
+  the client side under /dev/ibnbd<N>.
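+
+  For multipath, additional "path=" options can simply be appended to the
+  same line, e.g. (the second address below is only an illustration):
+
+  # echo "sessname=blya path=ip:10.50.100.66 path=ip:10.50.200.66 \
+          device_path=/dev/ram0" > \
+            /sys/devices/virtual/ibnbd-client/ctl/map_device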
+
+
+======================
+Client Sysfs Interface
+======================
+
+All sysfs files that are not read-only provide the usage information on read:
+
+Example:
+  # cat /sys/devices/virtual/ibnbd-client/ctl/map_device
+
+  > Usage: echo "sessname=<name of the ibtrs session> path=<[srcaddr,]dstaddr>
+  > [path=<[srcaddr,]dstaddr>] device_path=<full path on remote side>
+  > [access_mode=<ro|rw|migration>]
+  > [io_mode=<fileio|blockio>]" > map_device
+  >
+  > addr ::= [ ip:<ipv4> | ip:<ipv6> | gid:<gid> ]
+
+Entries under /sys/devices/virtual/ibnbd-client/ctl/
+====================================================
+
+map_device (RW)
+---------------
+
+Expected format is the following:
+
+    sessname=<name of the ibtrs session>
+    path=<[srcaddr,]dstaddr> [path=<[srcaddr,]dstaddr> ...]
+    device_path=<full path on remote side>
+    [access_mode=<ro|rw|migration>]
+    [io_mode=<fileio|blockio>]
+
+Where:
+
+sessname: accepts a string not bigger than 256 chars, which identifies
+          a given session on the client and on the server.
+          E.g. "clt_hostname-srv_hostname" could be a natural choice.
+
+path:     describes a connection between the client and the server by
+          specifying the destination and, when required, the source
+          address.  The addresses are to be provided in the following
+          format:
+
+            ip:<IPv6>
+            ip:<IPv4>
+            gid:<GID>
+
+          for example:
+
+          path=ip:10.0.0.66
+                         The single addr is treated as the destination.
+                         The connection will be established to this
+                         server from any client IP address.
+
+          path=ip:10.0.0.66,ip:10.0.1.66
+                         First addr is the source address and the second
+                         is the destination.
+
+          If multiple "path=" options are specified, multiple connections
+          will be established and data will be sent according to
+          the selected multipath policy (see the IBTRS mp_policy sysfs
+          entry description).
+
+device_path: Path to the block device on the server side. The path is
+         specified relative to the directory on the server side configured
+         in the 'dev_search_path' module parameter of the ibnbd_server.
+         The ibnbd_server prepends the <device_path> received from the client
+         with <dev_search_path> and tries to open the
+         <dev_search_path>/<device_path> block device.  On success,
+         a /dev/ibnbd<N> device file, a /sys/block/ibnbd_client/ibnbd<N>/
+         directory and an entry in /sys/devices/virtual/ibnbd-client/ctl/devices
+         will be created.
+
+         If 'dev_search_path' contains '%SESSNAME%', then each session can
+         have its own device namespace, e.g. if the server was configured
+         with the parameter "dev_search_path=/run/ibnbd-devs/%SESSNAME%"
+         and the client maps with "sessname=blya device_path=sda", then the
+         server will try to open: /run/ibnbd-devs/blya/sda.
+
+access_mode: the access_mode parameter specifies if the device is to be
+             mapped as "ro" read-only or "rw" read-write. The server allows
+             a device to be exported in rw mode only once. The "migration"
+             access mode has to be specified if a second mapping in read-write
+             mode is desired.
+
+             By default "rw" is used.
+
+io_mode:  the io_mode parameter specifies if the device on the server
+          will be opened as a block device ("blockio") or as a file
+          ("fileio").  When the device is opened as a file, the VFS page
+          cache is used for read I/O operations; write I/O operations
+          bypass the page cache and go directly to disk (except metadata
+          updates, like file access time).
+
+          By default "blockio" mode is used.
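+
+          A complete map_device write combining the optional parameters
+          could look like this (session name, address and device path are
+          only an illustration):
+
+          # echo "sessname=clt1-srv1 path=ip:10.0.0.66 device_path=ram0 \
+                  access_mode=ro io_mode=fileio" > \
+                  /sys/devices/virtual/ibnbd-client/ctl/map_device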
+
+Exit Codes:
+
+If the device is already mapped it will fail with EEXIST. If the input
+has an invalid format it will return EINVAL. If the device path cannot
+be found on the server, it will fail with ENOENT.
+
+Finding device file after mapping
+---------------------------------
+
+After mapping, the device file can be found by:
+ o  The symlink /sys/devices/virtual/ibnbd-client/ctl/devices/<device_id>
+    points to /sys/block/<dev-name>. The last part of the symlink destination
+    is the same as the device name.  By extracting the last part of the
+    path, the path to the device /dev/<dev-name> can be built.
+
+ o /dev/block/$(cat /sys/devices/virtual/ibnbd-client/ctl/devices/<device_id>/dev)
+
+How to find the <device_id> of the device is described in the next
+section.
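+
+For example, assuming standard userland tools, the device name can be
+resolved like this (the <device_id> is a placeholder):
+
+  # basename "$(readlink /sys/devices/virtual/ibnbd-client/ctl/devices/<device_id>)"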
+
+Entries under /sys/devices/virtual/ibnbd-client/ctl/devices/
+============================================================
+
+For each device mapped on the client a new symbolic link is created as
+/sys/devices/virtual/ibnbd-client/ctl/devices/<device_id>, which points
+to the block device created by ibnbd (/sys/block/ibnbd<N>/).
+The <device_id> of each device is created as follows:
+
+- If the 'device_path' provided during mapping contains slashes ("/"),
+  they are replaced by exclamation marks ("!") and used as the
+  <device_id>. Otherwise, the <device_id> will be the same as the
+  "device_path" provided.
+
+Entries under /sys/block/ibnbd<N>/ibnbd_client/
+===============================================
+
+unmap_device (RW)
+-----------------
+
+To unmap a volume, "normal" or "force" has to be written to:
+  /sys/block/ibnbd<N>/ibnbd_client/unmap_device
+
+When "normal" is used, the operation will fail with EBUSY if any process
+is using the device.  When "force" is used, the device is also unmapped
+when device is in use.  All I/Os that are in progress will fail.
+
+Example:
+
+   # echo "normal" > /sys/block/ibnbd0/ibnbd/unmap_device
+
+state (RO)
+----------
+
+The file contains the current state of the block device. The state file
+returns "open" when the device is successfully mapped from the server
+and accepting I/O requests. When the connection to the server gets
+disconnected in case of an error (e.g. link failure), the state file
+returns "closed" and all I/O requests submitted to it will fail with -EIO.
+
+session (RO)
+------------
+
+IBNBD uses an IBTRS session to transport the data between client and
+server.  The entry "session" contains the name of the session that
+was used to establish the IBTRS session.  It's the same name that
+was passed as the "sessname" parameter to the map_device entry.
+
+mapping_path (RO)
+-----------------
+
+Contains the path that was passed as "device_path" to the map_device
+operation.
+
+io_mode (RO)
+------------
+
+Contains the way the device is accessed: blockio or fileio.
+
+access_mode (RO)
+----------------
+
+Contains the device access mode: ro, rw or migration.
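+
+Example of reading the attributes above (device number and values are
+only an illustration):
+
+  # cat /sys/block/ibnbd0/ibnbd_client/state
+  open
+  # cat /sys/block/ibnbd0/ibnbd_client/io_mode
+  blockio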
+
+======================
+Server Sysfs Interface
+======================
+
+Entries under /sys/devices/virtual/ibnbd-server/ctl/
+====================================================
+
+When a client maps a device, a directory entry with the name of the
+block device is created under /sys/devices/virtual/ibnbd-server/ctl/devices/.
+
+Entries under /sys/devices/virtual/ibnbd-server/ctl/devices/<device_name>/
+==========================================================================
+
+block_dev (link)
+----------------
+
+Is a symlink to the sysfs entry of the exported device.
+
+Example:
+
+  block_dev -> ../../../../devices/virtual/block/ram0
+
+io_mode (RO)
+------------
+
+Contains the way the device is accessed: blockio or fileio.
+
+Entries under /sys/devices/virtual/ibnbd-server/ctl/devices/<device_name>/sessions/
+===================================================================================
+
+For each client that a particular device is exported to, the following
+directory will be created:
+
+/sys/devices/virtual/ibnbd-server/ctl/devices/<device_name>/sessions/<session-name>/
+
+When the device is unmapped by that client, the directory will be removed.
+
+Entries under /sys/devices/virtual/ibnbd-server/ctl/devices/<device_name>/sessions/<session-name>
+=================================================================================================
+
+read_only (RO)
+--------------
+
+Contains '1' if the device is mapped read-only, otherwise '0'.
+
+mapping_path (RO)
+-----------------
+
+Contains the relative device path provided by the user during mapping.
+
+access_mode (RO)
+----------------
+
+Contains the device access mode: ro, rw or migration.
+
+
+==============================
+IBNBD-Server Module Parameters
+==============================
+
+dev_search_path
+---------------
+
+When a device is mapped from the client, the server generates the path
+to the block device on the server side by concatenating dev_search_path
+and the "device_path" that was specified in the map_device operation.
+
+The default dev_search_path is: "/".
+
+The dev_search_path option can also contain %SESSNAME% in order to provide
+different device namespaces for different sessions.  See the "device_path"
+option for details.
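+
+For example, to give every session its own namespace below /run/ibnbd-devs
+(the path is only an illustration):
+
+  # modprobe ibnbd_server dev_search_path=/run/ibnbd-devs/%SESSNAME%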
+
+==============================
+Protocol (ibnbd/ibnbd-proto.h)
+==============================
+
+1. Before mapping the first device from a given server, the client sends an
+IBNBD_MSG_SESS_INFO to the server. The server responds with
+IBNBD_MSG_SESS_INFO_RSP. Currently the messages only contain the protocol
+version for backward compatibility.
+
+2. The client requests to open a device by sending an IBNBD_MSG_OPEN message.
+This contains the path to the device, the access mode (read-only or writable),
+and the io_mode which specifies if the device should be opened as a block
+device or using file I/O. The server responds to the message with
+IBNBD_MSG_OPEN_RSP. This contains a 32 bit device id to be used for IOs and
+device "geometry" related information: size, max_hw_sectors, etc.
+
+3. The client attaches an IBNBD_MSG_IO to each IO message sent to a device. This
+message contains the device id provided by the server in its ibnbd_msg_open_rsp,
+the sector to be accessed, read-write flags and bi_size.
+
+4. The client closes a device by sending an IBNBD_MSG_CLOSE which contains only
+the device id provided by the server.
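+
+A minimal sketch of the resulting message flow for a single mapped device
+(only the message names above are part of the protocol, the layout is
+illustrative):
+
+  client                                server
+    | ------ IBNBD_MSG_SESS_INFO ------> |
+    | <--- IBNBD_MSG_SESS_INFO_RSP ----- |
+    | ------ IBNBD_MSG_OPEN -----------> |
+    | <----- IBNBD_MSG_OPEN_RSP -------- |
+    | ------ IBNBD_MSG_IO (per IO) ----> |
+    | ------ IBNBD_MSG_CLOSE ----------> |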
+
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (23 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 24/25] ibnbd: a bit of documentation Jack Wang
@ 2019-06-20 15:03 ` Jack Wang
  2019-07-09 15:10   ` Leon Romanovsky
  2019-09-13 23:56   ` Bart Van Assche
  2019-07-09  9:55 ` [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Danil Kipnis
  25 siblings, 2 replies; 148+ messages in thread
From: Jack Wang @ 2019-06-20 15:03 UTC (permalink / raw)
  To: linux-block, linux-rdma
  Cc: axboe, hch, sagi, bvanassche, jgg, dledford, danil.kipnis,
	rpenyaev, Roman Pen, Jack Wang

From: Roman Pen <roman.penyaev@profitbricks.com>

Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 MAINTAINERS | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index a6954776a37e..0b7fd93f738d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7590,6 +7590,20 @@ IBM ServeRAID RAID DRIVER
 S:	Orphan
 F:	drivers/scsi/ips.*
 
+IBNBD BLOCK DRIVERS
+M:	IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
+L:	linux-block@vger.kernel.org
+S:	Maintained
+T:	git git://github.com/profitbricks/ibnbd.git
+F:	drivers/block/ibnbd/
+
+IBTRS TRANSPORT DRIVERS
+M:	IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
+L:	linux-rdma@vger.kernel.org
+S:	Maintained
+T:	git git://github.com/profitbricks/ibnbd.git
+F:	drivers/infiniband/ulp/ibtrs/
+
 ICH LPC AND GPIO DRIVER
 M:	Peter Tyser <ptyser@xes-inc.com>
 S:	Maintained
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-06-20 15:03 [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Jack Wang
                   ` (24 preceding siblings ...)
  2019-06-20 15:03 ` [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules Jack Wang
@ 2019-07-09  9:55 ` Danil Kipnis
  2019-07-09 11:00   ` Leon Romanovsky
                     ` (2 more replies)
  25 siblings, 3 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-07-09  9:55 UTC (permalink / raw)
  To: Jack Wang
  Cc: linux-block, linux-rdma, axboe, Christoph Hellwig, Sagi Grimberg,
	bvanassche, jgg, dledford, Roman Pen, gregkh

Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,

Could you please provide some feedback to the IBNBD driver and the
IBTRS library?
So far we addressed all the requests provided by the community and
continue to maintain our code up-to-date with the upstream kernel
while having an extra compatibility layer for older kernels in our
out-of-tree repository.
I understand that SRP and NVMEoF which are in the kernel already do
provide equivalent functionality for the majority of the use cases.
IBNBD on the other hand is showing higher performance and more
importantly includes the IBTRS - a general purpose library to
establish connections and transport BIO-like read/write sg-lists over
RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
I believe IBNBD does meet the kernel coding standards, it doesn't have
a lot of users, while SRP and NVMEoF are widely accepted. Do you think
it would make sense for us to rework our patchset and try pushing it
for staging tree first, so that we can proof IBNBD is well maintained,
beneficial for the eco-system, find a proper location for it within
block/rdma subsystems? This would make it easier for people to try it
out and would also be a huge step for us in terms of maintenance
effort.
The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
near future). Do you think it would make sense to rename the driver to
RNBD/RTRS?

Thank you,
Best Regards,
Danil

On Thu, Jun 20, 2019 at 5:03 PM Jack Wang <jinpuwang@gmail.com> wrote:
>
> Hi all,
>
> Here is v4 of IBNBD/IBTRS patches, which have minor changes
>
>  Changelog
>  ---------
> v4:
>   o Protocol extended to transport IO priorities
>   o Support for Mellanox ConnectX-4/X-5
>   o Minor sysfs extentions (display access mode on server side)
>   o Bug fixes: cleaning up sysfs folders, race on deallocation of resources
>   o Style fixes
>
> v3:
>   o Sparse fixes:
>      - le32 -> le16 conversion
>      - pcpu and RCU wrong declaration
>      - sysfs: dynamically alloc array of sockaddr structures to reduce
>            size of a stack frame
>
>   o Rename sysfs folder on client and server sides to show source and
>     destination addresses of the connection, i.e.:
>            .../<session-name>/paths/<src@dst>/
>
>   o Remove external inclusions from Makefiles.
>   * https://lwn.net/Articles/756994/
>
> v2:
>   o IBNBD:
>      - No legacy request IO mode, only MQ is left.
>
>   o IBTRS:
>      - No FMR registration, only FR is left.
>
>   * https://lwn.net/Articles/755075/
>
> v1:
>   - IBTRS: load-balancing and IO fail-over using multipath features were added.
>
>   - Major parts of the code were rewritten, simplified and overall code
>     size was reduced by a quarter.
>
>   * https://lwn.net/Articles/746342/
>
> v0:
>   - Initial submission
>
>   * https://lwn.net/Articles/718181/
>
>
>  Introduction
>  -------------
>
> IBTRS (InfiniBand Transport) is a reliable high speed transport library
> which allows for establishing connection between client and server
> machines via RDMA. It is based on RDMA-CM, so expect also to support RoCE
> and iWARP, but we mainly tested in IB environment. It is optimized to
> transfer (read/write) IO blocks in the sense that it follows the BIO
> semantics of providing the possibility to either write data from a
> scatter-gather list to the remote side or to request ("read") data
> transfer from the remote side into a given set of buffers.
>
> IBTRS is multipath capable and provides I/O fail-over and load-balancing
> functionality, i.e. in IBTRS terminology, an IBTRS path is a set of RDMA
> CMs and particular path is selected according to the load-balancing policy.
> It can be used for other components not bind to IBNBD.
>
>
> IBNBD (InfiniBand Network Block Device) is a pair of kernel modules
> (client and server) that allow for remote access of a block device on
> the server over IBTRS protocol. After being mapped, the remote block
> devices can be accessed on the client side as local block devices.
> Internally IBNBD uses IBTRS as an RDMA transport library.
>
>
>    - IBNBD/IBTRS is developed in order to map thin provisioned volumes,
>      thus internal protocol is simple.
>    - IBTRS was developed as an independent RDMA transport library, which
>      supports fail-over and load-balancing policies using multipath, thus
>      it can be used for any other IO needs rather than only for block
>      device.
>    - IBNBD/IBTRS is fast.
>      Old comparison results:
>      https://www.spinics.net/lists/linux-rdma/msg48799.html
>      New comparison results: see performance measurements section below.
>
> Key features of IBTRS transport library and IBNBD block device:
>
> o High throughput and low latency due to:
>    - Only two RDMA messages per IO.
>    - IMM InfiniBand messages on responses to reduce round trip latency.
>    - Simplified memory management: memory allocation happens once on
>      server side when IBTRS session is established.
>
> o IO fail-over and load-balancing by using multipath.  According to
>   our test loads additional path brings ~20% of bandwidth.
>
> o Simple configuration of IBNBD:
>    - Server side is completely passive: volumes do not need to be
>      explicitly exported.
>    - Only IB port GID and device path needed on client side to map
>      a block device.
>    - A device is remapped automatically i.e. after storage reboot.
>
> Commits for kernel can be found here:
>    https://github.com/ionos-enterprise/ibnbd/tree/linux-5.2-rc3--ibnbd-v4
> The out-of-tree modules are here:
>    https://github.com/ionos-enterprise/ibnbd
>
> Vault 2017 presentation:
>   https://events.static.linuxfound.org/sites/events/files/slides/IBNBD-Vault-2017.pdf
>
>  Performance measurements
>  ------------------------
>
> o IBNBD and NVMEoRDMA
>
>   Performance results for the v5.2-rc3 kernel
>   link: https://github.com/ionos-enterprise/ibnbd/tree/develop/performance/v4-v5.2-rc3
>
> Roman Pen (25):
>   sysfs: export sysfs_remove_file_self()
>   ibtrs: public interface header to establish RDMA connections
>   ibtrs: private headers with IBTRS protocol structs and helpers
>   ibtrs: core: lib functions shared between client and server modules
>   ibtrs: client: private header with client structs and functions
>   ibtrs: client: main functionality
>   ibtrs: client: statistics functions
>   ibtrs: client: sysfs interface functions
>   ibtrs: server: private header with server structs and functions
>   ibtrs: server: main functionality
>   ibtrs: server: statistics functions
>   ibtrs: server: sysfs interface functions
>   ibtrs: include client and server modules into kernel compilation
>   ibtrs: a bit of documentation
>   ibnbd: private headers with IBNBD protocol structs and helpers
>   ibnbd: client: private header with client structs and functions
>   ibnbd: client: main functionality
>   ibnbd: client: sysfs interface functions
>   ibnbd: server: private header with server structs and functions
>   ibnbd: server: main functionality
>   ibnbd: server: functionality for IO submission to file or block dev
>   ibnbd: server: sysfs interface functions
>   ibnbd: include client and server modules into kernel compilation
>   ibnbd: a bit of documentation
>   MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
>
>  MAINTAINERS                                   |   14 +
>  drivers/block/Kconfig                         |    2 +
>  drivers/block/Makefile                        |    1 +
>  drivers/block/ibnbd/Kconfig                   |   24 +
>  drivers/block/ibnbd/Makefile                  |   13 +
>  drivers/block/ibnbd/README                    |  315 ++
>  drivers/block/ibnbd/ibnbd-clt-sysfs.c         |  691 ++++
>  drivers/block/ibnbd/ibnbd-clt.c               | 1832 +++++++++++
>  drivers/block/ibnbd/ibnbd-clt.h               |  166 +
>  drivers/block/ibnbd/ibnbd-log.h               |   59 +
>  drivers/block/ibnbd/ibnbd-proto.h             |  378 +++
>  drivers/block/ibnbd/ibnbd-srv-dev.c           |  408 +++
>  drivers/block/ibnbd/ibnbd-srv-dev.h           |  143 +
>  drivers/block/ibnbd/ibnbd-srv-sysfs.c         |  270 ++
>  drivers/block/ibnbd/ibnbd-srv.c               |  945 ++++++
>  drivers/block/ibnbd/ibnbd-srv.h               |   94 +
>  drivers/infiniband/Kconfig                    |    1 +
>  drivers/infiniband/ulp/Makefile               |    1 +
>  drivers/infiniband/ulp/ibtrs/Kconfig          |   22 +
>  drivers/infiniband/ulp/ibtrs/Makefile         |   15 +
>  drivers/infiniband/ulp/ibtrs/README           |  385 +++
>  .../infiniband/ulp/ibtrs/ibtrs-clt-stats.c    |  447 +++
>  .../infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c    |  514 +++
>  drivers/infiniband/ulp/ibtrs/ibtrs-clt.c      | 2844 +++++++++++++++++
>  drivers/infiniband/ulp/ibtrs/ibtrs-clt.h      |  308 ++
>  drivers/infiniband/ulp/ibtrs/ibtrs-log.h      |   84 +
>  drivers/infiniband/ulp/ibtrs/ibtrs-pri.h      |  463 +++
>  .../infiniband/ulp/ibtrs/ibtrs-srv-stats.c    |  103 +
>  .../infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c    |  303 ++
>  drivers/infiniband/ulp/ibtrs/ibtrs-srv.c      | 1998 ++++++++++++
>  drivers/infiniband/ulp/ibtrs/ibtrs-srv.h      |  170 +
>  drivers/infiniband/ulp/ibtrs/ibtrs.c          |  610 ++++
>  drivers/infiniband/ulp/ibtrs/ibtrs.h          |  318 ++
>  fs/sysfs/file.c                               |    1 +
>  34 files changed, 13942 insertions(+)
>  create mode 100644 drivers/block/ibnbd/Kconfig
>  create mode 100644 drivers/block/ibnbd/Makefile
>  create mode 100644 drivers/block/ibnbd/README
>  create mode 100644 drivers/block/ibnbd/ibnbd-clt-sysfs.c
>  create mode 100644 drivers/block/ibnbd/ibnbd-clt.c
>  create mode 100644 drivers/block/ibnbd/ibnbd-clt.h
>  create mode 100644 drivers/block/ibnbd/ibnbd-log.h
>  create mode 100644 drivers/block/ibnbd/ibnbd-proto.h
>  create mode 100644 drivers/block/ibnbd/ibnbd-srv-dev.c
>  create mode 100644 drivers/block/ibnbd/ibnbd-srv-dev.h
>  create mode 100644 drivers/block/ibnbd/ibnbd-srv-sysfs.c
>  create mode 100644 drivers/block/ibnbd/ibnbd-srv.c
>  create mode 100644 drivers/block/ibnbd/ibnbd-srv.h
>  create mode 100644 drivers/infiniband/ulp/ibtrs/Kconfig
>  create mode 100644 drivers/infiniband/ulp/ibtrs/Makefile
>  create mode 100644 drivers/infiniband/ulp/ibtrs/README
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt-stats.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt-sysfs.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-clt.h
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-log.h
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-pri.h
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv-stats.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv-sysfs.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs-srv.h
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs.c
>  create mode 100644 drivers/infiniband/ulp/ibtrs/ibtrs.h
>
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09  9:55 ` [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Danil Kipnis
@ 2019-07-09 11:00   ` Leon Romanovsky
  2019-07-09 11:17     ` Greg KH
                       ` (2 more replies)
  2019-07-09 12:04   ` Jason Gunthorpe
  2019-07-09 19:45   ` Sagi Grimberg
  2 siblings, 3 replies; 148+ messages in thread
From: Leon Romanovsky @ 2019-07-09 11:00 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, axboe, Christoph Hellwig,
	Sagi Grimberg, bvanassche, jgg, dledford, Roman Pen, gregkh

On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
>
> Could you please provide some feedback to the IBNBD driver and the
> IBTRS library?
> So far we addressed all the requests provided by the community and
> continue to maintain our code up-to-date with the upstream kernel
> while having an extra compatibility layer for older kernels in our
> out-of-tree repository.
> I understand that SRP and NVMEoF which are in the kernel already do
> provide equivalent functionality for the majority of the use cases.
> IBNBD on the other hand is showing higher performance and more
> importantly includes the IBTRS - a general purpose library to
> establish connections and transport BIO-like read/write sg-lists over
> RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> I believe IBNBD does meet the kernel coding standards, it doesn't have
> a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> it would make sense for us to rework our patchset and try pushing it
> for staging tree first, so that we can proof IBNBD is well maintained,
> beneficial for the eco-system, find a proper location for it within
> block/rdma subsystems? This would make it easier for people to try it
> out and would also be a huge step for us in terms of maintenance
> effort.
> The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> near future). Do you think it would make sense to rename the driver to
> RNBD/RTRS?

It is better to avoid "staging" tree, because it will lack attention of
relevant people and your efforts will be lost once you will try to move
out of staging. We are all remembering Lustre and don't want to see it
again.

Back then, you was asked to provide support for performance superiority.
Can you please share any numbers with us?

Thanks

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:00   ` Leon Romanovsky
@ 2019-07-09 11:17     ` Greg KH
  2019-07-09 11:57       ` Jinpu Wang
                         ` (2 more replies)
  2019-07-09 11:37     ` Jinpu Wang
  2019-07-10 14:55     ` Danil Kipnis
  2 siblings, 3 replies; 148+ messages in thread
From: Greg KH @ 2019-07-09 11:17 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, Sagi Grimberg, bvanassche, jgg, dledford,
	Roman Pen

On Tue, Jul 09, 2019 at 02:00:36PM +0300, Leon Romanovsky wrote:
> On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> >
> > Could you please provide some feedback to the IBNBD driver and the
> > IBTRS library?
> > So far we addressed all the requests provided by the community and
> > continue to maintain our code up-to-date with the upstream kernel
> > while having an extra compatibility layer for older kernels in our
> > out-of-tree repository.
> > I understand that SRP and NVMEoF which are in the kernel already do
> > provide equivalent functionality for the majority of the use cases.
> > IBNBD on the other hand is showing higher performance and more
> > importantly includes the IBTRS - a general purpose library to
> > establish connections and transport BIO-like read/write sg-lists over
> > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > it would make sense for us to rework our patchset and try pushing it
> > for staging tree first, so that we can proof IBNBD is well maintained,
> > beneficial for the eco-system, find a proper location for it within
> > block/rdma subsystems? This would make it easier for people to try it
> > out and would also be a huge step for us in terms of maintenance
> > effort.
> > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > near future). Do you think it would make sense to rename the driver to
> > RNBD/RTRS?
> 
> It is better to avoid "staging" tree, because it will lack attention of
> relevant people and your efforts will be lost once you will try to move
> out of staging. We are all remembering Lustre and don't want to see it
> again.

That's up to the developers, that had nothing to do with the fact that
the code was in the staging tree.  If the Lustre developers had actually
done the requested work, it would have moved out of the staging tree.

So if these developers are willing to do the work to get something out
of staging, and into the "real" part of the kernel, I will gladly take
it.

But I will note that it is almost always easier to just do the work
ahead of time, and merge it in "correctly" than to go from staging into
the real part of the kernel.  But it's up to the developers what they
want to do.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:00   ` Leon Romanovsky
  2019-07-09 11:17     ` Greg KH
@ 2019-07-09 11:37     ` Jinpu Wang
  2019-07-09 12:06       ` Jason Gunthorpe
  2019-07-10 14:55     ` Danil Kipnis
  2 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-07-09 11:37 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Danil Kipnis, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, bvanassche, jgg, dledford,
	Roman Pen, Greg Kroah-Hartman, Jinpu Wang

Leon Romanovsky <leon@kernel.org> 于2019年7月9日周二 下午1:00写道:
>
> On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> >
> > Could you please provide some feedback to the IBNBD driver and the
> > IBTRS library?
> > So far we addressed all the requests provided by the community and
> > continue to maintain our code up-to-date with the upstream kernel
> > while having an extra compatibility layer for older kernels in our
> > out-of-tree repository.
> > I understand that SRP and NVMEoF which are in the kernel already do
> > provide equivalent functionality for the majority of the use cases.
> > IBNBD on the other hand is showing higher performance and more
> > importantly includes the IBTRS - a general purpose library to
> > establish connections and transport BIO-like read/write sg-lists over
> > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > it would make sense for us to rework our patchset and try pushing it
> > for staging tree first, so that we can proof IBNBD is well maintained,
> > beneficial for the eco-system, find a proper location for it within
> > block/rdma subsystems? This would make it easier for people to try it
> > out and would also be a huge step for us in terms of maintenance
> > effort.
> > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > near future). Do you think it would make sense to rename the driver to
> > RNBD/RTRS?
>
> It is better to avoid "staging" tree, because it will lack attention of
> relevant people and your efforts will be lost once you will try to move
> out of staging. We are all remembering Lustre and don't want to see it
> again.
>
> Back then, you was asked to provide support for performance superiority.
> Can you please share any numbers with us?
Hi Leon,

Thanks for your feedback.

For performance numbers, Danil did an intensive benchmark and created
some PDFs with graphs here:
https://github.com/ionos-enterprise/ibnbd/tree/master/performance/v4-v5.2-rc3

It includes both single-path results and results for the different multipath policies.

If you have any questions regarding the results, please let us know.

>
> Thanks

Thanks
Jack Wang

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:17     ` Greg KH
@ 2019-07-09 11:57       ` Jinpu Wang
  2019-07-09 13:32       ` Leon Romanovsky
  2019-07-09 15:39       ` Bart Van Assche
  2 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-07-09 11:57 UTC (permalink / raw)
  To: Greg KH
  Cc: Leon Romanovsky, Danil Kipnis, linux-block, linux-rdma,
	Jens Axboe, Christoph Hellwig, Sagi Grimberg, bvanassche, jgg,
	dledford, Roman Pen, Jinpu Wang

Greg KH <gregkh@linuxfoundation.org> 于2019年7月9日周二 下午1:17写道:
>
> On Tue, Jul 09, 2019 at 02:00:36PM +0300, Leon Romanovsky wrote:
> > On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> > >
> > > Could you please provide some feedback to the IBNBD driver and the
> > > IBTRS library?
> > > So far we addressed all the requests provided by the community and
> > > continue to maintain our code up-to-date with the upstream kernel
> > > while having an extra compatibility layer for older kernels in our
> > > out-of-tree repository.
> > > I understand that SRP and NVMEoF which are in the kernel already do
> > > provide equivalent functionality for the majority of the use cases.
> > > IBNBD on the other hand is showing higher performance and more
> > > importantly includes the IBTRS - a general purpose library to
> > > establish connections and transport BIO-like read/write sg-lists over
> > > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > > it would make sense for us to rework our patchset and try pushing it
> > > for staging tree first, so that we can proof IBNBD is well maintained,
> > > beneficial for the eco-system, find a proper location for it within
> > > block/rdma subsystems? This would make it easier for people to try it
> > > out and would also be a huge step for us in terms of maintenance
> > > effort.
> > > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > > near future). Do you think it would make sense to rename the driver to
> > > RNBD/RTRS?
> >
> > It is better to avoid "staging" tree, because it will lack attention of
> > relevant people and your efforts will be lost once you will try to move
> > out of staging. We are all remembering Lustre and don't want to see it
> > again.
>
> That's up to the developers, that had nothing to do with the fact that
> the code was in the staging tree.  If the Lustre developers had actually
> done the requested work, it would have moved out of the staging tree.
>
> So if these developers are willing to do the work to get something out
> of staging, and into the "real" part of the kernel, I will gladly take
> it.
Thanks Greg,

This is encouraging; we ARE willing to do the work to get IBNBD/IBTRS merged
into the upstream kernel. We regularly contribute to the stable kernel as well
as upstream, backport patches, test stable rc releases, etc. We believe in
open source and the power of the community.

Sure, we will try to go with the so-called real kernel; this is also what we
are doing and did in the past, but since v3 we have not received any real
feedback.

We will see how things go.

Thanks again!
Jack Wang @ 1 & 1 IONOS Cloud GmbH


>
> But I will note that it is almost always easier to just do the work
> ahead of time, and merge it in "correctly" than to go from staging into
> the real part of the kernel.  But it's up to the developers what they
> want to do.
>
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09  9:55 ` [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Danil Kipnis
  2019-07-09 11:00   ` Leon Romanovsky
@ 2019-07-09 12:04   ` Jason Gunthorpe
  2019-07-09 19:45   ` Sagi Grimberg
  2 siblings, 0 replies; 148+ messages in thread
From: Jason Gunthorpe @ 2019-07-09 12:04 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, axboe, Christoph Hellwig,
	Sagi Grimberg, bvanassche, dledford, Roman Pen, gregkh

On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> 
> Could you please provide some feedback to the IBNBD driver and the
> IBTRS library?

From my perspective you need to get people from the block community to
go over this.

It is the merge window right now so nobody is really looking at
patches, you may need to resend it after rc1 to get attention.

Jason

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:37     ` Jinpu Wang
@ 2019-07-09 12:06       ` Jason Gunthorpe
  2019-07-09 13:15         ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Jason Gunthorpe @ 2019-07-09 12:06 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Leon Romanovsky, Danil Kipnis, linux-block, linux-rdma,
	Jens Axboe, Christoph Hellwig, Sagi Grimberg, bvanassche,
	dledford, Roman Pen, Greg Kroah-Hartman, Jinpu Wang

On Tue, Jul 09, 2019 at 01:37:39PM +0200, Jinpu Wang wrote:
> Leon Romanovsky <leon@kernel.org> 于2019年7月9日周二 下午1:00写道:
> >
> > On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> > >
> > > Could you please provide some feedback to the IBNBD driver and the
> > > IBTRS library?
> > > So far we addressed all the requests provided by the community and
> > > continue to maintain our code up-to-date with the upstream kernel
> > > while having an extra compatibility layer for older kernels in our
> > > out-of-tree repository.
> > > I understand that SRP and NVMEoF which are in the kernel already do
> > > provide equivalent functionality for the majority of the use cases.
> > > IBNBD on the other hand is showing higher performance and more
> > > importantly includes the IBTRS - a general purpose library to
> > > establish connections and transport BIO-like read/write sg-lists over
> > > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > > it would make sense for us to rework our patchset and try pushing it
> > > for staging tree first, so that we can proof IBNBD is well maintained,
> > > beneficial for the eco-system, find a proper location for it within
> > > block/rdma subsystems? This would make it easier for people to try it
> > > out and would also be a huge step for us in terms of maintenance
> > > effort.
> > > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > > near future). Do you think it would make sense to rename the driver to
> > > RNBD/RTRS?
> >
> > It is better to avoid "staging" tree, because it will lack attention of
> > relevant people and your efforts will be lost once you will try to move
> > out of staging. We are all remembering Lustre and don't want to see it
> > again.
> >
> > Back then, you was asked to provide support for performance superiority.
> > Can you please share any numbers with us?
> Hi Leon,
> 
> Thanks for you feedback.
> 
> For performance numbers,  Danil did intensive benchmark, and create
> some PDF with graphes here:
> https://github.com/ionos-enterprise/ibnbd/tree/master/performance/v4-v5.2-rc3
> 
> It includes both single path results also different multipath policy results.
> 
> If you have any question regarding the results, please let us know.

I kind of recall that last time the perf numbers were skewed toward
IBNBD because the invalidation model for MR was wrong - did this get
fixed?

Jason

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 12:06       ` Jason Gunthorpe
@ 2019-07-09 13:15         ` Jinpu Wang
  2019-07-09 13:19           ` Jason Gunthorpe
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-07-09 13:15 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jinpu Wang, Leon Romanovsky, Danil Kipnis, linux-block,
	linux-rdma, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	bvanassche, dledford, Roman Pen, Greg Kroah-Hartman

On Tue, Jul 9, 2019 at 2:06 PM Jason Gunthorpe <jgg@mellanox.com> wrote:
>
> On Tue, Jul 09, 2019 at 01:37:39PM +0200, Jinpu Wang wrote:
> > Leon Romanovsky <leon@kernel.org> 于2019年7月9日周二 下午1:00写道:
> > >
> > > On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > > > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> > > >
> > > > Could you please provide some feedback to the IBNBD driver and the
> > > > IBTRS library?
> > > > So far we addressed all the requests provided by the community and
> > > > continue to maintain our code up-to-date with the upstream kernel
> > > > while having an extra compatibility layer for older kernels in our
> > > > out-of-tree repository.
> > > > I understand that SRP and NVMEoF which are in the kernel already do
> > > > provide equivalent functionality for the majority of the use cases.
> > > > IBNBD on the other hand is showing higher performance and more
> > > > importantly includes the IBTRS - a general purpose library to
> > > > establish connections and transport BIO-like read/write sg-lists over
> > > > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > > > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > > > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > > > it would make sense for us to rework our patchset and try pushing it
> > > > for staging tree first, so that we can proof IBNBD is well maintained,
> > > > beneficial for the eco-system, find a proper location for it within
> > > > block/rdma subsystems? This would make it easier for people to try it
> > > > out and would also be a huge step for us in terms of maintenance
> > > > effort.
> > > > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > > > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > > > near future). Do you think it would make sense to rename the driver to
> > > > RNBD/RTRS?
> > >
> > > It is better to avoid "staging" tree, because it will lack attention of
> > > relevant people and your efforts will be lost once you will try to move
> > > out of staging. We are all remembering Lustre and don't want to see it
> > > again.
> > >
> > > Back then, you was asked to provide support for performance superiority.
> > > Can you please share any numbers with us?
> > Hi Leon,
> >
> > Thanks for you feedback.
> >
> > For performance numbers,  Danil did intensive benchmark, and create
> > some PDF with graphes here:
> > https://github.com/ionos-enterprise/ibnbd/tree/master/performance/v4-v5.2-rc3
> >
> > It includes both single path results also different multipath policy results.
> >
> > If you have any question regarding the results, please let us know.
>
> I kind of recall that last time the perf numbers were skewed toward
> IBNBD because the invalidation model for MR was wrong - did this get
> fixed?
>
> Jason

Thanks, Jason, for the feedback.
Can you be more specific about "the invalidation model for MR was wrong"?

I checked the history of the email thread and only found:
"I think from the RDMA side, before we accept something like this, I'd
like to hear from Christoph, Chuck or Sagi that the dataplane
implementation of this is correct, eg it uses the MRs properly and
invalidates at the right time, sequences with dma_ops as required,
etc.
"
And no reply from any of you since then.

Thanks,
Jack

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 13:15         ` Jinpu Wang
@ 2019-07-09 13:19           ` Jason Gunthorpe
  2019-07-09 14:17             ` Jinpu Wang
  2019-07-09 21:27             ` Sagi Grimberg
  0 siblings, 2 replies; 148+ messages in thread
From: Jason Gunthorpe @ 2019-07-09 13:19 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jinpu Wang, Leon Romanovsky, Danil Kipnis, linux-block,
	linux-rdma, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	bvanassche, dledford, Roman Pen, Greg Kroah-Hartman

On Tue, Jul 09, 2019 at 03:15:46PM +0200, Jinpu Wang wrote:
> On Tue, Jul 9, 2019 at 2:06 PM Jason Gunthorpe <jgg@mellanox.com> wrote:
> >
> > On Tue, Jul 09, 2019 at 01:37:39PM +0200, Jinpu Wang wrote:
> > > Leon Romanovsky <leon@kernel.org> 于2019年7月9日周二 下午1:00写道:
> > > >
> > > > On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > > > > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> > > > >
> > > > > Could you please provide some feedback to the IBNBD driver and the
> > > > > IBTRS library?
> > > > > So far we addressed all the requests provided by the community and
> > > > > continue to maintain our code up-to-date with the upstream kernel
> > > > > while having an extra compatibility layer for older kernels in our
> > > > > out-of-tree repository.
> > > > > I understand that SRP and NVMEoF which are in the kernel already do
> > > > > provide equivalent functionality for the majority of the use cases.
> > > > > IBNBD on the other hand is showing higher performance and more
> > > > > importantly includes the IBTRS - a general purpose library to
> > > > > establish connections and transport BIO-like read/write sg-lists over
> > > > > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > > > > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > > > > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > > > > it would make sense for us to rework our patchset and try pushing it
> > > > > for staging tree first, so that we can proof IBNBD is well maintained,
> > > > > beneficial for the eco-system, find a proper location for it within
> > > > > block/rdma subsystems? This would make it easier for people to try it
> > > > > out and would also be a huge step for us in terms of maintenance
> > > > > effort.
> > > > > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > > > > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > > > > near future). Do you think it would make sense to rename the driver to
> > > > > RNBD/RTRS?
> > > >
> > > > It is better to avoid "staging" tree, because it will lack attention of
> > > > relevant people and your efforts will be lost once you will try to move
> > > > out of staging. We are all remembering Lustre and don't want to see it
> > > > again.
> > > >
> > > > Back then, you was asked to provide support for performance superiority.
> > > > Can you please share any numbers with us?
> > > Hi Leon,
> > >
> > > Thanks for you feedback.
> > >
> > > For performance numbers,  Danil did intensive benchmark, and create
> > > some PDF with graphes here:
> > > https://github.com/ionos-enterprise/ibnbd/tree/master/performance/v4-v5.2-rc3
> > >
> > > It includes both single path results also different multipath policy results.
> > >
> > > If you have any question regarding the results, please let us know.
> >
> > I kind of recall that last time the perf numbers were skewed toward
> > IBNBD because the invalidation model for MR was wrong - did this get
> > fixed?
> >
> > Jason
> 
> Thanks Jason for feedback.
> Can you be  more specific about  "the invalidation model for MR was wrong"

MR's must be invalidated before data is handed over to the block
layer. It can't leave MRs open for access and then touch the memory
the MR covers.

IMHO this is the most likely explanation for any performance difference
from nvme..
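
(A minimal sketch of the ordering described above, following how nvme-rdma
handles it; only the verbs calls below are the real in-kernel API, while the
helper name and the completion-handler wiring around it are assumptions for
illustration:)

#include <rdma/ib_verbs.h>

/*
 * Sketch only: post a signaled local invalidation for the MR that covered
 * the request's data, and complete the block request from the
 * invalidation's ->done() handler, never before.
 */
static int sketch_inv_rkey(struct ib_qp *qp, struct ib_mr *mr,
			   struct ib_cqe *inv_done_cqe)
{
	struct ib_send_wr wr = {
		.opcode		    = IB_WR_LOCAL_INV,
		.send_flags	    = IB_SEND_SIGNALED,
		.ex.invalidate_rkey = mr->rkey,
		/* ->done() fires once the HW can no longer access the pages;
		 * only then may they be handed back to the block layer. */
		.wr_cqe		    = inv_done_cqe,
	};

	return ib_post_send(qp, &wr, NULL);
}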

> I checked in the history of the email thread, only found
> "I think from the RDMA side, before we accept something like this, I'd
> like to hear from Christoph, Chuck or Sagi that the dataplane
> implementation of this is correct, eg it uses the MRs properly and
> invalidates at the right time, sequences with dma_ops as required,
> etc.
> "
> And no reply from any of you since then.

This task still needs to happen..

Jason

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:17     ` Greg KH
  2019-07-09 11:57       ` Jinpu Wang
@ 2019-07-09 13:32       ` Leon Romanovsky
  2019-07-09 15:39       ` Bart Van Assche
  2 siblings, 0 replies; 148+ messages in thread
From: Leon Romanovsky @ 2019-07-09 13:32 UTC (permalink / raw)
  To: Greg KH
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, Sagi Grimberg, bvanassche, jgg, dledford,
	Roman Pen

On Tue, Jul 09, 2019 at 01:17:37PM +0200, Greg KH wrote:
> On Tue, Jul 09, 2019 at 02:00:36PM +0300, Leon Romanovsky wrote:
> > On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> > >
> > > Could you please provide some feedback to the IBNBD driver and the
> > > IBTRS library?
> > > So far we addressed all the requests provided by the community and
> > > continue to maintain our code up-to-date with the upstream kernel
> > > while having an extra compatibility layer for older kernels in our
> > > out-of-tree repository.
> > > I understand that SRP and NVMEoF which are in the kernel already do
> > > provide equivalent functionality for the majority of the use cases.
> > > IBNBD on the other hand is showing higher performance and more
> > > importantly includes the IBTRS - a general purpose library to
> > > establish connections and transport BIO-like read/write sg-lists over
> > > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > > it would make sense for us to rework our patchset and try pushing it
> > > for staging tree first, so that we can proof IBNBD is well maintained,
> > > beneficial for the eco-system, find a proper location for it within
> > > block/rdma subsystems? This would make it easier for people to try it
> > > out and would also be a huge step for us in terms of maintenance
> > > effort.
> > > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > > near future). Do you think it would make sense to rename the driver to
> > > RNBD/RTRS?
> >
> > It is better to avoid "staging" tree, because it will lack attention of
> > relevant people and your efforts will be lost once you will try to move
> > out of staging. We are all remembering Lustre and don't want to see it
> > again.
>
> That's up to the developers, that had nothing to do with the fact that
> the code was in the staging tree.  If the Lustre developers had actually
> done the requested work, it would have moved out of the staging tree.
>
> So if these developers are willing to do the work to get something out
> of staging, and into the "real" part of the kernel, I will gladly take
> it.

Greg,

It is not a matter of how much *real* work the developers will do, but a
matter of guidance to do the *right* thing, which is hard to achieve if the
people mentioned at the beginning of this thread won't look at staging code.

>
> But I will note that it is almost always easier to just do the work
> ahead of time, and merge it in "correctly" than to go from staging into
> the real part of the kernel.  But it's up to the developers what they
> want to do.
>
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 13:19           ` Jason Gunthorpe
@ 2019-07-09 14:17             ` Jinpu Wang
  2019-07-09 21:27             ` Sagi Grimberg
  1 sibling, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-07-09 14:17 UTC (permalink / raw)
  To: Jason Gunthorpe, Christoph Hellwig, Sagi Grimberg, bvanassche,
	chuck.lever
  Cc: Jinpu Wang, Leon Romanovsky, Danil Kipnis, linux-block,
	linux-rdma, Jens Axboe, dledford, Roman Pen, Greg Kroah-Hartman

On Tue, Jul 9, 2019 at 3:19 PM Jason Gunthorpe <jgg@mellanox.com> wrote:
>
> On Tue, Jul 09, 2019 at 03:15:46PM +0200, Jinpu Wang wrote:
> > On Tue, Jul 9, 2019 at 2:06 PM Jason Gunthorpe <jgg@mellanox.com> wrote:
> > >
> > > On Tue, Jul 09, 2019 at 01:37:39PM +0200, Jinpu Wang wrote:
> > > > Leon Romanovsky <leon@kernel.org> 于2019年7月9日周二 下午1:00写道:
> > > > >
> > > > > On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > > > > > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> > > > > >
> > > > > > Could you please provide some feedback to the IBNBD driver and the
> > > > > > IBTRS library?
> > > > > > So far we addressed all the requests provided by the community and
> > > > > > continue to maintain our code up-to-date with the upstream kernel
> > > > > > while having an extra compatibility layer for older kernels in our
> > > > > > out-of-tree repository.
> > > > > > I understand that SRP and NVMEoF which are in the kernel already do
> > > > > > provide equivalent functionality for the majority of the use cases.
> > > > > > IBNBD on the other hand is showing higher performance and more
> > > > > > importantly includes the IBTRS - a general purpose library to
> > > > > > establish connections and transport BIO-like read/write sg-lists over
> > > > > > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > > > > > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > > > > > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > > > > > it would make sense for us to rework our patchset and try pushing it
> > > > > > for staging tree first, so that we can proof IBNBD is well maintained,
> > > > > > beneficial for the eco-system, find a proper location for it within
> > > > > > block/rdma subsystems? This would make it easier for people to try it
> > > > > > out and would also be a huge step for us in terms of maintenance
> > > > > > effort.
> > > > > > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > > > > > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > > > > > near future). Do you think it would make sense to rename the driver to
> > > > > > RNBD/RTRS?
> > > > >
> > > > > It is better to avoid "staging" tree, because it will lack attention of
> > > > > relevant people and your efforts will be lost once you will try to move
> > > > > out of staging. We are all remembering Lustre and don't want to see it
> > > > > again.
> > > > >
> > > > > Back then, you was asked to provide support for performance superiority.
> > > > > Can you please share any numbers with us?
> > > > Hi Leon,
> > > >
> > > > Thanks for you feedback.
> > > >
> > > > For performance numbers,  Danil did intensive benchmark, and create
> > > > some PDF with graphes here:
> > > > https://github.com/ionos-enterprise/ibnbd/tree/master/performance/v4-v5.2-rc3
> > > >
> > > > It includes both single path results also different multipath policy results.
> > > >
> > > > If you have any question regarding the results, please let us know.
> > >
> > > I kind of recall that last time the perf numbers were skewed toward
> > > IBNBD because the invalidation model for MR was wrong - did this get
> > > fixed?
> > >
> > > Jason
> >
> > Thanks Jason for feedback.
> > Can you be  more specific about  "the invalidation model for MR was wrong"
>
> MR's must be invalidated before data is handed over to the block
> layer. It can't leave MRs open for access and then touch the memory
> the MR covers.
>
> IMHO this is the most likely explanation for any performance difference
> from nvme..
>
> > I checked in the history of the email thread, only found
> > "I think from the RDMA side, before we accept something like this, I'd
> > like to hear from Christoph, Chuck or Sagi that the dataplane
> > implementation of this is correct, eg it uses the MRs properly and
> > invalidates at the right time, sequences with dma_ops as required,
> > etc.
> > "
> > And no reply from any of you since then.
>
> This task still needs to happen..
>
> Jason

We did extensive testing and cross-checked how iSER and NVMEoF do MR
invalidation, and didn't find a problem.

+ Chuck
It would be appreciated if Christoph, Chuck, Sagi or Bart could take a
look; thank you in advance.

Thanks
Jack

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
  2019-06-20 15:03 ` [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules Jack Wang
@ 2019-07-09 15:10   ` Leon Romanovsky
  2019-07-09 15:18     ` Jinpu Wang
  2019-09-13 23:56   ` Bart Van Assche
  1 sibling, 1 reply; 148+ messages in thread
From: Leon Romanovsky @ 2019-07-09 15:10 UTC (permalink / raw)
  To: Jack Wang
  Cc: linux-block, linux-rdma, axboe, hch, sagi, bvanassche, jgg,
	dledford, danil.kipnis, rpenyaev, Roman Pen, Jack Wang

On Thu, Jun 20, 2019 at 05:03:37PM +0200, Jack Wang wrote:
> From: Roman Pen <roman.penyaev@profitbricks.com>
>
> Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> ---
>  MAINTAINERS | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a6954776a37e..0b7fd93f738d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -7590,6 +7590,20 @@ IBM ServeRAID RAID DRIVER
>  S:	Orphan
>  F:	drivers/scsi/ips.*
>
> +IBNBD BLOCK DRIVERS
> +M:	IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> +L:	linux-block@vger.kernel.org
> +S:	Maintained
> +T:	git git://github.com/profitbricks/ibnbd.git
> +F:	drivers/block/ibnbd/
> +
> +IBTRS TRANSPORT DRIVERS
> +M:	IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>

I don't know if it is a rule or not, but can you please add a real
person/persons to the maintainers list? Many times, those global
support lists are simply ignored.

> +L:	linux-rdma@vger.kernel.org
> +S:	Maintained
> +T:	git git://github.com/profitbricks/ibnbd.git

How did you imagine the patch flow for a ULP, while your tree is
external to the RDMA tree?

> +F:	drivers/infiniband/ulp/ibtrs/
> +
>  ICH LPC AND GPIO DRIVER
>  M:	Peter Tyser <ptyser@xes-inc.com>
>  S:	Maintained
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
  2019-07-09 15:10   ` Leon Romanovsky
@ 2019-07-09 15:18     ` Jinpu Wang
  2019-07-09 15:51       ` Leon Romanovsky
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-07-09 15:18 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Bart Van Assche,
	Jason Gunthorpe, Doug Ledford, Danil Kipnis, rpenyaev, Roman Pen

On Tue, Jul 9, 2019 at 5:10 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Thu, Jun 20, 2019 at 05:03:37PM +0200, Jack Wang wrote:
> > From: Roman Pen <roman.penyaev@profitbricks.com>
> >
> > Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
> > Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> > ---
> >  MAINTAINERS | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index a6954776a37e..0b7fd93f738d 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -7590,6 +7590,20 @@ IBM ServeRAID RAID DRIVER
> >  S:   Orphan
> >  F:   drivers/scsi/ips.*
> >
> > +IBNBD BLOCK DRIVERS
> > +M:   IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> > +L:   linux-block@vger.kernel.org
> > +S:   Maintained
> > +T:   git git://github.com/profitbricks/ibnbd.git
> > +F:   drivers/block/ibnbd/
> > +
> > +IBTRS TRANSPORT DRIVERS
> > +M:   IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
>
> I don't know if it rule or not, but can you please add real
> person/persons to Maintainers list? Many times, those global
> support lists are simply ignored.

Sure, we can use my and Danil's names in the next round.

>
> > +L:   linux-rdma@vger.kernel.org
> > +S:   Maintained
> > +T:   git git://github.com/profitbricks/ibnbd.git
>
> How did you imagine patch flow for ULP, while your tree is
> external to RDMA tree?

The plan was that we gather the patches in the git tree and
send them to the list via git send-email. Do you accept pull requests
from GitHub?
What is the preferred way?

Thanks Leon.
Jack
>
> > +F:   drivers/infiniband/ulp/ibtrs/
> > +
> >  ICH LPC AND GPIO DRIVER
> >  M:   Peter Tyser <ptyser@xes-inc.com>
> >  S:   Maintained
> > --
> > 2.17.1
> >

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:17     ` Greg KH
  2019-07-09 11:57       ` Jinpu Wang
  2019-07-09 13:32       ` Leon Romanovsky
@ 2019-07-09 15:39       ` Bart Van Assche
  2 siblings, 0 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-07-09 15:39 UTC (permalink / raw)
  To: Greg KH, Leon Romanovsky
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, Sagi Grimberg, jgg, dledford, Roman Pen

On 7/9/19 4:17 AM, Greg KH wrote:
> So if these developers are willing to do the work to get something out
> of staging, and into the "real" part of the kernel, I will gladly take
> it.

Linus once famously said "given enough eyeballs, all bugs are shallow".
There are already two block-over-RDMA driver pairs upstream (NVMeOF and
SRP). Accepting the IBTRS and IBNBD drivers upstream would reduce the
number of users of the upstream block-over-RDMA drivers and hence would
fragment the block-over-RDMA driver user base further. Additionally, I'm
not yet convinced that the interesting parts of IBNBD cannot be
integrated into the existing upstream drivers. So it's not clear to me
whether taking the IBTRS and IBNBD drivers upstream would help the Linux
user community.

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
  2019-07-09 15:18     ` Jinpu Wang
@ 2019-07-09 15:51       ` Leon Romanovsky
  0 siblings, 0 replies; 148+ messages in thread
From: Leon Romanovsky @ 2019-07-09 15:51 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Bart Van Assche,
	Jason Gunthorpe, Doug Ledford, Danil Kipnis, rpenyaev, Roman Pen

On Tue, Jul 09, 2019 at 05:18:37PM +0200, Jinpu Wang wrote:
> On Tue, Jul 9, 2019 at 5:10 PM Leon Romanovsky <leon@kernel.org> wrote:
> >
> > On Thu, Jun 20, 2019 at 05:03:37PM +0200, Jack Wang wrote:
> > > From: Roman Pen <roman.penyaev@profitbricks.com>
> > >
> > > Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
> > > Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> > > ---
> > >  MAINTAINERS | 14 ++++++++++++++
> > >  1 file changed, 14 insertions(+)
> > >
> > > diff --git a/MAINTAINERS b/MAINTAINERS
> > > index a6954776a37e..0b7fd93f738d 100644
> > > --- a/MAINTAINERS
> > > +++ b/MAINTAINERS
> > > @@ -7590,6 +7590,20 @@ IBM ServeRAID RAID DRIVER
> > >  S:   Orphan
> > >  F:   drivers/scsi/ips.*
> > >
> > > +IBNBD BLOCK DRIVERS
> > > +M:   IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> > > +L:   linux-block@vger.kernel.org
> > > +S:   Maintained
> > > +T:   git git://github.com/profitbricks/ibnbd.git
> > > +F:   drivers/block/ibnbd/
> > > +
> > > +IBTRS TRANSPORT DRIVERS
> > > +M:   IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> >
> > I don't know if it rule or not, but can you please add real
> > person/persons to Maintainers list? Many times, those global
> > support lists are simply ignored.
>
> Sure, we can use my and Danil 's name in next round.
>
> >
> > > +L:   linux-rdma@vger.kernel.org
> > > +S:   Maintained
> > > +T:   git git://github.com/profitbricks/ibnbd.git
> >
> > How did you imagine patch flow for ULP, while your tree is
> > external to RDMA tree?
>
> Plan was we gather the patch in the git tree, and
> send patches to the list via git send email, do we accept pull request
> from github?
> What the preferred way?

The preferred way is to start by sending patches directly
to the mailing list and letting the RDMA maintainers collect and
apply them themselves. That makes it easy for other people
to do cross-subsystem changes, and we are doing a lot of them.

Until you are asked to send PRs, the "T:" link should point to the RDMA subsystem.

Thanks

>
> Thanks Leon.
> Jack
> >
> > > +F:   drivers/infiniband/ulp/ibtrs/
> > > +
> > >  ICH LPC AND GPIO DRIVER
> > >  M:   Peter Tyser <ptyser@xes-inc.com>
> > >  S:   Maintained
> > > --
> > > 2.17.1
> > >

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09  9:55 ` [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Danil Kipnis
  2019-07-09 11:00   ` Leon Romanovsky
  2019-07-09 12:04   ` Jason Gunthorpe
@ 2019-07-09 19:45   ` Sagi Grimberg
  2019-07-10 13:55     ` Jason Gunthorpe
  2019-07-11  8:54     ` Danil Kipnis
  2 siblings, 2 replies; 148+ messages in thread
From: Sagi Grimberg @ 2019-07-09 19:45 UTC (permalink / raw)
  To: Danil Kipnis, Jack Wang
  Cc: linux-block, linux-rdma, axboe, Christoph Hellwig, bvanassche,
	jgg, dledford, Roman Pen, gregkh

Hi Danil and Jack,

> Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> 
> Could you please provide some feedback to the IBNBD driver and the
> IBTRS library?
> So far we addressed all the requests provided by the community

That is not exactly correct AFAIR,

My main issues which were raised before are:
- IMO there isn't any justification to this ibtrs layering separation
   given that the only user of this is your ibnbd. Unless you are
   trying to submit another consumer, you should avoid adding another
   subsystem that is not really general purpose.

- ibtrs in general is using almost no infrastructure from the existing
   kernel subsystems. Examples are:
   - tag allocation mechanism (which I'm not clear why its needed)
   - rdma rw abstraction similar to what we have in the core
   - list_next_or_null_rr_rcu ??
   - few other examples sprinkled around..

Another question, from what I understand from the code, the client
always rdma_writes data on writes (with imm) from a remote pool of
server buffers dedicated to it. Essentially all writes are immediate (no
rdma reads ever). How is that different than using send wrs to a set of
pre-posted recv buffers (like all others are doing)? Is it faster?

Also, given that the server pre-allocate a substantial amount of memory
for each connection, is it documented the requirements from the server
side? Usually kernel implementations (especially upstream ones) will
avoid imposing such large longstanding memory requirements on the system
by default. I don't have a firm stand on this, but wanted to highlight
this as you are sending this for upstream inclusion.

  and
> continue to maintain our code up-to-date with the upstream kernel
> while having an extra compatibility layer for older kernels in our
> out-of-tree repository.

Overall, while I absolutely support your cause to lower your maintenance
overhead by having this sit upstream, I don't see why this can be
helpful to anyone else in the rdma community. If instead you can
crystallize why/how ibnbd is faster than anything else, and perhaps
contribute a common infrastructure piece (or enhance an existing one)
such that other existing ulps can leverage, it will be a lot more
compelling to include it upstream.

> I understand that SRP and NVMEoF which are in the kernel already do
> provide equivalent functionality for the majority of the use cases.
> IBNBD on the other hand is showing higher performance and more
> importantly includes the IBTRS - a general purpose library to
> establish connections and transport BIO-like read/write sg-lists over
> RDMA,

But who needs it? Can other ulps use it, or pieces of it? I keep failing
to understand why this is a benefit if it's specific to your ibnbd.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 13:19           ` Jason Gunthorpe
  2019-07-09 14:17             ` Jinpu Wang
@ 2019-07-09 21:27             ` Sagi Grimberg
  2019-07-19 13:12               ` Danil Kipnis
  1 sibling, 1 reply; 148+ messages in thread
From: Sagi Grimberg @ 2019-07-09 21:27 UTC (permalink / raw)
  To: Jason Gunthorpe, Jinpu Wang
  Cc: Jinpu Wang, Leon Romanovsky, Danil Kipnis, linux-block,
	linux-rdma, Jens Axboe, Christoph Hellwig, bvanassche, dledford,
	Roman Pen, Greg Kroah-Hartman


>> Thanks Jason for feedback.
>> Can you be  more specific about  "the invalidation model for MR was wrong"
> 
> MR's must be invalidated before data is handed over to the block
> layer. It can't leave MRs open for access and then touch the memory
> the MR covers.

Jason is referring to these fixes:
2f122e4f5107 ("nvme-rdma: wait for local invalidation before completing 
a request")
4af7f7ff92a4 ("nvme-rdma: don't complete requests before a send work 
request has completed")
b4b591c87f2b ("nvme-rdma: don't suppress send completions")

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 19:45   ` Sagi Grimberg
@ 2019-07-10 13:55     ` Jason Gunthorpe
  2019-07-10 16:25       ` Sagi Grimberg
  2019-07-11  8:54     ` Danil Kipnis
  1 sibling, 1 reply; 148+ messages in thread
From: Jason Gunthorpe @ 2019-07-10 13:55 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, bvanassche, dledford, Roman Pen, gregkh

On Tue, Jul 09, 2019 at 12:45:57PM -0700, Sagi Grimberg wrote:

> Another question, from what I understand from the code, the client
> always rdma_writes data on writes (with imm) from a remote pool of
> server buffers dedicated to it. Essentially all writes are immediate (no
> rdma reads ever). How is that different than using send wrs to a set of
> pre-posted recv buffers (like all others are doing)? Is it faster?

RDMA WRITE only is generally a bit faster, and if you use a buffer
pool in a smart way it is possible to get very good data packing. With
SEND the number of recvq entries dictates how big the rx buffer can
be, or you waste even more memory by using partial send buffers..

A scheme like this seems like a high performance idea, but on the
other side, I have no idea how you could possibly manage invalidations
efficiently with a shared RX buffer pool...

The RXer has to push out an invalidation for the shared buffer pool
MR, but we don't have protocols for partial MR invalidation.

Which is back to my earlier thought that the main reason this performs
better is because it doesn't have synchronous MR invalidation.

Maybe this is fine, but it needs to be made very clear that it uses
this insecure operating model to get higher performance..

Jason

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 11:00   ` Leon Romanovsky
  2019-07-09 11:17     ` Greg KH
  2019-07-09 11:37     ` Jinpu Wang
@ 2019-07-10 14:55     ` Danil Kipnis
  2 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-07-10 14:55 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jack Wang, linux-block, linux-rdma, axboe, Christoph Hellwig,
	Sagi Grimberg, bvanassche, jgg, dledford, Roman Pen, gregkh

Hi Leon,

thanks for the feedback!

On Tue, Jul 9, 2019 at 1:00 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Tue, Jul 09, 2019 at 11:55:03AM +0200, Danil Kipnis wrote:
> > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> >
> > Could you please provide some feedback to the IBNBD driver and the
> > IBTRS library?
> > So far we addressed all the requests provided by the community and
> > continue to maintain our code up-to-date with the upstream kernel
> > while having an extra compatibility layer for older kernels in our
> > out-of-tree repository.
> > I understand that SRP and NVMEoF which are in the kernel already do
> > provide equivalent functionality for the majority of the use cases.
> > IBNBD on the other hand is showing higher performance and more
> > importantly includes the IBTRS - a general purpose library to
> > establish connections and transport BIO-like read/write sg-lists over
> > RDMA, while SRP is targeting SCSI and NVMEoF is addressing NVME. While
> > I believe IBNBD does meet the kernel coding standards, it doesn't have
> > a lot of users, while SRP and NVMEoF are widely accepted. Do you think
> > it would make sense for us to rework our patchset and try pushing it
> > for staging tree first, so that we can proof IBNBD is well maintained,
> > beneficial for the eco-system, find a proper location for it within
> > block/rdma subsystems? This would make it easier for people to try it
> > out and would also be a huge step for us in terms of maintenance
> > effort.
> > The names IBNBD and IBTRS are in fact misleading. IBTRS sits on top of
> > RDMA and is not bound to IB (We will evaluate IBTRS with ROCE in the
> > near future). Do you think it would make sense to rename the driver to
> > RNBD/RTRS?
>
> It is better to avoid "staging" tree, because it will lack attention of
> relevant people and your efforts will be lost once you will try to move
> out of staging. We are all remembering Lustre and don't want to see it
> again.
>
> Back then, you was asked to provide support for performance superiority.

I only have theories as to why ibnbd is showing better numbers than nvmeof:
1. The way we utilize the MQ framework in IBNBD. We promise to have
queue_depth (say 512) requests on each of the num_cpus hardware queues
of each device, but in fact we have only queue_depth for the whole
"session" toward a given server. The moment we have queue_depth
inflights we need to stop the queue (on a device on a cpu) we get more
requests on. We need to start them again after some requests have
completed. We maintain per-cpu lists of stopped HW queues, a bitmap
showing which lists are not empty, etc. to wake them up in a
round-robin fashion to avoid starvation of any device.
2. We only do rdma writes with imm. The server reserves queue_depth
buffers of max_io_size for a given client. The client manages those
itself. The client uses the imm field to tell the server which buffer has
been written (and where), and the server uses the imm field to send back
the errno. If our max_io_size is 64K, queue_depth is 512 and the client only
issues 4K IOs all the time, then 60K * 512 of memory is wasted. On the
other hand we do no buffer allocation/registration in the IO path on the
server side. The server sends the rdma addresses and keys of those
preregistered buffers at connection establishment and
deallocates/unregisters them when a session is closed. That's for
writes. For reads, the client registers the user buffers (after FR) and sends
the addresses and keys to the server (with an rdma write with imm).
The server rdma-writes into those buffers. The client then does the
unregistering/invalidation and completes the request.
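
(For illustration, the client-side write path described above boils down to a
single RDMA WRITE with immediate data; a rough sketch, where the buffer
address/rkey/id bookkeeping and the helper name are assumptions and only the
verbs structures are the real kernel API:)

#include <rdma/ib_verbs.h>

/*
 * Sketch: write the payload into one of the server's preregistered buffers
 * and encode which buffer was used in the immediate data, so the server does
 * no per-IO allocation or registration.  buf_addr/buf_rkey/buf_id would come
 * from the information the server sent at connection establishment.
 */
static int sketch_write_w_imm(struct ib_qp *qp, struct ib_sge *sge,
			      int num_sge, u64 buf_addr, u32 buf_rkey,
			      u32 buf_id, struct ib_cqe *cqe)
{
	struct ib_rdma_wr rdma_wr = {
		.wr = {
			.wr_cqe	     = cqe,
			.sg_list     = sge,
			.num_sge     = num_sge,
			.opcode	     = IB_WR_RDMA_WRITE_WITH_IMM,
			.send_flags  = IB_SEND_SIGNALED,
			/* tells the server which pool buffer was written */
			.ex.imm_data = cpu_to_be32(buf_id),
		},
		.remote_addr = buf_addr,
		.rkey	     = buf_rkey,
	};

	return ib_post_send(qp, &rdma_wr.wr, NULL);
}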

> Can you please share any numbers with us?
Apart from github
(https://github.com/ionos-enterprise/ibnbd/tree/master/performance/v4-v5.2-rc3)
the performance results for v5.2-rc3 on two different systems can be
accessed under dcd.ionos.com/ibnbd-performance-report. The page allows
filtering for the test scenarios of interest for comparison.

>
> Thanks

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-10 13:55     ` Jason Gunthorpe
@ 2019-07-10 16:25       ` Sagi Grimberg
  2019-07-10 17:25         ` Jason Gunthorpe
  0 siblings, 1 reply; 148+ messages in thread
From: Sagi Grimberg @ 2019-07-10 16:25 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, bvanassche, dledford, Roman Pen, gregkh


>> Another question, from what I understand from the code, the client
>> always rdma_writes data on writes (with imm) from a remote pool of
>> server buffers dedicated to it. Essentially all writes are immediate (no
>> rdma reads ever). How is that different than using send wrs to a set of
>> pre-posted recv buffers (like all others are doing)? Is it faster?
> 
> RDMA WRITE only is generally a bit faster, and if you use a buffer
> pool in a smart way it is possible to get very good data packing.

There is no packing; it's used exactly like send/recv, but with a remote
buffer pool (a pool of 512K buffers): the client selects one and does an rdma
write with imm to it.

> With
> SEND the number of recvq entries dictates how big the rx buffer can
> be, or you waste even more memory by using partial send buffers..

This is exactly how it is used here.

> A scheme like this seems like a high performance idea, but on the
> other side, I have no idea how you could possibly manage invalidations
> efficiently with a shared RX buffer pool...

There are no invalidations; this remote server pool is registered once
and stays registered for the lifetime of the session.

> The RXer has to push out an invalidation for the shared buffer pool
> MR, but we don't have protocols for partial MR invalidation.
> 
> Which is back to my earlier thought that the main reason this perfoms
> better is because it doesn't have synchronous MR invalidation.

This issue only exists on the client side. The server never
invalidates any of its buffers.

> Maybe this is fine, but it needs to be made very clear that it uses
> this insecure operating model to get higher performance..

I still do not understand why this should give any noticeable
performance advantage.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-10 16:25       ` Sagi Grimberg
@ 2019-07-10 17:25         ` Jason Gunthorpe
  2019-07-10 19:11           ` Sagi Grimberg
  0 siblings, 1 reply; 148+ messages in thread
From: Jason Gunthorpe @ 2019-07-10 17:25 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, bvanassche, dledford, Roman Pen, gregkh

On Wed, Jul 10, 2019 at 09:25:05AM -0700, Sagi Grimberg wrote:
> 
> > > Another question, from what I understand from the code, the client
> > > always rdma_writes data on writes (with imm) from a remote pool of
> > > server buffers dedicated to it. Essentially all writes are immediate (no
> > > rdma reads ever). How is that different than using send wrs to a set of
> > > pre-posted recv buffers (like all others are doing)? Is it faster?
> > 
> > RDMA WRITE only is generally a bit faster, and if you use a buffer
> > pool in a smart way it is possible to get very good data packing.
> 
> There is no packing, its used exactly as send/recv, but with a remote
> buffer pool (pool of 512K buffers) and the client selects one and rdma
> write with imm to it.

Well that makes little sense then:)

> > Maybe this is fine, but it needs to be made very clear that it uses
> > this insecure operating model to get higher performance..
> 
> I still do not understand why this should give any notice-able
> performance advantage.

Usually omitting invalidations gives a healthy bump.

Also, RDMA WRITE is generally faster than READ at the HW level in
various ways.

Jason

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-10 17:25         ` Jason Gunthorpe
@ 2019-07-10 19:11           ` Sagi Grimberg
  2019-07-11  7:27             ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Sagi Grimberg @ 2019-07-10 19:11 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, bvanassche, dledford, Roman Pen, gregkh


>> I still do not understand why this should give any notice-able
>> performance advantage.
> 
> Usually omitting invalidations gives a healthy bump.
> 
> Also, RDMA WRITE is generally faster than READ at the HW level in
> various ways.

Yes, but this should be essentially identical to running nvme-rdma
with 512KB of immediate-data (the nvme term is in-capsule data).

In the upstream nvme target we have inline_data_size port attribute
that is tunable for that (defaults to PAGE_SIZE).

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-10 19:11           ` Sagi Grimberg
@ 2019-07-11  7:27             ` Danil Kipnis
  0 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-07-11  7:27 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jason Gunthorpe, Jack Wang, linux-block, linux-rdma, axboe,
	Christoph Hellwig, bvanassche, dledford, Roman Pen, gregkh

Hi Sagi,

thanks a lot for the analysis. I didn't know about the
inline_data_size parameter in nvmet. It is at PAGE_SIZE on our
systems.
Will rerun our benchmarks with
echo 2097152 > /sys/kernel/config/nvmet/ports/1/param_inline_data_size
echo 2097152 > /sys/kernel/config/nvmet/ports/2/param_inline_data_size
before enabling the port.
Best
Danil.

On Wed, Jul 10, 2019 at 9:11 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> >> I still do not understand why this should give any notice-able
> >> performance advantage.
> >
> > Usually omitting invalidations gives a healthy bump.
> >
> > Also, RDMA WRITE is generally faster than READ at the HW level in
> > various ways.
>
> Yes, but this should be essentially identical to running nvme-rdma
> with 512KB of immediate-data (the nvme term is in-capsule data).
>
> In the upstream nvme target we have inline_data_size port attribute
> that is tunable for that (defaults to PAGE_SIZE).



-- 
Danil Kipnis
Linux Kernel Developer

1&1 IONOS Cloud GmbH | Greifswalder Str. 207 | 10405 Berlin | Germany
E-mail: danil.kipnis@cloud.ionos.com | Web: www.ionos.de


Head Office: Berlin, Germany
District Court Berlin Charlottenburg, Registration number: HRB 125506 B
Executive Management: Christoph Steffens, Matthias Steinberg, Achim Weiss

Member of United Internet

This e-mail may contain confidential and/or privileged information. If
you are not the intended recipient of this e-mail, you are hereby
notified that saving, distribution or use of the content of this
e-mail in any way is prohibited. If you have received this e-mail in
error, please notify the sender and delete the e-mail.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 19:45   ` Sagi Grimberg
  2019-07-10 13:55     ` Jason Gunthorpe
@ 2019-07-11  8:54     ` Danil Kipnis
  2019-07-12  0:22       ` Sagi Grimberg
  1 sibling, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-07-11  8:54 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jack Wang, linux-block, linux-rdma, axboe, Christoph Hellwig,
	bvanassche, jgg, dledford, Roman Pen, gregkh

Hi Sagi,

thanks a lot for the detailed reply. Answers inline below:

On Tue, Jul 9, 2019 at 9:46 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
> Hi Danil and Jack,
>
> > Hallo Doug, Hallo Jason, Hallo Jens, Hallo Greg,
> >
> > Could you please provide some feedback to the IBNBD driver and the
> > IBTRS library?
> > So far we addressed all the requests provided by the community
>
> That is not exactly correct AFAIR,
>
> My main issues which were raised before are:
> - IMO there isn't any justification to this ibtrs layering separation
>    given that the only user of this is your ibnbd. Unless you are
>    trying to submit another consumer, you should avoid adding another
>    subsystem that is not really general purpose.
We designed ibtrs not only with the IBNBD in mind but also as the
transport layer for a distributed SDS. We'd like to be able to do what
ceph is capable of (automatic up/down scaling of the storage cluster,
automatic recovery) but using in-kernel rdma-based IO transport
drivers, thin-provisioned volume managers, etc. to keep the highest
possible performance. That modest plan of ours should, among other things,
cover the following:
When using IBNBD/SRP/NVMEoF to export devices (say, thin-provisioned
volumes) from server to client and building an (md-)raid on top of the
imported devices on client side in order to provide for redundancy
across different machines, one gets very decent throughput and low
latency, since the IOs are sent in parallel to the storage machines.
One downside of this setup is that the resync traffic has to flow
over the client, where the md-raid is sitting. Ideally the resync
traffic should flow directly between the two "legs" (storage machines)
of the raid. The server side of such a "distributed raid" capable of
this direct syncing between the array members would necessarily
require some logic on the server side and hence could also sit on
top of ibtrs. (To continue the analogy, the "server" side of an
md-raid built on top of, say, two NVMEoF devices is just two block
devices, which cannot communicate with each other.)
All in all, ibtrs is a library to establish a "fat", multipath,
autoreconnectable connection between two hosts on top of rdma,
optimized for transport of IO traffic.

> - ibtrs in general is using almost no infrastructure from the existing
>    kernel subsystems. Examples are:
>    - tag allocation mechanism (which I'm not clear why its needed)
As you correctly noticed, our client manages the buffers allocated and
registered by the server at connection establishment. Our tags are
just a mechanism to take and release those buffers for incoming
requests on the client side. Since the buffers allocated by the server are
shared between all the devices mapped from that server and all
their HW queues (each having num_cpus of them), the mechanism behind
get_tag/put_tag also takes care of fairness.
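
(A rough sketch of how such shared get_tag/put_tag semantics could also be
expressed on top of the kernel's existing sbitmap_queue infrastructure; this
is illustrative only, not the ibtrs code, and all names here are made up:)

#include <linux/sbitmap.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/numa.h>

/* One bit per server buffer; all devices/HW queues of a session share it. */
struct sketch_tag_pool {
	struct sbitmap_queue sbq;
};

static int sketch_tag_pool_init(struct sketch_tag_pool *pool,
				unsigned int queue_depth)
{
	/* round_robin allocation spreads the buffers across users */
	return sbitmap_queue_init_node(&pool->sbq, queue_depth, -1, true,
				       GFP_KERNEL, NUMA_NO_NODE);
}

/* Returns a tag (buffer index) or -1 if all buffers are in flight. */
static int sketch_get_tag(struct sketch_tag_pool *pool)
{
	return __sbitmap_queue_get(&pool->sbq);
}

static void sketch_put_tag(struct sketch_tag_pool *pool, int tag)
{
	/* Releases the buffer index and wakes up any waiters on the pool. */
	sbitmap_queue_clear(&pool->sbq, tag, raw_smp_processor_id());
}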

>    - rdma rw abstraction similar to what we have in the core
On the one hand we have only a single IO-related function:
ibtrs_clt_request(READ/WRITE, session, ...), which executes an rdma write
with imm, or requests an rdma write with imm to be executed by the
server. On the other hand we provide an abstraction to establish and
manage what we call "session", which consist of multiple paths (to do
failover and multipath with different policies), where each path
consists of num_cpu rdma connections. Once you established a session
you can add or remove paths from it on the fly. In case the connection
to server is lost, the client does periodic attempts to reconnect
automatically. On the server side you get just sg-lists with a
direction READ or WRITE as requested by the client. We designed this
interface not only as the minimum required to build a block device on
top of rdma but also with a distributed raid in mind.
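
(A minimal usage sketch of that entry point; my_iu, my_msg and
my_conf_cb are placeholder names for the caller side, and the argument
meanings follow how the ibnbd client posted in this series calls it:)

static int xfer_sketch(struct ibtrs_clt *ibtrs, struct ibtrs_tag *tag,
		       struct my_iu *iu, struct scatterlist *sg,
		       unsigned int sg_len, size_t data_len)
{
	struct kvec vec = {
		.iov_base = &iu->msg,		/* small protocol header */
		.iov_len  = sizeof(iu->msg),
	};

	/* my_conf_cb(iu, errno) is invoked once the server confirms the IO */
	return ibtrs_clt_request(WRITE, my_conf_cb, ibtrs, tag,
				 iu, &vec, 1, data_len, sg, sg_len);
}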

>    - list_next_or_null_rr_rcu ??
We use that for multipath. The macro (and more importantly the way we
use it) has been reviewed by Linus and quite closely by Paul E.
McKenney. AFAIR the conclusion was that Roman's implementation is
correct, but too tricky to use correctly to be included into the
kernel as a public interface. See https://lkml.org/lkml/2018/5/18/659

>    - few other examples sprinkled around..
To the best of my knowledge we addressed everything we got comments on
and will definitely continue to do so in the future.

> Another question, from what I understand from the code, the client
> always rdma_writes data on writes (with imm) from a remote pool of
> server buffers dedicated to it. Essentially all writes are immediate (no
> rdma reads ever). How is that different than using send wrs to a set of
> pre-posted recv buffers (like all others are doing)? Is it faster?
At the very beginning of the project we did some measurements and saw,
that it is faster. I'm not sure if this is still true, since the
hardware and the drivers and rdma subsystem did change in that time.
Also it seemed to make the code simpler.

> Also, given that the server pre-allocate a substantial amount of memory
> for each connection, is it documented the requirements from the server
> side? Usually kernel implementations (especially upstream ones) will
> avoid imposing such large longstanding memory requirements on the system
> by default. I don't have a firm stand on this, but wanted to highlight
> this as you are sending this for upstream inclusion.
We definitely need to stress that somewhere. Will include into readme
and add to the cover letter next time. Our memory management is indeed
basically absent in favor of performance: The server reserves
queue_depth of say 512K buffers. Each buffer is used by client for
single IO only, no matter how big the request is. So if client only
issues 4K IOs, we do waste 508*queue_depth K of memory. We were aiming
for lowest possible latency from the beginning. It is probably
possible to implement some clever allocator on the server side which
wouldn't affect the performance a lot.
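
(To put rough numbers on that, purely as an illustration and assuming a
hypothetical queue_depth of 512: 512 buffers * 512K = 256M are reserved
per session up front; a client issuing only 4K IOs touches at most
512 * 4K = 2M of that, so roughly 508K * 512, i.e. about 254M, sits idle.)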

>
>   and
> > continue to maintain our code up-to-date with the upstream kernel
> > while having an extra compatibility layer for older kernels in our
> > out-of-tree repository.
>
> Overall, while I absolutely support your cause to lower your maintenance
> overhead by having this sit upstream, I don't see why this can be
> helpful to anyone else in the rdma community. If instead you can
> crystallize why/how ibnbd is faster than anything else, and perhaps
> contribute a common infrastructure piece (or enhance an existing one)
> such that other existing ulps can leverage, it will be a lot more
> compelling to include it upstream.
>
> > I understand that SRP and NVMEoF which are in the kernel already do
> > provide equivalent functionality for the majority of the use cases.
> > IBNBD on the other hand is showing higher performance and more
> > importantly includes the IBTRS - a general purpose library to
> > establish connections and transport BIO-like read/write sg-lists over
> > RDMA,
>
> But who needs it? Can other ulps use it or pieces of it? I keep failing
> to understand why is this a benefit if its specific to your ibnbd?
See above, and please ask if you have more questions about this.

Thank you,
Danil.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-11  8:54     ` Danil Kipnis
@ 2019-07-12  0:22       ` Sagi Grimberg
  2019-07-12  7:57         ` Jinpu Wang
  2019-07-12 10:58         ` Danil Kipnis
  0 siblings, 2 replies; 148+ messages in thread
From: Sagi Grimberg @ 2019-07-12  0:22 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, axboe, Christoph Hellwig,
	bvanassche, jgg, dledford, Roman Pen, gregkh


>> My main issues which were raised before are:
>> - IMO there isn't any justification to this ibtrs layering separation
>>     given that the only user of this is your ibnbd. Unless you are
>>     trying to submit another consumer, you should avoid adding another
>>     subsystem that is not really general purpose.
> We designed ibtrs not only with the IBNBD in mind but also as the
> transport layer for a distributed SDS. We'd like to be able to do what
> ceph is capable of (automatic up/down scaling of the storage cluster,
> automatic recovery) but using in-kernel rdma-based IO transport
> drivers, thin-provisioned volume managers, etc. to keep the highest
> possible performance.

Sounds lovely, but still very much bound to your ibnbd. And that part
is not included in the patch set, so I still don't see why should this
be considered as a "generic" transport subsystem (it clearly isn't).

> All in all ibtrs is a library to establish a "fat", multipath,
> autoreconnectable connection between two hosts on top of rdma,
> optimized for transport of IO traffic.

That is also dictating a wire-protocol which makes it useless to pretty
much any other consumer. Personally, I don't see how this library
would ever be used outside of your ibnbd.

>> - ibtrs in general is using almost no infrastructure from the existing
>>     kernel subsystems. Examples are:
>>     - tag allocation mechanism (which I'm not clear why its needed)
> As you correctly noticed our client manages the buffers allocated and
> registered by the server on the connection establishment. Our tags are
> just a mechanism to take and release those buffers for incoming
> requests on client side. Since the buffers allocated by the server are
> to be shared between all the devices mapped from that server and all
> their HW queues (each having num_cpus of them) the mechanism behind
> get_tag/put_tag also takes care of the fairness.

We have infrastructure for this, sbitmaps.
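
(For illustration, a rough sketch of what that could look like on top of
sbitmap_queue; treat the exact signatures as an assumption on my part,
this is not a drop-in patch:)

#include <linux/sbitmap.h>
#include <linux/smp.h>

/* one bit per pre-registered server buffer, shared by the whole session */
static struct sbitmap_queue buf_sbq;

/* init once per session, depth == queue_depth:
 *   sbitmap_queue_init_node(&buf_sbq, queue_depth, -1, false,
 *                           GFP_KERNEL, NUMA_NO_NODE);
 */

static int take_buf(void)
{
	/* free buffer index, or -1 if all buffers are currently in use */
	return __sbitmap_queue_get(&buf_sbq);
}

static void release_buf(int buf_id)
{
	sbitmap_queue_clear(&buf_sbq, buf_id, raw_smp_processor_id());
}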

>>     - rdma rw abstraction similar to what we have in the core
> On the one hand we have only a single IO-related function:
> ibtrs_clt_request(READ/WRITE, session,...), which executes rdma write
> with imm, or requests an rdma write with imm to be executed by the
> server.

For sure you can enhance the rw API to have imm support?

> On the other hand we provide an abstraction to establish and
> manage what we call "session", which consists of multiple paths (to do
> failover and multipath with different policies), where each path
> consists of num_cpu rdma connections.

That's fine, but it doesn't mean that it also needs to re-write
infrastructure that we already have.

> Once you established a session
> you can add or remove paths from it on the fly. In case the connection
> to server is lost, the client does periodic attempts to reconnect
> automatically. On the server side you get just sg-lists with a
> direction READ or WRITE as requested by the client. We designed this
> interface not only as the minimum required to build a block device on
> top of rdma but also with a distributed raid in mind.

I suggest you take a look at the rw API and use that in your transport.

>> Another question, from what I understand from the code, the client
>> always rdma_writes data on writes (with imm) from a remote pool of
>> server buffers dedicated to it. Essentially all writes are immediate (no
>> rdma reads ever). How is that different than using send wrs to a set of
>> pre-posted recv buffers (like all others are doing)? Is it faster?
> At the very beginning of the project we did some measurements and saw,
> that it is faster. I'm not sure if this is still true

Its not significantly faster (can't imagine why it would be).
What could make a difference is probably the fact that you never
do rdma reads for I/O writes which might be better. Also perhaps the
fact that you normally don't wait for send completions before completing
I/O (which is broken), and the fact that you batch recv operations.

I would be interested to understand what indeed makes ibnbd run faster
though.

>> Also, given that the server pre-allocate a substantial amount of memory
>> for each connection, is it documented the requirements from the server
>> side? Usually kernel implementations (especially upstream ones) will
>> avoid imposing such large longstanding memory requirements on the system
>> by default. I don't have a firm stand on this, but wanted to highlight
>> this as you are sending this for upstream inclusion.
> We definitely need to stress that somewhere. Will include into readme
> and add to the cover letter next time. Our memory management is indeed
> basically absent in favor of performance: The server reserves
> queue_depth of say 512K buffers. Each buffer is used by client for
> single IO only, no matter how big the request is. So if client only
> issues 4K IOs, we do waste 508*queue_depth K of memory. We were aiming
> for lowest possible latency from the beginning. It is probably
> possible to implement some clever allocator on the server side which
> wouldn't affect the performance a lot.

Or you can fallback to rdma_read like the rest of the ulps.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-12  0:22       ` Sagi Grimberg
@ 2019-07-12  7:57         ` Jinpu Wang
  2019-07-12 19:40           ` Sagi Grimberg
  2019-07-12 10:58         ` Danil Kipnis
  1 sibling, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-07-12  7:57 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Danil Kipnis, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, bvanassche, jgg, dledford, Roman Pen,
	Greg Kroah-Hartman

Hi Sagi,

> >> Another question, from what I understand from the code, the client
> >> always rdma_writes data on writes (with imm) from a remote pool of
> >> server buffers dedicated to it. Essentially all writes are immediate (no
> >> rdma reads ever). How is that different than using send wrs to a set of
> >> pre-posted recv buffers (like all others are doing)? Is it faster?
> > At the very beginning of the project we did some measurements and saw,
> > that it is faster. I'm not sure if this is still true
>
> Its not significantly faster (can't imagine why it would be).
> What could make a difference is probably the fact that you never
> do rdma reads for I/O writes which might be better. Also perhaps the
> fact that you normally don't wait for send completions before completing
> I/O (which is broken), and the fact that you batch recv operations.

I don't know how you came to the conclusion that we don't wait for send
completion before completing IO.

We do chain WRs on successful read requests from the server, see function
rdma_write_sg,

 318 static int rdma_write_sg(struct ibtrs_srv_op *id)
 319 {
 320         struct ibtrs_srv_sess *sess = to_srv_sess(id->con->c.sess);
 321         dma_addr_t dma_addr = sess->dma_addr[id->msg_id];
 322         struct ibtrs_srv *srv = sess->srv;
 323         struct ib_send_wr inv_wr, imm_wr;
 324         struct ib_rdma_wr *wr = NULL;
snip
333         need_inval = le16_to_cpu(id->rd_msg->flags) &
IBTRS_MSG_NEED_INVAL_F;
snip
 357                 wr->wr.wr_cqe   = &io_comp_cqe;
 358                 wr->wr.sg_list  = list;
 359                 wr->wr.num_sge  = 1;
 360                 wr->remote_addr = le64_to_cpu(id->rd_msg->desc[i].addr);
 361                 wr->rkey        = le32_to_cpu(id->rd_msg->desc[i].key);
 snip
368                 if (i < (sg_cnt - 1))
 369                         wr->wr.next = &id->tx_wr[i + 1].wr;
 370                 else if (need_inval)
 371                         wr->wr.next = &inv_wr;
 372                 else
 373                         wr->wr.next = &imm_wr;
 374
 375                 wr->wr.opcode = IB_WR_RDMA_WRITE;
 376                 wr->wr.ex.imm_data = 0;
 377                 wr->wr.send_flags  = 0;
snip
 386         if (need_inval) {
 387                 inv_wr.next = &imm_wr;
 388                 inv_wr.wr_cqe = &io_comp_cqe;
 389                 inv_wr.sg_list = NULL;
 390                 inv_wr.num_sge = 0;
 391                 inv_wr.opcode = IB_WR_SEND_WITH_INV;
 392                 inv_wr.send_flags = 0;
 393                 inv_wr.ex.invalidate_rkey = rkey;
 394         }
 395         imm_wr.next = NULL;
 396         imm_wr.wr_cqe = &io_comp_cqe;
 397         imm_wr.sg_list = NULL;
 398         imm_wr.num_sge = 0;
 399         imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
 400         imm_wr.send_flags = flags;
 401         imm_wr.ex.imm_data = cpu_to_be32(ibtrs_to_io_rsp_imm(id->msg_id,
 402                                                              0,
need_inval));
 403


When we need to do invalidation of remote memory, the WRs are chained
together; the last two are inv_wr and imm_wr.
imm_wr is the last one, which is important: due to the fact that RC QPs
are ordered, we know that when we receive
IB_WC_RECV_RDMA_WITH_IMM and w_inval is true, the hardware has already
finished its job of invalidating the MR.
If the server fails to invalidate, we do local invalidation and wait
for completion.

On client side
284 static void complete_rdma_req(struct ibtrs_clt_io_req *req, int errno,
 285                               bool notify, bool can_wait)
 286 {
 287         struct ibtrs_clt_con *con = req->con;
 288         struct ibtrs_clt_sess *sess;
 289         struct ibtrs_clt *clt;
 290         int err;
 291
 292         if (WARN_ON(!req->in_use))
 293                 return;
 294         if (WARN_ON(!req->con))
 295                 return;
 296         sess = to_clt_sess(con->c.sess);
 297         clt = sess->clt;
 298
 299         if (req->sg_cnt) {
 300                 if (unlikely(req->dir == DMA_FROM_DEVICE &&
req->need_inv)) {
 301                         /*
 302                          * We are here to invalidate RDMA read requests
 303                          * ourselves.  In normal scenario server should
 304                          * send INV for all requested RDMA reads, but
 305                          * we are here, thus two things could happen:
 306                          *
 307                          *    1.  this is failover, when errno != 0
 308                          *        and can_wait == 1,
 309                          *
 310                          *    2.  something totally bad happened and
 311                          *        server forgot to send INV, so we
 312                          *        should do that ourselves.
 313                          */
 314
 315                         if (likely(can_wait)) {
 316                                 req->need_inv_comp = true;
 317                         } else {
 318                                 /* This should be IO path, so
always notify */
 319                                 WARN_ON(!notify);
 320                                 /* Save errno for INV callback */
 321                                 req->inv_errno = errno;
 322                         }
 323
 324                         err = ibtrs_inv_rkey(req);
 325                         if (unlikely(err)) {
 326                                 ibtrs_err(sess, "Send INV WR
key=%#x: %d\n",
 327                                           req->mr->rkey, err);
 328                         } else if (likely(can_wait)) {
 329                                 wait_for_completion(&req->inv_comp);
 330                         } else {
 331                                 /*
 332                                  * Something went wrong, so request will be
 333                                  * completed from INV callback.
 334                                  */
 335                                 WARN_ON_ONCE(1);
 336
 337                                 return;
 338                         }
 339                 }
 340                 ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
 341                                 req->sg_cnt, req->dir);
 342         }
 343         if (sess->stats.enable_rdma_lat)
 344                 ibtrs_clt_update_rdma_lat(&sess->stats,
 345                                 req->dir == DMA_FROM_DEVICE,
 346                                 jiffies_to_msecs(jiffies -
req->start_jiffies));
 347         ibtrs_clt_decrease_inflight(&sess->stats);
 348
 349         req->in_use = false;
 350         req->con = NULL;
 351
 352         if (notify)
 353                 req->conf(req->priv, errno);
 354 }

 356 static void process_io_rsp(struct ibtrs_clt_sess *sess, u32
msg_id,
 357                            s16 errno, bool w_inval)
 358 {
 359         struct ibtrs_clt_io_req *req;
 360
 361         if (WARN_ON(msg_id >= sess->queue_depth))
 362                 return;
 363
 364         req = &sess->reqs[msg_id];
 365         /* Drop need_inv if server responsed with invalidation */
 366         req->need_inv &= !w_inval;
 367         complete_rdma_req(req, errno, true, false);
 368 }

Hope this clears the doubt.

Regards,
Jack

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-12  0:22       ` Sagi Grimberg
  2019-07-12  7:57         ` Jinpu Wang
@ 2019-07-12 10:58         ` Danil Kipnis
  1 sibling, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-07-12 10:58 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jack Wang, linux-block, linux-rdma, axboe, Christoph Hellwig,
	bvanassche, jgg, dledford, Roman Pen, gregkh

On Fri, Jul 12, 2019 at 2:22 AM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> >> My main issues which were raised before are:
> >> - IMO there isn't any justification to this ibtrs layering separation
> >>     given that the only user of this is your ibnbd. Unless you are
> >>     trying to submit another consumer, you should avoid adding another
> >>     subsystem that is not really general purpose.
> > We designed ibtrs not only with the IBNBD in mind but also as the
> > transport layer for a distributed SDS. We'd like to be able to do what
> > ceph is capable of (automatic up/down scaling of the storage cluster,
> > automatic recovery) but using in-kernel rdma-based IO transport
> > drivers, thin-provisioned volume managers, etc. to keep the highest
> > possible performance.
>
> Sounds lovely, but still very much bound to your ibnbd. And that part
> is not included in the patch set, so I still don't see why should this
> be considered as a "generic" transport subsystem (it clearly isn't).
Having IBTRS sit on a storage enables that storage to communicate with
other storages (forward requests, request reads from other storages,
i.e. for sync traffic). IBTRS is generic in the sense that it removes
the strict separation into initiator (converting BIOs into some
hardware specific protocol messages) and target (which forwards those
messages to some local device supporting that protocol).
It appears less generic to me to talk SCSI or NVME between storages if
some storages have SCSI disks, others NVME disks or LVM volumes, or a
mixed setup. IBTRS allows one to just send or request a read of an
sg-list between machines over rdma - the very minimum required to
transport a BIO.
It would indeed support our case for the library if we could propose
at least two users of it. We currently only have a very early stage
prototype capable of organizing storages in pools, multiplexing IO
between different storages, etc. sitting on top of ibtrs; it's not
functional yet. On the other hand, ibnbd together with ibtrs alone
already amounts to over 10000 lines.

> > All in all ibtrs is a library to establish a "fat", multipath,
> > autoreconnectable connection between two hosts on top of rdma,
> > optimized for transport of IO traffic.
>
> That is also dictating a wire-protocol which makes it useless to pretty
> much any other consumer. Personally, I don't see how this library
> would ever be used outside of your ibnbd.
It's true, IBTRS also imposes a protocol for connection establishment
and the IO path. I think we did reduce at least the IO part to a bare
minimum:
* Write *

1. When processing a write request client selects one of the memory chunks
on the server side and rdma writes there the user data, user header and the
IBTRS_MSG_RDMA_WRITE message. Apart from the type (write), the message only
contains size of the user header. The client tells the server which chunk has
been accessed and at what offset the IBTRS_MSG_RDMA_WRITE can be found by
using the IMM field.

2. When confirming a write request server sends an "empty" rdma message with
an immediate field. The 32 bit field is used to specify the outstanding
inflight IO and for the error code.

CLT                                                          SRV
usr_data + usr_hdr + ibtrs_msg_rdma_write -----------------> [IBTRS_IO_REQ_IMM]
[IBTRS_IO_RSP_IMM]                        <----------------- (id + errno)

* Read *

1. When processing a read request client selects one of the memory chunks
on the server side and rdma writes there the user header and the
IBTRS_MSG_RDMA_READ message. This message contains the type (read), size of
the user header, flags (specifying if memory invalidation is necessary) and the
list of addresses along with keys for the data to be read into.

2. When confirming a read request server transfers the requested data first,
attaches an invalidation message if requested and finally an "empty" rdma
message with an immediate field. The 32 bit field is used to specify the
outstanding inflight IO and the error code.

CLT                                           SRV
usr_hdr + ibtrs_msg_rdma_read --------------> [IBTRS_IO_REQ_IMM]
[IBTRS_IO_RSP_IMM]            <-------------- usr_data + (id + errno)
or in case client requested invalidation:
[IBTRS_IO_RSP_IMM_W_INV]      <-------------- usr_data + (INV) + (id + errno)
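
(Purely as an illustration of how little the immediate carries -- the
actual bit layout lives in the ibtrs headers and may well differ from
this sketch:)

/* Hypothetical packing of the 32-bit immediate used for IO responses:
 * chunk/message id in the high bits, an invalidation flag and a small
 * error code in the low bits. Illustrative only. */
static inline u32 sketch_io_rsp_imm(u32 msg_id, u32 err, bool w_inval)
{
	return (msg_id << 9) | ((u32)w_inval << 8) | (err & 0xff);
}

static inline void sketch_io_rsp_imm_parse(u32 imm, u32 *msg_id, u32 *err,
					   bool *w_inval)
{
	*msg_id  = imm >> 9;
	*w_inval = imm & (1U << 8);
	*err     = imm & 0xff;
}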

> >> - ibtrs in general is using almost no infrastructure from the existing
> >>     kernel subsystems. Examples are:
> >>     - tag allocation mechanism (which I'm not clear why its needed)
> > As you correctly noticed our client manages the buffers allocated and
> > registered by the server on the connection establishment. Our tags are
> > just a mechanism to take and release those buffers for incoming
> > requests on client side. Since the buffers allocated by the server are
> > to be shared between all the devices mapped from that server and all
> > their HW queues (each having num_cpus of them) the mechanism behind
> > get_tag/put_tag also takes care of the fairness.
>
> We have infrastructure for this, sbitmaps.
AFAIR Roman did try to use sbitmap but found no benefits in terms of
readability or number of lines:
" What is left unchanged on IBTRS side but was suggested to modify:
     - Bart suggested to use sbitmap instead of calling find_first_zero_bit()
  and friends.  I found calling pure bit API is more explicit in
  comparison to sbitmap - there is no need in using sbitmap_queue
  and all the power of wait queues, no benefits in terms of LoC
  as well." https://lwn.net/Articles/756994/

If sbitmap is a must for our use case from the infrastructure point of
view, we will reiterate on it.

>
> >>     - rdma rw abstraction similar to what we have in the core
> > On the one hand we have only a single IO-related function:
> > ibtrs_clt_request(READ/WRITE, session,...), which executes rdma write
> > with imm, or requests an rdma write with imm to be executed by the
> > server.
>
> For sure you can enhance the rw API to have imm support?
I'm not familiar with the architectural intention behind rw.c.
Extending the API with support for the imm field is (I guess) doable.
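
(One possible shape of that, just as a sketch: rw.c already lets the
caller chain its own WR behind the generated RDMA WRs via the chain_wr
argument of rdma_rw_ctx_post(), so an "empty" RDMA_WRITE_WITH_IMM could
be appended without changing the core. Whether that covers everything
we need is exactly what we would have to find out:)

#include <rdma/rw.h>

/* Sketch: let rw.c build and post the data transfer, then chain a
 * zero-length RDMA_WRITE_WITH_IMM carrying the 32-bit id/errno. */
static int send_read_rsp_sketch(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
				u8 port_num, struct scatterlist *sgl,
				u32 sg_cnt, u64 remote_addr, u32 rkey,
				struct ib_cqe *cqe, u32 imm)
{
	struct ib_send_wr imm_wr = {
		.opcode      = IB_WR_RDMA_WRITE_WITH_IMM,
		.wr_cqe      = cqe,
		.ex.imm_data = cpu_to_be32(imm),
		.send_flags  = IB_SEND_SIGNALED,
	};
	int ret;

	ret = rdma_rw_ctx_init(ctx, qp, port_num, sgl, sg_cnt, 0,
			       remote_addr, rkey, DMA_TO_DEVICE);
	if (ret < 0)
		return ret;

	/* rw.c posts its RDMA WRITE WRs with imm_wr chained at the end */
	return rdma_rw_ctx_post(ctx, qp, port_num, cqe, &imm_wr);
}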

> > On the other hand we provide an abstraction to establish and
> > manage what we call "session", which consists of multiple paths (to do
> > failover and multipath with different policies), where each path
> > consists of num_cpu rdma connections.
>
> That's fine, but it doesn't mean that it also needs to re-write
> infrastructure that we already have.
Do you refer to rw.c?

> > Once you established a session
> > you can add or remove paths from it on the fly. In case the connection
> > to server is lost, the client does periodic attempts to reconnect
> > automatically. On the server side you get just sg-lists with a
> > direction READ or WRITE as requested by the client. We designed this
> > interface not only as the minimum required to build a block device on
> > top of rdma but also with a distributed raid in mind.
>
> I suggest you take a look at the rw API and use that in your transport.
We will look into rw.c. Do you suggest we move the multipath and the
multiple QPs per path and connection establishment on *top* of it or
*into* it?

> >> Another question, from what I understand from the code, the client
> >> always rdma_writes data on writes (with imm) from a remote pool of
> >> server buffers dedicated to it. Essentially all writes are immediate (no
> >> rdma reads ever). How is that different than using send wrs to a set of
> >> pre-posted recv buffers (like all others are doing)? Is it faster?
> > At the very beginning of the project we did some measurements and saw,
> > that it is faster. I'm not sure if this is still true
>
> Its not significantly faster (can't imagine why it would be).
> What could make a difference is probably the fact that you never
> do rdma reads for I/O writes which might be better. Also perhaps the
> fact that you normally don't wait for send completions before completing
> I/O (which is broken), and the fact that you batch recv operations.
>
> I would be interested to understand what indeed makes ibnbd run faster
> though.
Yes, we would like to understand this too. I will try increasing the
inline_data_size on nvme in our benchmarks as the next step to check
if this influences the results.

> >> Also, given that the server pre-allocate a substantial amount of memory
> >> for each connection, is it documented the requirements from the server
> >> side? Usually kernel implementations (especially upstream ones) will
> >> avoid imposing such large longstanding memory requirements on the system
> >> by default. I don't have a firm stand on this, but wanted to highlight
> >> this as you are sending this for upstream inclusion.
> > We definitely need to stress that somewhere. Will include into readme
> > and add to the cover letter next time. Our memory management is indeed
> > basically absent in favor of performance: The server reserves
> > queue_depth of say 512K buffers. Each buffer is used by client for
> > single IO only, no matter how big the request is. So if client only
> > issues 4K IOs, we do waste 508*queue_depth K of memory. We were aiming
> > for lowest possible latency from the beginning. It is probably
> > possible to implement some clever allocator on the server side which
> > wouldn't affect the performance a lot.
>
> Or you can fallback to rdma_read like the rest of the ulps.
We currently have a single round trip for every write IO: write + ack.
Wouldn't switching to rdma_read make 2 round trips out of it: command
+ rdma_read + ack?

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-12  7:57         ` Jinpu Wang
@ 2019-07-12 19:40           ` Sagi Grimberg
  2019-07-15 11:21             ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Sagi Grimberg @ 2019-07-12 19:40 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Danil Kipnis, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, bvanassche, jgg, dledford, Roman Pen,
	Greg Kroah-Hartman


> Hi Sagi,
> 
>>>> Another question, from what I understand from the code, the client
>>>> always rdma_writes data on writes (with imm) from a remote pool of
>>>> server buffers dedicated to it. Essentially all writes are immediate (no
>>>> rdma reads ever). How is that different than using send wrs to a set of
>>>> pre-posted recv buffers (like all others are doing)? Is it faster?
>>> At the very beginning of the project we did some measurements and saw,
>>> that it is faster. I'm not sure if this is still true
>>
>> Its not significantly faster (can't imagine why it would be).
>> What could make a difference is probably the fact that you never
>> do rdma reads for I/O writes which might be better. Also perhaps the
>> fact that you normally don't wait for send completions before completing
>> I/O (which is broken), and the fact that you batch recv operations.
> 
> I don't know how you came to the conclusion that we don't wait for send
> completion before completing IO.
> 
> We do chain WRs on successful read requests from the server, see function
> rdma_write_sg,

I was referring to the client side

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-12 19:40           ` Sagi Grimberg
@ 2019-07-15 11:21             ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-07-15 11:21 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Danil Kipnis, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, bvanassche, jgg, dledford, Roman Pen,
	Greg Kroah-Hartman

Sagi Grimberg <sagi@grimberg.me> 于2019年7月12日周五 下午9:40写道:
>
>
> > Hi Sagi,
> >
> >>>> Another question, from what I understand from the code, the client
> >>>> always rdma_writes data on writes (with imm) from a remote pool of
> >>>> server buffers dedicated to it. Essentially all writes are immediate (no
> >>>> rdma reads ever). How is that different than using send wrs to a set of
> >>>> pre-posted recv buffers (like all others are doing)? Is it faster?
> >>> At the very beginning of the project we did some measurements and saw,
> >>> that it is faster. I'm not sure if this is still true
> >>
> >> Its not significantly faster (can't imagine why it would be).
> >> What could make a difference is probably the fact that you never
> >> do rdma reads for I/O writes which might be better. Also perhaps the
> >> fact that you normally don't wait for send completions before completing
> >> I/O (which is broken), and the fact that you batch recv operations.
> >
> > I don't know how you came to the conclusion that we don't wait for send
> > completion before completing IO.
> >
> > We do chain WRs on successful read requests from the server, see function
> > rdma_write_sg,
>
> I was referring to the client side
Hi Sagi,

I checked the 3 commits you mentioned in the earlier thread again, and
I now get your point.
You meant the behavior the following commits try to fix:

4af7f7ff92a4 ("nvme-rdma: don't complete requests before a send work
request has completed")
b4b591c87f2b ("nvme-rdma: don't suppress send completions")

In this sense, the ibtrs client side is not waiting for the completions
of RDMA WRITE WRs to finish.
But we did it right for local invalidation.

I checked SRP/iser; they do not even wait for local invalidation, no
signal flag is set.

If it's a problem, we should fix them too, and maybe more.

My question is: do you see this behavior (HCA retrying sends due to
dropped acks) in the field, and is it possible to reproduce?

Thanks,
Jack

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)
  2019-07-09 21:27             ` Sagi Grimberg
@ 2019-07-19 13:12               ` Danil Kipnis
  0 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-07-19 13:12 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jason Gunthorpe, Jinpu Wang, Jinpu Wang, Leon Romanovsky,
	linux-block, linux-rdma, Jens Axboe, Christoph Hellwig,
	bvanassche, dledford, Roman Pen, Greg Kroah-Hartman

Hi Sagi,

thanks a lot for the information. We are doing the right thing
regarding the invalidation (your 2f122e4f5107), but we do use
unsignalled sends and need to fix that. Please correct me if I'm
wrong: the patches (4af7f7ff92a4, b4b591c87f2b) fix the problem that
if the ack from the target is lost for some reason, the initiator's HCA
will resend it even after the request has been completed.
But doesn't the same problem also persist the other way around, for
lost acks from the client? I mean, the target did a send for the "read"
IOs; the client completed the request (after invalidation, refcount
dropped to 0, etc.), but the ack is not delivered to the HCA of the
target, so the target will also resend it. This seems unfixable, since
the client can't possibly know whether the server received its ack or not.
Doesn't the problem go away if rdma_conn_param.retry_count is just set to 0?

Thanks for your help,
Best,
Danil.

On Tue, Jul 9, 2019 at 11:27 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> >> Thanks Jason for feedback.
> >> Can you be  more specific about  "the invalidation model for MR was wrong"
> >
> > MR's must be invalidated before data is handed over to the block
> > layer. It can't leave MRs open for access and then touch the memory
> > the MR covers.
>
> Jason is referring to these fixes:
> 2f122e4f5107 ("nvme-rdma: wait for local invalidation before completing
> a request")
> 4af7f7ff92a4 ("nvme-rdma: don't complete requests before a send work
> request has completed")
> b4b591c87f2b ("nvme-rdma: don't suppress send completions")

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-06-20 15:03 ` [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers Jack Wang
@ 2019-09-13 22:10   ` Bart Van Assche
  2019-09-15 14:30     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-13 22:10 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +#define ibnbd_log(fn, dev, fmt, ...) ({				\
> +	__builtin_choose_expr(						\
> +		__builtin_types_compatible_p(				\
> +			typeof(dev), struct ibnbd_clt_dev *),		\
> +		fn("<%s@%s> " fmt, (dev)->pathname,			\
> +		(dev)->sess->sessname,					\
> +		   ##__VA_ARGS__),					\
> +		__builtin_choose_expr(					\
> +			__builtin_types_compatible_p(typeof(dev),	\
> +					struct ibnbd_srv_sess_dev *),	\
> +			fn("<%s@%s>: " fmt, (dev)->pathname,		\
> +			   (dev)->sess->sessname, ##__VA_ARGS__),	\
> +			unknown_type()));				\
> +})

Please remove the __builtin_choose_expr() / 
__builtin_types_compatible_p() construct and split this macro into two 
macros or inline functions: one for struct ibnbd_clt_dev and another one 
for struct ibnbd_srv_sess_dev.
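
Something along these lines, just to illustrate the suggestion (the
names are only a proposal; the format strings are taken from the macro
above):

#define ibnbd_clt_log(fn, dev, fmt, ...)				\
	fn("<%s@%s> " fmt, (dev)->pathname, (dev)->sess->sessname,	\
	   ##__VA_ARGS__)

#define ibnbd_srv_log(fn, dev, fmt, ...)				\
	fn("<%s@%s>: " fmt, (dev)->pathname, (dev)->sess->sessname,	\
	   ##__VA_ARGS__)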

> +#define IBNBD_PROTO_VER_MAJOR 2
> +#define IBNBD_PROTO_VER_MINOR 0
> +
> +#define IBNBD_PROTO_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
> +			       __stringify(IBNBD_PROTO_VER_MINOR)
> +
> +#ifndef IBNBD_VER_STRING
> +#define IBNBD_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
> +			 __stringify(IBNBD_PROTO_VER_MINOR)

Upstream code should not have a version number.

> +/* TODO: should be configurable */
> +#define IBTRS_PORT 1234

How about converting this macro into a kernel module parameter?
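
Something like the following would do (parameter name, default and
permissions are just a suggestion):

static unsigned short ibtrs_port = 1234;
module_param_named(port, ibtrs_port, ushort, 0444);
MODULE_PARM_DESC(port, "Port the IBTRS server listens on (default: 1234)");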

> +enum ibnbd_access_mode {
> +	IBNBD_ACCESS_RO,
> +	IBNBD_ACCESS_RW,
> +	IBNBD_ACCESS_MIGRATION,
> +};

Some more information about what IBNBD_ACCESS_MIGRATION represents would 
be welcome.

> +#define _IBNBD_FILEIO  0
> +#define _IBNBD_BLOCKIO 1
> +#define _IBNBD_AUTOIO  2
 >
> +enum ibnbd_io_mode {
> +	IBNBD_FILEIO = _IBNBD_FILEIO,
> +	IBNBD_BLOCKIO = _IBNBD_BLOCKIO,
> +	IBNBD_AUTOIO = _IBNBD_AUTOIO,
> +};

Since the IBNBD_* and _IBNBD_* constants have the same numerical value, 
are the former constants really necessary?

> +/**
> + * struct ibnbd_msg_sess_info - initial session info from client to server
> + * @hdr:		message header
> + * @ver:		IBNBD protocol version
> + */
> +struct ibnbd_msg_sess_info {
> +	struct ibnbd_msg_hdr hdr;
> +	u8		ver;
> +	u8		reserved[31];
> +};

Since the wire protocol is versioned, is it really necessary to add 31 
reserved bytes?

> +struct ibnbd_msg_sess_info_rsp {
> +	struct ibnbd_msg_hdr hdr;
> +	u8		ver;
> +	u8		reserved[31];
> +};

Same comment here.

> +/**
> + * struct ibnbd_msg_open_rsp - response message to IBNBD_MSG_OPEN
> + * @hdr:		message header
> + * @nsectors:		number of sectors

What is the size of a single sector?

> + * @device_id:		device_id on server side to identify the device

Please use the same order for the members in the kernel-doc header as in 
the structure.

> + * @queue_flags:	queue_flags of the device on server side

Where is the queue_flags member?

> + * @discard_granularity: size of the internal discard allocation unit
> + * @discard_alignment: offset from internal allocation assignment
> + * @physical_block_size: physical block size device supports
> + * @logical_block_size: logical block size device supports

What is the unit for these four members?

> + * @max_segments:	max segments hardware support in one transfer

Does 'hardware' refer to the RDMA adapter that transfers the IBNBD 
message or to the storage device? In the latter case, I assume that 
transfer refers to a DMA transaction?

> + * @io_mode:		io_mode device is opened.

Should a reference to enum ibnbd_io_mode be added?

> +	u8			__padding[10];

Why ten padding bytes? Does alignment really matter for a data structure 
like this one?

> +/**
> + * struct ibnbd_msg_io_old - message for I/O read/write for
> + * ver < IBNBD_PROTO_VER_MAJOR
> + * This structure is there only to know the size of the "old" message format
> + * @hdr:	message header
> + * @device_id:	device_id on server side to find the right device
> + * @sector:	bi_sector attribute from struct bio
> + * @rw:		bitmask, valid values are defined in enum ibnbd_io_flags
> + * @bi_size:    number of bytes for I/O read/write
> + * @prio:       priority
> + */
> +struct ibnbd_msg_io_old {
> +	struct ibnbd_msg_hdr hdr;
> +	__le32		device_id;
> +	__le64		sector;
> +	__le32		rw;
> +	__le32		bi_size;
> +};

Since this is the first version of IBNBD that is being sent upstream, I 
think that ibnbd_msg_io_old should be left out.

> +
> +/**
> + * struct ibnbd_msg_io - message for I/O read/write
> + * @hdr:	message header
> + * @device_id:	device_id on server side to find the right device
> + * @sector:	bi_sector attribute from struct bio
> + * @rw:		bitmask, valid values are defined in enum ibnbd_io_flags

enum ibnbd_io_flags doesn't look like a bitmask but rather like a bit 
field (https://en.wikipedia.org/wiki/Bit_field)?

> +static inline u32 ibnbd_to_bio_flags(u32 ibnbd_flags)
> +{
> +	u32 bio_flags;

The names ibnbd_flags and bio_flags are confusing since these two 
variables not only contain flags but also an operation. How about 
changing 'flags' into 'opf' or 'op_flags'?

> +static inline const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
> +{
> +	switch (mode) {
> +	case IBNBD_FILEIO:
> +		return "fileio";
> +	case IBNBD_BLOCKIO:
> +		return "blockio";
> +	case IBNBD_AUTOIO:
> +		return "autoio";
> +	default:
> +		return "unknown";
> +	}
> +}
> +
> +static inline const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
> +{
> +	switch (mode) {
> +	case IBNBD_ACCESS_RO:
> +		return "ro";
> +	case IBNBD_ACCESS_RW:
> +		return "rw";
> +	case IBNBD_ACCESS_MIGRATION:
> +		return "migration";
> +	default:
> +		return "unknown";
> +	}
> +}

These two functions are not in the hot path and hence should not be 
inline functions.

Note: I plan to review the entire patch series but it may take some time 
before I have finished.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 16/25] ibnbd: client: private header with client structs and functions
  2019-06-20 15:03 ` [PATCH v4 16/25] ibnbd: client: private header with client structs and functions Jack Wang
@ 2019-09-13 22:25   ` Bart Van Assche
  2019-09-17 16:36     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-13 22:25 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +	char			pathname[NAME_MAX];
[ ... ]
 > +	char			blk_symlink_name[NAME_MAX];

Please allocate path names dynamically instead of hard-coding the upper 
length for a path.
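
For example (sketch only, the freeing path is omitted):

	char *pathname;		/* instead of char pathname[NAME_MAX] */

	dev->pathname = kstrdup(pathname, GFP_KERNEL);
	if (!dev->pathname)
		return -ENOMEM;
	/* ... and kfree(dev->pathname) when the device is freed */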

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-06-20 15:03 ` [PATCH v4 17/25] ibnbd: client: main functionality Jack Wang
@ 2019-09-13 23:46   ` Bart Van Assche
  2019-09-16 14:17     ` Danil Kipnis
                       ` (2 more replies)
  2019-09-14  0:00   ` Bart Van Assche
  1 sibling, 3 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-13 23:46 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +MODULE_VERSION(IBNBD_VER_STRING);

No version numbers in upstream code please.

> +/*
> + * This is for closing devices when unloading the module:
> + * we might be closing a lot (>256) of devices in parallel
> + * and it is better not to use the system_wq.
> + */
> +static struct workqueue_struct *unload_wq;

I think that a better motivation is needed for the introduction of a new 
workqueue.

> +#define KERNEL_SECTOR_SIZE      512

Please use SECTOR_SIZE instead of redefining it.

> +static int ibnbd_clt_revalidate_disk(struct ibnbd_clt_dev *dev,
> +				     size_t new_nsectors)
> +{
> +	int err = 0;
> +
> +	ibnbd_info(dev, "Device size changed from %zu to %zu sectors\n",
> +		   dev->nsectors, new_nsectors);
> +	dev->nsectors = new_nsectors;
> +	set_capacity(dev->gd,
> +		     dev->nsectors * (dev->logical_block_size /
> +				      KERNEL_SECTOR_SIZE));
> +	err = revalidate_disk(dev->gd);
> +	if (err)
> +		ibnbd_err(dev, "Failed to change device size from"
> +			  " %zu to %zu, err: %d\n", dev->nsectors,
> +			  new_nsectors, err);
> +	return err;
> +}

Since this function changes the block device size, I think that the name 
ibnbd_clt_revalidate_disk() is confusing. Please rename this function.

> +/**
> + * ibnbd_get_cpu_qlist() - finds a list with HW queues to be requeued
> + *
> + * Description:
> + *     Each CPU has a list of HW queues, which needs to be requeed.  If a list
> + *     is not empty - it is marked with a bit.  This function finds first
> + *     set bit in a bitmap and returns corresponding CPU list.
> + */

What does it mean to requeue a queue? Queue elements can be requeued but 
a queue in its entirety not. Please make this comment more clear.

> +/**
> + * ibnbd_requeue_if_needed() - requeue if CPU queue is marked as non empty
> + *
> + * Description:
> + *     Each CPU has it's own list of HW queues, which should be requeued.
> + *     Function finds such list with HW queues, takes a list lock, picks up
> + *     the first HW queue out of the list and requeues it.
> + *
> + * Return:
> + *     True if the queue was requeued, false otherwise.
> + *
> + * Context:
> + *     Does not matter.
> + */

Same comment here.

> +/**
> + * ibnbd_requeue_all_if_idle() - requeue all queues left in the list if
> + *     session is idling (there are no requests in-flight).
> + *
> + * Description:
> + *     This function tries to rerun all stopped queues if there are no
> + *     requests in-flight anymore.  This function tries to solve an obvious
> + *     problem, when number of tags < than number of queues (hctx), which
> + *     are stopped and put to sleep.  If last tag, which has been just put,
> + *     does not wake up all left queues (hctxs), IO requests hang forever.
> + *
> + *     That can happen when all number of tags, say N, have been exhausted
> + *     from one CPU, and we have many block devices per session, say M.
> + *     Each block device has it's own queue (hctx) for each CPU, so eventually
> + *     we can put that number of queues (hctxs) to sleep: M x nr_cpu_ids.
> + *     If number of tags N < M x nr_cpu_ids finally we will get an IO hang.
> + *
> + *     To avoid this hang last caller of ibnbd_put_tag() (last caller is the
> + *     one who observes sess->busy == 0) must wake up all remaining queues.
> + *
> + * Context:
> + *     Does not matter.
> + */

Same comment here.

A more general question is why ibnbd needs its own queue management 
while no other block driver needs this?

> +static void ibnbd_softirq_done_fn(struct request *rq)
> +{
> +	struct ibnbd_clt_dev *dev	= rq->rq_disk->private_data;
> +	struct ibnbd_clt_session *sess	= dev->sess;
> +	struct ibnbd_iu *iu;
> +
> +	iu = blk_mq_rq_to_pdu(rq);
> +	ibnbd_put_tag(sess, iu->tag);
> +	blk_mq_end_request(rq, iu->status);
> +}
> +
> +static void msg_io_conf(void *priv, int errno)
> +{
> +	struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
> +	struct ibnbd_clt_dev *dev = iu->dev;
> +	struct request *rq = iu->rq;
> +
> +	iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
> +
> +	if (softirq_enable) {
> +		blk_mq_complete_request(rq);
> +	} else {
> +		ibnbd_put_tag(dev->sess, iu->tag);
> +		blk_mq_end_request(rq, iu->status);
> +	}

Block drivers must call blk_mq_complete_request() instead of 
blk_mq_end_request() to complete a request after processing of the 
request has been started. Calling blk_mq_end_request() to complete a 
request is racy in case a timeout occurs while blk_mq_end_request() is 
in progress.

> +static void msg_conf(void *priv, int errno)
> +{
> +	struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;

The kernel code I'm familiar with does not cast void pointers explicitly 
into another type. Please follow that convention and leave the cast out 
from the above and also from similar statements.

> +static int send_usr_msg(struct ibtrs_clt *ibtrs, int dir,
> +			struct ibnbd_iu *iu, struct kvec *vec, size_t nr,
> +			size_t len, struct scatterlist *sg, unsigned int sg_len,
> +			void (*conf)(struct work_struct *work),
> +			int *errno, bool wait)
> +{
> +	int err;
> +
> +	INIT_WORK(&iu->work, conf);
> +	err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
> +				iu, vec, nr, len, sg, sg_len);
> +	if (!err && wait) {
> +		wait_event(iu->comp.wait, iu->comp.errno != INT_MAX);

This looks weird. Why is this a wait_event() call instead of a 
wait_for_completion() call?
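
For example (sketch of the suggested alternative; here iu->comp would
become a struct completion and the errno would be stored separately,
e.g. in a hypothetical iu->comp_errno):

	init_completion(&iu->comp);
	err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
				iu, vec, nr, len, sg, sg_len);
	if (!err && wait) {
		wait_for_completion(&iu->comp);
		err = iu->comp_errno;
	}
	/* ... with complete(&iu->comp) in the confirmation path instead
	 * of waking up iu->comp.wait */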

> +static struct blk_mq_ops ibnbd_mq_ops;
> +static int setup_mq_tags(struct ibnbd_clt_session *sess)
> +{
> +	struct blk_mq_tag_set *tags = &sess->tag_set;
> +
> +	memset(tags, 0, sizeof(*tags));
> +	tags->ops		= &ibnbd_mq_ops;
> +	tags->queue_depth	= sess->queue_depth;
> +	tags->numa_node		= NUMA_NO_NODE;
> +	tags->flags		= BLK_MQ_F_SHOULD_MERGE |
> +				  BLK_MQ_F_TAG_SHARED;
> +	tags->cmd_size		= sizeof(struct ibnbd_iu);
> +	tags->nr_hw_queues	= num_online_cpus();
> +
> +	return blk_mq_alloc_tag_set(tags);
> +}

Forward declarations should be avoided when possible. Can the forward 
declaration of ibnbd_mq_ops be avoided by moving the definition of 
setup_mq_tags() down?

> +static inline void wake_up_ibtrs_waiters(struct ibnbd_clt_session *sess)
> +{
> +	/* paired with rmb() in wait_for_ibtrs_connection() */
> +	smp_wmb();
> +	sess->ibtrs_ready = true;
> +	wake_up_all(&sess->ibtrs_waitq);
> +}

The placement of the smp_wmb() call looks wrong to me. Since 
wake_up_all() and wait_event() already guarantee acquire/release 
behavior, I think that the explicit barriers can be left out from this 
function and also from wait_for_ibtrs_connection().

> +static void wait_for_ibtrs_disconnection(struct ibnbd_clt_session *sess)
> +__releases(&sess_lock)
> +__acquires(&sess_lock)
> +{
> +	DEFINE_WAIT_FUNC(wait, autoremove_wake_function);
> +
> +	prepare_to_wait(&sess->ibtrs_waitq, &wait, TASK_UNINTERRUPTIBLE);
> +	if (IS_ERR_OR_NULL(sess->ibtrs)) {
> +		finish_wait(&sess->ibtrs_waitq, &wait);
> +		return;
> +	}
> +	mutex_unlock(&sess_lock);
> +	/* After unlock session can be freed, so careful */
> +	schedule();
> +	mutex_lock(&sess_lock);
> +}

This doesn't look right: any random wake_up() call can wake up this 
function. Shouldn't there be a loop in this function that causes the 
schedule() call to be repeated until the disconnect has happened?

> +
> +static struct ibnbd_clt_session *__find_and_get_sess(const char *sessname)
> +__releases(&sess_lock)
> +__acquires(&sess_lock)
> +{
> +	struct ibnbd_clt_session *sess;
> +	int err;
> +
> +again:
> +	list_for_each_entry(sess, &sess_list, list) {
> +		if (strcmp(sessname, sess->sessname))
> +			continue;
> +
> +		if (unlikely(sess->ibtrs_ready && IS_ERR_OR_NULL(sess->ibtrs)))
> +			/*
> +			 * No IBTRS connection, session is dying.
> +			 */
> +			continue;
> +
> +		if (likely(ibnbd_clt_get_sess(sess))) {
> +			/*
> +			 * Alive session is found, wait for IBTRS connection.
> +			 */
> +			mutex_unlock(&sess_lock);
> +			err = wait_for_ibtrs_connection(sess);
> +			if (unlikely(err))
> +				ibnbd_clt_put_sess(sess);
> +			mutex_lock(&sess_lock);
> +
> +			if (unlikely(err))
> +				/* Session is dying, repeat the loop */
> +				goto again;
> +
> +			return sess;
> +		}
> +		/*
> +		 * Ref is 0, session is dying, wait for IBTRS disconnect
> +		 * in order to avoid session names clashes.
> +		 */
> +		wait_for_ibtrs_disconnection(sess);
> +		/*
> +		 * IBTRS is disconnected and soon session will be freed,
> +		 * so repeat a loop.
> +		 */
> +		goto again;
> +	}
> +
> +	return NULL;
> +}
 >
> +
> +static struct ibnbd_clt_session *find_and_get_sess(const char *sessname)
> +{
> +	struct ibnbd_clt_session *sess;
> +
> +	mutex_lock(&sess_lock);
> +	sess = __find_and_get_sess(sessname);
> +	mutex_unlock(&sess_lock);
> +
> +	return sess;
> +}

Shouldn't __find_and_get_sess() function increase the reference count of 
sess before it returns? In other words, what prevents that the session 
is freed from another thread before find_and_get_sess() returns?

> +/*
> + * Get iorio of current task
> + */
> +static short ibnbd_current_ioprio(void)
> +{
> +	struct task_struct *tsp = current;
> +	unsigned short prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
> +
> +	if (likely(tsp->io_context))
> +		prio = tsp->io_context->ioprio;
> +	return prio;
> +}

ibnbd should use req_get_ioprio() and should not look at 
current->io_context->ioprio. I think it is the responsibility of the 
block layer to extract the I/O priority from the task context. As an 
example, here is how the aio code does this:

		req->ki_ioprio = get_current_ioprio();

> +static blk_status_t ibnbd_queue_rq(struct blk_mq_hw_ctx *hctx,
> +				   const struct blk_mq_queue_data *bd)
> +{
> +	struct request *rq = bd->rq;
> +	struct ibnbd_clt_dev *dev = rq->rq_disk->private_data;
> +	struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);
> +	int err;
> +
> +	if (unlikely(!ibnbd_clt_dev_is_mapped(dev)))
> +		return BLK_STS_IOERR;
> +
> +	iu->tag = ibnbd_get_tag(dev->sess, IBTRS_IO_CON, IBTRS_TAG_NOWAIT);
> +	if (unlikely(!iu->tag)) {
> +		ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_IFBUSY);
> +		return BLK_STS_RESOURCE;
> +	}
> +
> +	blk_mq_start_request(rq);
> +	err = ibnbd_client_xfer_request(dev, rq, iu);
> +	if (likely(err == 0))
> +		return BLK_STS_OK;
> +	if (unlikely(err == -EAGAIN || err == -ENOMEM)) {
> +		ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_10ms);
> +		ibnbd_put_tag(dev->sess, iu->tag);
> +		return BLK_STS_RESOURCE;
> +	}
> +
> +	ibnbd_put_tag(dev->sess, iu->tag);
> +	return BLK_STS_IOERR;
> +}

Every other block driver relies on the block layer core for tag 
allocation. Why does ibnbd need its own tag management?

> +static void setup_request_queue(struct ibnbd_clt_dev *dev)
> +{
> +	blk_queue_logical_block_size(dev->queue, dev->logical_block_size);
> +	blk_queue_physical_block_size(dev->queue, dev->physical_block_size);
> +	blk_queue_max_hw_sectors(dev->queue, dev->max_hw_sectors);
> +	blk_queue_max_write_same_sectors(dev->queue,
> +					 dev->max_write_same_sectors);
> +
> +	/*
> +	 * we don't support discards to "discontiguous" segments
> +	 * in on request
               ^^
               one?
> +	 */
> +	blk_queue_max_discard_segments(dev->queue, 1);
> +
> +	blk_queue_max_discard_sectors(dev->queue, dev->max_discard_sectors);
> +	dev->queue->limits.discard_granularity	= dev->discard_granularity;
> +	dev->queue->limits.discard_alignment	= dev->discard_alignment;
> +	if (dev->max_discard_sectors)
> +		blk_queue_flag_set(QUEUE_FLAG_DISCARD, dev->queue);
> +	if (dev->secure_discard)
> +		blk_queue_flag_set(QUEUE_FLAG_SECERASE, dev->queue);
> +
> +	blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
> +	blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
> +	blk_queue_max_segments(dev->queue, dev->max_segments);
> +	blk_queue_io_opt(dev->queue, dev->sess->max_io_size);
> +	blk_queue_virt_boundary(dev->queue, 4095);
> +	blk_queue_write_cache(dev->queue, true, true);
> +	dev->queue->queuedata = dev;
> +}

> +static void destroy_gen_disk(struct ibnbd_clt_dev *dev)
> +{
> +	del_gendisk(dev->gd);

> +	/*
> +	 * Before marking queue as dying (blk_cleanup_queue() does that)
> +	 * we have to be sure that everything in-flight has gone.
> +	 * Blink with freeze/unfreeze.
> +	 */
> +	blk_mq_freeze_queue(dev->queue);
> +	blk_mq_unfreeze_queue(dev->queue);

Please remove the above seven lines. blk_cleanup_queue() calls 
blk_set_queue_dying() and the second call in blk_set_queue_dying() is 
blk_freeze_queue_start().

> +	blk_cleanup_queue(dev->queue);
> +	put_disk(dev->gd);
> +}

> +
> +static void destroy_sysfs(struct ibnbd_clt_dev *dev,
> +			  const struct attribute *sysfs_self)
> +{
> +	ibnbd_clt_remove_dev_symlink(dev);
> +	if (dev->kobj.state_initialized) {
> +		if (sysfs_self)
> +			/* To avoid deadlock firstly commit suicide */
                                                             ^^^^^^^
Please choose terminology that is more appropriate for a professional 
context.

> +			sysfs_remove_file_self(&dev->kobj, sysfs_self);
> +		kobject_del(&dev->kobj);
> +		kobject_put(&dev->kobj);
> +	}
> +}

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
  2019-06-20 15:03 ` [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules Jack Wang
  2019-07-09 15:10   ` Leon Romanovsky
@ 2019-09-13 23:56   ` Bart Van Assche
  2019-09-19 10:30     ` Jinpu Wang
  1 sibling, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-13 23:56 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> From: Roman Pen <roman.penyaev@profitbricks.com>
> 
> Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> ---
>   MAINTAINERS | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a6954776a37e..0b7fd93f738d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -7590,6 +7590,20 @@ IBM ServeRAID RAID DRIVER
>   S:	Orphan
>   F:	drivers/scsi/ips.*
>   
> +IBNBD BLOCK DRIVERS
> +M:	IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> +L:	linux-block@vger.kernel.org
> +S:	Maintained
> +T:	git git://github.com/profitbricks/ibnbd.git
> +F:	drivers/block/ibnbd/
> +
> +IBTRS TRANSPORT DRIVERS
> +M:	IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> +L:	linux-rdma@vger.kernel.org
> +S:	Maintained
> +T:	git git://github.com/profitbricks/ibnbd.git
> +F:	drivers/infiniband/ulp/ibtrs/
> +
>   ICH LPC AND GPIO DRIVER
>   M:	Peter Tyser <ptyser@xes-inc.com>
>   S:	Maintained

I think the T: entry is for kernel trees against which developers should 
prepare their patches. Since the ibnbd repository on github is an 
out-of-tree kernel driver I don't think that it should appear in the 
MAINTAINERS file.

Bart.



^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 24/25] ibnbd: a bit of documentation
  2019-06-20 15:03 ` [PATCH v4 24/25] ibnbd: a bit of documentation Jack Wang
@ 2019-09-13 23:58   ` Bart Van Assche
  2019-09-18 12:22     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-13 23:58 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> From: Roman Pen <roman.penyaev@profitbricks.com>
> 
> README with description of major sysfs entries.

Please have a look at Documentation/ABI/README and follow the 
instructions from that document.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-06-20 15:03 ` [PATCH v4 17/25] ibnbd: client: main functionality Jack Wang
  2019-09-13 23:46   ` Bart Van Assche
@ 2019-09-14  0:00   ` Bart Van Assche
  1 sibling, 0 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-14  0:00 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> From: Roman Pen <roman.penyaev@profitbricks.com>

A "From" address should be a valid email address. For the above address 
I got the following reply:

550 5.1.1 The email account that you tried to reach does not exist. 
Please try double-checking the recipient's email address for typos or 
unnecessary spaces.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-13 22:10   ` Bart Van Assche
@ 2019-09-15 14:30     ` Jinpu Wang
  2019-09-16  5:27       ` Leon Romanovsky
                         ` (3 more replies)
  0 siblings, 4 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-15 14:30 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

Thanks Bart for the detailed review, replies inline.

On Sat, Sep 14, 2019 at 12:10 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +#define ibnbd_log(fn, dev, fmt, ...) ({                              \
> > +     __builtin_choose_expr(                                          \
> > +             __builtin_types_compatible_p(                           \
> > +                     typeof(dev), struct ibnbd_clt_dev *),           \
> > +             fn("<%s@%s> " fmt, (dev)->pathname,                     \
> > +             (dev)->sess->sessname,                                  \
> > +                ##__VA_ARGS__),                                      \
> > +             __builtin_choose_expr(                                  \
> > +                     __builtin_types_compatible_p(typeof(dev),       \
> > +                                     struct ibnbd_srv_sess_dev *),   \
> > +                     fn("<%s@%s>: " fmt, (dev)->pathname,            \
> > +                        (dev)->sess->sessname, ##__VA_ARGS__),       \
> > +                     unknown_type()));                               \
> > +})
>
> Please remove the __builtin_choose_expr() /
> __builtin_types_compatible_p() construct and split this macro into two
> macros or inline functions: one for struct ibnbd_clt_dev and another one
> for struct ibnbd_srv_sess_dev.
Ok, will split it into two macros.
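Roughly something like this (just a sketch; the macro names are
placeholders, the format strings are the ones from the current macro):

#define ibnbd_clt_log(fn, dev, fmt, ...)				\
	fn("<%s@%s> " fmt, (dev)->pathname, (dev)->sess->sessname,	\
	   ##__VA_ARGS__)

#define ibnbd_srv_log(fn, dev, fmt, ...)				\
	fn("<%s@%s>: " fmt, (dev)->pathname, (dev)->sess->sessname,	\
	   ##__VA_ARGS__)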

>
> > +#define IBNBD_PROTO_VER_MAJOR 2
> > +#define IBNBD_PROTO_VER_MINOR 0
> > +
> > +#define IBNBD_PROTO_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
> > +                            __stringify(IBNBD_PROTO_VER_MINOR)
> > +
> > +#ifndef IBNBD_VER_STRING
> > +#define IBNBD_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
> > +                      __stringify(IBNBD_PROTO_VER_MINOR)
>
> Upstream code should not have a version number.
IBNBD_VER_STRING can be removed together with MODULE_VERSION.
>
> > +/* TODO: should be configurable */
> > +#define IBTRS_PORT 1234
>
> How about converting this macro into a kernel module parameter?
Sounds good, will do.
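Roughly like this (just a sketch; the parameter name, type and
permissions are assumptions):

static unsigned short ibtrs_port = 1234;
module_param_named(port, ibtrs_port, ushort, 0444);
MODULE_PARM_DESC(port, "Port the IBTRS server listens on (default: 1234)");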
>
> > +enum ibnbd_access_mode {
> > +     IBNBD_ACCESS_RO,
> > +     IBNBD_ACCESS_RW,
> > +     IBNBD_ACCESS_MIGRATION,
> > +};
>
> Some more information about what IBNBD_ACCESS_MIGRATION represents would
> be welcome.
This is a special mode to temporarily allow RW access during VM
migration; will add comments in the next round.
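Something along these lines (sketch of the comments to be added):

enum ibnbd_access_mode {
	IBNBD_ACCESS_RO,	/* device is exported read-only */
	IBNBD_ACCESS_RW,	/* device is exported read-write */
	IBNBD_ACCESS_MIGRATION,	/* temporary RW access during VM migration */
};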
>
> > +#define _IBNBD_FILEIO  0
> > +#define _IBNBD_BLOCKIO 1
> > +#define _IBNBD_AUTOIO  2
>  >
> > +enum ibnbd_io_mode {
> > +     IBNBD_FILEIO = _IBNBD_FILEIO,
> > +     IBNBD_BLOCKIO = _IBNBD_BLOCKIO,
> > +     IBNBD_AUTOIO = _IBNBD_AUTOIO,
> > +};
>
> Since the IBNBD_* and _IBNBD_* constants have the same numerical value,
> are the former constants really necessary?
Seems we can remove _IBNBD_*.
>
> > +/**
> > + * struct ibnbd_msg_sess_info - initial session info from client to server
> > + * @hdr:             message header
> > + * @ver:             IBNBD protocol version
> > + */
> > +struct ibnbd_msg_sess_info {
> > +     struct ibnbd_msg_hdr hdr;
> > +     u8              ver;
> > +     u8              reserved[31];
> > +};
>
> Since the wire protocol is versioned, is it really necessary to add 31
> reserved bytes?
You never know; we prefer to keep the reserved bytes for future extension.
31 bytes is not much, is it?


>
> > +struct ibnbd_msg_sess_info_rsp {
> > +     struct ibnbd_msg_hdr hdr;
> > +     u8              ver;
> > +     u8              reserved[31];
> > +};
>
> Same comment here.
Ditto.
>
> > +/**
> > + * struct ibnbd_msg_open_rsp - response message to IBNBD_MSG_OPEN
> > + * @hdr:             message header
> > + * @nsectors:                number of sectors
>
> What is the size of a single sector?
512 bytes; will mention it explicitly in the next round.
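E.g. the kernel-doc line could simply become:

 * @nsectors:	device size in 512-byte sectors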
>
> > + * @device_id:               device_id on server side to identify the device
>
> Please use the same order for the members in the kernel-doc header as in
> the structure.
Ok, will fix
>
> > + * @queue_flags:     queue_flags of the device on server side
>
> Where is the queue_flags member?
Oh, will remove it; it's a leftover.
>
> > + * @discard_granularity: size of the internal discard allocation unit
> > + * @discard_alignment: offset from internal allocation assignment
> > + * @physical_block_size: physical block size device supports
> > + * @logical_block_size: logical block size device supports
>
> What is the unit for these four members?
Will update to make this clearer.
>
> > + * @max_segments:    max segments hardware support in one transfer
>
> Does 'hardware' refer to the RDMA adapter that transfers the IBNBD
> message or to the storage device? In the latter case, I assume that
> transfer refers to a DMA transaction?
"hardware" refers to the storage device on the server-side.

>
> > + * @io_mode:         io_mode device is opened.
>
> Should a reference to enum ibnbd_io_mode be added?
sounds good.
>
> > +     u8                      __padding[10];
>
> Why ten padding bytes? Does alignment really matter for a data structure
> like this one?
It's more of a reserved space for future use; will rename padding to reserved.
>
> > +/**
> > + * struct ibnbd_msg_io_old - message for I/O read/write for
> > + * ver < IBNBD_PROTO_VER_MAJOR
> > + * This structure is there only to know the size of the "old" message format
> > + * @hdr:     message header
> > + * @device_id:       device_id on server side to find the right device
> > + * @sector:  bi_sector attribute from struct bio
> > + * @rw:              bitmask, valid values are defined in enum ibnbd_io_flags
> > + * @bi_size:    number of bytes for I/O read/write
> > + * @prio:       priority
> > + */
> > +struct ibnbd_msg_io_old {
> > +     struct ibnbd_msg_hdr hdr;
> > +     __le32          device_id;
> > +     __le64          sector;
> > +     __le32          rw;
> > +     __le32          bi_size;
> > +};
>
> Since this is the first version of IBNBD that is being sent upstream, I
> think that ibnbd_msg_io_old should be left out.

>
> > +
> > +/**
> > + * struct ibnbd_msg_io - message for I/O read/write
> > + * @hdr:     message header
> > + * @device_id:       device_id on server side to find the right device
> > + * @sector:  bi_sector attribute from struct bio
> > + * @rw:              bitmask, valid values are defined in enum ibnbd_io_flags
>
> enum ibnbd_io_flags doesn't look like a bitmask but rather like a bit
> field (https://en.wikipedia.org/wiki/Bit_field)?
I will remove the "bitmask", I probably will also rename "rw "to "opf".
>
> > +static inline u32 ibnbd_to_bio_flags(u32 ibnbd_flags)
> > +{
> > +     u32 bio_flags;
>
> The names ibnbd_flags and bio_flags are confusing since these two
> variables not only contain flags but also an operation. How about
> changing 'flags' into 'opf' or 'op_flags'?
Sounds good, will change to ibnbd_opf and bio_opf.
>
> > +static inline const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
> > +{
> > +     switch (mode) {
> > +     case IBNBD_FILEIO:
> > +             return "fileio";
> > +     case IBNBD_BLOCKIO:
> > +             return "blockio";
> > +     case IBNBD_AUTOIO:
> > +             return "autoio";
> > +     default:
> > +             return "unknown";
> > +     }
> > +}
> > +
> > +static inline const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
> > +{
> > +     switch (mode) {
> > +     case IBNBD_ACCESS_RO:
> > +             return "ro";
> > +     case IBNBD_ACCESS_RW:
> > +             return "rw";
> > +     case IBNBD_ACCESS_MIGRATION:
> > +             return "migration";
> > +     default:
> > +             return "unknown";
> > +     }
> > +}
>
> These two functions are not in the hot path and hence should not be
> inline functions.
Sounds reasonable, will remove the inline.
>
> Note: I plan to review the entire patch series but it may take some time
> before I have finished reviewing the entire patch series.
>
That will be great, thanks a lot, Bart.
> Bart.


Regards,
-- 
Jack Wang
Linux Kernel Developer
Platform Engineering Compute (IONOS Cloud)

1&1 IONOS SE | Greifswalder Str. 207 | 10405 Berlin | Germany
Phone: +49 30 57700-8042 | Fax: +49 30 57700-8598
E-mail: jinpu.wang@cloud.ionos.com | Web: www.ionos.de

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-15 14:30     ` Jinpu Wang
@ 2019-09-16  5:27       ` Leon Romanovsky
  2019-09-16 13:45         ` Bart Van Assche
  2019-09-16  7:08       ` Danil Kipnis
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 148+ messages in thread
From: Leon Romanovsky @ 2019-09-16  5:27 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Sun, Sep 15, 2019 at 04:30:04PM +0200, Jinpu Wang wrote:
> Thanks Bart for detailed review, reply inline.
>
> On Sat, Sep 14, 2019 at 12:10 AM Bart Van Assche <bvanassche@acm.org> wrote:
> >
> > On 6/20/19 8:03 AM, Jack Wang wrote:
> > > +#define ibnbd_log(fn, dev, fmt, ...) ({                              \
> > > +     __builtin_choose_expr(                                          \
> > > +             __builtin_types_compatible_p(                           \
> > > +                     typeof(dev), struct ibnbd_clt_dev *),           \
> > > +             fn("<%s@%s> " fmt, (dev)->pathname,                     \
> > > +             (dev)->sess->sessname,                                  \
> > > +                ##__VA_ARGS__),                                      \
> > > +             __builtin_choose_expr(                                  \
> > > +                     __builtin_types_compatible_p(typeof(dev),       \
> > > +                                     struct ibnbd_srv_sess_dev *),   \
> > > +                     fn("<%s@%s>: " fmt, (dev)->pathname,            \
> > > +                        (dev)->sess->sessname, ##__VA_ARGS__),       \
> > > +                     unknown_type()));                               \
> > > +})
> >
> > Please remove the __builtin_choose_expr() /
> > __builtin_types_compatible_p() construct and split this macro into two
> > macros or inline functions: one for struct ibnbd_clt_dev and another one
> > for struct ibnbd_srv_sess_dev.
> Ok, will split to two macros.
>
> >
> > > +#define IBNBD_PROTO_VER_MAJOR 2
> > > +#define IBNBD_PROTO_VER_MINOR 0
> > > +
> > > +#define IBNBD_PROTO_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
> > > +                            __stringify(IBNBD_PROTO_VER_MINOR)
> > > +
> > > +#ifndef IBNBD_VER_STRING
> > > +#define IBNBD_VER_STRING __stringify(IBNBD_PROTO_VER_MAJOR) "." \
> > > +                      __stringify(IBNBD_PROTO_VER_MINOR)
> >
> > Upstream code should not have a version number.
> IBNBD_VER_STRING can be removed together with MODULE_VERSION.
> >
> > > +/* TODO: should be configurable */
> > > +#define IBTRS_PORT 1234
> >
> > How about converting this macro into a kernel module parameter?
> Sounds good, will do.

Don't rush to do it; defer it to be the last change before merging.
This is a controversial request which not everyone here will like.

Thanks

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-15 14:30     ` Jinpu Wang
  2019-09-16  5:27       ` Leon Romanovsky
@ 2019-09-16  7:08       ` Danil Kipnis
  2019-09-16 14:57       ` Jinpu Wang
  2019-09-16 15:39       ` Jinpu Wang
  3 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-09-16  7:08 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Roman Pen

> > > +/**
> > > + * struct ibnbd_msg_open_rsp - response message to IBNBD_MSG_OPEN
> > > + * @hdr:             message header
> > > + * @nsectors:                number of sectors
> >
> > What is the size of a single sector?
> 512b, will mention explicitly in the next round.
We only have KERNEL_SECTOR_SIZE=512, defined in ibnbd-clt.c. Looks like
we only depend on this exact number to set the capacity of the block
device on the client side. I'm not sure whether it is worth extending
the protocol to send the number from the server instead.

> > > + * @max_segments:    max segments hardware support in one transfer
> >
> > Does 'hardware' refer to the RDMA adapter that transfers the IBNBD
> > message or to the storage device? In the latter case, I assume that
> > transfer refers to a DMA transaction?
> "hardware" refers to the storage device on the server-side.
The field contains queue_max_segments() of the target block device and
is used to call blk_queue_max_segments() on the corresponding device
on the client side.
We also have a BMAX_SEGMENTS define in ibnbd-clt.h, which sets an
upper limit on max_segments and does refer to the capabilities of the
RDMA adapter. This information should only be known to the transport
module and ideally would be returned to IBNBD during registration
with IBTRS.

> > Note: I plan to review the entire patch series but it may take some time
> > before I have finished reviewing the entire patch series.
> >
> That will be great, thanks a  lot, Bart
Thank you Bart!

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-16  5:27       ` Leon Romanovsky
@ 2019-09-16 13:45         ` Bart Van Assche
  2019-09-17 15:41           ` Leon Romanovsky
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-16 13:45 UTC (permalink / raw)
  To: Leon Romanovsky, Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On 9/15/19 10:27 PM, Leon Romanovsky wrote:
> On Sun, Sep 15, 2019 at 04:30:04PM +0200, Jinpu Wang wrote:
>> On Sat, Sep 14, 2019 at 12:10 AM Bart Van Assche <bvanassche@acm.org> wrote:
>>>> +/* TODO: should be configurable */
>>>> +#define IBTRS_PORT 1234
>>>
>>> How about converting this macro into a kernel module parameter?
>> Sounds good, will do.
> 
> Don't rush to do it and defer it to be the last change before merging,
> this is controversial request which not everyone will like here.

Hi Leon,

If you do not agree with changing this macro into a kernel module 
parameter please suggest an alternative.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-13 23:46   ` Bart Van Assche
@ 2019-09-16 14:17     ` Danil Kipnis
  2019-09-16 16:46       ` Bart Van Assche
  2019-09-17 13:09     ` Jinpu Wang
  2019-09-18 16:05     ` Jinpu Wang
  2 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-16 14:17 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang, Roman Pen

On Sat, Sep 14, 2019 at 1:46 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +MODULE_VERSION(IBNBD_VER_STRING);
>
> No version numbers in upstream code please.
Will drop this, thanks.
>
> > +/*
> > + * This is for closing devices when unloading the module:
> > + * we might be closing a lot (>256) of devices in parallel
> > + * and it is better not to use the system_wq.
> > + */
> > +static struct workqueue_struct *unload_wq;
>
> I think that a better motivation is needed for the introduction of a new
> workqueue.
We didn't want to pollute the system workqueue when unmapping a big
number of devices at once in parallel. Will revisit this.

>
> > +#define KERNEL_SECTOR_SIZE      512
>
> Please use SECTOR_SIZE instead of redefining it.
Right.

>
> > +static int ibnbd_clt_revalidate_disk(struct ibnbd_clt_dev *dev,
> > +                                  size_t new_nsectors)
> > +{
> > +     int err = 0;
> > +
> > +     ibnbd_info(dev, "Device size changed from %zu to %zu sectors\n",
> > +                dev->nsectors, new_nsectors);
> > +     dev->nsectors = new_nsectors;
> > +     set_capacity(dev->gd,
> > +                  dev->nsectors * (dev->logical_block_size /
> > +                                   KERNEL_SECTOR_SIZE));
> > +     err = revalidate_disk(dev->gd);
> > +     if (err)
> > +             ibnbd_err(dev, "Failed to change device size from"
> > +                       " %zu to %zu, err: %d\n", dev->nsectors,
> > +                       new_nsectors, err);
> > +     return err;
> > +}
>
> Since this function changes the block device size, I think that the name
> ibnbd_clt_revalidate_disk() is confusing. Please rename this function.
I guess ibnbd_clt_resize_disk() would be more appropriate.
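Roughly (a sketch only, reusing the fields from the quoted code and
SECTOR_SIZE as suggested above):

static int ibnbd_clt_resize_disk(struct ibnbd_clt_dev *dev,
				 size_t new_nsectors)
{
	int err;

	ibnbd_info(dev, "Device size changed from %zu to %zu sectors\n",
		   dev->nsectors, new_nsectors);
	dev->nsectors = new_nsectors;
	set_capacity(dev->gd,
		     dev->nsectors * (dev->logical_block_size / SECTOR_SIZE));
	err = revalidate_disk(dev->gd);
	if (err)
		ibnbd_err(dev, "Failed to change device size to %zu, err: %d\n",
			  new_nsectors, err);
	return err;
}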

>
> > +/**
> > + * ibnbd_get_cpu_qlist() - finds a list with HW queues to be requeued
> > + *
> > + * Description:
> > + *     Each CPU has a list of HW queues, which needs to be requeed.  If a list
> > + *     is not empty - it is marked with a bit.  This function finds first
> > + *     set bit in a bitmap and returns corresponding CPU list.
> > + */
>
> What does it mean to requeue a queue? Queue elements can be requeued but
> a queue in its entirety not. Please make this comment more clear.
Will fix the comment. The right wording should probably be "..., which
need to be rerun". We have a list of "stopped" queues for each cpu. We
need to select a list and a queue on that list to rerun, when an IO is
completed.

>
> > +/**
> > + * ibnbd_requeue_if_needed() - requeue if CPU queue is marked as non empty
> > + *
> > + * Description:
> > + *     Each CPU has it's own list of HW queues, which should be requeued.
> > + *     Function finds such list with HW queues, takes a list lock, picks up
> > + *     the first HW queue out of the list and requeues it.
> > + *
> > + * Return:
> > + *     True if the queue was requeued, false otherwise.
> > + *
> > + * Context:
> > + *     Does not matter.
> > + */
>
> Same comment here.
>
> > +/**
> > + * ibnbd_requeue_all_if_idle() - requeue all queues left in the list if
> > + *     session is idling (there are no requests in-flight).
> > + *
> > + * Description:
> > + *     This function tries to rerun all stopped queues if there are no
> > + *     requests in-flight anymore.  This function tries to solve an obvious
> > + *     problem, when number of tags < than number of queues (hctx), which
> > + *     are stopped and put to sleep.  If last tag, which has been just put,
> > + *     does not wake up all left queues (hctxs), IO requests hang forever.
> > + *
> > + *     That can happen when all number of tags, say N, have been exhausted
> > + *     from one CPU, and we have many block devices per session, say M.
> > + *     Each block device has it's own queue (hctx) for each CPU, so eventually
> > + *     we can put that number of queues (hctxs) to sleep: M x nr_cpu_ids.
> > + *     If number of tags N < M x nr_cpu_ids finally we will get an IO hang.
> > + *
> > + *     To avoid this hang last caller of ibnbd_put_tag() (last caller is the
> > + *     one who observes sess->busy == 0) must wake up all remaining queues.
> > + *
> > + * Context:
> > + *     Does not matter.
> > + */
>
> Same comment here.
>
> A more general question is why ibnbd needs its own queue management
> while no other block driver needs this?
Each IBNBD device promises to have a queue_depth (of say 512) on each
of its num_cpus hardware queues. In fact we can only process
queue_depth in-flight requests at once on the whole ibtrs session
connecting a given client with a given server. Those 512 inflights
(corresponding to the number of buffers reserved by the server for
this particular client) have to be shared among all the devices mapped
on this session. This leads to the situation that we receive more
requests than we can process at the moment, so we need to stop queues
and start them again later in some fair fashion.

>
> > +static void ibnbd_softirq_done_fn(struct request *rq)
> > +{
> > +     struct ibnbd_clt_dev *dev       = rq->rq_disk->private_data;
> > +     struct ibnbd_clt_session *sess  = dev->sess;
> > +     struct ibnbd_iu *iu;
> > +
> > +     iu = blk_mq_rq_to_pdu(rq);
> > +     ibnbd_put_tag(sess, iu->tag);
> > +     blk_mq_end_request(rq, iu->status);
> > +}
> > +
> > +static void msg_io_conf(void *priv, int errno)
> > +{
> > +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
> > +     struct ibnbd_clt_dev *dev = iu->dev;
> > +     struct request *rq = iu->rq;
> > +
> > +     iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
> > +
> > +     if (softirq_enable) {
> > +             blk_mq_complete_request(rq);
> > +     } else {
> > +             ibnbd_put_tag(dev->sess, iu->tag);
> > +             blk_mq_end_request(rq, iu->status);
> > +     }
>
> Block drivers must call blk_mq_complete_request() instead of
> blk_mq_end_request() to complete a request after processing of the
> request has been started. Calling blk_mq_end_request() to complete a
> request is racy in case a timeout occurs while blk_mq_end_request() is
> in progress.
I need some time to give this part a closer look.
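For reference, a minimal sketch of always routing the completion
through the block layer (assuming the registered softirq_done_fn keeps
putting the tag and calling blk_mq_end_request() as in the quoted
code, and the softirq_enable special case is dropped):

static void msg_io_conf(void *priv, int errno)
{
	struct ibnbd_iu *iu = priv;

	iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
	/* tag put and blk_mq_end_request() happen in the softirq_done_fn */
	blk_mq_complete_request(iu->rq);
}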

>
> > +static void msg_conf(void *priv, int errno)
> > +{
> > +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
>
> The kernel code I'm familiar with does not cast void pointers explicitly
> into another type. Please follow that convention and leave the cast out
> from the above and also from similar statements.
msg_conf() is a callback which IBNBD passes down with a request to
IBTRS when calling ibtrs_clt_request(). msg_conf() is called when a
request is completed with a pointer to a struct defined in IBNBD. So
IBTRS as transport doesn't know what's inside the private pointer
which IBNBD passed down with the request, it's opaque, since struct
ibnbd_iu is not visible in IBTRS. I will try to find out how others avoid
a cast in similar situations.

>
> > +static int send_usr_msg(struct ibtrs_clt *ibtrs, int dir,
> > +                     struct ibnbd_iu *iu, struct kvec *vec, size_t nr,
> > +                     size_t len, struct scatterlist *sg, unsigned int sg_len,
> > +                     void (*conf)(struct work_struct *work),
> > +                     int *errno, bool wait)
> > +{
> > +     int err;
> > +
> > +     INIT_WORK(&iu->work, conf);
> > +     err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
> > +                             iu, vec, nr, len, sg, sg_len);
> > +     if (!err && wait) {
> > +             wait_event(iu->comp.wait, iu->comp.errno != INT_MAX);
>
> This looks weird. Why is this a wait_event() call instead of a
> wait_for_completion() call?
Looks like we could just use wait_for_completion() here.
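E.g. (a sketch, assuming iu->comp becomes a struct completion and the
completion path calls complete(&iu->comp) instead of the wake_up()):

	init_completion(&iu->comp);
	INIT_WORK(&iu->work, conf);
	err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
				iu, vec, nr, len, sg, sg_len);
	if (!err && wait)
		wait_for_completion(&iu->comp);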

>
> > +static struct blk_mq_ops ibnbd_mq_ops;
> > +static int setup_mq_tags(struct ibnbd_clt_session *sess)
> > +{
> > +     struct blk_mq_tag_set *tags = &sess->tag_set;
> > +
> > +     memset(tags, 0, sizeof(*tags));
> > +     tags->ops               = &ibnbd_mq_ops;
> > +     tags->queue_depth       = sess->queue_depth;
> > +     tags->numa_node         = NUMA_NO_NODE;
> > +     tags->flags             = BLK_MQ_F_SHOULD_MERGE |
> > +                               BLK_MQ_F_TAG_SHARED;
> > +     tags->cmd_size          = sizeof(struct ibnbd_iu);
> > +     tags->nr_hw_queues      = num_online_cpus();
> > +
> > +     return blk_mq_alloc_tag_set(tags);
> > +}
>
> Forward declarations should be avoided when possible. Can the forward
> declaration of ibnbd_mq_ops be avoided by moving the definition of
> setup_mq_tags() down?
Yes we can by moving a couple of things around, thank you!

>
> > +static inline void wake_up_ibtrs_waiters(struct ibnbd_clt_session *sess)
> > +{
> > +     /* paired with rmb() in wait_for_ibtrs_connection() */
> > +     smp_wmb();
> > +     sess->ibtrs_ready = true;
> > +     wake_up_all(&sess->ibtrs_waitq);
> > +}
>
> The placement of the smp_wmb() call looks wrong to me. Since
> wake_up_all() and wait_event() already guarantee acquire/release
> behavior, I think that the explicit barriers can be left out from this
> function and also from wait_for_ibtrs_connection().
I will have to look into this part again. At first glance the wmb seems
to need to be after sess->ibtrs_ready = true.

>
> > +static void wait_for_ibtrs_disconnection(struct ibnbd_clt_session *sess)
> > +__releases(&sess_lock)
> > +__acquires(&sess_lock)
> > +{
> > +     DEFINE_WAIT_FUNC(wait, autoremove_wake_function);
> > +
> > +     prepare_to_wait(&sess->ibtrs_waitq, &wait, TASK_UNINTERRUPTIBLE);
> > +     if (IS_ERR_OR_NULL(sess->ibtrs)) {
> > +             finish_wait(&sess->ibtrs_waitq, &wait);
> > +             return;
> > +     }
> > +     mutex_unlock(&sess_lock);
> > +     /* After unlock session can be freed, so careful */
> > +     schedule();
> > +     mutex_lock(&sess_lock);
> > +}
>
> This doesn't look right: any random wake_up() call can wake up this
> function. Shouldn't there be a loop in this function that causes the
> schedule() call to be repeated until the disconnect has happened?
The loop is inside __find_and_get_sess(), which is calling that
function. We need to schedule() here in order for another thread to be
able to remove the dying session we just found and tried to get a
reference to from the list of sessions, so that we can go over the
list again in __find_and_get_sess().

>
> > +
> > +static struct ibnbd_clt_session *__find_and_get_sess(const char *sessname)
> > +__releases(&sess_lock)
> > +__acquires(&sess_lock)
> > +{
> > +     struct ibnbd_clt_session *sess;
> > +     int err;
> > +
> > +again:
> > +     list_for_each_entry(sess, &sess_list, list) {
> > +             if (strcmp(sessname, sess->sessname))
> > +                     continue;
> > +
> > +             if (unlikely(sess->ibtrs_ready && IS_ERR_OR_NULL(sess->ibtrs)))
> > +                     /*
> > +                      * No IBTRS connection, session is dying.
> > +                      */
> > +                     continue;
> > +
> > +             if (likely(ibnbd_clt_get_sess(sess))) {
> > +                     /*
> > +                      * Alive session is found, wait for IBTRS connection.
> > +                      */
> > +                     mutex_unlock(&sess_lock);
> > +                     err = wait_for_ibtrs_connection(sess);
> > +                     if (unlikely(err))
> > +                             ibnbd_clt_put_sess(sess);
> > +                     mutex_lock(&sess_lock);
> > +
> > +                     if (unlikely(err))
> > +                             /* Session is dying, repeat the loop */
> > +                             goto again;
> > +
> > +                     return sess;
> > +             }
> > +             /*
> > +              * Ref is 0, session is dying, wait for IBTRS disconnect
> > +              * in order to avoid session names clashes.
> > +              */
> > +             wait_for_ibtrs_disconnection(sess);
> > +             /*
> > +              * IBTRS is disconnected and soon session will be freed,
> > +              * so repeat a loop.
> > +              */
> > +             goto again;
> > +     }
> > +
> > +     return NULL;
> > +}
>  >
> > +
> > +static struct ibnbd_clt_session *find_and_get_sess(const char *sessname)
> > +{
> > +     struct ibnbd_clt_session *sess;
> > +
> > +     mutex_lock(&sess_lock);
> > +     sess = __find_and_get_sess(sessname);
> > +     mutex_unlock(&sess_lock);
> > +
> > +     return sess;
> > +}
>
> Shouldn't __find_and_get_sess() function increase the reference count of
> sess before it returns? In other words, what prevents that the session
> is freed from another thread before find_and_get_sess() returns?
It does increase the refcount inside __find_and_get_sess()
(...ibnbd_clt_get_sess(sess) call).

> > +/*
> > + * Get iorio of current task
> > + */
> > +static short ibnbd_current_ioprio(void)
> > +{
> > +     struct task_struct *tsp = current;
> > +     unsigned short prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
> > +
> > +     if (likely(tsp->io_context))
> > +             prio = tsp->io_context->ioprio;
> > +     return prio;
> > +}
>
> ibnbd should use req_get_ioprio() and should not look at
> current->io_context->ioprio. I think it is the responsibility of the
> block layer to extract the I/O priority from the task context. As an
> example, here is how the aio code does this:
>
>                 req->ki_ioprio = get_current_ioprio();
>
Didn't notice the get_current_ioprio(), thank you.
ibnbd_current_ioprio() is doing exactly the same, will drop it.
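I.e. the call site simply becomes:

	unsigned short prio = get_current_ioprio();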

> > +static blk_status_t ibnbd_queue_rq(struct blk_mq_hw_ctx *hctx,
> > +                                const struct blk_mq_queue_data *bd)
> > +{
> > +     struct request *rq = bd->rq;
> > +     struct ibnbd_clt_dev *dev = rq->rq_disk->private_data;
> > +     struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);
> > +     int err;
> > +
> > +     if (unlikely(!ibnbd_clt_dev_is_mapped(dev)))
> > +             return BLK_STS_IOERR;
> > +
> > +     iu->tag = ibnbd_get_tag(dev->sess, IBTRS_IO_CON, IBTRS_TAG_NOWAIT);
> > +     if (unlikely(!iu->tag)) {
> > +             ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_IFBUSY);
> > +             return BLK_STS_RESOURCE;
> > +     }
> > +
> > +     blk_mq_start_request(rq);
> > +     err = ibnbd_client_xfer_request(dev, rq, iu);
> > +     if (likely(err == 0))
> > +             return BLK_STS_OK;
> > +     if (unlikely(err == -EAGAIN || err == -ENOMEM)) {
> > +             ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_10ms);
> > +             ibnbd_put_tag(dev->sess, iu->tag);
> > +             return BLK_STS_RESOURCE;
> > +     }
> > +
> > +     ibnbd_put_tag(dev->sess, iu->tag);
> > +     return BLK_STS_IOERR;
> > +}
>
> Every other block driver relies on the block layer core for tag
> allocation. Why does ibnbd need its own tag management?
Those tags are wrappers around the transport layer (ibtrs) "permits"
(ibtrs_tags) - one such ibtrs_tag/"permit" is a reservation of one
particular memory chunk on the server side. Those "permits" are shared
among all the devices mapped on a given session and all their hardware
queues. Maybe we should use a different word like "permit" for them to
avoid confusion?

>
> > +static void setup_request_queue(struct ibnbd_clt_dev *dev)
> > +{
> > +     blk_queue_logical_block_size(dev->queue, dev->logical_block_size);
> > +     blk_queue_physical_block_size(dev->queue, dev->physical_block_size);
> > +     blk_queue_max_hw_sectors(dev->queue, dev->max_hw_sectors);
> > +     blk_queue_max_write_same_sectors(dev->queue,
> > +                                      dev->max_write_same_sectors);
> > +
> > +     /*
> > +      * we don't support discards to "discontiguous" segments
> > +      * in on request
>                ^^
>                one?
> > +      */
> > +     blk_queue_max_discard_segments(dev->queue, 1);
> > +
> > +     blk_queue_max_discard_sectors(dev->queue, dev->max_discard_sectors);
> > +     dev->queue->limits.discard_granularity  = dev->discard_granularity;
> > +     dev->queue->limits.discard_alignment    = dev->discard_alignment;
> > +     if (dev->max_discard_sectors)
> > +             blk_queue_flag_set(QUEUE_FLAG_DISCARD, dev->queue);
> > +     if (dev->secure_discard)
> > +             blk_queue_flag_set(QUEUE_FLAG_SECERASE, dev->queue);
> > +
> > +     blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
> > +     blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
> > +     blk_queue_max_segments(dev->queue, dev->max_segments);
> > +     blk_queue_io_opt(dev->queue, dev->sess->max_io_size);
> > +     blk_queue_virt_boundary(dev->queue, 4095);
> > +     blk_queue_write_cache(dev->queue, true, true);
> > +     dev->queue->queuedata = dev;
> > +}
>
> > +static void destroy_gen_disk(struct ibnbd_clt_dev *dev)
> > +{
> > +     del_gendisk(dev->gd);
>
> > +     /*
> > +      * Before marking queue as dying (blk_cleanup_queue() does that)
> > +      * we have to be sure that everything in-flight has gone.
> > +      * Blink with freeze/unfreeze.
> > +      */
> > +     blk_mq_freeze_queue(dev->queue);
> > +     blk_mq_unfreeze_queue(dev->queue);
>
> Please remove the above seven lines. blk_cleanup_queue() calls
> blk_set_queue_dying() and the second call in blk_set_queue_dying() is
> blk_freeze_queue_start().
Thanks, will check this out.

>
> > +     blk_cleanup_queue(dev->queue);
> > +     put_disk(dev->gd);
> > +}
>
> > +
> > +static void destroy_sysfs(struct ibnbd_clt_dev *dev,
> > +                       const struct attribute *sysfs_self)
> > +{
> > +     ibnbd_clt_remove_dev_symlink(dev);
> > +     if (dev->kobj.state_initialized) {
> > +             if (sysfs_self)
> > +                     /* To avoid deadlock firstly commit suicide */
>                                                              ^^^^^^^
> Please chose terminology that is more appropriate for a professional
> context.
Will rephrase the comment, thanks.

>
> > +                     sysfs_remove_file_self(&dev->kobj, sysfs_self);
> > +             kobject_del(&dev->kobj);
> > +             kobject_put(&dev->kobj);
> > +     }
> > +}
>
> Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-15 14:30     ` Jinpu Wang
  2019-09-16  5:27       ` Leon Romanovsky
  2019-09-16  7:08       ` Danil Kipnis
@ 2019-09-16 14:57       ` Jinpu Wang
  2019-09-16 17:25         ` Bart Van Assche
  2019-09-16 15:39       ` Jinpu Wang
  3 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-09-16 14:57 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

> > > +#define _IBNBD_FILEIO  0
> > > +#define _IBNBD_BLOCKIO 1
> > > +#define _IBNBD_AUTOIO  2
> >  >
> > > +enum ibnbd_io_mode {
> > > +     IBNBD_FILEIO = _IBNBD_FILEIO,
> > > +     IBNBD_BLOCKIO = _IBNBD_BLOCKIO,
> > > +     IBNBD_AUTOIO = _IBNBD_AUTOIO,
> > > +};
> >
> > Since the IBNBD_* and _IBNBD_* constants have the same numerical value,
> > are the former constants really necessary?
> Seems we can remove _IBNBD_*.
Sorry, checked again: we defined the _IBNBD_* constants to show the
right value in the def_io_mode description.
If we remove the _IBNBD_*, then the modinfo shows:
def_io_mode:By default, export devices in blockio(IBNBD_BLOCKIO) or
fileio(IBNBD_FILEIO) mode. (default: IBNBD_BLOCKIO (blockio))
instead of:
parm:           def_io_mode:By default, export devices in blockio(1)
or fileio(0) mode. (default: 1 (blockio))


> > > +/**
> > > + * struct ibnbd_msg_io_old - message for I/O read/write for
> > > + * ver < IBNBD_PROTO_VER_MAJOR
> > > + * This structure is there only to know the size of the "old" message format
> > > + * @hdr:     message header
> > > + * @device_id:       device_id on server side to find the right device
> > > + * @sector:  bi_sector attribute from struct bio
> > > + * @rw:              bitmask, valid values are defined in enum ibnbd_io_flags
> > > + * @bi_size:    number of bytes for I/O read/write
> > > + * @prio:       priority
> > > + */
> > > +struct ibnbd_msg_io_old {
> > > +     struct ibnbd_msg_hdr hdr;
> > > +     __le32          device_id;
> > > +     __le64          sector;
> > > +     __le32          rw;
> > > +     __le32          bi_size;
> > > +};
> >
> > Since this is the first version of IBNBD that is being sent upstream, I
> > think that ibnbd_msg_io_old should be left out.
After discussing with Danil, we will remove ibnbd_msg_io_old in the next round.

Regards,

--
Jack Wang
Linux Kernel Developer
Platform Engineering Compute (IONOS Cloud)

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-15 14:30     ` Jinpu Wang
                         ` (2 preceding siblings ...)
  2019-09-16 14:57       ` Jinpu Wang
@ 2019-09-16 15:39       ` Jinpu Wang
  2019-09-18 15:26         ` Bart Van Assche
  3 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-09-16 15:39 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

- Roman's pb email address; it's no longer valid, will fix next round.


> >
> > > +static inline const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
> > > +{
> > > +     switch (mode) {
> > > +     case IBNBD_FILEIO:
> > > +             return "fileio";
> > > +     case IBNBD_BLOCKIO:
> > > +             return "blockio";
> > > +     case IBNBD_AUTOIO:
> > > +             return "autoio";
> > > +     default:
> > > +             return "unknown";
> > > +     }
> > > +}
> > > +
> > > +static inline const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
> > > +{
> > > +     switch (mode) {
> > > +     case IBNBD_ACCESS_RO:
> > > +             return "ro";
> > > +     case IBNBD_ACCESS_RW:
> > > +             return "rw";
> > > +     case IBNBD_ACCESS_MIGRATION:
> > > +             return "migration";
> > > +     default:
> > > +             return "unknown";
> > > +     }
> > > +}
> >
> > These two functions are not in the hot path and hence should not be
> > inline functions.
> Sounds reasonable, will remove the inline.
inline was added to fix the -Wunused-function warning, e.g.:

  CC [M]  /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.o
In file included from /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.h:34,
                 from /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.c:33:
/<<PKGBUILDDIR>>/ibnbd/ibnbd-proto.h:362:20: warning:
'ibnbd_access_mode_str' defined but not used [-Wunused-function]
 static const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
                    ^~~~~~~~~~~~~~~~~~~~~
/<<PKGBUILDDIR>>/ibnbd/ibnbd-proto.h:348:20: warning:
'ibnbd_io_mode_str' defined but not used [-Wunused-function]
 static const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)

We have to move both functions to a separate header file if we really
want to do it.
The functions are simple and small; if you insist, I will do it.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-16 14:17     ` Danil Kipnis
@ 2019-09-16 16:46       ` Bart Van Assche
  2019-09-17 11:39         ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-16 16:46 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang, Roman Pen

On 9/16/19 7:17 AM, Danil Kipnis wrote:
> On Sat, Sep 14, 2019 at 1:46 AM Bart Van Assche <bvanassche@acm.org> wrote:
>> On 6/20/19 8:03 AM, Jack Wang wrote:
>>> +/*
>>> + * This is for closing devices when unloading the module:
>>> + * we might be closing a lot (>256) of devices in parallel
>>> + * and it is better not to use the system_wq.
>>> + */
>>> +static struct workqueue_struct *unload_wq;
>>
>> I think that a better motivation is needed for the introduction of a new
>> workqueue.
 >
> We didn't want to pollute the system workqueue when unmapping a big
> number of devices at once in parallel. Will reiterate on it.

There are multiple system workqueues. From <linux/workqueue.h>:

extern struct workqueue_struct *system_wq;
extern struct workqueue_struct *system_highpri_wq;
extern struct workqueue_struct *system_long_wq;
extern struct workqueue_struct *system_unbound_wq;
extern struct workqueue_struct *system_freezable_wq;
extern struct workqueue_struct *system_power_efficient_wq;
extern struct workqueue_struct *system_freezable_power_efficient_wq;

Has it been considered to use e.g. system_long_wq?

>> A more general question is why ibnbd needs its own queue management
>> while no other block driver needs this?
>
> Each IBNBD device promises to have a queue_depth (of say 512) on each
> of its num_cpus hardware queues. In fact we can only process a
> queue_depth inflights at once on the whole ibtrs session connecting a
> given client with a given server. Those 512 inflights (corresponding
> to the number of buffers reserved by the server for this particular
> client) have to be shared among all the devices mapped on this
> session. This leads to the situation, that we receive more requests
> than we can process at the moment. So we need to stop queues and start
> them again later in some fair fashion.

Can a single CPU really sustain a queue depth of 512 commands? Is it 
really necessary to have one hardware queue per CPU or is e.g. four 
queues per NUMA node sufficient? Has it been considered to send the 
number of hardware queues that the initiator wants to use and also the 
command depth per queue during login to the target side? That would 
allow the target side to allocate an independent set of buffers for each 
initiator hardware queue and would allow to remove the queue management 
at the initiator side. This might even yield better performance.

>>> +static void msg_conf(void *priv, int errno)
>>> +{
>>> +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
>>
>> The kernel code I'm familiar with does not cast void pointers explicitly
>> into another type. Please follow that convention and leave the cast out
>> from the above and also from similar statements.
> msg_conf() is a callback which IBNBD passes down with a request to
> IBTRS when calling ibtrs_clt_request(). msg_conf() is called when a
> request is completed with a pointer to a struct defined in IBNBD. So
> IBTRS as transport doesn't know what's inside the private pointer
> which IBNBD passed down with the request, it's opaque, since struct
> ibnbd_iu is not visible in IBTRS. I will try to find how others avoid
> a cast in similar situations.

Are you aware that the C language can cast a void pointer into a 
non-void pointer implicitly, that is, without having to use a cast?


>>> +static void wait_for_ibtrs_disconnection(struct ibnbd_clt_session *sess)
>>> +__releases(&sess_lock)
>>> +__acquires(&sess_lock)
>>> +{
>>> +     DEFINE_WAIT_FUNC(wait, autoremove_wake_function);
>>> +
>>> +     prepare_to_wait(&sess->ibtrs_waitq, &wait, TASK_UNINTERRUPTIBLE);
>>> +     if (IS_ERR_OR_NULL(sess->ibtrs)) {
>>> +             finish_wait(&sess->ibtrs_waitq, &wait);
>>> +             return;
>>> +     }
>>> +     mutex_unlock(&sess_lock);
>>> +     /* After unlock session can be freed, so careful */
>>> +     schedule();
>>> +     mutex_lock(&sess_lock);
>>> +}
>>
>> This doesn't look right: any random wake_up() call can wake up this
>> function. Shouldn't there be a loop in this function that causes the
>> schedule() call to be repeated until the disconnect has happened?
> The loop is inside __find_and_get_sess(), which is calling that
> function. We need to schedule() here in order for another thread to be
> able to remove the dying session we just found and tried to get
> reference to from the list of sessions, so that we can go over the
> list again in __find_and_get_sess().

Thanks for the clarification.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-16 14:57       ` Jinpu Wang
@ 2019-09-16 17:25         ` Bart Van Assche
  2019-09-17 12:27           ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-16 17:25 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On 9/16/19 7:57 AM, Jinpu Wang wrote:
>>>> +#define _IBNBD_FILEIO  0
>>>> +#define _IBNBD_BLOCKIO 1
>>>> +#define _IBNBD_AUTOIO  2
>>>>
>>>> +enum ibnbd_io_mode {
>>>> +     IBNBD_FILEIO = _IBNBD_FILEIO,
>>>> +     IBNBD_BLOCKIO = _IBNBD_BLOCKIO,
>>>> +     IBNBD_AUTOIO = _IBNBD_AUTOIO,
>>>> +};
>>>
>>> Since the IBNBD_* and _IBNBD_* constants have the same numerical value,
>>> are the former constants really necessary?
 >>
>> Seems we can remove _IBNBD_*.
 >
> Sorry, checked again,  we defined _IBNBD_* constants to show the right
> value for def_io_mode description.
> If we remove the _IBNBD_*, then the modinfo shows:
> def_io_mode:By default, export devices in blockio(IBNBD_BLOCKIO) or
> fileio(IBNBD_FILEIO) mode. (default: IBNBD_BLOCKIO (blockio))
> instead of:
> parm:           def_io_mode:By default, export devices in blockio(1)
> or fileio(0) mode. (default: 1 (blockio))

So the user is required to enter def_io_mode as a number? Wouldn't it be 
more friendly towards users to change that parameter from a number into 
a string?

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-16 16:46       ` Bart Van Assche
@ 2019-09-17 11:39         ` Danil Kipnis
  2019-09-18  7:14           ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-17 11:39 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang, Roman Pen

On Mon, Sep 16, 2019 at 6:46 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/16/19 7:17 AM, Danil Kipnis wrote:
> > On Sat, Sep 14, 2019 at 1:46 AM Bart Van Assche <bvanassche@acm.org> wrote:
> >> On 6/20/19 8:03 AM, Jack Wang wrote:
> >>> +/*
> >>> + * This is for closing devices when unloading the module:
> >>> + * we might be closing a lot (>256) of devices in parallel
> >>> + * and it is better not to use the system_wq.
> >>> + */
> >>> +static struct workqueue_struct *unload_wq;
> >>
> >> I think that a better motivation is needed for the introduction of a new
> >> workqueue.
>  >
> > We didn't want to pollute the system workqueue when unmapping a big
> > number of devices at once in parallel. Will reiterate on it.
>
> There are multiple system workqueues. From <linux/workqueue.h>:
>
> extern struct workqueue_struct *system_wq;
> extern struct workqueue_struct *system_highpri_wq;
> extern struct workqueue_struct *system_long_wq;
> extern struct workqueue_struct *system_unbound_wq;
> extern struct workqueue_struct *system_freezable_wq;
> extern struct workqueue_struct *system_power_efficient_wq;
> extern struct workqueue_struct *system_freezable_power_efficient_wq;
>
> Has it been considered to use e.g. system_long_wq?
Will try to switch to system_long_wq; I do agree that a new wq just
for closing devices does look like an overreaction.

>
> >> A more general question is why ibnbd needs its own queue management
> >> while no other block driver needs this?
> >
> > Each IBNBD device promises to have a queue_depth (of say 512) on each
> > of its num_cpus hardware queues. In fact we can only process a
> > queue_depth inflights at once on the whole ibtrs session connecting a
> > given client with a given server. Those 512 inflights (corresponding
> > to the number of buffers reserved by the server for this particular
> > client) have to be shared among all the devices mapped on this
> > session. This leads to the situation, that we receive more requests
> > than we can process at the moment. So we need to stop queues and start
> > them again later in some fair fashion.
>
> Can a single CPU really sustain a queue depth of 512 commands? Is it
> really necessary to have one hardware queue per CPU or is e.g. four
> queues per NUMA node sufficient? Has it been considered to send the
> number of hardware queues that the initiator wants to use and also the
> command depth per queue during login to the target side? That would
> allow the target side to allocate an independent set of buffers for each
> initiator hardware queue and would allow to remove the queue management
> at the initiator side. This might even yield better performance.
We needed a way that would allow us to address one particular
requirement: we'd like to be able to "enforce" that a response to an
IO is processed on the same CPU the IO was originally submitted on.
In order to do so we establish one rdma connection per cpu, each
having a separate cq_vector. The administrator can then assign the
corresponding IRQs to distinct CPUs. The server always replies to an
IO on the same connection it received the request on. If the
administrator configured /proc/irq/y/smp_affinity accordingly, the
response sent by the server will generate an interrupt on the same
cpu the IO was originally submitted on. The administrator can also
configure IRQs differently, for example assign a given irq
(<->cq_vector) to a range of cpus belonging to a numa node, or
whatever assignment is best for the use case.
Our transport module IBTRS establishes one connection per cpu between
a client and a server. The user of the transport module (i.e. IBNBD)
has no knowledge about the rdma connections; it only has a pointer to
an abstract "session", which somehow connects it to a remote host.
IBNBD as a user of IBTRS creates block devices and uses a given
"session" to send IOs from all the block devices it created for that
session. That means IBNBD is limited in the maximum number of its
inflights toward a given remote host by the capability of the
corresponding "session". So it needs to share the resources provided
by the session (in our current model those resources are in fact some
pre-registered buffers on the server side) among its devices.
It is possible to extend the IBTRS API so that the user (IBNBD) could
specify how many connections it wants to have on the session to be
established. It is also possible to extend the ibtrs_clt_get_tag API
(this is to get a send "permit") with a parameter specifying the
connection the future IO is to be sent on.
We might now have to change our communication model in IBTRS a bit in
order to fix the potential security problem raised during the recent
RDMA MC: https://etherpad.net/p/LPC2019_RDMA.

>
> >>> +static void msg_conf(void *priv, int errno)
> >>> +{
> >>> +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
> >>
> >> The kernel code I'm familiar with does not cast void pointers explicitly
> >> into another type. Please follow that convention and leave the cast out
> >> from the above and also from similar statements.
> > msg_conf() is a callback which IBNBD passes down with a request to
> > IBTRS when calling ibtrs_clt_request(). msg_conf() is called when a
> > request is completed with a pointer to a struct defined in IBNBD. So
> > IBTRS as transport doesn't know what's inside the private pointer
> > which IBNBD passed down with the request, it's opaque, since struct
> > ibnbd_iu is not visible in IBTRS. I will try to find how others avoid
> > a cast in similar situations.
>
> Are you aware that the C language can cast a void pointer into a
> non-void pointer implicitly, that means, without having to use a cast?
Oh, I misunderstood your original comment: you suggest just removing
the explicit (struct ibnbd_iu *) and similar casts from void pointers.
I think an explicit cast makes it easier for readers to follow the
code, but the "Allocating Memory" section of
https://www.kernel.org/doc/html/v4.10/process/coding-style.html does
say "Casting the return value which is a void pointer is redundant.",
and it seems others don't do that, at least not when declaring a
variable. Will drop those casts.
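I.e. for the quoted example simply:

	struct ibnbd_iu *iu = priv;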

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-16 17:25         ` Bart Van Assche
@ 2019-09-17 12:27           ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-17 12:27 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Mon, Sep 16, 2019 at 7:25 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/16/19 7:57 AM, Jinpu Wang wrote:
> >>>> +#define _IBNBD_FILEIO  0
> >>>> +#define _IBNBD_BLOCKIO 1
> >>>> +#define _IBNBD_AUTOIO  2
> >>>>
> >>>> +enum ibnbd_io_mode {
> >>>> +     IBNBD_FILEIO = _IBNBD_FILEIO,
> >>>> +     IBNBD_BLOCKIO = _IBNBD_BLOCKIO,
> >>>> +     IBNBD_AUTOIO = _IBNBD_AUTOIO,
> >>>> +};
> >>>
> >>> Since the IBNBD_* and _IBNBD_* constants have the same numerical value,
> >>> are the former constants really necessary?
>  >>
> >> Seems we can remove _IBNBD_*.
>  >
> > Sorry, checked again,  we defined _IBNBD_* constants to show the right
> > value for def_io_mode description.
> > If we remove the _IBNBD_*, then the modinfo shows:
> > def_io_mode:By default, export devices in blockio(IBNBD_BLOCKIO) or
> > fileio(IBNBD_FILEIO) mode. (default: IBNBD_BLOCKIO (blockio))
> > instead of:
> > parm:           def_io_mode:By default, export devices in blockio(1)
> > or fileio(0) mode. (default: 1 (blockio))
>
> So the user is required to enter def_io_mode as a number? Wouldn't it be
> more friendly towards users to change that parameter from a number into
> a string?
>
Ok, it's a bit more code; will change it to allow the user to set
"blockio" or "fileio" as a string.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-13 23:46   ` Bart Van Assche
  2019-09-16 14:17     ` Danil Kipnis
@ 2019-09-17 13:09     ` Jinpu Wang
  2019-09-17 16:46       ` Bart Van Assche
  2019-09-18 16:05     ` Jinpu Wang
  2 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-09-17 13:09 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

> > +static void ibnbd_softirq_done_fn(struct request *rq)
> > +{
> > +     struct ibnbd_clt_dev *dev       = rq->rq_disk->private_data;
> > +     struct ibnbd_clt_session *sess  = dev->sess;
> > +     struct ibnbd_iu *iu;
> > +
> > +     iu = blk_mq_rq_to_pdu(rq);
> > +     ibnbd_put_tag(sess, iu->tag);
> > +     blk_mq_end_request(rq, iu->status);
> > +}
> > +
> > +static void msg_io_conf(void *priv, int errno)
> > +{
> > +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
> > +     struct ibnbd_clt_dev *dev = iu->dev;
> > +     struct request *rq = iu->rq;
> > +
> > +     iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
> > +
> > +     if (softirq_enable) {
> > +             blk_mq_complete_request(rq);
> > +     } else {
> > +             ibnbd_put_tag(dev->sess, iu->tag);
> > +             blk_mq_end_request(rq, iu->status);
> > +     }
>
> Block drivers must call blk_mq_complete_request() instead of
> blk_mq_end_request() to complete a request after processing of the
> request has been started. Calling blk_mq_end_request() to complete a
> request is racy in case a timeout occurs while blk_mq_end_request() is
> in progress.

Hi Bart,

Could you elaborate a bit more, blk_mq_end_request is exported function and
used by a lot of block drivers: scsi, dm, etc.
Is there an open bug report for the problem?

Regards,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-16 13:45         ` Bart Van Assche
@ 2019-09-17 15:41           ` Leon Romanovsky
  2019-09-17 15:52             ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Leon Romanovsky @ 2019-09-17 15:41 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jinpu Wang, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Mon, Sep 16, 2019 at 06:45:17AM -0700, Bart Van Assche wrote:
> On 9/15/19 10:27 PM, Leon Romanovsky wrote:
> > On Sun, Sep 15, 2019 at 04:30:04PM +0200, Jinpu Wang wrote:
> > > On Sat, Sep 14, 2019 at 12:10 AM Bart Van Assche <bvanassche@acm.org> wrote:
> > > > > +/* TODO: should be configurable */
> > > > > +#define IBTRS_PORT 1234
> > > >
> > > > How about converting this macro into a kernel module parameter?
> > > Sounds good, will do.
> >
> > Don't rush to do it and defer it to be the last change before merging,
> > this is controversial request which not everyone will like here.
>
> Hi Leon,
>
> If you do not agree with changing this macro into a kernel module parameter
> please suggest an alternative.

I didn't review the code, so my answer may not be fully accurate, but opening
some port to use this IB* seems strange from my non-sysadmin POV.
What about using RDMA-CM, like NVMe?

Thanks

>
> Thanks,
>
> Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-17 15:41           ` Leon Romanovsky
@ 2019-09-17 15:52             ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-17 15:52 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 17, 2019 at 5:42 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Mon, Sep 16, 2019 at 06:45:17AM -0700, Bart Van Assche wrote:
> > On 9/15/19 10:27 PM, Leon Romanovsky wrote:
> > > On Sun, Sep 15, 2019 at 04:30:04PM +0200, Jinpu Wang wrote:
> > > > On Sat, Sep 14, 2019 at 12:10 AM Bart Van Assche <bvanassche@acm.org> wrote:
> > > > > > +/* TODO: should be configurable */
> > > > > > +#define IBTRS_PORT 1234
> > > > >
> > > > > How about converting this macro into a kernel module parameter?
> > > > Sounds good, will do.
> > >
> > > Don't rush to do it and defer it to be the last change before merging,
> > > this is controversial request which not everyone will like here.
> >
> > Hi Leon,
> >
> > If you do not agree with changing this macro into a kernel module parameter
> > please suggest an alternative.
>
> I didn't review the code, so my answer may not be fully accurate, but opening
> some port to use this IB* seems strange from my non-sysadmin POV.
> What about using RDMA-CM, like NVMe?
Hi Leon,

We are using rdma-cm; the port number here is the same as addr_trsvcid
in NVMeoF: it controls which port rdma_listen() listens on.

Currently it's hardcoded. I've adapted the code to add a kernel
module parameter port_nr in ibnbd_server, so it's possible
to change it if the sysadmin wants.
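
As a rough sketch of what that could look like (only the name port_nr is
mentioned above; the rest is illustrative, not the actual change):

static u16 port_nr = IBTRS_PORT;
module_param(port_nr, ushort, 0444);
MODULE_PARM_DESC(port_nr,
                 "RDMA-CM port number the server listens on (default: 1234)");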

Thanks,
-- 
Jack Wang
Linux Kernel Developer
Platform Engineering Compute (IONOS Cloud)

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 16/25] ibnbd: client: private header with client structs and functions
  2019-09-13 22:25   ` Bart Van Assche
@ 2019-09-17 16:36     ` Jinpu Wang
  2019-09-25 23:43       ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-09-17 16:36 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Sat, Sep 14, 2019 at 12:25 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +     char                    pathname[NAME_MAX];
> [ ... ]
>  > +    char                    blk_symlink_name[NAME_MAX];
>
> Please allocate path names dynamically instead of hard-coding the upper
> length for a path.
>
> Bart.
Hi Bart,

OK, I will dynamically allocate the pathname and blk_symlink_name as you suggested.

Thank you
Jinpu

--
Jack Wang

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-17 13:09     ` Jinpu Wang
@ 2019-09-17 16:46       ` Bart Van Assche
  2019-09-18 12:02         ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-17 16:46 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On 9/17/19 6:09 AM, Jinpu Wang wrote:
>>> +static void ibnbd_softirq_done_fn(struct request *rq)
>>> +{
>>> +     struct ibnbd_clt_dev *dev       = rq->rq_disk->private_data;
>>> +     struct ibnbd_clt_session *sess  = dev->sess;
>>> +     struct ibnbd_iu *iu;
>>> +
>>> +     iu = blk_mq_rq_to_pdu(rq);
>>> +     ibnbd_put_tag(sess, iu->tag);
>>> +     blk_mq_end_request(rq, iu->status);
>>> +}
>>> +
>>> +static void msg_io_conf(void *priv, int errno)
>>> +{
>>> +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
>>> +     struct ibnbd_clt_dev *dev = iu->dev;
>>> +     struct request *rq = iu->rq;
>>> +
>>> +     iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
>>> +
>>> +     if (softirq_enable) {
>>> +             blk_mq_complete_request(rq);
>>> +     } else {
>>> +             ibnbd_put_tag(dev->sess, iu->tag);
>>> +             blk_mq_end_request(rq, iu->status);
>>> +     }
>>
>> Block drivers must call blk_mq_complete_request() instead of
>> blk_mq_end_request() to complete a request after processing of the
>> request has been started. Calling blk_mq_end_request() to complete a
>> request is racy in case a timeout occurs while blk_mq_end_request() is
>> in progress.
> 
> Could you elaborate a bit more, blk_mq_end_request is exported function and
> used by a lot of block drivers: scsi, dm, etc.
> Is there an open bug report for the problem?

Hi Jinpu,

There is only one blk_mq_end_request() call in the SCSI code and it's 
inside the FC timeout handler (fc_bsg_job_timeout()). Calling 
blk_mq_end_request() from inside a timeout handler is fine but not to 
report to the block layer that a request has completed from outside the 
timeout handler after a request has started.

The device mapper calls blk_mq_complete_request() to report request 
completion to the block layer. See also dm_complete_request(). 
blk_mq_end_request() is only called by the device mapper from inside 
dm_softirq_done(). That last function is called from inside 
blk_mq_complete_request() and is not called directly.

The NVMe PCIe driver only calls blk_mq_end_request() from inside 
nvme_complete_rq(). nvme_complete_rq() is called by the PCIe driver from 
inside nvme_pci_complete_rq() and that last function is called from 
inside blk_mq_complete_request().

In other words, the SCSI core, the device mapper and the NVMe PCIe 
driver all use blk_mq_complete_request() to report request completion to 
the block layer from outside timeout handlers after a request has been 
started.

This is not a new requirement. I think that the legacy block layer 
equivalent, blk_complete_request(), was introduced in 2006 and that 
since then block drivers are required to call blk_complete_request() to 
report completion of requests from outside a timeout handler after these 
have been started.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-17 11:39         ` Danil Kipnis
@ 2019-09-18  7:14           ` Danil Kipnis
  2019-09-18 15:47             ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-18  7:14 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang, Roman Pen

> > > On Sat, Sep 14, 2019 at 1:46 AM Bart Van Assche <bvanassche@acm.org> wrote:
> > >> A more general question is why ibnbd needs its own queue management
> > >> while no other block driver needs this?
> > >
> > > Each IBNBD device promises to have a queue_depth (of say 512) on each
> > > of its num_cpus hardware queues. In fact we can only process a
> > > queue_depth inflights at once on the whole ibtrs session connecting a
> > > given client with a given server. Those 512 inflights (corresponding
> > > to the number of buffers reserved by the server for this particular
> > > client) have to be shared among all the devices mapped on this
> > > session. This leads to the situation, that we receive more requests
> > > than we can process at the moment. So we need to stop queues and start
> > > them again later in some fair fashion.
> >
> > Can a single CPU really sustain a queue depth of 512 commands? Is it
> > really necessary to have one hardware queue per CPU or is e.g. four
> > queues per NUMA node sufficient? Has it been considered to send the
> > number of hardware queues that the initiator wants to use and also the
> > command depth per queue during login to the target side? That would
> > allow the target side to allocate an independent set of buffers for each
> > initiator hardware queue and would allow to remove the queue management
> > at the initiator side. This might even yield better performance.
> We needed a way which would allow us to address one particular
> requirement: we'd like to be able to "enforce" that a response to an
> IO would be processed on the same CPU the IO was originally submitted
> on. In order to be able to do so we establish one rdma connection per
> cpu, each having a separate cq_vector. The administrator can then
> assign the corresponding IRQs to distinct CPUs. The server always
> replies to an IO on the same connection it received the request on. If
> the administrator did configure the /proc/irq/y/smp_affinity
> accordingly, the response sent by the server will generate an interrupt
> on the same cpu the IO was originally submitted on. The administrator
> can configure IRQs differently, for example assign a given irq
> (<->cq_vector) to a range of cpus belonging to a numa node, or
> whatever assignment is best for his use-case.
> Our transport module IBTRS establishes one connection per CPU
> between a client and a server. The user of the transport module (i.e.
> IBNBD) has no knowledge about the rdma connections, it only has a
> pointer to an abstract "session", which connects it somehow to a
> remote host. IBNBD as a user of IBTRS creates block devices and uses a
> given "session" to send IOs from all the block devices it created for
> that session. That means IBNBD is limited in the maximum number of its
> inflights toward a given remote host by the capability of the
> corresponding "session". So it needs to share the resources provided
> by the session (in our current model those resources are in fact some
> pre-registered buffers on the server side) among its devices.
> It is possible to extend the IBTRS API so that the user (IBNBD) could
> specify how many connections it wants to have on the session to be
> established. It is also possible to extend the ibtrs_clt_get_tag API
> (this is to get a send "permit") with a parameter specifying the
> connection the future IO is to be sent on.
> We now might have to change our communication model in IBTRS a bit in
> order to fix the potential security problem raised during the recent
> RDMA MC: https://etherpad.net/p/LPC2019_RDMA.
>
I'm not familiar with the dm code, but doesn't it need to deal with the
same situation: if I configure 100 logical volumes on top of a single
NVMe drive with X hardware queues, each queue_depth deep, then each dm
block device would need to advertise X hardware queues in order to
achieve the highest performance when only that one volume is accessed,
while in fact those X physical queues have to be shared among all 100
logical volumes if they are accessed in parallel?

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-17 16:46       ` Bart Van Assche
@ 2019-09-18 12:02         ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-18 12:02 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 17, 2019 at 6:46 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/17/19 6:09 AM, Jinpu Wang wrote:
> >>> +static void ibnbd_softirq_done_fn(struct request *rq)
> >>> +{
> >>> +     struct ibnbd_clt_dev *dev       = rq->rq_disk->private_data;
> >>> +     struct ibnbd_clt_session *sess  = dev->sess;
> >>> +     struct ibnbd_iu *iu;
> >>> +
> >>> +     iu = blk_mq_rq_to_pdu(rq);
> >>> +     ibnbd_put_tag(sess, iu->tag);
> >>> +     blk_mq_end_request(rq, iu->status);
> >>> +}
> >>> +
> >>> +static void msg_io_conf(void *priv, int errno)
> >>> +{
> >>> +     struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
> >>> +     struct ibnbd_clt_dev *dev = iu->dev;
> >>> +     struct request *rq = iu->rq;
> >>> +
> >>> +     iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
> >>> +
> >>> +     if (softirq_enable) {
> >>> +             blk_mq_complete_request(rq);
> >>> +     } else {
> >>> +             ibnbd_put_tag(dev->sess, iu->tag);
> >>> +             blk_mq_end_request(rq, iu->status);
> >>> +     }
> >>
> >> Block drivers must call blk_mq_complete_request() instead of
> >> blk_mq_end_request() to complete a request after processing of the
> >> request has been started. Calling blk_mq_end_request() to complete a
> >> request is racy in case a timeout occurs while blk_mq_end_request() is
> >> in progress.
> >
> > Could you elaborate a bit more, blk_mq_end_request is exported function and
> > used by a lot of block drivers: scsi, dm, etc.
> > Is there an open bug report for the problem?
>
> Hi Jinpu,
>
> There is only one blk_mq_end_request() call in the SCSI code and it's
> inside the FC timeout handler (fc_bsg_job_timeout()). Calling
> blk_mq_end_request() from inside a timeout handler is fine but not to
> report to the block layer that a request has completed from outside the
> timeout handler after a request has started.
>
> The device mapper calls blk_mq_complete_request() to report request
> completion to the block layer. See also dm_complete_request().
> blk_mq_end_request() is only called by the device mapper from inside
> dm_softirq_done(). That last function is called from inside
> blk_mq_complete_request() and is not called directly.
>
> The NVMe PCIe driver only calls blk_mq_end_request() from inside
> nvme_complete_rq(). nvme_complete_rq() is called by the PCIe driver from
> inside nvme_pci_complete_rq() and that last function is called from
> inside blk_mq_complete_request().
>
> In other words, the SCSI core, the device mapper and the NVMe PCIe
> driver all use blk_mq_complete_request() to report request completion to
> the block layer from outside timeout handlers after a request has been
> started.
>
> This is not a new requirement. I think that the legacy block layer
> equivalent, blk_complete_request(), was introduced in 2006 and that
> since then block drivers are required to call blk_complete_request() to
> report completion of requests from outside a timeout handler after these
> have been started.
>
> Bart.

Thanks for the detailed explanation. I will switch to
blk_mq_complete_request() and will also drop the
softirq_done module parameter, as it is not useful.
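
For illustration, a sketch of what the converted completion path could look
like, reusing the iu/dev fields from the posted code (an assumption about the
eventual change, not the actual next version):

/* transport completion callback: only record the status and defer */
static void msg_io_conf(void *priv, int errno)
{
        struct ibnbd_iu *iu = priv;

        iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
        blk_mq_complete_request(iu->rq);
}

/* .complete handler in blk_mq_ops: do the real completion here */
static void ibnbd_softirq_done_fn(struct request *rq)
{
        struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);

        ibnbd_put_tag(iu->dev->sess, iu->tag);
        blk_mq_end_request(rq, iu->status);
}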

Regards,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 24/25] ibnbd: a bit of documentation
  2019-09-13 23:58   ` Bart Van Assche
@ 2019-09-18 12:22     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-18 12:22 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Sat, Sep 14, 2019 at 1:58 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > From: Roman Pen <roman.penyaev@profitbricks.com>
> >
> > README with description of major sysfs entries.
>
> Please have a look at Documentation/ABI/README and follow the
> instructions from that document.
>
> Thanks,
>
> Bart.

Thanks, will move the sysfs description to
Documentation/ABI/testing/[sysfs-class-ibnbd-client|sysfs-block-ibnbd],
will also move ibtrs sysfs description there.


Regards,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-16 15:39       ` Jinpu Wang
@ 2019-09-18 15:26         ` Bart Van Assche
  2019-09-18 16:11           ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-18 15:26 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On 9/16/19 8:39 AM, Jinpu Wang wrote:
> - Roman's pb email address is no longer valid, will fix next round.
> 
> 
>>>
>>>> +static inline const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
>>>> +{
>>>> +     switch (mode) {
>>>> +     case IBNBD_FILEIO:
>>>> +             return "fileio";
>>>> +     case IBNBD_BLOCKIO:
>>>> +             return "blockio";
>>>> +     case IBNBD_AUTOIO:
>>>> +             return "autoio";
>>>> +     default:
>>>> +             return "unknown";
>>>> +     }
>>>> +}
>>>> +
>>>> +static inline const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
>>>> +{
>>>> +     switch (mode) {
>>>> +     case IBNBD_ACCESS_RO:
>>>> +             return "ro";
>>>> +     case IBNBD_ACCESS_RW:
>>>> +             return "rw";
>>>> +     case IBNBD_ACCESS_MIGRATION:
>>>> +             return "migration";
>>>> +     default:
>>>> +             return "unknown";
>>>> +     }
>>>> +}
>>>
>>> These two functions are not in the hot path and hence should not be
>>> inline functions.
>> Sounds reasonable, will remove the inline.
> inline was added to fix the -Wunused-function warning, e.g.:
> 
>    CC [M]  /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.o
> In file included from /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.h:34,
>                   from /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.c:33:
> /<<PKGBUILDDIR>>/ibnbd/ibnbd-proto.h:362:20: warning:
> 'ibnbd_access_mode_str' defined but not used [-Wunused-function]
>   static const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
>                      ^~~~~~~~~~~~~~~~~~~~~
> /<<PKGBUILDDIR>>/ibnbd/ibnbd-proto.h:348:20: warning:
> 'ibnbd_io_mode_str' defined but not used [-Wunused-function]
>   static const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
> 
> We have to move both functions to a separate header file if we really
> want to do it.
> The function is simple and small, if you insist, I will do it.

Please move these functions into a .c file. That will reduce the size of 
the kernel modules and will also reduce the size of the header file.

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-18  7:14           ` Danil Kipnis
@ 2019-09-18 15:47             ` Bart Van Assche
  2019-09-20  8:29               ` Danil Kipnis
  2019-09-25 22:26               ` Danil Kipnis
  0 siblings, 2 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-18 15:47 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang, Roman Pen

On 9/18/19 12:14 AM, Danil Kipnis wrote:
> I'm not familiar with the dm code, but doesn't it need to deal with the
> same situation: if I configure 100 logical volumes on top of a single
> NVMe drive with X hardware queues, each queue_depth deep, then each dm
> block device would need to advertise X hardware queues in order to
> achieve the highest performance when only that one volume is accessed,
> while in fact those X physical queues have to be shared among all 100
> logical volumes if they are accessed in parallel?

Combining multiple queues (a) into a single queue (b) that is smaller 
than the combined source queues without sacrificing performance is 
tricky. We already have one such implementation in the block layer core 
and it took considerable time to get that implementation right. See e.g. 
blk_mq_sched_mark_restart_hctx() and blk_mq_sched_restart().

dm drivers are expected to return DM_MAPIO_REQUEUE or 
DM_MAPIO_DELAY_REQUEUE if the queue (b) is full. It turned out to be 
difficult to get this right in the dm-mpath driver and at the same time 
to achieve good performance.

The ibnbd driver introduces a third implementation of code that combines 
multiple (per-cpu) queues into one queue per CPU. It is considered 
important in the Linux kernel to avoid code duplication. Hence my 
question whether ibnbd can reuse the block layer infrastructure for 
sharing tag sets.
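
For reference, the existing infrastructure referred to here is a single
blk_mq_tag_set shared by several request queues (as NVMe does for its
namespaces). A rough sketch with hypothetical ibnbd names:

/* inside session/device setup (sketch) */
/* one tag set per ibtrs session, shared by all devices mapped over it */
sess->tag_set.ops          = &ibnbd_mq_ops;
sess->tag_set.queue_depth  = sess->queue_depth;
sess->tag_set.numa_node    = NUMA_NO_NODE;
sess->tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;
sess->tag_set.cmd_size     = sizeof(struct ibnbd_iu);
sess->tag_set.nr_hw_queues = num_online_cpus();

err = blk_mq_alloc_tag_set(&sess->tag_set);
if (err)
        return err;

/* each mapped device gets its own queue on top of the shared tag set */
dev->queue = blk_mq_init_queue(&sess->tag_set);
if (IS_ERR(dev->queue))
        return PTR_ERR(dev->queue);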

Thanks,

Bart.



^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-13 23:46   ` Bart Van Assche
  2019-09-16 14:17     ` Danil Kipnis
  2019-09-17 13:09     ` Jinpu Wang
@ 2019-09-18 16:05     ` Jinpu Wang
  2 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-18 16:05 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

> > +static void destroy_gen_disk(struct ibnbd_clt_dev *dev)
> > +{
> > +     del_gendisk(dev->gd);
>
> > +     /*
> > +      * Before marking queue as dying (blk_cleanup_queue() does that)
> > +      * we have to be sure that everything in-flight has gone.
> > +      * Blink with freeze/unfreeze.
> > +      */
> > +     blk_mq_freeze_queue(dev->queue);
> > +     blk_mq_unfreeze_queue(dev->queue);
>
> Please remove the above seven lines. blk_cleanup_queue() calls
> blk_set_queue_dying() and the second call in blk_set_queue_dying() is
> blk_freeze_queue_start().
>
It was an old bug we had in 2016. We retested with newer kernels (4.14+)
and the bug is fixed, so I will remove the above seven lines.

Thanks
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 15/25] ibnbd: private headers with IBNBD protocol structs and helpers
  2019-09-18 15:26         ` Bart Van Assche
@ 2019-09-18 16:11           ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-18 16:11 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Wed, Sep 18, 2019 at 5:26 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/16/19 8:39 AM, Jinpu Wang wrote:
> > - Roman's pb email address is no longer valid, will fix next round.
> >
> >
> >>>
> >>>> +static inline const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
> >>>> +{
> >>>> +     switch (mode) {
> >>>> +     case IBNBD_FILEIO:
> >>>> +             return "fileio";
> >>>> +     case IBNBD_BLOCKIO:
> >>>> +             return "blockio";
> >>>> +     case IBNBD_AUTOIO:
> >>>> +             return "autoio";
> >>>> +     default:
> >>>> +             return "unknown";
> >>>> +     }
> >>>> +}
> >>>> +
> >>>> +static inline const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
> >>>> +{
> >>>> +     switch (mode) {
> >>>> +     case IBNBD_ACCESS_RO:
> >>>> +             return "ro";
> >>>> +     case IBNBD_ACCESS_RW:
> >>>> +             return "rw";
> >>>> +     case IBNBD_ACCESS_MIGRATION:
> >>>> +             return "migration";
> >>>> +     default:
> >>>> +             return "unknown";
> >>>> +     }
> >>>> +}
> >>>
> >>> These two functions are not in the hot path and hence should not be
> >>> inline functions.
> >> Sounds reasonable, will remove the inline.
> > inline was added to fix the -Wunused-function warning, e.g.:
> >
> >    CC [M]  /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.o
> > In file included from /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.h:34,
> >                   from /<<PKGBUILDDIR>>/ibnbd/ibnbd-clt.c:33:
> > /<<PKGBUILDDIR>>/ibnbd/ibnbd-proto.h:362:20: warning:
> > 'ibnbd_access_mode_str' defined but not used [-Wunused-function]
> >   static const char *ibnbd_access_mode_str(enum ibnbd_access_mode mode)
> >                      ^~~~~~~~~~~~~~~~~~~~~
> > /<<PKGBUILDDIR>>/ibnbd/ibnbd-proto.h:348:20: warning:
> > 'ibnbd_io_mode_str' defined but not used [-Wunused-function]
> >   static const char *ibnbd_io_mode_str(enum ibnbd_io_mode mode)
> >
> > We have to move both functions to a separate header file if we really
> > want to do it.
> > The function is simple and small, if you insist, I will do it.
>
> Please move these functions into a .c file. That will reduce the size of
> the kernel modules and will also reduce the size of the header file.
>
> Thanks,
>
> Bart.
>

Ok, will do.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 18/25] ibnbd: client: sysfs interface functions
  2019-06-20 15:03 ` [PATCH v4 18/25] ibnbd: client: sysfs interface functions Jack Wang
@ 2019-09-18 16:28   ` Bart Van Assche
  2019-09-19 15:55     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-18 16:28 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +#undef pr_fmt
> +#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt

Including the line number in all messages is too much information. 
Please don't do this. Additionally, this will make the line number occur 
twice in messages produced by pr_debug().

> +static unsigned int ibnbd_opt_mandatory[] = {
> +	IBNBD_OPT_PATH,
> +	IBNBD_OPT_DEV_PATH,
> +	IBNBD_OPT_SESSNAME,
> +};

Should this array have been declared const?

 > +/* remove new line from string */
 > +static void strip(char *s)
 > +{
 > +	char *p = s;
 > +
 > +	while (*s != '\0') {
 > +		if (*s != '\n')
 > +			*p++ = *s++;
 > +		else
 > +			++s;
 > +	}
 > +	*p = '\0';
 > +}

This function can remove a newline from the middle of a string. Are you 
sure that's what you want?

Is it useful to strip newline characters only and to keep other 
whitespace? Could this function be dropped and can the callers use 
strim() instead?

> +static int ibnbd_clt_parse_map_options(const char *buf,
> +				       char *sessname,
> +				       struct ibtrs_addr *paths,
> +				       size_t *path_cnt,
> +				       size_t max_path_cnt,
> +				       char *pathname,
> +				       enum ibnbd_access_mode *access_mode,
> +				       enum ibnbd_io_mode *io_mode)
> +{

Please introduce a structure for all the output parameters of this 
function and pass a pointer to that structure to this function. That 
will make it easier to introduce support for new parameters.
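
A hypothetical sketch of such a structure (field names invented for
illustration, not taken from the patch):

struct ibnbd_map_options {
        char                    *sessname;
        struct ibtrs_addr       *paths;
        size_t                  path_cnt;
        size_t                  max_path_cnt;
        char                    *pathname;
        enum ibnbd_access_mode  access_mode;
        enum ibnbd_io_mode      io_mode;
};

static int ibnbd_clt_parse_map_options(const char *buf,
                                       struct ibnbd_map_options *opts);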

> +	char *options, *sep_opt;
> +	char *p;
> +	substring_t args[MAX_OPT_ARGS];
> +	int opt_mask = 0;
> +	int token;
> +	int ret = -EINVAL;
> +	int i;
> +	int p_cnt = 0;
> +
> +	options = kstrdup(buf, GFP_KERNEL);
> +	if (!options)
> +		return -ENOMEM;
> +
> +	sep_opt = strstrip(options);
> +	strip(sep_opt);

Are you sure that strstrip() does not remove trailing newline characters?

> +	while ((p = strsep(&sep_opt, " ")) != NULL) {
> +		if (!*p)
> +			continue;
> +
> +		token = match_token(p, ibnbd_opt_tokens, args);
> +		opt_mask |= token;
> +
> +		switch (token) {
> +		case IBNBD_OPT_SESSNAME:
> +			p = match_strdup(args);
> +			if (!p) {
> +				ret = -ENOMEM;
> +				goto out;
> +			}
> +			if (strlen(p) > NAME_MAX) {
> +				pr_err("map_device: sessname too long\n");
> +				ret = -EINVAL;
> +				kfree(p);
> +				goto out;
> +			}
> +			strlcpy(sessname, p, NAME_MAX);
> +			kfree(p);
> +			break;

Please change sessname from a fixed size buffer into a dynamically 
allocated buffer. That will remove the need to perform a strlcpy() and 
will also allow to remove the NAME_MAX checks.

> +		case IBNBD_OPT_DEV_PATH:
> +			p = match_strdup(args);
> +			if (!p) {
> +				ret = -ENOMEM;
> +				goto out;
> +			}
> +			if (strlen(p) > NAME_MAX) {
> +				pr_err("map_device: Device path too long\n");
> +				ret = -EINVAL;
> +				kfree(p);
> +				goto out;
> +			}
> +			strlcpy(pathname, p, NAME_MAX);
> +			kfree(p);
> +			break;

Same comment here - please change pathname from a fixed-size array into 
a dynamically allocated buffer.

> +static ssize_t ibnbd_clt_state_show(struct kobject *kobj,
> +				    struct kobj_attribute *attr, char *page)
> +{
> +	struct ibnbd_clt_dev *dev;
> +
> +	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
> +
> +	switch (dev->dev_state) {
> +	case (DEV_STATE_INIT):
> +		return scnprintf(page, PAGE_SIZE, "init\n");
> +	case (DEV_STATE_MAPPED):
> +		/* TODO fix cli tool before changing to proper state */
> +		return scnprintf(page, PAGE_SIZE, "open\n");
> +	case (DEV_STATE_MAPPED_DISCONNECTED):
> +		/* TODO fix cli tool before changing to proper state */
> +		return scnprintf(page, PAGE_SIZE, "closed\n");
> +	case (DEV_STATE_UNMAPPED):
> +		return scnprintf(page, PAGE_SIZE, "unmapped\n");
> +	default:
> +		return scnprintf(page, PAGE_SIZE, "unknown\n");
> +	}
> +}

Please remove the superfluous parentheses from around the DEV_STATE_* 
constants.

Additionally, using scnprintf() here is overkill. snprintf() should be 
sufficient.

> +static struct kobj_attribute ibnbd_clt_state_attr =
> +	__ATTR(state, 0444, ibnbd_clt_state_show, NULL);

Please use DEVICE_ATTR_RO() instead of __ATTR() for all read-only 
attributes.

> +static ssize_t ibnbd_clt_unmap_dev_store(struct kobject *kobj,
> +					 struct kobj_attribute *attr,
> +					 const char *buf, size_t count)
> +{
> +	struct ibnbd_clt_dev *dev;
> +	char *opt, *options;
> +	bool force;
> +	int err;
> +
> +	opt = kstrdup(buf, GFP_KERNEL);
> +	if (!opt)
> +		return -ENOMEM;
> +
> +	options = strstrip(opt);
> +	strip(options);
> +
> +	dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
> +
> +	if (sysfs_streq(options, "normal")) {
> +		force = false;
> +	} else if (sysfs_streq(options, "force")) {
> +		force = true;
> +	} else {
> +		ibnbd_err(dev, "unmap_device: Invalid value: %s\n", options);
> +		err = -EINVAL;
> +		goto out;
> +	}

Wasn't sysfs_streq() introduced to avoid having to duplicate and strip 
the input string?

> +	/*
> +	 * We take explicit module reference only for one reason: do not
> +	 * race with lockless ibnbd_destroy_sessions().
> +	 */
> +	if (!try_module_get(THIS_MODULE)) {
> +		err = -ENODEV;
> +		goto out;
> +	}
> +	err = ibnbd_clt_unmap_device(dev, force, &attr->attr);
> +	if (unlikely(err)) {
> +		if (unlikely(err != -EALREADY))
> +			ibnbd_err(dev, "unmap_device: %d\n",  err);
> +		goto module_put;
> +	}
> +
> +	/*
> +	 * Here device can be vanished!
> +	 */
> +
> +	err = count;
> +
> +module_put:
> +	module_put(THIS_MODULE);

I've never before seen a module_get() / module_put() pair inside a sysfs 
  callback function. Can this race be fixed by making 
ibnbd_destroy_sessions() remove this sysfs attribute before it tries to 
destroy any sessions?

> +void ibnbd_clt_remove_dev_symlink(struct ibnbd_clt_dev *dev)
> +{
> +	/*
> +	 * The module_is_live() check is crucial and helps to avoid annoying
> +	 * sysfs warning raised in sysfs_remove_link(), when the whole sysfs
> +	 * path was just removed, see ibnbd_close_sessions().
> +	 */
> +	if (strlen(dev->blk_symlink_name) && module_is_live(THIS_MODULE))
> +		sysfs_remove_link(ibnbd_devs_kobj, dev->blk_symlink_name);
> +}

I haven't been able to find any other sysfs code that calls 
module_is_live()? Please elaborate why that check is needed.

> +int ibnbd_clt_create_sysfs_files(void)
> +{
> +	int err;
> +
> +	ibnbd_dev_class = class_create(THIS_MODULE, "ibnbd-client");
> +	if (unlikely(IS_ERR(ibnbd_dev_class)))
> +		return PTR_ERR(ibnbd_dev_class);
> +
> +	ibnbd_dev = device_create(ibnbd_dev_class, NULL,
> +				  MKDEV(0, 0), NULL, "ctl");
> +	if (unlikely(IS_ERR(ibnbd_dev))) {
> +		err = PTR_ERR(ibnbd_dev);
> +		goto cls_destroy;
> +	}
> +	ibnbd_devs_kobj = kobject_create_and_add("devices", &ibnbd_dev->kobj);
> +	if (unlikely(!ibnbd_devs_kobj)) {
> +		err = -ENOMEM;
> +		goto dev_destroy;
> +	}
> +	err = sysfs_create_group(&ibnbd_dev->kobj, &default_attr_group);
> +	if (unlikely(err))
> +		goto put_devs_kobj;
> +
> +	return 0;
> +
> +put_devs_kobj:
> +	kobject_del(ibnbd_devs_kobj);
> +	kobject_put(ibnbd_devs_kobj);
> +dev_destroy:
> +	device_destroy(ibnbd_dev_class, MKDEV(0, 0));
> +cls_destroy:
> +	class_destroy(ibnbd_dev_class);
> +
> +	return err;
> +}

I think this is the wrong way to create a device node because this 
approach will inform udev about device creation before the sysfs group 
has been created. Please use device_create_with_groups() instead of 
calling device_create() and sysfs_create_group() separately.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 20/25] ibnbd: server: main functionality
  2019-06-20 15:03 ` [PATCH v4 20/25] ibnbd: server: main functionality Jack Wang
@ 2019-09-18 17:41   ` Bart Van Assche
  2019-09-20  7:36     ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-18 17:41 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +#undef pr_fmt
> +#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt

Same comment here as for a previous patch - please do not include line 
number information in pr_fmt().

> +MODULE_AUTHOR("ibnbd@profitbricks.com");
> +MODULE_VERSION(IBNBD_VER_STRING);
> +MODULE_DESCRIPTION("InfiniBand Network Block Device Server");
> +MODULE_LICENSE("GPL");

Please remove the version number (MODULE_VERSION()).

> +static char dev_search_path[PATH_MAX] = DEFAULT_DEV_SEARCH_PATH;

Please change dev_search_path[] into a dynamically allocated string to 
avoid a hard-coded length limit.

> +	if (dup[strlen(dup) - 1] == '\n')
> +		dup[strlen(dup) - 1] = '\0';

Can this be changed into a call to strim()?

> +static void ibnbd_endio(void *priv, int error)
> +{
> +	struct ibnbd_io_private *ibnbd_priv = priv;
> +	struct ibnbd_srv_sess_dev *sess_dev = ibnbd_priv->sess_dev;
> +
> +	ibnbd_put_sess_dev(sess_dev);
> +
> +	ibtrs_srv_resp_rdma(ibnbd_priv->id, error);
> +
> +	kfree(priv);
> +}

Since ibtrs_srv_resp_rdma() starts an RDMA WRITE without waiting for the 
write completion, shouldn't the session reference be retained until the 
completion for that RDMA WRITE has been received? In other words, is 
there a risk with the current approach that the buffer that is being 
transferred to the client will be freed before the RDMA WRITE has finished?

> +static struct ibnbd_srv_sess_dev *
> +ibnbd_get_sess_dev(int dev_id, struct ibnbd_srv_session *srv_sess)
> +{
> +	struct ibnbd_srv_sess_dev *sess_dev;
> +	int ret = 0;
> +
> +	read_lock(&srv_sess->index_lock);
> +	sess_dev = idr_find(&srv_sess->index_idr, dev_id);
> +	if (likely(sess_dev))
> +		ret = kref_get_unless_zero(&sess_dev->kref);
> +	read_unlock(&srv_sess->index_lock);
> +
> +	if (unlikely(!sess_dev || !ret))
> +		return ERR_PTR(-ENXIO);
> +
> +	return sess_dev;
> +}

Something that is not important: isn't the sess_dev check superfluous in 
the if-statement just above the return statement? If ret == 1, does that 
imply that sess_dev != 0 ?

Has it been considered to return -ENODEV instead of -ENXIO if no device 
is found?

> +static int create_sess(struct ibtrs_srv *ibtrs)
> +{
 > [ ... ]
> +	strlcpy(srv_sess->sessname, sessname, sizeof(srv_sess->sessname));

Please change the session name into a dynamically allocated string such 
that strdup() can be used instead of strlcpy().

> +static int process_msg_open(struct ibtrs_srv *ibtrs,
> +			    struct ibnbd_srv_session *srv_sess,
> +			    const void *msg, size_t len,
> +			    void *data, size_t datalen);
> +
> +static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
> +				 struct ibnbd_srv_session *srv_sess,
> +				 const void *msg, size_t len,
> +				 void *data, size_t datalen);

Can the code be reordered such that these forward declarations can be 
dropped?

> +static struct ibnbd_srv_sess_dev *
> +ibnbd_srv_create_set_sess_dev(struct ibnbd_srv_session *srv_sess,
> +			      const struct ibnbd_msg_open *open_msg,
> +			      struct ibnbd_dev *ibnbd_dev, fmode_t open_flags,
> +			      struct ibnbd_srv_dev *srv_dev)
> +{
> +	struct ibnbd_srv_sess_dev *sdev = ibnbd_sess_dev_alloc(srv_sess);
> +
> +	if (IS_ERR(sdev))
> +		return sdev;
> +
> +	kref_init(&sdev->kref);
> +
> +	strlcpy(sdev->pathname, open_msg->dev_name, sizeof(sdev->pathname));

Can the path name be changed into a dynamically allocated string?

> +static char *ibnbd_srv_get_full_path(struct ibnbd_srv_session *srv_sess,
> +				     const char *dev_name)
> +{
> +	char *full_path;
> +	char *a, *b;
> +
> +	full_path = kmalloc(PATH_MAX, GFP_KERNEL);
> +	if (!full_path)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/*
> +	 * Replace %SESSNAME% with a real session name in order to
> +	 * create device namespace.
> +	 */
> +	a = strnstr(dev_search_path, "%SESSNAME%", sizeof(dev_search_path));
> +	if (a) {
> +		int len = a - dev_search_path;
> +
> +		len = snprintf(full_path, PATH_MAX, "%.*s/%s/%s", len,
> +			       dev_search_path, srv_sess->sessname, dev_name);
> +		if (len >= PATH_MAX) {
> +			pr_err("Tooooo looong path: %s, %s, %s\n",
> +			       dev_search_path, srv_sess->sessname, dev_name);
> +			kfree(full_path);
> +			return ERR_PTR(-EINVAL);
> +		}
> +	} else {
> +		snprintf(full_path, PATH_MAX, "%s/%s",
> +			 dev_search_path, dev_name);
> +	}

Has it been considered to use kasprintf() instead of kmalloc() + snprintf()?
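
E.g. a sketch of the kasprintf() variant, using the same format strings as the
quoted code:

if (a) {
        int len = a - dev_search_path;

        full_path = kasprintf(GFP_KERNEL, "%.*s/%s/%s", len,
                              dev_search_path, srv_sess->sessname, dev_name);
} else {
        full_path = kasprintf(GFP_KERNEL, "%s/%s",
                              dev_search_path, dev_name);
}
if (!full_path)
        return ERR_PTR(-ENOMEM);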

> +static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
> +				 struct ibnbd_srv_session *srv_sess,
> +				 const void *msg, size_t len,
> +				 void *data, size_t datalen)
> +{
> +	const struct ibnbd_msg_sess_info *sess_info_msg = msg;
> +	struct ibnbd_msg_sess_info_rsp *rsp = data;
> +
> +	srv_sess->ver = min_t(u8, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
> +	pr_debug("Session %s using protocol version %d (client version: %d,"
> +		 " server version: %d)\n", srv_sess->sessname,
> +		 srv_sess->ver, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);

Has this patch been verified with checkpatch? I think checkpatch 
recommends not to split literal strings.

> +/**
> + * find_srv_sess_dev() - a dev is already opened by this name
> + *
> + * Return struct ibnbd_srv_sess_dev if srv_sess already opened the dev_name
> + * NULL if the session didn't open the device yet.
> + */
> +static struct ibnbd_srv_sess_dev *
> +find_srv_sess_dev(struct ibnbd_srv_session *srv_sess, const char *dev_name)
> +{
> +	struct ibnbd_srv_sess_dev *sess_dev;
> +
> +	if (list_empty(&srv_sess->sess_dev_list))
> +		return NULL;
> +
> +	list_for_each_entry(sess_dev, &srv_sess->sess_dev_list, sess_list)
> +		if (!strcmp(sess_dev->pathname, dev_name))
> +			return sess_dev;
> +
> +	return NULL;
> +}

Is the explicit list_empty() check really necessary? Would the behavior 
of this function change if that check is left out?

Has the posted code been compiled with W=1? I'm asking this because the 
documentation of the function arguments is missing from the kernel-doc 
header. I expect that a warning will be reported if this code is 
compiled with W=1.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-06-20 15:03 ` [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev Jack Wang
@ 2019-09-18 21:46   ` Bart Van Assche
  2019-09-26 14:04     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-18 21:46 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +#undef pr_fmt
> +#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt

Same comment as for a previous patch: please do not include line number 
information in pr_fmt().

> +static int ibnbd_dev_vfs_open(struct ibnbd_dev *dev, const char *path,
> +			      fmode_t flags)
> +{
> +	int oflags = O_DSYNC; /* enable write-through */
> +
> +	if (flags & FMODE_WRITE)
> +		oflags |= O_RDWR;
> +	else if (flags & FMODE_READ)
> +		oflags |= O_RDONLY;
> +	else
> +		return -EINVAL;
> +
> +	dev->file = filp_open(path, oflags, 0);
> +	return PTR_ERR_OR_ZERO(dev->file);
> +}

Isn't the use of O_DSYNC something that should be configurable?

> +struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
> +				 enum ibnbd_io_mode mode, struct bio_set *bs,
> +				 ibnbd_dev_io_fn io_cb)
> +{
> +	struct ibnbd_dev *dev;
> +	int ret;
> +
> +	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> +	if (!dev)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (mode == IBNBD_BLOCKIO) {
> +		dev->blk_open_flags = flags;
> +		ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
> +		if (ret)
> +			goto err;
> +	} else if (mode == IBNBD_FILEIO) {
> +		dev->blk_open_flags = FMODE_READ;
> +		ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
> +		if (ret)
> +			goto err;
> +
> +		ret = ibnbd_dev_vfs_open(dev, path, flags);
> +		if (ret)
> +			goto blk_put;

This looks really weird. Why call ibnbd_dev_blk_open() first for file 
I/O mode? Why set dev->blk_open_flags to FMODE_READ in file I/O mode?

> +static int ibnbd_dev_blk_submit_io(struct ibnbd_dev *dev, sector_t sector,
> +				   void *data, size_t len, u32 bi_size,
> +				   enum ibnbd_io_flags flags, short prio,
> +				   void *priv)
> +{
> +	struct request_queue *q = bdev_get_queue(dev->bdev);
> +	struct ibnbd_dev_blk_io *io;
> +	struct bio *bio;
> +
> +	/* check if the buffer is suitable for bdev */
> +	if (unlikely(WARN_ON(!blk_rq_aligned(q, (unsigned long)data, len))))
> +		return -EINVAL;
> +
> +	/* Generate bio with pages pointing to the rdma buffer */
> +	bio = ibnbd_bio_map_kern(q, data, dev->ibd_bio_set, len, GFP_KERNEL);
> +	if (unlikely(IS_ERR(bio)))
> +		return PTR_ERR(bio);
> +
> +	io = kmalloc(sizeof(*io), GFP_KERNEL);
> +	if (unlikely(!io)) {
> +		bio_put(bio);
> +		return -ENOMEM;
> +	}
> +
> +	io->dev		= dev;
> +	io->priv	= priv;
> +
> +	bio->bi_end_io		= ibnbd_dev_bi_end_io;
> +	bio->bi_private		= io;
> +	bio->bi_opf		= ibnbd_to_bio_flags(flags);
> +	bio->bi_iter.bi_sector	= sector;
> +	bio->bi_iter.bi_size	= bi_size;
> +	bio_set_prio(bio, prio);
> +	bio_set_dev(bio, dev->bdev);
> +
> +	submit_bio(bio);
> +
> +	return 0;
> +}

Can struct bio and struct ibnbd_dev_blk_io be combined into a single 
data structure by passing the size of the latter data structure as the 
front_pad argument to bioset_init()?
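
For illustration, the usual front_pad pattern (as used by dm and btrfs); the
struct layout here is a sketch, not the posted code:

struct ibnbd_dev_blk_io {
        struct ibnbd_dev        *dev;
        void                    *priv;
        struct bio              bio;    /* must be the last member */
};

/* at init time: reserve room for the private part in front of every bio */
if (bioset_init(bs, BIO_POOL_SIZE, offsetof(struct ibnbd_dev_blk_io, bio), 0))
        return -ENOMEM;

/* per I/O: allocate from that bio_set and recover the private part */
bio = bio_alloc_bioset(GFP_KERNEL, nr_vecs, bs);
io  = container_of(bio, struct ibnbd_dev_blk_io, bio);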

> +static void ibnbd_dev_file_submit_io_worker(struct work_struct *w)
> +{
> +	struct ibnbd_dev_file_io_work *dev_work;
> +	struct file *f;
> +	int ret, len;
> +	loff_t off;
> +
> +	dev_work = container_of(w, struct ibnbd_dev_file_io_work, work);
> +	off = dev_work->sector * ibnbd_dev_get_logical_bsize(dev_work->dev);
> +	f = dev_work->dev->file;
> +	len = dev_work->bi_size;
> +
> +	if (ibnbd_op(dev_work->flags) == IBNBD_OP_FLUSH) {
> +		ret = ibnbd_dev_file_handle_flush(dev_work, off);
> +		if (unlikely(ret))
> +			goto out;
> +	}
> +
> +	if (ibnbd_op(dev_work->flags) == IBNBD_OP_WRITE_SAME) {
> +		ret = ibnbd_dev_file_handle_write_same(dev_work);
> +		if (unlikely(ret))
> +			goto out;
> +	}
> +
> +	/* TODO Implement support for DIRECT */
> +	if (dev_work->bi_size) {
> +		loff_t off_tmp = off;
> +
> +		if (ibnbd_op(dev_work->flags) == IBNBD_OP_WRITE)
> +			ret = kernel_write(f, dev_work->data, dev_work->bi_size,
> +					   &off_tmp);
> +		else
> +			ret = kernel_read(f, dev_work->data, dev_work->bi_size,
> +					  &off_tmp);
> +
> +		if (unlikely(ret < 0)) {
> +			goto out;
> +		} else if (unlikely(ret != dev_work->bi_size)) {
> +			/* TODO implement support for partial completions */
> +			ret = -EIO;
> +			goto out;
> +		} else {
> +			ret = 0;
> +		}
> +	}
> +
> +	if (dev_work->flags & IBNBD_F_FUA)
> +		ret = ibnbd_dev_file_handle_fua(dev_work, off);
> +out:
> +	dev_work->dev->io_cb(dev_work->priv, ret);
> +	kfree(dev_work);
> +}
> +
> +static int ibnbd_dev_file_submit_io(struct ibnbd_dev *dev, sector_t sector,
> +				    void *data, size_t len, size_t bi_size,
> +				    enum ibnbd_io_flags flags, void *priv)
> +{
> +	struct ibnbd_dev_file_io_work *w;
> +
> +	if (!ibnbd_flags_supported(flags)) {
> +		pr_info_ratelimited("Unsupported I/O flags: 0x%x on device "
> +				    "%s\n", flags, dev->name);
> +		return -ENOTSUPP;
> +	}
> +
> +	w = kmalloc(sizeof(*w), GFP_KERNEL);
> +	if (!w)
> +		return -ENOMEM;
> +
> +	w->dev		= dev;
> +	w->priv		= priv;
> +	w->sector	= sector;
> +	w->data		= data;
> +	w->len		= len;
> +	w->bi_size	= bi_size;
> +	w->flags	= flags;
> +	INIT_WORK(&w->work, ibnbd_dev_file_submit_io_worker);
> +
> +	if (unlikely(!queue_work(fileio_wq, &w->work))) {
> +		kfree(w);
> +		return -EEXIST;
> +	}
> +
> +	return 0;
> +}

Please use the in-kernel asynchronous I/O API instead of kernel_read() 
and kernel_write() and remove the fileio_wq workqueue. Examples of how 
to use call_read_iter() and call_write_iter() are available in the loop 
driver and also in drivers/target/target_core_file.c.
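
Very roughly, the loop-driver-style pattern looks like this (heavily
simplified sketch; struct ibnbd_dev_file_io and its fields are hypothetical):

/* completion callback invoked when the async I/O finishes */
static void ibnbd_dev_aio_complete(struct kiocb *iocb, long ret, long ret2)
{
        struct ibnbd_dev_file_io *io =
                container_of(iocb, struct ibnbd_dev_file_io, iocb);

        io->dev->io_cb(io->priv, ret < 0 ? ret : 0);
        kfree(io);
}

static int ibnbd_dev_file_submit_aio(struct ibnbd_dev_file_io *io,
                                     void *data, size_t len, loff_t off,
                                     int rw)
{
        struct kvec kv = { .iov_base = data, .iov_len = len };
        struct iov_iter iter;
        ssize_t ret;

        iov_iter_kvec(&iter, rw, &kv, 1, len);

        /* io is assumed to be zero-allocated (kzalloc) by the caller */
        io->iocb.ki_pos      = off;
        io->iocb.ki_filp     = io->dev->file;
        io->iocb.ki_complete = ibnbd_dev_aio_complete;

        if (rw == WRITE)
                ret = call_write_iter(io->dev->file, &io->iocb, &iter);
        else
                ret = call_read_iter(io->dev->file, &io->iocb, &iter);

        if (ret != -EIOCBQUEUED)        /* completed synchronously */
                ibnbd_dev_aio_complete(&io->iocb, ret, 0);
        return 0;
}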

> +/** ibnbd_dev_init() - Initialize ibnbd_dev
> + *
> + * This functions initialized the ibnbd-dev component.
> + * It has to be called 1x time before ibnbd_dev_open() is used
> + */
> +int ibnbd_dev_init(void);

It is great to see kernel-doc headers above functions but I'm not sure 
these should be in .h files. I think most kernel developers prefer to 
see kernel-doc headers for functions in .c files because that makes it 
more likely that the implementation and the documentation stay in sync.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules
  2019-09-13 23:56   ` Bart Van Assche
@ 2019-09-19 10:30     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-19 10:30 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Sat, Sep 14, 2019 at 1:56 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > From: Roman Pen <roman.penyaev@profitbricks.com>
> >
> > Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
> > Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> > ---
> >   MAINTAINERS | 14 ++++++++++++++
> >   1 file changed, 14 insertions(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index a6954776a37e..0b7fd93f738d 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -7590,6 +7590,20 @@ IBM ServeRAID RAID DRIVER
> >   S:  Orphan
> >   F:  drivers/scsi/ips.*
> >
> > +IBNBD BLOCK DRIVERS
> > +M:   IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> > +L:   linux-block@vger.kernel.org
> > +S:   Maintained
> > +T:   git git://github.com/profitbricks/ibnbd.git
> > +F:   drivers/block/ibnbd/
> > +
> > +IBTRS TRANSPORT DRIVERS
> > +M:   IBNBD/IBTRS Storage Team <ibnbd@cloud.ionos.com>
> > +L:   linux-rdma@vger.kernel.org
> > +S:   Maintained
> > +T:   git git://github.com/profitbricks/ibnbd.git
> > +F:   drivers/infiniband/ulp/ibtrs/
> > +
> >   ICH LPC AND GPIO DRIVER
> >   M:  Peter Tyser <ptyser@xes-inc.com>
> >   S:  Maintained
>
> I think the T: entry is for kernel trees against which developers should
> prepare their patches. Since the ibnbd repository on github is an
> out-of-tree kernel driver I don't think that it should appear in the
> MAINTAINERS file.
>
> Bart.
>
>
Ok, we will remove the link to github.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 18/25] ibnbd: client: sysfs interface functions
  2019-09-18 16:28   ` Bart Van Assche
@ 2019-09-19 15:55     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-19 15:55 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Wed, Sep 18, 2019 at 6:28 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +#undef pr_fmt
> > +#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
>
> Including the line number in all messages is too much information.
> Please don't do this. Additionally, this will make the line number occur
> twice in messages produced by pr_debug().
We feel it's quite handy for debugging to have the line number. I checked
in mainline, and some drivers even include __func__ and __LINE__.
I also did a test: the line number occurs only once with pr_debug().
>
> > +static unsigned int ibnbd_opt_mandatory[] = {
> > +     IBNBD_OPT_PATH,
> > +     IBNBD_OPT_DEV_PATH,
> > +     IBNBD_OPT_SESSNAME,
> > +};
>
> Should this array have been declared const?
Sounds good.
>
>  > +/* remove new line from string */
>  > +static void strip(char *s)
>  > +{
>  > +    char *p = s;
>  > +
>  > +    while (*s != '\0') {
>  > +            if (*s != '\n')
>  > +                    *p++ = *s++;
>  > +            else
>  > +                    ++s;
>  > +    }
>  > +    *p = '\0';
>  > +}
>
> This function can remove a newline from the middle of a string. Are you
> sure that's what you want?
Yes, we want to strip all newlines from the string when it is printed.
> Is it useful to strip newline characters only and to keep other
> whitespace? Could this function be dropped and can the callers use
> strim() instead?
We use strstrip()/strim() afterwards to remove the whitespace.
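
For what it's worth, one possible alternative to the custom strip() helper
would be the kernel's strreplace() plus strim(); note the semantics differ
slightly, since embedded newlines become spaces instead of being removed:

strreplace(options, '\n', ' ');
options = strim(options);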
>
> > +static int ibnbd_clt_parse_map_options(const char *buf,
> > +                                    char *sessname,
> > +                                    struct ibtrs_addr *paths,
> > +                                    size_t *path_cnt,
> > +                                    size_t max_path_cnt,
> > +                                    char *pathname,
> > +                                    enum ibnbd_access_mode *access_mode,
> > +                                    enum ibnbd_io_mode *io_mode)
> > +{
>
> Please introduce a structure for all the output parameters of this
> function and pass a pointer to that structure to this function. That
> will make it easier to introduce support for new parameters.
>
> > +     char *options, *sep_opt;
> > +     char *p;
> > +     substring_t args[MAX_OPT_ARGS];
> > +     int opt_mask = 0;
> > +     int token;
> > +     int ret = -EINVAL;
> > +     int i;
> > +     int p_cnt = 0;
> > +
> > +     options = kstrdup(buf, GFP_KERNEL);
> > +     if (!options)
> > +             return -ENOMEM;
> > +
> > +     sep_opt = strstrip(options);
> > +     strip(sep_opt);
>
> Are you sure that strstrip() does not remove trailing newline characters?
Yes, it only removes the whitespace
>
> > +     while ((p = strsep(&sep_opt, " ")) != NULL) {
> > +             if (!*p)
> > +                     continue;
> > +
> > +             token = match_token(p, ibnbd_opt_tokens, args);
> > +             opt_mask |= token;
> > +
> > +             switch (token) {
> > +             case IBNBD_OPT_SESSNAME:
> > +                     p = match_strdup(args);
> > +                     if (!p) {
> > +                             ret = -ENOMEM;
> > +                             goto out;
> > +                     }
> > +                     if (strlen(p) > NAME_MAX) {
> > +                             pr_err("map_device: sessname too long\n");
> > +                             ret = -EINVAL;
> > +                             kfree(p);
> > +                             goto out;
> > +                     }
> > +                     strlcpy(sessname, p, NAME_MAX);
> > +                     kfree(p);
> > +                     break;
>
> Please change sessname from a fixed size buffer into a dynamically
> allocated buffer. That will remove the need to perform a strlcpy() and
> will also allow to remove the NAME_MAX checks.
We can change sessname to be dynamically allocated, but I think the
NAME_MAX check does not conflict with that; we don't want such a long
sessname anyway.

>
> > +             case IBNBD_OPT_DEV_PATH:
> > +                     p = match_strdup(args);
> > +                     if (!p) {
> > +                             ret = -ENOMEM;
> > +                             goto out;
> > +                     }
> > +                     if (strlen(p) > NAME_MAX) {
> > +                             pr_err("map_device: Device path too long\n");
> > +                             ret = -EINVAL;
> > +                             kfree(p);
> > +                             goto out;
> > +                     }
> > +                     strlcpy(pathname, p, NAME_MAX);
> > +                     kfree(p);
> > +                     break;
>
> Same comment here - please change pathname from a fixed-size array into
> a dynamically allocated buffer.
Ditto
>
> > +static ssize_t ibnbd_clt_state_show(struct kobject *kobj,
> > +                                 struct kobj_attribute *attr, char *page)
> > +{
> > +     struct ibnbd_clt_dev *dev;
> > +
> > +     dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
> > +
> > +     switch (dev->dev_state) {
> > +     case (DEV_STATE_INIT):
> > +             return scnprintf(page, PAGE_SIZE, "init\n");
> > +     case (DEV_STATE_MAPPED):
> > +             /* TODO fix cli tool before changing to proper state */
> > +             return scnprintf(page, PAGE_SIZE, "open\n");
> > +     case (DEV_STATE_MAPPED_DISCONNECTED):
> > +             /* TODO fix cli tool before changing to proper state */
> > +             return scnprintf(page, PAGE_SIZE, "closed\n");
> > +     case (DEV_STATE_UNMAPPED):
> > +             return scnprintf(page, PAGE_SIZE, "unmapped\n");
> > +     default:
> > +             return scnprintf(page, PAGE_SIZE, "unknown\n");
> > +     }
> > +}
>
> Please remove the superfluous parentheses from around the DEV_STATE_*
> constants.
>
> Additionally, using scnprintf() here is overkill. snprintf() should be
> sufficient.
You're right, will address both.
>
> > +static struct kobj_attribute ibnbd_clt_state_attr =
> > +     __ATTR(state, 0444, ibnbd_clt_state_show, NULL);
>
> Please use DEVICE_ATTR_RO() instead of __ATTR() for all read-only
> attributes.
DEVICE_ATTR_RO doesn't fit here; I will use __ATTR_RO instead, thanks.
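
For illustration, __ATTR_RO() expects the show callback to be named
<name>_show, so the existing function would be renamed accordingly (sketch):

/* ibnbd_clt_state_show() renamed so that __ATTR_RO(state) picks it up */
static ssize_t state_show(struct kobject *kobj,
                          struct kobj_attribute *attr, char *page);

static struct kobj_attribute ibnbd_clt_state_attr = __ATTR_RO(state);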
>
> > +static ssize_t ibnbd_clt_unmap_dev_store(struct kobject *kobj,
> > +                                      struct kobj_attribute *attr,
> > +                                      const char *buf, size_t count)
> > +{
> > +     struct ibnbd_clt_dev *dev;
> > +     char *opt, *options;
> > +     bool force;
> > +     int err;
> > +
> > +     opt = kstrdup(buf, GFP_KERNEL);
> > +     if (!opt)
> > +             return -ENOMEM;
> > +
> > +     options = strstrip(opt);
> > +     strip(options);
> > +
> > +     dev = container_of(kobj, struct ibnbd_clt_dev, kobj);
> > +
> > +     if (sysfs_streq(options, "normal")) {
> > +             force = false;
> > +     } else if (sysfs_streq(options, "force")) {
> > +             force = true;
> > +     } else {
> > +             ibnbd_err(dev, "unmap_device: Invalid value: %s\n", options);
> > +             err = -EINVAL;
> > +             goto out;
> > +     }
>
> Wasn't sysfs_streq() introduced to avoid having to duplicate and strip
> the input string?
sysfs_streq() is only tolerant of a trailing newline. We use strstrip() to
strip whitespace and strip() for newlines.

>
> > +     /*
> > +      * We take explicit module reference only for one reason: do not
> > +      * race with lockless ibnbd_destroy_sessions().
> > +      */
> > +     if (!try_module_get(THIS_MODULE)) {
> > +             err = -ENODEV;
> > +             goto out;
> > +     }
> > +     err = ibnbd_clt_unmap_device(dev, force, &attr->attr);
> > +     if (unlikely(err)) {
> > +             if (unlikely(err != -EALREADY))
> > +                     ibnbd_err(dev, "unmap_device: %d\n",  err);
> > +             goto module_put;
> > +     }
> > +
> > +     /*
> > +      * Here device can be vanished!
> > +      */
> > +
> > +     err = count;
> > +
> > +module_put:
> > +     module_put(THIS_MODULE);
>
> I've never before seen a module_get() / module_put() pair inside a sysfs
>   callback function. Can this race be fixed by making
> ibnbd_destroy_sessions() remove this sysfs attribute before it tries to
> destroy any sessions?
That's the first thing we do in ibnbd_destroy_sessions already.
>
> > +void ibnbd_clt_remove_dev_symlink(struct ibnbd_clt_dev *dev)
> > +{
> > +     /*
> > +      * The module_is_live() check is crucial and helps to avoid annoying
> > +      * sysfs warning raised in sysfs_remove_link(), when the whole sysfs
> > +      * path was just removed, see ibnbd_close_sessions().
> > +      */
> > +     if (strlen(dev->blk_symlink_name) && module_is_live(THIS_MODULE))
> > +             sysfs_remove_link(ibnbd_devs_kobj, dev->blk_symlink_name);
> > +}
>
> I haven't been able to find any other sysfs code that calls
> module_is_live()? Please elaborate why that check is needed.

The reason might be lost to time; I can retest without the module_*
check to see whether our tests still pass.

>
> > +int ibnbd_clt_create_sysfs_files(void)
> > +{
> > +     int err;
> > +
> > +     ibnbd_dev_class = class_create(THIS_MODULE, "ibnbd-client");
> > +     if (unlikely(IS_ERR(ibnbd_dev_class)))
> > +             return PTR_ERR(ibnbd_dev_class);
> > +
> > +     ibnbd_dev = device_create(ibnbd_dev_class, NULL,
> > +                               MKDEV(0, 0), NULL, "ctl");
> > +     if (unlikely(IS_ERR(ibnbd_dev))) {
> > +             err = PTR_ERR(ibnbd_dev);
> > +             goto cls_destroy;
> > +     }
> > +     ibnbd_devs_kobj = kobject_create_and_add("devices", &ibnbd_dev->kobj);
> > +     if (unlikely(!ibnbd_devs_kobj)) {
> > +             err = -ENOMEM;
> > +             goto dev_destroy;
> > +     }
> > +     err = sysfs_create_group(&ibnbd_dev->kobj, &default_attr_group);
> > +     if (unlikely(err))
> > +             goto put_devs_kobj;
> > +
> > +     return 0;
> > +
> > +put_devs_kobj:
> > +     kobject_del(ibnbd_devs_kobj);
> > +     kobject_put(ibnbd_devs_kobj);
> > +dev_destroy:
> > +     device_destroy(ibnbd_dev_class, MKDEV(0, 0));
> > +cls_destroy:
> > +     class_destroy(ibnbd_dev_class);
> > +
> > +     return err;
> > +}
>
> I think this is the wrong way to create a device node because this
> approach will inform udev about device creation before the sysfs group
> has been created. Please use device_create_with_groups() instead of
> calling device_create() and sysfs_create_group() separately.
>
> Bart.
I'm not aware of device_create_with_groups, will try it out.
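An untested sketch of how that could look here (the array name is just
for illustration):

	static const struct attribute_group *ibnbd_dev_groups[] = {
		&default_attr_group,
		NULL,
	};

	ibnbd_dev = device_create_with_groups(ibnbd_dev_class, NULL,
					      MKDEV(0, 0), NULL,
					      ibnbd_dev_groups, "ctl");
	if (IS_ERR(ibnbd_dev)) {
		err = PTR_ERR(ibnbd_dev);
		goto cls_destroy;
	}

That way udev only sees the device once the attribute group exists, and
the separate sysfs_create_group() call and its error path go away.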

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 20/25] ibnbd: server: main functionality
  2019-09-18 17:41   ` Bart Van Assche
@ 2019-09-20  7:36     ` Danil Kipnis
  2019-09-20 15:42       ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-20  7:36 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Roman Pen, Jack Wang

On Wed, Sep 18, 2019 at 7:41 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +#undef pr_fmt
> > +#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
>
> Same comment here as for a previous patch - please do not include line
> number information in pr_fmt().

Will drop it, thanks.

> > +MODULE_AUTHOR("ibnbd@profitbricks.com");
> > +MODULE_VERSION(IBNBD_VER_STRING);
> > +MODULE_DESCRIPTION("InfiniBand Network Block Device Server");
> > +MODULE_LICENSE("GPL");
>
> Please remove the version number (MODULE_VERSION()).

OK.

> > +static char dev_search_path[PATH_MAX] = DEFAULT_DEV_SEARCH_PATH;
>
> Please change dev_search_path[] into a dynamically allocated string to
> avoid a hard-coded length limit.

OK.

> > +     if (dup[strlen(dup) - 1] == '\n')
> > +             dup[strlen(dup) - 1] = '\0';
>
> Can this be changed into a call to strim()?

A directory name can start and end with spaces, for example this
works: mkdir "     x      "

> > +static void ibnbd_endio(void *priv, int error)
> > +{
> > +     struct ibnbd_io_private *ibnbd_priv = priv;
> > +     struct ibnbd_srv_sess_dev *sess_dev = ibnbd_priv->sess_dev;
> > +
> > +     ibnbd_put_sess_dev(sess_dev);
> > +
> > +     ibtrs_srv_resp_rdma(ibnbd_priv->id, error);
> > +
> > +     kfree(priv);
> > +}
>
> Since ibtrs_srv_resp_rdma() starts an RDMA WRITE without waiting for the
> write completion, shouldn't the session reference be retained until the
> completion for that RDMA WRITE has been received? In other words, is
> there a risk with the current approach that the buffer that is being
> transferred to the client will be freed before the RDMA WRITE has finished?

ibtrs-srv.c keeps track of inflights. When closing a session, it first
marks the session as closing, so that no new write requests are posted
when IBNBD calls ibtrs_srv_resp_rdma():
1831         if (ibtrs_srv_change_state_get_old(sess, IBTRS_SRV_CLOSING,
1832                                            &old_state)
Then ibtrs-srv schedules ibtrs_srv_close_work(), which drains the QPs
and then waits for all inflights to return from IBNBD:
...
1274                 ib_drain_qp(con->c.qp);
1275         }
1276         /* Wait for all inflights */
1277         ibtrs_srv_wait_ops_ids(sess);
....
Only then can the resources be deallocated:
1282         unmap_cont_bufs(sess);
1283         ibtrs_srv_free_ops_ids(sess);

>
> > +static struct ibnbd_srv_sess_dev *
> > +ibnbd_get_sess_dev(int dev_id, struct ibnbd_srv_session *srv_sess)
> > +{
> > +     struct ibnbd_srv_sess_dev *sess_dev;
> > +     int ret = 0;
> > +
> > +     read_lock(&srv_sess->index_lock);
> > +     sess_dev = idr_find(&srv_sess->index_idr, dev_id);
> > +     if (likely(sess_dev))
> > +             ret = kref_get_unless_zero(&sess_dev->kref);
> > +     read_unlock(&srv_sess->index_lock);
> > +
> > +     if (unlikely(!sess_dev || !ret))
> > +             return ERR_PTR(-ENXIO);
> > +
> > +     return sess_dev;
> > +}
>
> Something that is not important: isn't the sess_dev check superfluous in
> the if-statement just above the return statement? If ret == 1, does that
> imply that sess_dev != 0 ?

We want to have found the device (sess_dev != NULL) and we want to have
been able to take a reference to it (ret != 0)... You are right, if
ret != 0 then sess_dev can't be NULL.
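So the lookup could probably be simplified to something like (untested):

	read_lock(&srv_sess->index_lock);
	sess_dev = idr_find(&srv_sess->index_idr, dev_id);
	if (sess_dev && !kref_get_unless_zero(&sess_dev->kref))
		sess_dev = NULL;
	read_unlock(&srv_sess->index_lock);

	return sess_dev ?: ERR_PTR(-ENXIO);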

> Has it been considered to return -ENODEV instead of -ENXIO if no device
> is found?

The backend block device, e.g. /dev/nullb0, is still there and might
even still be exported over other session(s), so we thought "No such
device or address" (-ENXIO) is more appropriate.

>
> > +static int create_sess(struct ibtrs_srv *ibtrs)
> > +{
>  > [ ... ]
> > +     strlcpy(srv_sess->sessname, sessname, sizeof(srv_sess->sessname));
>
> Please change the session name into a dynamically allocated string such
> that strdup() can be used instead of strlcpy().

OK.

>
> > +static int process_msg_open(struct ibtrs_srv *ibtrs,
> > +                         struct ibnbd_srv_session *srv_sess,
> > +                         const void *msg, size_t len,
> > +                         void *data, size_t datalen);
> > +
> > +static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
> > +                              struct ibnbd_srv_session *srv_sess,
> > +                              const void *msg, size_t len,
> > +                              void *data, size_t datalen);
>
> Can the code be reordered such that these forward declarations can be
> dropped?

Will try to.

>
> > +static struct ibnbd_srv_sess_dev *
> > +ibnbd_srv_create_set_sess_dev(struct ibnbd_srv_session *srv_sess,
> > +                           const struct ibnbd_msg_open *open_msg,
> > +                           struct ibnbd_dev *ibnbd_dev, fmode_t open_flags,
> > +                           struct ibnbd_srv_dev *srv_dev)
> > +{
> > +     struct ibnbd_srv_sess_dev *sdev = ibnbd_sess_dev_alloc(srv_sess);
> > +
> > +     if (IS_ERR(sdev))
> > +             return sdev;
> > +
> > +     kref_init(&sdev->kref);
> > +
> > +     strlcpy(sdev->pathname, open_msg->dev_name, sizeof(sdev->pathname));
>
> Can the path name be changed into a dynamically allocated string?

We could probably just do a kstrdup() and free it afterwards...
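Roughly (untested; the error path would additionally need to undo the
sess_dev allocation):

	sdev->pathname = kstrdup(open_msg->dev_name, GFP_KERNEL);
	if (!sdev->pathname) {
		/* undo the sess_dev allocation here */
		return ERR_PTR(-ENOMEM);
	}

with a matching kfree(sdev->pathname) wherever the sess_dev is torn down.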

>
> > +static char *ibnbd_srv_get_full_path(struct ibnbd_srv_session *srv_sess,
> > +                                  const char *dev_name)
> > +{
> > +     char *full_path;
> > +     char *a, *b;
> > +
> > +     full_path = kmalloc(PATH_MAX, GFP_KERNEL);
> > +     if (!full_path)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     /*
> > +      * Replace %SESSNAME% with a real session name in order to
> > +      * create device namespace.
> > +      */
> > +     a = strnstr(dev_search_path, "%SESSNAME%", sizeof(dev_search_path));
> > +     if (a) {
> > +             int len = a - dev_search_path;
> > +
> > +             len = snprintf(full_path, PATH_MAX, "%.*s/%s/%s", len,
> > +                            dev_search_path, srv_sess->sessname, dev_name);
> > +             if (len >= PATH_MAX) {
> > +                     pr_err("Tooooo looong path: %s, %s, %s\n",
> > +                            dev_search_path, srv_sess->sessname, dev_name);
> > +                     kfree(full_path);
> > +                     return ERR_PTR(-EINVAL);
> > +             }
> > +     } else {
> > +             snprintf(full_path, PATH_MAX, "%s/%s",
> > +                      dev_search_path, dev_name);
> > +     }
>
> Has it been considered to use kasprintf() instead of kmalloc() + snprintf()?

I didn't know there was a kasprintf()... Looks like it would fit here.
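Something like this (untested; the PATH_MAX length check would still
have to be kept or reworked separately):

	if (a) {
		int len = a - dev_search_path;

		full_path = kasprintf(GFP_KERNEL, "%.*s/%s/%s", len,
				      dev_search_path, srv_sess->sessname,
				      dev_name);
	} else {
		full_path = kasprintf(GFP_KERNEL, "%s/%s",
				      dev_search_path, dev_name);
	}
	if (!full_path)
		return ERR_PTR(-ENOMEM);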

> > +static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
> > +                              struct ibnbd_srv_session *srv_sess,
> > +                              const void *msg, size_t len,
> > +                              void *data, size_t datalen)
> > +{
> > +     const struct ibnbd_msg_sess_info *sess_info_msg = msg;
> > +     struct ibnbd_msg_sess_info_rsp *rsp = data;
> > +
> > +     srv_sess->ver = min_t(u8, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
> > +     pr_debug("Session %s using protocol version %d (client version: %d,"
> > +              " server version: %d)\n", srv_sess->sessname,
> > +              srv_sess->ver, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
>
> Has this patch been verified with checkpatch? I think checkpatch
> recommends not to split literal strings.

Yes, it does complain about our split strings. But it's either a split
string, a line over 80 chars, or "Avoid line continuations in quoted
strings" if we use a backslash on the previous line. I don't know how
to avoid all three of them.

> > +/**
> > + * find_srv_sess_dev() - a dev is already opened by this name
> > + *
> > + * Return struct ibnbd_srv_sess_dev if srv_sess already opened the dev_name
> > + * NULL if the session didn't open the device yet.
> > + */
> > +static struct ibnbd_srv_sess_dev *
> > +find_srv_sess_dev(struct ibnbd_srv_session *srv_sess, const char *dev_name)
> > +{
> > +     struct ibnbd_srv_sess_dev *sess_dev;
> > +
> > +     if (list_empty(&srv_sess->sess_dev_list))
> > +             return NULL;
> > +
> > +     list_for_each_entry(sess_dev, &srv_sess->sess_dev_list, sess_list)
> > +             if (!strcmp(sess_dev->pathname, dev_name))
> > +                     return sess_dev;
> > +
> > +     return NULL;
> > +}
>
> Is explicit the list_empty() check really necessary? Would the behavior
> of this function change if that check is left out?
Will drop the check and fix things up if it doesn't work without it
(which I hope it does), thanks.

> Has the posted code been compiled with W=1? I'm asking this because the
> documentation of the function arguments is missing from the kernel-doc
> header. I expect that a warning will be reported if this code is
> compiled with W=1.
Yes, it does; I didn't know about W=1. Will fix those warnings, thank you!
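For find_srv_sess_dev(), for example, the kernel-doc header would become
something along these lines:

	/**
	 * find_srv_sess_dev() - check whether a device is already opened by this session
	 * @srv_sess:	server session to search
	 * @dev_name:	path name of the device
	 *
	 * Return: the ibnbd_srv_sess_dev if @srv_sess has already opened
	 * @dev_name, NULL otherwise.
	 */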

>
> Thanks,
>
> Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-18 15:47             ` Bart Van Assche
@ 2019-09-20  8:29               ` Danil Kipnis
  2019-09-25 22:26               ` Danil Kipnis
  1 sibling, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-09-20  8:29 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang, Roman Pen

On Wed, Sep 18, 2019 at 5:47 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/18/19 12:14 AM, Danil Kipnis wrote:
> > I'm not familiar with dm code, but don't they need to deal with the
> > same situation: if I configure 100 logical volumes on top of a single
> > NVME drive with X hardware queues, each queue_depth deep, then each dm
> > block device would need to advertise X hardware queues in order to
> > achieve highest performance in case only this one volume is accessed,
> > while in fact those X physical queues have to be shared among all 100
> > logical volumes, if they are accessed in parallel?
>
> Combining multiple queues (a) into a single queue (b) that is smaller
> than the combined source queues without sacrificing performance is
> tricky. We already have one such implementation in the block layer core
> and it took considerable time to get that implementation right. See e.g.
> blk_mq_sched_mark_restart_hctx() and blk_mq_sched_restart().
We will need some time to check whether we can reuse those...

> dm drivers are expected to return DM_MAPIO_REQUEUE or
> DM_MAPIO_DELAY_REQUEUE if the queue (b) is full. It turned out to be
> difficult to get this right in the dm-mpath driver and at the same time
> to achieve good performance.
We also first tried to just return error codes when we can't process an
incoming request, but that caused huge performance degradation as the
number of devices mapped over the same session grows. Since we
introduced those per-CPU, per-device lists of stopped queues, we scale
very well.

>
> The ibnbd driver introduces a third implementation of code that combines
> multiple (per-cpu) queues into one queue per CPU. It is considered
> important in the Linux kernel to avoid code duplication. Hence my
> question whether ibnbd can reuse the block layer infrastructure for
> sharing tag sets.
Yes, we will have to iterate on this.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 20/25] ibnbd: server: main functionality
  2019-09-20  7:36     ` Danil Kipnis
@ 2019-09-20 15:42       ` Bart Van Assche
  2019-09-23 15:19         ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-20 15:42 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Roman Pen, Jack Wang

On 9/20/19 12:36 AM, Danil Kipnis wrote:
> On Wed, Sep 18, 2019 at 7:41 PM Bart Van Assche <bvanassche@acm.org> wrote:
>> On 6/20/19 8:03 AM, Jack Wang wrote:
>>> +static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
>>> +                              struct ibnbd_srv_session *srv_sess,
>>> +                              const void *msg, size_t len,
>>> +                              void *data, size_t datalen)
>>> +{
>>> +     const struct ibnbd_msg_sess_info *sess_info_msg = msg;
>>> +     struct ibnbd_msg_sess_info_rsp *rsp = data;
>>> +
>>> +     srv_sess->ver = min_t(u8, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
>>> +     pr_debug("Session %s using protocol version %d (client version: %d,"
>>> +              " server version: %d)\n", srv_sess->sessname,
>>> +              srv_sess->ver, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
>>
>> Has this patch been verified with checkpatch? I think checkpatch
>> recommends not to split literal strings.
> 
> Yes, it does complain about our split strings. But it's either a split
> string, a line over 80 chars, or "Avoid line continuations in quoted
> strings" if we use a backslash on the previous line. I don't know how
> to avoid all three of them.

Checkpatch shouldn't complain about constant strings that exceed 80 
columns. If it complains about such strings then that's a checkpatch bug.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 20/25] ibnbd: server: main functionality
  2019-09-20 15:42       ` Bart Van Assche
@ 2019-09-23 15:19         ` Danil Kipnis
  0 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-09-23 15:19 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Roman Pen, Jack Wang

On Fri, Sep 20, 2019 at 5:42 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/20/19 12:36 AM, Danil Kipnis wrote:
> > On Wed, Sep 18, 2019 at 7:41 PM Bart Van Assche <bvanassche@acm.org> wrote:
> >> On 6/20/19 8:03 AM, Jack Wang wrote:
> >>> +static int process_msg_sess_info(struct ibtrs_srv *ibtrs,
> >>> +                              struct ibnbd_srv_session *srv_sess,
> >>> +                              const void *msg, size_t len,
> >>> +                              void *data, size_t datalen)
> >>> +{
> >>> +     const struct ibnbd_msg_sess_info *sess_info_msg = msg;
> >>> +     struct ibnbd_msg_sess_info_rsp *rsp = data;
> >>> +
> >>> +     srv_sess->ver = min_t(u8, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
> >>> +     pr_debug("Session %s using protocol version %d (client version: %d,"
> >>> +              " server version: %d)\n", srv_sess->sessname,
> >>> +              srv_sess->ver, sess_info_msg->ver, IBNBD_PROTO_VER_MAJOR);
> >>
> >> Has this patch been verified with checkpatch? I think checkpatch
> >> recommends not to split literal strings.
> >
> > Yes, it does complain about our split strings. But it's either a split
> > string, a line over 80 chars, or "Avoid line continuations in quoted
> > strings" if we use a backslash on the previous line. I don't know how
> > to avoid all three of them.
>
> Checkpatch shouldn't complain about constant strings that exceed 80
> columns. If it complains about such strings then that's a checkpatch bug.
It doesn't indeed... Will concatenate those split quoted strings, thank you.
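I.e. something like:

	pr_debug("Session %s using protocol version %d (client version: %d, server version: %d)\n",
		 srv_sess->sessname, srv_sess->ver, sess_info_msg->ver,
		 IBNBD_PROTO_VER_MAJOR);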

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 01/25] sysfs: export sysfs_remove_file_self()
  2019-06-20 15:03 ` [PATCH v4 01/25] sysfs: export sysfs_remove_file_self() Jack Wang
@ 2019-09-23 17:21   ` Bart Van Assche
  2019-09-25  9:30     ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 17:21 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, linux-kernel

On 6/20/19 8:03 AM, Jack Wang wrote:
> Function is going to be used in transport over RDMA module
> in subsequent patches.

It seems like several words are missing from this patch description.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections
  2019-06-20 15:03 ` [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections Jack Wang
@ 2019-09-23 17:44   ` Bart Van Assche
  2019-09-25 10:20     ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 17:44 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> From: Roman Pen <roman.penyaev@profitbricks.com>
> 
> Introduce public header which provides set of API functions to
> establish RDMA connections from client to server machine using
> IBTRS protocol, which manages RDMA connections for each session,
> does multipathing and load balancing.
> 
> Main functions for client (active) side:
> 
>   ibtrs_clt_open() - Creates set of RDMA connections incapsulated
                              ^^^                       ^^^^^^^^^^^^
                                a?                      encapsulated?

>                      in IBTRS session and returns pointer on IBTRS
                         ^^^                       ^^^       ^^
                          a?                        a?       to an?
> 		    session object.
[ ... ]
> +/**
> + * enum ibtrs_clt_link_ev - Events about connectivity state of a client
> + * @IBTRS_CLT_LINK_EV_RECONNECTED	Client was reconnected.
> + * @IBTRS_CLT_LINK_EV_DISCONNECTED	Client was disconnected.
> + */
> +enum ibtrs_clt_link_ev {
> +	IBTRS_CLT_LINK_EV_RECONNECTED,
> +	IBTRS_CLT_LINK_EV_DISCONNECTED,
> +};
> +
> +/**
> + * Source and destination address of a path to be established
> + */
> +struct ibtrs_addr {
> +	struct sockaddr_storage *src;
> +	struct sockaddr_storage *dst;
> +};

Is it really useful to define a structure to hold two pointers or can 
these two pointers also be passed as separate arguments?

> +/**
> + * ibtrs_clt_open() - Open a session to a IBTRS client
> + * @priv:		User supplied private data.
> + * @link_ev:		Event notification for connection state changes
> + *	@priv:			user supplied data that was passed to
> + *				ibtrs_clt_open()
> + *	@ev:			Occurred event
> + * @sessname: name of the session
> + * @paths: Paths to be established defined by their src and dst addresses
> + * @path_cnt: Number of elemnts in the @paths array
> + * @port: port to be used by the IBTRS session
> + * @pdu_sz: Size of extra payload which can be accessed after tag allocation.
> + * @max_inflight_msg: Max. number of parallel inflight messages for the session
> + * @max_segments: Max. number of segments per IO request
> + * @reconnect_delay_sec: time between reconnect tries
> + * @max_reconnect_attempts: Number of times to reconnect on error before giving
> + *			    up, 0 for * disabled, -1 for forever
> + *
> + * Starts session establishment with the ibtrs_server. The function can block
> + * up to ~2000ms until it returns.
> + *
> + * Return a valid pointer on success otherwise PTR_ERR.
> + */
> +struct ibtrs_clt *ibtrs_clt_open(void *priv, link_clt_ev_fn *link_ev,
> +				 const char *sessname,
> +				 const struct ibtrs_addr *paths,
> +				 size_t path_cnt, short port,
> +				 size_t pdu_sz, u8 reconnect_delay_sec,
> +				 u16 max_segments,
> +				 s16 max_reconnect_attempts);

Having detailed kernel-doc headers for describing API functions is great 
but I'm not sure a .h file is the best location for such documentation. 
Many kernel developers keep kernel-doc headers in .c files because that 
makes it more likely that the documentation and the implementation stay 
in sync.

> +
> +/**
> + * ibtrs_clt_close() - Close a session
> + * @sess: Session handler, is freed on return
                      ^^^^^^^
                      handle?

This sentence suggests that the handle is freed on return. I guess that 
you meant that the session is freed upon return?

> +/**
> + * ibtrs_clt_get_tag() - allocates tag for future RDMA operation
> + * @sess:	Current session
> + * @con_type:	Type of connection to use with the tag
> + * @wait:	Wait type
> + *
> + * Description:
> + *    Allocates tag for the following RDMA operation.  Tag is used
> + *    to preallocate all resources and to propagate memory pressure
> + *    up earlier.
> + *
> + * Context:
> + *    Can sleep if @wait == IBTRS_TAG_WAIT
> + */
> +struct ibtrs_tag *ibtrs_clt_get_tag(struct ibtrs_clt *sess,
> +				    enum ibtrs_clt_con_type con_type,
> +				    int wait);

Since struct ibtrs_tag has another role than what is called a tag in the 
block layer I think a better description is needed of what struct 
ibtrs_tag actually represents.

> +/*
> + * Here goes IBTRS server API
> + */

Most software either uses the client API or the server API but not both 
at the same time. Has it been considered to use separate header files 
for the client and server APIs?

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-06-20 15:03 ` [PATCH v4 06/25] ibtrs: client: main functionality Jack Wang
@ 2019-09-23 21:51   ` Bart Van Assche
  2019-09-25 17:36     ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 21:51 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +static const struct ibtrs_ib_dev_pool_ops dev_pool_ops;
> +static struct ibtrs_ib_dev_pool dev_pool = {
> +	.ops = &dev_pool_ops
> +};

Can the definitions in this file be reordered such that the forward 
declaration of dev_pool_ops can be removed?

> +static void ibtrs_rdma_error_recovery(struct ibtrs_clt_con *con);
> +static int ibtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
> +				     struct rdma_cm_event *ev);
> +static void ibtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc);
> +static void complete_rdma_req(struct ibtrs_clt_io_req *req, int errno,
> +			      bool notify, bool can_wait);
> +static int ibtrs_clt_write_req(struct ibtrs_clt_io_req *req);
> +static int ibtrs_clt_read_req(struct ibtrs_clt_io_req *req);

Please also remove these forward declarations.

> +bool ibtrs_clt_sess_is_connected(const struct ibtrs_clt_sess *sess)
> +{
> +	return sess->state == IBTRS_CLT_CONNECTED;
> +}

Is it really useful to introduce a one line function for testing the 
session state?

> +static inline struct ibtrs_tag *
> +__ibtrs_get_tag(struct ibtrs_clt *clt, enum ibtrs_clt_con_type con_type)
> +{
> +	size_t max_depth = clt->queue_depth;
> +	struct ibtrs_tag *tag;
> +	int cpu, bit;
> +
> +	cpu = get_cpu();
> +	do {
> +		bit = find_first_zero_bit(clt->tags_map, max_depth);
> +		if (unlikely(bit >= max_depth)) {
> +			put_cpu();
> +			return NULL;
> +		}
> +
> +	} while (unlikely(test_and_set_bit_lock(bit, clt->tags_map)));
> +	put_cpu();
> +
> +	tag = GET_TAG(clt, bit);
> +	WARN_ON(tag->mem_id != bit);
> +	tag->cpu_id = cpu;
> +	tag->con_type = con_type;
> +
> +	return tag;
> +}

What is the role of the get_cpu() and put_cpu() calls in this function? 
How can it make sense to assign the cpu number to tag->cpu_id after 
put_cpu() has been called?

> +static inline void ibtrs_clt_init_req(struct ibtrs_clt_io_req *req,
> +				      struct ibtrs_clt_sess *sess,
> +				      ibtrs_conf_fn *conf,
> +				      struct ibtrs_tag *tag, void *priv,
> +				      const struct kvec *vec, size_t usr_len,
> +				      struct scatterlist *sg, size_t sg_cnt,
> +				      size_t data_len, int dir)
> +{
> +	struct iov_iter iter;
> +	size_t len;
> +
> +	req->tag = tag;
> +	req->in_use = true;
> +	req->usr_len = usr_len;
> +	req->data_len = data_len;
> +	req->sglist = sg;
> +	req->sg_cnt = sg_cnt;
> +	req->priv = priv;
> +	req->dir = dir;
> +	req->con = ibtrs_tag_to_clt_con(sess, tag);
> +	req->conf = conf;
> +	req->need_inv = false;
> +	req->need_inv_comp = false;
> +	req->inv_errno = 0;
> +
> +	iov_iter_kvec(&iter, READ, vec, 1, usr_len);
> +	len = _copy_from_iter(req->iu->buf, usr_len, &iter);
> +	WARN_ON(len != usr_len);
> +
> +	reinit_completion(&req->inv_comp);
> +	if (sess->stats.enable_rdma_lat)
> +		req->start_jiffies = jiffies;
> +}

A comment that explains what "req" stands for would be welcome. Since 
this function copies the entire payload, I assume that it is only used 
for control messages and not for reading or writing data from a block 
device?

> +static int ibtrs_clt_failover_req(struct ibtrs_clt *clt,
> +				  struct ibtrs_clt_io_req *fail_req)
> +{
> +	struct ibtrs_clt_sess *alive_sess;
> +	struct ibtrs_clt_io_req *req;
> +	int err = -ECONNABORTED;
> +	struct path_it it;
> +
> +	do_each_path(alive_sess, clt, &it) {
> +		if (unlikely(alive_sess->state != IBTRS_CLT_CONNECTED))
> +			continue;
> +		req = ibtrs_clt_get_copy_req(alive_sess, fail_req);
> +		if (req->dir == DMA_TO_DEVICE)
> +			err = ibtrs_clt_write_req(req);
> +		else
> +			err = ibtrs_clt_read_req(req);
> +		if (unlikely(err)) {
> +			req->in_use = false;
> +			continue;
> +		}
> +		/* Success path */
> +		ibtrs_clt_inc_failover_cnt(&alive_sess->stats);
> +		break;
> +	} while_each_path(&it);
> +
> +	return err;
> +}

Also for this function, a comment that explains the purpose of this 
function would be welcome.

> +static void fail_all_outstanding_reqs(struct ibtrs_clt_sess *sess)
> +{
> +	struct ibtrs_clt *clt = sess->clt;
> +	struct ibtrs_clt_io_req *req;
> +	int i, err;
> +
> +	if (!sess->reqs)
> +		return;
> +	for (i = 0; i < sess->queue_depth; ++i) {
> +		req = &sess->reqs[i];
> +		if (!req->in_use)
> +			continue;
> +
> +		/*
> +		 * Safely (without notification) complete failed request.
> +		 * After completion this request is still usebale and can
> +		 * be failovered to another path.
> +		 */
> +		complete_rdma_req(req, -ECONNABORTED, false, true);
> +
> +		err = ibtrs_clt_failover_req(clt, req);
> +		if (unlikely(err))
> +			/* Failover failed, notify anyway */
> +			req->conf(req->priv, err);
> +	}
> +}

What guarantees that this function does not call complete_rdma_req() 
while complete_rdma_req() is called from the regular completion path?

> +static bool __ibtrs_clt_change_state(struct ibtrs_clt_sess *sess,
> +				     enum ibtrs_clt_state new_state)
> +{
> +	enum ibtrs_clt_state old_state;
> +	bool changed = false;
> +
> +	old_state = sess->state;
> +	switch (new_state) {

Please use lockdep_assert_held() inside this function to verify at 
runtime that session state changes are serialized properly.

> +static enum ibtrs_clt_state ibtrs_clt_state(struct ibtrs_clt_sess *sess)
> +{
> +	enum ibtrs_clt_state state;
> +
> +	spin_lock_irq(&sess->state_wq.lock);
> +	state = sess->state;
> +	spin_unlock_irq(&sess->state_wq.lock);
> +
> +	return state;
> +}

Please remove this function and read sess->state without holding 
state_wq.lock.

> +static void ibtrs_clt_hb_err_handler(struct ibtrs_con *c, int err)
> +{
> +	struct ibtrs_clt_con *con;
> +
> +	(void)err;
> +	con = container_of(c, typeof(*con), c);
> +	ibtrs_rdma_error_recovery(con);
> +}

Can "(void)err" be left out?

Can the declaration and assignment of 'con' be merged into a single line 
of code?

> +static int create_con(struct ibtrs_clt_sess *sess, unsigned int cid)
> +{
> +	struct ibtrs_clt_con *con;
> +
> +	con = kzalloc(sizeof(*con), GFP_KERNEL);
> +	if (unlikely(!con))
> +		return -ENOMEM;
> +
> +	/* Map first two connections to the first CPU */
> +	con->cpu  = (cid ? cid - 1 : 0) % nr_cpu_ids;
> +	con->c.cid = cid;
> +	con->c.sess = &sess->s;
> +	atomic_set(&con->io_cnt, 0);
> +
> +	sess->s.con[cid] = &con->c;
> +
> +	return 0;
> +}

The code to map a connection ID to onto a CPU occurs multiple times. Has 
it been considered to introduce a function for that mapping? Although 
one-line inline functions are not recommended in general, such a 
function will also make it easier to experiment with other mapping 
approaches, e.g. mapping hypertread siblings onto the same connection ID.

> +static inline bool xchg_sessions(struct ibtrs_clt_sess __rcu **rcu_ppcpu_path,
> +				 struct ibtrs_clt_sess *sess,
> +				 struct ibtrs_clt_sess *next)
> +{
> +	struct ibtrs_clt_sess **ppcpu_path;
> +
> +	/* Call cmpxchg() without sparse warnings */
> +	ppcpu_path = (typeof(ppcpu_path))rcu_ppcpu_path;
> +	return (sess == cmpxchg(ppcpu_path, sess, next));
> +}

This looks suspicious. Has it been considered to protect changes of 
rcu_ppcpu_path with a mutex and to protect reads with an RCU read lock?

> +static void ibtrs_clt_add_path_to_arr(struct ibtrs_clt_sess *sess,
> +				      struct ibtrs_addr *addr)
> +{
> +	struct ibtrs_clt *clt = sess->clt;
> +
> +	mutex_lock(&clt->paths_mutex);
> +	clt->paths_num++;
> +
> +	/*
> +	 * Firstly increase paths_num, wait for GP and then
> +	 * add path to the list.  Why?  Since we add path with
> +	 * !CONNECTED state explanation is similar to what has
> +	 * been written in ibtrs_clt_remove_path_from_arr().
> +	 */
> +	synchronize_rcu();
> +
> +	list_add_tail_rcu(&sess->s.entry, &clt->paths_list);
> +	mutex_unlock(&clt->paths_mutex);
> +}

synchronize_rcu() while a mutex is being held? Really?

> +static void ibtrs_clt_close_work(struct work_struct *work)
> +{
> +	struct ibtrs_clt_sess *sess;
> +
> +	sess = container_of(work, struct ibtrs_clt_sess, close_work);
> +
> +	cancel_delayed_work_sync(&sess->reconnect_dwork);
> +	ibtrs_clt_stop_and_destroy_conns(sess);
> +	/*
> +	 * Sounds stupid, huh?  No, it is not.  Consider this sequence:
> +	 *
> +	 *   #CPU0                              #CPU1
> +	 *   1.  CONNECTED->RECONNECTING
> +	 *   2.                                 RECONNECTING->CLOSING
> +	 *   3.  queue_work(&reconnect_dwork)
> +	 *   4.                                 queue_work(&close_work);
> +	 *   5.  reconnect_work();              close_work();
> +	 *
> +	 * To avoid that case do cancel twice: before and after.
> +	 */
> +	cancel_delayed_work_sync(&sess->reconnect_dwork);
> +	ibtrs_clt_change_state(sess, IBTRS_CLT_CLOSED);
> +}

The above code looks suspicious to me. I think there should be an 
additional state change at the start of this function to prevent that 
reconnect_dwork gets requeued after having been canceled.

> +static void ibtrs_clt_dev_release(struct device *dev)
> +{
> +	/* Nobody plays with device references, so nop */
> +}

That comment sounds wrong. Have you reviewed all of the device driver 
core code and checked that there is no code in there that manipulates 
struct device refcounts? I think the code that frees struct ibtrs_clt 
should be moved from free_clt() into the above function.

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers
  2019-06-20 15:03 ` [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers Jack Wang
@ 2019-09-23 22:50   ` Bart Van Assche
  2019-09-25 21:45     ` Danil Kipnis
  2019-09-27  8:56     ` Jinpu Wang
  0 siblings, 2 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 22:50 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +#define P1 )
> +#define P2 ))
> +#define P3 )))
> +#define P4 ))))
> +#define P(N) P ## N
> +
> +#define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)
> +#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__
> +
> +#define LIST(...)						\
> +	__VA_ARGS__,						\
> +	({ unknown_type(); NULL; })				\
> +	CAT(P, COUNT_ARGS(__VA_ARGS__))				\
> +
> +#define EMPTY()
> +#define DEFER(id) id EMPTY()
> +
> +#define _CASE(obj, type, member)				\
> +	__builtin_choose_expr(					\
> +	__builtin_types_compatible_p(				\
> +		typeof(obj), type),				\
> +		((type)obj)->member
> +#define CASE(o, t, m) DEFER(_CASE)(o, t, m)
> +
> +/*
> + * Below we define retrieving of sessname from common IBTRS types.
> + * Client or server related types have to be defined by special
> + * TYPES_TO_SESSNAME macro.
> + */
> +
> +void unknown_type(void);
> +
> +#ifndef TYPES_TO_SESSNAME
> +#define TYPES_TO_SESSNAME(...) ({ unknown_type(); NULL; })
> +#endif
> +
> +#define ibtrs_prefix(obj)					\
> +	_CASE(obj, struct ibtrs_con *,  sess->sessname),	\
> +	_CASE(obj, struct ibtrs_sess *, sessname),		\
> +	TYPES_TO_SESSNAME(obj)					\
> +	))

No preprocessor voodoo please. Please remove all of the above and modify 
the logging statements such that these pass the proper name string as 
first argument to logging macros.

> +struct ibtrs_msg_conn_req {
> +	u8		__cma_version; /* Is set to 0 by cma.c in case of
> +					* AF_IB, do not touch that. */
> +	u8		__ip_version;  /* On sender side that should be
> +					* set to 0, or cma_save_ip_info()
> +					* extract garbage and will fail. */
> +	__le16		magic;
> +	__le16		version;
> +	__le16		cid;
> +	__le16		cid_num;
> +	__le16		recon_cnt;
> +	uuid_t		sess_uuid;
> +	uuid_t		paths_uuid;
> +	u8		reserved[12];
> +};

Please remove the reserved[] array and check private_data_len in the 
code that receives the login request.

> +/**
> + * struct ibtrs_msg_conn_rsp - Server connection response to the client
> + * @magic:	   IBTRS magic
> + * @version:	   IBTRS protocol version
> + * @errno:	   If rdma_accept() then 0, if rdma_reject() indicates error
> + * @queue_depth:   max inflight messages (queue-depth) in this session
> + * @max_io_size:   max io size server supports
> + * @max_hdr_size:  max msg header size server supports
> + *
> + * NOTE: size is 56 bytes, max possible is 136 bytes, see man rdma_accept().
> + */
> +struct ibtrs_msg_conn_rsp {
> +	__le16		magic;
> +	__le16		version;
> +	__le16		errno;
> +	__le16		queue_depth;
> +	__le32		max_io_size;
> +	__le32		max_hdr_size;
> +	u8		reserved[40];
> +};

Same comment here: please remove the reserved[] array and check 
private_data_len in the code that processes this data structure.

> +static inline int sockaddr_cmp(const struct sockaddr *a,
> +			       const struct sockaddr *b)
> +{
> +	switch (a->sa_family) {
> +	case AF_IB:
> +		return memcmp(&((struct sockaddr_ib *)a)->sib_addr,
> +			      &((struct sockaddr_ib *)b)->sib_addr,
> +			      sizeof(struct ib_addr));
> +	case AF_INET:
> +		return memcmp(&((struct sockaddr_in *)a)->sin_addr,
> +			      &((struct sockaddr_in *)b)->sin_addr,
> +			      sizeof(struct in_addr));
> +	case AF_INET6:
> +		return memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> +			      &((struct sockaddr_in6 *)b)->sin6_addr,
> +			      sizeof(struct in6_addr));
> +	default:
> +		return -ENOENT;
> +	}
> +}
> +
> +static inline int sockaddr_to_str(const struct sockaddr *addr,
> +				   char *buf, size_t len)
> +{
> +	int cnt;
> +
> +	switch (addr->sa_family) {
> +	case AF_IB:
> +		cnt = scnprintf(buf, len, "gid:%pI6",
> +			&((struct sockaddr_ib *)addr)->sib_addr.sib_raw);
> +		return cnt;
> +	case AF_INET:
> +		cnt = scnprintf(buf, len, "ip:%pI4",
> +			&((struct sockaddr_in *)addr)->sin_addr);
> +		return cnt;
> +	case AF_INET6:
> +		cnt = scnprintf(buf, len, "ip:%pI6c",
> +			  &((struct sockaddr_in6 *)addr)->sin6_addr);
> +		return cnt;
> +	}
> +	cnt = scnprintf(buf, len, "<invalid address family>");
> +	pr_err("Invalid address family\n");
> +	return cnt;
> +}

Since these functions are not in the hot path, please move these into a 
.c file.

> +/**
> + * ibtrs_invalidate_flag() - returns proper flags for invalidation
> + *
> + * NOTE: This function is needed for compat layer, so think twice before
> + *       rename or remove.
> + */
> +static inline u32 ibtrs_invalidate_flag(void)
> +{
> +	return IBTRS_MSG_NEED_INVAL_F;
> +}

An inline function that does nothing else than returning a compile-time 
constant? That does not look useful to me. How about inlining this function?

> +#define STAT_STORE_FUNC(type, store, reset)				\
> +static ssize_t store##_store(struct kobject *kobj,			\
> +			     struct kobj_attribute *attr,		\
> +			     const char *buf, size_t count)		\
> +{									\
> +	int ret = -EINVAL;						\
> +	type *sess = container_of(kobj, type, kobj_stats);		\
> +									\
> +	if (sysfs_streq(buf, "1"))					\
> +		ret = reset(&sess->stats, true);			\
> +	else if (sysfs_streq(buf, "0"))					\
> +		ret = reset(&sess->stats, false);			\
> +	if (ret)							\
> +		return ret;						\
> +									\
> +	return count;							\
> +}

The above macro concatenates the suffix "_store" to a macro argument 
with the name 'store'. Please chose a less confusing name for the macro 
argument. Additionally, using 'reset' for the name of an macro argument 
that is a function that stores a value seems confusing to me. How about 
renaming that macro argument into 'set' or 'store_value'?

> +#define STAT_SHOW_FUNC(type, show, print)				\
> +static ssize_t show##_show(struct kobject *kobj,			\
> +			   struct kobj_attribute *attr,			\
> +			   char *page)					\
> +{									\
> +	type *sess = container_of(kobj, type, kobj_stats);		\
> +									\
> +	return print(&sess->stats, page, PAGE_SIZE);			\
> +}

Same comment for the macro argument 'show' in the above function.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 04/25] ibtrs: core: lib functions shared between client and server modules
  2019-06-20 15:03 ` [PATCH v4 04/25] ibtrs: core: lib functions shared between client and server modules Jack Wang
@ 2019-09-23 23:03   ` Bart Van Assche
  2019-09-27 10:13     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 23:03 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +static int ibtrs_str_gid_to_sockaddr(const char *addr, size_t len,
> +				     short port, struct sockaddr_storage *dst)
> +{
> +	struct sockaddr_ib *dst_ib = (struct sockaddr_ib *)dst;
> +	int ret;
> +
> +	/*
> +	 * We can use some of the I6 functions since GID is a valid
> +	 * IPv6 address format
> +	 */
> +	ret = in6_pton(addr, len, dst_ib->sib_addr.sib_raw, '\0', NULL);
> +	if (ret == 0)
> +		return -EINVAL;
> +
> +	dst_ib->sib_family = AF_IB;
> +	/*
> +	 * Use the same TCP server port number as the IB service ID
> +	 * on the IB port space range
> +	 */
> +	dst_ib->sib_sid = cpu_to_be64(RDMA_IB_IP_PS_IB | port);
> +	dst_ib->sib_sid_mask = cpu_to_be64(0xffffffffffffffffULL);
> +	dst_ib->sib_pkey = cpu_to_be16(0xffff);
> +
> +	return 0;
> +}
> +
> +/**
> + * ibtrs_str_to_sockaddr() - Convert ibtrs address string to sockaddr
> + * @addr	String representation of an addr (IPv4, IPv6 or IB GID):
> + *              - "ip:192.168.1.1"
> + *              - "ip:fe80::200:5aee:feaa:20a2"
> + *              - "gid:fe80::200:5aee:feaa:20a2"
> + * @len         String address length
> + * @port	Destination port
> + * @dst		Destination sockaddr structure
> + *
> + * Returns 0 if conversion successful. Non-zero on error.
> + */
> +static int ibtrs_str_to_sockaddr(const char *addr, size_t len,
> +				 short port, struct sockaddr_storage *dst)
> +{
> +	if (strncmp(addr, "gid:", 4) == 0) {
> +		return ibtrs_str_gid_to_sockaddr(addr + 4, len - 4, port, dst);
> +	} else if (strncmp(addr, "ip:", 3) == 0) {
> +		char port_str[8];
> +		char *cpy;
> +		int err;
> +
> +		snprintf(port_str, sizeof(port_str), "%u", port);
> +		cpy = kstrndup(addr + 3, len - 3, GFP_KERNEL);
> +		err = cpy ? inet_pton_with_scope(&init_net, AF_UNSPEC,
> +						 cpy, port_str, dst) : -ENOMEM;
> +		kfree(cpy);
> +
> +		return err;
> +	}
> +	return -EPROTONOSUPPORT;
> +}

A considerable amount of code is required to support the IB/CM. Does 
supporting the IB/CM add any value? If that code would be left out, 
would anything break? Is it really useful to support IB networks where 
no IP address has been assigned to each IB port?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 05/25] ibtrs: client: private header with client structs and functions
  2019-06-20 15:03 ` [PATCH v4 05/25] ibtrs: client: private header with client structs and functions Jack Wang
@ 2019-09-23 23:05   ` Bart Van Assche
  2019-09-27 10:18     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 23:05 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +static inline const char *ibtrs_clt_state_str(enum ibtrs_clt_state state)
> +{
> +	switch (state) {
> +	case IBTRS_CLT_CONNECTING:
> +		return "IBTRS_CLT_CONNECTING";
> +	case IBTRS_CLT_CONNECTING_ERR:
> +		return "IBTRS_CLT_CONNECTING_ERR";
> +	case IBTRS_CLT_RECONNECTING:
> +		return "IBTRS_CLT_RECONNECTING";
> +	case IBTRS_CLT_CONNECTED:
> +		return "IBTRS_CLT_CONNECTED";
> +	case IBTRS_CLT_CLOSING:
> +		return "IBTRS_CLT_CLOSING";
> +	case IBTRS_CLT_CLOSED:
> +		return "IBTRS_CLT_CLOSED";
> +	case IBTRS_CLT_DEAD:
> +		return "IBTRS_CLT_DEAD";
> +	default:
> +		return "UNKNOWN";
> +	}
> +}

Since this code is not in the hot path, please move it from a .h into a 
.c file.

> +static inline struct ibtrs_clt_con *to_clt_con(struct ibtrs_con *c)
> +{
> +	return container_of(c, struct ibtrs_clt_con, c);
> +}
> +
> +static inline struct ibtrs_clt_sess *to_clt_sess(struct ibtrs_sess *s)
> +{
> +	return container_of(s, struct ibtrs_clt_sess, s);
> +}

Is it really useful to define functions for these conversions? Has it 
been considered to inline these functions?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 07/25] ibtrs: client: statistics functions
  2019-06-20 15:03 ` [PATCH v4 07/25] ibtrs: client: statistics functions Jack Wang
@ 2019-09-23 23:15   ` Bart Van Assche
  2019-09-27 12:00     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 23:15 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +void ibtrs_clt_update_rdma_lat(struct ibtrs_clt_stats *stats, bool read,
> +			       unsigned long ms)
> +{
> +	struct ibtrs_clt_stats_pcpu *s;
> +	int id;
> +
> +	id = ibtrs_clt_ms_to_id(ms);
> +	s = this_cpu_ptr(stats->pcpu_stats);
> +	if (read) {
> +		s->rdma_lat_distr[id].read++;
> +		if (s->rdma_lat_max.read < ms)
> +			s->rdma_lat_max.read = ms;
> +	} else {
> +		s->rdma_lat_distr[id].write++;
> +		if (s->rdma_lat_max.write < ms)
> +			s->rdma_lat_max.write = ms;
> +	}
> +}

Can it happen that this function is called simultaneously from thread 
context and from interrupt context?

> +void ibtrs_clt_update_wc_stats(struct ibtrs_clt_con *con)
> +{
> +	struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
> +	struct ibtrs_clt_stats *stats = &sess->stats;
> +	struct ibtrs_clt_stats_pcpu *s;
> +	int cpu;
> +
> +	cpu = raw_smp_processor_id();
> +	s = this_cpu_ptr(stats->pcpu_stats);
> +	s->wc_comp.cnt++;
> +	s->wc_comp.total_cnt++;
> +	if (unlikely(con->cpu != cpu)) {
> +		s->cpu_migr.to++;
> +
> +		/* Careful here, override s pointer */
> +		s = per_cpu_ptr(stats->pcpu_stats, con->cpu);
> +		atomic_inc(&s->cpu_migr.from);
> +	}
> +}

Same question here.

> +void ibtrs_clt_inc_failover_cnt(struct ibtrs_clt_stats *stats)
> +{
> +	struct ibtrs_clt_stats_pcpu *s;
> +
> +	s = this_cpu_ptr(stats->pcpu_stats);
> +	s->rdma.failover_cnt++;
> +}

And here ...

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 09/25] ibtrs: server: private header with server structs and functions
  2019-06-20 15:03 ` [PATCH v4 09/25] ibtrs: server: private header with server structs and functions Jack Wang
@ 2019-09-23 23:21   ` Bart Van Assche
  2019-09-27 12:04     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 23:21 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +static inline const char *ibtrs_srv_state_str(enum ibtrs_srv_state state)
> +{
> +	switch (state) {
> +	case IBTRS_SRV_CONNECTING:
> +		return "IBTRS_SRV_CONNECTING";
> +	case IBTRS_SRV_CONNECTED:
> +		return "IBTRS_SRV_CONNECTED";
> +	case IBTRS_SRV_CLOSING:
> +		return "IBTRS_SRV_CLOSING";
> +	case IBTRS_SRV_CLOSED:
> +		return "IBTRS_SRV_CLOSED";
> +	default:
> +		return "UNKNOWN";
> +	}
> +}

Since this function is not in the hot path, please move it into a .c file.

> +/* See ibtrs-log.h */
> +#define TYPES_TO_SESSNAME(obj)						\
> +	LIST(CASE(obj, struct ibtrs_srv_sess *, s.sessname))

Please remove this macro and pass 'sessname' explicitly to logging 
functions.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 10/25] ibtrs: server: main functionality
  2019-06-20 15:03 ` [PATCH v4 10/25] ibtrs: server: main functionality Jack Wang
@ 2019-09-23 23:49   ` Bart Van Assche
  2019-09-27 15:03     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 23:49 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +module_param_named(max_chunk_size, max_chunk_size, int, 0444);
> +MODULE_PARM_DESC(max_chunk_size,
> +		 "Max size for each IO request, when change the unit is in byte"
> +		 " (default: " __stringify(DEFAULT_MAX_CHUNK_SIZE_KB) "KB)");

Where can I find the definition of DEFAULT_MAX_CHUNK_SIZE_KB?

> +static char cq_affinity_list[256] = "";

No empty initializers for file-scope variables please.

> +	pr_info("cq_affinity_list changed to %*pbl\n",
> +		cpumask_pr_args(&cq_affinity_mask));

Should this pr_info() call perhaps be changed into pr_debug()?

> +static bool __ibtrs_srv_change_state(struct ibtrs_srv_sess *sess,
> +				     enum ibtrs_srv_state new_state)
> +{
> +	enum ibtrs_srv_state old_state;
> +	bool changed = false;
> +
> +	old_state = sess->state;
> +	switch (new_state) {

Please add a lockdep_assert_held() statement that checks whether calls 
of this function are serialized properly.

> +/**
> + * rdma_write_sg() - response on successful READ request
> + */
> +static int rdma_write_sg(struct ibtrs_srv_op *id)
> +{
> +	struct ibtrs_srv_sess *sess = to_srv_sess(id->con->c.sess);
> +	dma_addr_t dma_addr = sess->dma_addr[id->msg_id];
> +	struct ibtrs_srv *srv = sess->srv;
> +	struct ib_send_wr inv_wr, imm_wr;
> +	struct ib_rdma_wr *wr = NULL;
> +	const struct ib_send_wr *bad_wr;
> +	enum ib_send_flags flags;
> +	size_t sg_cnt;
> +	int err, i, offset;
> +	bool need_inval;
> +	u32 rkey = 0;
> +
> +	sg_cnt = le16_to_cpu(id->rd_msg->sg_cnt);
> +	need_inval = le16_to_cpu(id->rd_msg->flags) & IBTRS_MSG_NEED_INVAL_F;
> +	if (unlikely(!sg_cnt))
> +		return -EINVAL;
> +
> +	offset = 0;
> +	for (i = 0; i < sg_cnt; i++) {
> +		struct ib_sge *list;
> +
> +		wr		= &id->tx_wr[i];
> +		list		= &id->tx_sg[i];
> +		list->addr	= dma_addr + offset;
> +		list->length	= le32_to_cpu(id->rd_msg->desc[i].len);
> +
> +		/* WR will fail with length error
> +		 * if this is 0
> +		 */
> +		if (unlikely(list->length == 0)) {
> +			ibtrs_err(sess, "Invalid RDMA-Write sg list length 0\n");
> +			return -EINVAL;
> +		}
> +
> +		list->lkey = sess->s.dev->ib_pd->local_dma_lkey;
> +		offset += list->length;
> +
> +		wr->wr.wr_cqe	= &io_comp_cqe;
> +		wr->wr.sg_list	= list;
> +		wr->wr.num_sge	= 1;
> +		wr->remote_addr	= le64_to_cpu(id->rd_msg->desc[i].addr);
> +		wr->rkey	= le32_to_cpu(id->rd_msg->desc[i].key);
> +		if (rkey == 0)
> +			rkey = wr->rkey;
> +		else
> +			/* Only one key is actually used */
> +			WARN_ON_ONCE(rkey != wr->rkey);
> +
> +		if (i < (sg_cnt - 1))
> +			wr->wr.next = &id->tx_wr[i + 1].wr;
> +		else if (need_inval)
> +			wr->wr.next = &inv_wr;
> +		else
> +			wr->wr.next = &imm_wr;
> +
> +		wr->wr.opcode = IB_WR_RDMA_WRITE;
> +		wr->wr.ex.imm_data = 0;
> +		wr->wr.send_flags  = 0;
> +	}
> +	/*
> +	 * From time to time we have to post signalled sends,
> +	 * or send queue will fill up and only QP reset can help.
> +	 */
> +	flags = atomic_inc_return(&id->con->wr_cnt) % srv->queue_depth ?
> +			0 : IB_SEND_SIGNALED;
> +
> +	if (need_inval) {
> +		inv_wr.next = &imm_wr;
> +		inv_wr.wr_cqe = &io_comp_cqe;
> +		inv_wr.sg_list = NULL;
> +		inv_wr.num_sge = 0;
> +		inv_wr.opcode = IB_WR_SEND_WITH_INV;
> +		inv_wr.send_flags = 0;
> +		inv_wr.ex.invalidate_rkey = rkey;
> +	}
> +	imm_wr.next = NULL;
> +	imm_wr.wr_cqe = &io_comp_cqe;
> +	imm_wr.sg_list = NULL;
> +	imm_wr.num_sge = 0;
> +	imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
> +	imm_wr.send_flags = flags;
> +	imm_wr.ex.imm_data = cpu_to_be32(ibtrs_to_io_rsp_imm(id->msg_id,
> +							     0, need_inval));
> +
> +	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, dma_addr,
> +				      offset, DMA_BIDIRECTIONAL);
> +
> +	err = ib_post_send(id->con->c.qp, &id->tx_wr[0].wr, &bad_wr);
> +	if (unlikely(err))
> +		ibtrs_err(sess,
> +			  "Posting RDMA-Write-Request to QP failed, err: %d\n",
> +			  err);
> +
> +	return err;
> +}

All other RDMA server implementations use rdma_rw_ctx_init() and 
rdma_rw_ctx_wrs(). Please use these functions in IBTRS too.

> +static void ibtrs_srv_hb_err_handler(struct ibtrs_con *c, int err)
> +{
> +	(void)err;
> +	close_sess(to_srv_sess(c->sess));
> +}

Is the (void)err statement really necessary?

> +static int ibtrs_srv_rdma_init(struct ibtrs_srv_ctx *ctx, unsigned int port)
> +{
> +	struct sockaddr_in6 sin = {
> +		.sin6_family	= AF_INET6,
> +		.sin6_addr	= IN6ADDR_ANY_INIT,
> +		.sin6_port	= htons(port),
> +	};
> +	struct sockaddr_ib sib = {
> +		.sib_family			= AF_IB,
> +		.sib_addr.sib_subnet_prefix	= 0ULL,
> +		.sib_addr.sib_interface_id	= 0ULL,
> +		.sib_sid	= cpu_to_be64(RDMA_IB_IP_PS_IB | port),
> +		.sib_sid_mask	= cpu_to_be64(0xffffffffffffffffULL),
> +		.sib_pkey	= cpu_to_be16(0xffff),
> +	};
> +	struct rdma_cm_id *cm_ip, *cm_ib;
> +	int ret;
> +
> +	/*
> +	 * We accept both IPoIB and IB connections, so we need to keep
> +	 * two cm id's, one for each socket type and port space.
> +	 * If the cm initialization of one of the id's fails, we abort
> +	 * everything.
> +	 */
> +	cm_ip = ibtrs_srv_cm_init(ctx, (struct sockaddr *)&sin, RDMA_PS_TCP);
> +	if (unlikely(IS_ERR(cm_ip)))
> +		return PTR_ERR(cm_ip);
> +
> +	cm_ib = ibtrs_srv_cm_init(ctx, (struct sockaddr *)&sib, RDMA_PS_IB);
> +	if (unlikely(IS_ERR(cm_ib))) {
> +		ret = PTR_ERR(cm_ib);
> +		goto free_cm_ip;
> +	}
> +
> +	ctx->cm_id_ip = cm_ip;
> +	ctx->cm_id_ib = cm_ib;
> +
> +	return 0;
> +
> +free_cm_ip:
> +	rdma_destroy_id(cm_ip);
> +
> +	return ret;
> +}

Will the above work if CONFIG_IPV6=n?

> +static int __init ibtrs_server_init(void)
> +{
> +	int err;
> +
> +	if (!strlen(cq_affinity_list))
> +		init_cq_affinity();

Is the above if-test useful? Can that if-test be left out?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 11/25] ibtrs: server: statistics functions
  2019-06-20 15:03 ` [PATCH v4 11/25] ibtrs: server: statistics functions Jack Wang
@ 2019-09-23 23:56   ` Bart Van Assche
  2019-10-02 15:15     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-23 23:56 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
> +				    char *page, size_t len)
> +{
> +	struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
> +	struct ibtrs_srv_sess *sess;
> +
> +	sess = container_of(stats, typeof(*sess), stats);
> +
> +	return scnprintf(page, len, "%lld %lld %lld %lld %u\n",
> +			 (s64)atomic64_read(&r->dir[READ].cnt),
> +			 (s64)atomic64_read(&r->dir[READ].size_total),
> +			 (s64)atomic64_read(&r->dir[WRITE].cnt),
> +			 (s64)atomic64_read(&r->dir[WRITE].size_total),
> +			 atomic_read(&sess->ids_inflight));
> +}

Does this follow the sysfs one-value-per-file rule? See also 
Documentation/filesystems/sysfs.txt.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 12/25] ibtrs: server: sysfs interface functions
  2019-06-20 15:03 ` [PATCH v4 12/25] ibtrs: server: sysfs interface functions Jack Wang
@ 2019-09-24  0:00   ` Bart Van Assche
  2019-10-02 15:11     ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-24  0:00 UTC (permalink / raw)
  To: Jack Wang, linux-block, linux-rdma
  Cc: axboe, hch, sagi, jgg, dledford, danil.kipnis, rpenyaev,
	Roman Pen, Jack Wang

On 6/20/19 8:03 AM, Jack Wang wrote:
> +static void ibtrs_srv_dev_release(struct device *dev)
> +{
> +	/* Nobody plays with device references, so nop */
> +}

I doubt that the above comment is correct.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 01/25] sysfs: export sysfs_remove_file_self()
  2019-09-23 17:21   ` Bart Van Assche
@ 2019-09-25  9:30     ` Danil Kipnis
  0 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25  9:30 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, linux-kernel

On Mon, Sep 23, 2019 at 7:21 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > Function is going to be used in transport over RDMA module
> > in subsequent patches.
>
> It seems like several words are missing from this patch description.
Will extend it with the corresponding description of the function from
fs/sysfs/file.c and an explanation of why we need it.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections
  2019-09-23 17:44   ` Bart Van Assche
@ 2019-09-25 10:20     ` Danil Kipnis
  2019-09-25 15:38       ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 10:20 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Mon, Sep 23, 2019 at 7:44 PM Bart Van Assche <bvanassche@acm.org> wrote:
> > +/**
> > + * enum ibtrs_clt_link_ev - Events about connectivity state of a client
> > + * @IBTRS_CLT_LINK_EV_RECONNECTED    Client was reconnected.
> > + * @IBTRS_CLT_LINK_EV_DISCONNECTED   Client was disconnected.
> > + */
> > +enum ibtrs_clt_link_ev {
> > +     IBTRS_CLT_LINK_EV_RECONNECTED,
> > +     IBTRS_CLT_LINK_EV_DISCONNECTED,
> > +};
> > +
> > +/**
> > + * Source and destination address of a path to be established
> > + */
> > +struct ibtrs_addr {
> > +     struct sockaddr_storage *src;
> > +     struct sockaddr_storage *dst;
> > +};
>
> Is it really useful to define a structure to hold two pointers or can
> these two pointers also be passed as separate arguments?
We always need both src and dst throughout ibnbd and ibtrs code and
indeed one reason to introduce this struct is that "f(struct
ibtrs_addr *addr, ...);" is shorter than "f(struct sockaddr_storage
*src, struct sockaddr_storage *dst, ...);". But it also makes it
easier to extend the address information describing one ibtrs path in
the future.

> > +/**
> > + * ibtrs_clt_open() - Open a session to a IBTRS client
> > + * @priv:            User supplied private data.
> > + * @link_ev:         Event notification for connection state changes
> > + *   @priv:                  user supplied data that was passed to
> > + *                           ibtrs_clt_open()
> > + *   @ev:                    Occurred event
> > + * @sessname: name of the session
> > + * @paths: Paths to be established defined by their src and dst addresses
> > + * @path_cnt: Number of elemnts in the @paths array
> > + * @port: port to be used by the IBTRS session
> > + * @pdu_sz: Size of extra payload which can be accessed after tag allocation.
> > + * @max_inflight_msg: Max. number of parallel inflight messages for the session
> > + * @max_segments: Max. number of segments per IO request
> > + * @reconnect_delay_sec: time between reconnect tries
> > + * @max_reconnect_attempts: Number of times to reconnect on error before giving
> > + *                       up, 0 for * disabled, -1 for forever
> > + *
> > + * Starts session establishment with the ibtrs_server. The function can block
> > + * up to ~2000ms until it returns.
> > + *
> > + * Return a valid pointer on success otherwise PTR_ERR.
> > + */
> > +struct ibtrs_clt *ibtrs_clt_open(void *priv, link_clt_ev_fn *link_ev,
> > +                              const char *sessname,
> > +                              const struct ibtrs_addr *paths,
> > +                              size_t path_cnt, short port,
> > +                              size_t pdu_sz, u8 reconnect_delay_sec,
> > +                              u16 max_segments,
> > +                              s16 max_reconnect_attempts);
>
> Having detailed kernel-doc headers for describing API functions is great
> but I'm not sure a .h file is the best location for such documentation.
> Many kernel developers keep kernel-doc headers in .c files because that
> makes it more likely that the documentation and the implementation stay
> in sync.
What is better: to move it or to only copy it to the corresponding C file?

>
> > +
> > +/**
> > + * ibtrs_clt_close() - Close a session
> > + * @sess: Session handler, is freed on return
>                       ^^^^^^^
>                       handle?
>
> This sentence suggests that the handle is freed on return. I guess that
> you meant that the session is freed upon return?
Right, will fix the wording.

>
> > +/**
> > + * ibtrs_clt_get_tag() - allocates tag for future RDMA operation
> > + * @sess:    Current session
> > + * @con_type:        Type of connection to use with the tag
> > + * @wait:    Wait type
> > + *
> > + * Description:
> > + *    Allocates tag for the following RDMA operation.  Tag is used
> > + *    to preallocate all resources and to propagate memory pressure
> > + *    up earlier.
> > + *
> > + * Context:
> > + *    Can sleep if @wait == IBTRS_TAG_WAIT
> > + */
> > +struct ibtrs_tag *ibtrs_clt_get_tag(struct ibtrs_clt *sess,
> > +                                 enum ibtrs_clt_con_type con_type,
> > +                                 int wait);
>
> Since struct ibtrs_tag has another role than what is called a tag in the
> block layer I think a better description is needed of what struct
> ibtrs_tag actually represents.
I think it would be better to rename it to ibtrs_permit in order to
avoid confusion with block layer tags. Will extend the description
also.

> > +/*
> > + * Here goes IBTRS server API
> > + */
>
> Most software either uses the client API or the server API but not both
> at the same time. Has it been considered to use separate header files
> for the client and server APIs?
I don't have any really good reason to put the server and client APIs
into a single file, except maybe that the reader can see the API calls
corresponding to the full sequence of request -> indication -> response
-> confirmation in one place.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 02/25] ibtrs: public interface header to establish RDMA connections
  2019-09-25 10:20     ` Danil Kipnis
@ 2019-09-25 15:38       ` Bart Van Assche
  0 siblings, 0 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-25 15:38 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/25/19 3:20 AM, Danil Kipnis wrote:
> On Mon, Sep 23, 2019 at 7:44 PM Bart Van Assche <bvanassche@acm.org> wrote:
>>> +/**
>>> + * ibtrs_clt_open() - Open a session to a IBTRS client
>>> + * @priv:            User supplied private data.
>>> + * @link_ev:         Event notification for connection state changes
>>> + *   @priv:                  user supplied data that was passed to
>>> + *                           ibtrs_clt_open()
>>> + *   @ev:                    Occurred event
>>> + * @sessname: name of the session
>>> + * @paths: Paths to be established defined by their src and dst addresses
>>> + * @path_cnt: Number of elemnts in the @paths array
>>> + * @port: port to be used by the IBTRS session
>>> + * @pdu_sz: Size of extra payload which can be accessed after tag allocation.
>>> + * @max_inflight_msg: Max. number of parallel inflight messages for the session
>>> + * @max_segments: Max. number of segments per IO request
>>> + * @reconnect_delay_sec: time between reconnect tries
>>> + * @max_reconnect_attempts: Number of times to reconnect on error before giving
>>> + *                       up, 0 for * disabled, -1 for forever
>>> + *
>>> + * Starts session establishment with the ibtrs_server. The function can block
>>> + * up to ~2000ms until it returns.
>>> + *
>>> + * Return a valid pointer on success otherwise PTR_ERR.
>>> + */
>>> +struct ibtrs_clt *ibtrs_clt_open(void *priv, link_clt_ev_fn *link_ev,
>>> +                              const char *sessname,
>>> +                              const struct ibtrs_addr *paths,
>>> +                              size_t path_cnt, short port,
>>> +                              size_t pdu_sz, u8 reconnect_delay_sec,
>>> +                              u16 max_segments,
>>> +                              s16 max_reconnect_attempts);
>>
>> Having detailed kernel-doc headers for describing API functions is great
>> but I'm not sure a .h file is the best location for such documentation.
>> Many kernel developers keep kernel-doc headers in .c files because that
>> makes it more likely that the documentation and the implementation stay
>> in sync.
 >
> What is better: to move it or to only copy it to the corresponding C file?

Please move the kernel-doc header into the corresponding .c file and 
remove the kernel-doc header from the .h file.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-23 21:51   ` Bart Van Assche
@ 2019-09-25 17:36     ` Danil Kipnis
  2019-09-25 18:55       ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 17:36 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

Hello Bart,

On Mon, Sep 23, 2019 at 11:51 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +static const struct ibtrs_ib_dev_pool_ops dev_pool_ops;
> > +static struct ibtrs_ib_dev_pool dev_pool = {
> > +     .ops = &dev_pool_ops
> > +};
>
> Can the definitions in this file be reordered such that the forward
> declaration of dev_pool_ops can be removed?
Will try to.

> > +static void ibtrs_rdma_error_recovery(struct ibtrs_clt_con *con);
> > +static int ibtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
> > +                                  struct rdma_cm_event *ev);
> > +static void ibtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc);
> > +static void complete_rdma_req(struct ibtrs_clt_io_req *req, int errno,
> > +                           bool notify, bool can_wait);
> > +static int ibtrs_clt_write_req(struct ibtrs_clt_io_req *req);
> > +static int ibtrs_clt_read_req(struct ibtrs_clt_io_req *req);
>
> Please also remove these forward declarations.
OK

> > +bool ibtrs_clt_sess_is_connected(const struct ibtrs_clt_sess *sess)
> > +{
> > +     return sess->state == IBTRS_CLT_CONNECTED;
> > +}
>
> Is it really useful to introduce a one line function for testing the
> session state?
No, not in that case really, thanks.

> > +static inline struct ibtrs_tag *
> > +__ibtrs_get_tag(struct ibtrs_clt *clt, enum ibtrs_clt_con_type con_type)
> > +{
> > +     size_t max_depth = clt->queue_depth;
> > +     struct ibtrs_tag *tag;
> > +     int cpu, bit;
> > +
> > +     cpu = get_cpu();
> > +     do {
> > +             bit = find_first_zero_bit(clt->tags_map, max_depth);
> > +             if (unlikely(bit >= max_depth)) {
> > +                     put_cpu();
> > +                     return NULL;
> > +             }
> > +
> > +     } while (unlikely(test_and_set_bit_lock(bit, clt->tags_map)));
> > +     put_cpu();
> > +
> > +     tag = GET_TAG(clt, bit);
> > +     WARN_ON(tag->mem_id != bit);
> > +     tag->cpu_id = cpu;
> > +     tag->con_type = con_type;
> > +
> > +     return tag;
> > +}
>
> What is the role of the get_cpu() and put_cpu() calls in this function?
> How can it make sense to assign the cpu number to tag->cpu_id after
> put_cpu() has been called?
We disable preemption while looking for a free "ibtrs_tag" (permit) in
our tags_map. We store the cpu number ibtrs_clt_get_tag() was
originally called on in the ibtrs_tag we just found, so that when the
user later uses this ibtrs_tag for an rdma operation
(ibtrs_clt_request()), we select the rdma connection whose cq_vector
corresponds to that cpu. If IRQ affinity is configured accordingly,
this allows an IO response to be processed on the same cpu the IO
request was originally submitted on.
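
For illustration, a rough sketch of the CQ setup this implies (helper
and field names are assumptions, not the actual driver code): each IO
connection gets a completion queue whose completion vector follows its
CPU, so with matching IRQ affinity the completion is handled on the
submitting CPU:

  /* sketch: bind the connection's CQ to a vector derived from its cpu */
  cq = ib_alloc_cq(ib_dev, con, nr_cqe,
                   con->cpu % ib_dev->num_comp_vectors,
                   IB_POLL_SOFTIRQ);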

> > +static inline void ibtrs_clt_init_req(struct ibtrs_clt_io_req *req,
> > +                                   struct ibtrs_clt_sess *sess,
> > +                                   ibtrs_conf_fn *conf,
> > +                                   struct ibtrs_tag *tag, void *priv,
> > +                                   const struct kvec *vec, size_t usr_len,
> > +                                   struct scatterlist *sg, size_t sg_cnt,
> > +                                   size_t data_len, int dir)
> > +{
> > +     struct iov_iter iter;
> > +     size_t len;
> > +
> > +     req->tag = tag;
> > +     req->in_use = true;
> > +     req->usr_len = usr_len;
> > +     req->data_len = data_len;
> > +     req->sglist = sg;
> > +     req->sg_cnt = sg_cnt;
> > +     req->priv = priv;
> > +     req->dir = dir;
> > +     req->con = ibtrs_tag_to_clt_con(sess, tag);
> > +     req->conf = conf;
> > +     req->need_inv = false;
> > +     req->need_inv_comp = false;
> > +     req->inv_errno = 0;
> > +
> > +     iov_iter_kvec(&iter, READ, vec, 1, usr_len);
> > +     len = _copy_from_iter(req->iu->buf, usr_len, &iter);
> > +     WARN_ON(len != usr_len);
> > +
> > +     reinit_completion(&req->inv_comp);
> > +     if (sess->stats.enable_rdma_lat)
> > +             req->start_jiffies = jiffies;
> > +}
>
> A comment that explains what "req" stands for would be welcome. Since
> this function copies the entire payload, I assume that it is only used
> for control messages and not for reading or writing data from a block
> device?
Yes, we only copy the control message provided by the user. Will extend
the description.

> > +static int ibtrs_clt_failover_req(struct ibtrs_clt *clt,
> > +                               struct ibtrs_clt_io_req *fail_req)
> > +{
> > +     struct ibtrs_clt_sess *alive_sess;
> > +     struct ibtrs_clt_io_req *req;
> > +     int err = -ECONNABORTED;
> > +     struct path_it it;
> > +
> > +     do_each_path(alive_sess, clt, &it) {
> > +             if (unlikely(alive_sess->state != IBTRS_CLT_CONNECTED))
> > +                     continue;
> > +             req = ibtrs_clt_get_copy_req(alive_sess, fail_req);
> > +             if (req->dir == DMA_TO_DEVICE)
> > +                     err = ibtrs_clt_write_req(req);
> > +             else
> > +                     err = ibtrs_clt_read_req(req);
> > +             if (unlikely(err)) {
> > +                     req->in_use = false;
> > +                     continue;
> > +             }
> > +             /* Success path */
> > +             ibtrs_clt_inc_failover_cnt(&alive_sess->stats);
> > +             break;
> > +     } while_each_path(&it);
> > +
> > +     return err;
> > +}
>
> Also for this function, a comment that explains the purpose of this
> function would be welcome.
Will add a description to it.

>
> > +static void fail_all_outstanding_reqs(struct ibtrs_clt_sess *sess)
> > +{
> > +     struct ibtrs_clt *clt = sess->clt;
> > +     struct ibtrs_clt_io_req *req;
> > +     int i, err;
> > +
> > +     if (!sess->reqs)
> > +             return;
> > +     for (i = 0; i < sess->queue_depth; ++i) {
> > +             req = &sess->reqs[i];
> > +             if (!req->in_use)
> > +                     continue;
> > +
> > +             /*
> > +              * Safely (without notification) complete failed request.
> > +              * After completion this request is still usebale and can
> > +              * be failovered to another path.
> > +              */
> > +             complete_rdma_req(req, -ECONNABORTED, false, true);
> > +
> > +             err = ibtrs_clt_failover_req(clt, req);
> > +             if (unlikely(err))
> > +                     /* Failover failed, notify anyway */
> > +                     req->conf(req->priv, err);
> > +     }
> > +}
>
> What guarantees that this function does not call complete_rdma_req()
> while complete_rdma_req() is called from the regular completion path?
Before calling this function all the qps are drained in
ibtrs_clt_stop_and_destroy_conns(...).
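
In other words, the assumed teardown ordering is roughly the following
(sketch only, field names such as con_num and ->qp are assumptions):

  /* drain every QP first so no completion can race with the
   * forced completions done in fail_all_outstanding_reqs()
   */
  for (cid = 0; cid < sess->s.con_num; cid++)
          ib_drain_qp(sess->s.con[cid]->qp);

  fail_all_outstanding_reqs(sess);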

> > +static bool __ibtrs_clt_change_state(struct ibtrs_clt_sess *sess,
> > +                                  enum ibtrs_clt_state new_state)
> > +{
> > +     enum ibtrs_clt_state old_state;
> > +     bool changed = false;
> > +
> > +     old_state = sess->state;
> > +     switch (new_state) {
>
> Please use lockdep_assert_held() inside this function to verify at
> runtime that session state changes are serialized properly.
I haven't used lockdep_assert_held() before, will look into it.
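
For reference, what Bart suggests boils down to a single line at the
top of __ibtrs_clt_change_state() (assuming the state is protected by
sess->state_wq.lock, as the rest of the code implies):

  /* no-op unless lockdep is enabled; documents and checks the
   * locking assumption at runtime
   */
  lockdep_assert_held(&sess->state_wq.lock);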

> > +static enum ibtrs_clt_state ibtrs_clt_state(struct ibtrs_clt_sess *sess)
> > +{
> > +     enum ibtrs_clt_state state;
> > +
> > +     spin_lock_irq(&sess->state_wq.lock);
> > +     state = sess->state;
> > +     spin_unlock_irq(&sess->state_wq.lock);
> > +
> > +     return state;
> > +}
>
> Please remove this function and read sess->state without holding
> state_wq.lock.
ok.

> > +static void ibtrs_clt_hb_err_handler(struct ibtrs_con *c, int err)
> > +{
> > +     struct ibtrs_clt_con *con;
> > +
> > +     (void)err;
> > +     con = container_of(c, typeof(*con), c);
> > +     ibtrs_rdma_error_recovery(con);
> > +}
>
> Can "(void)err" be left out?
Yes
> Can the declaration and assignment of 'con' be merged into a single line
> of code?
Yes

> > +static int create_con(struct ibtrs_clt_sess *sess, unsigned int cid)
> > +{
> > +     struct ibtrs_clt_con *con;
> > +
> > +     con = kzalloc(sizeof(*con), GFP_KERNEL);
> > +     if (unlikely(!con))
> > +             return -ENOMEM;
> > +
> > +     /* Map first two connections to the first CPU */
> > +     con->cpu  = (cid ? cid - 1 : 0) % nr_cpu_ids;
> > +     con->c.cid = cid;
> > +     con->c.sess = &sess->s;
> > +     atomic_set(&con->io_cnt, 0);
> > +
> > +     sess->s.con[cid] = &con->c;
> > +
> > +     return 0;
> > +}
>
> The code to map a connection ID to onto a CPU occurs multiple times. Has
> it been considered to introduce a function for that mapping? Although
> one-line inline functions are not recommended in general, such a
> function will also make it easier to experiment with other mapping
> approaches, e.g. mapping hypertread siblings onto the same connection ID.
We have one connection for "user control messages" and as many
connections as cpus for actual IO traffic. They all have different
cq_vectors. This way one can experiment with any mapping by just
setting a different smp_affinity for the IRQs corresponding to these
cq_vectors under /proc/irq/.
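
For reference, the mapping Bart refers to could still be factored into
a trivial helper without limiting such experiments (sketch, the name is
an assumption):

  /* cid 0 carries user control messages and shares CPU 0 with cid 1 */
  static inline unsigned int ibtrs_clt_cid_to_cpu(unsigned int cid)
  {
          return (cid ? cid - 1 : 0) % nr_cpu_ids;
  }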

> > +static inline bool xchg_sessions(struct ibtrs_clt_sess __rcu **rcu_ppcpu_path,
> > +                              struct ibtrs_clt_sess *sess,
> > +                              struct ibtrs_clt_sess *next)
> > +{
> > +     struct ibtrs_clt_sess **ppcpu_path;
> > +
> > +     /* Call cmpxchg() without sparse warnings */
> > +     ppcpu_path = (typeof(ppcpu_path))rcu_ppcpu_path;
> > +     return (sess == cmpxchg(ppcpu_path, sess, next));
> > +}
>
> This looks suspicious. Has it been considered to protect changes of
> rcu_ppcpu_path with a mutex and to protect reads with an RCU read lock?
>
> > +static void ibtrs_clt_add_path_to_arr(struct ibtrs_clt_sess *sess,
> > +                                   struct ibtrs_addr *addr)
> > +{
> > +     struct ibtrs_clt *clt = sess->clt;
> > +
> > +     mutex_lock(&clt->paths_mutex);
> > +     clt->paths_num++;
> > +
> > +     /*
> > +      * Firstly increase paths_num, wait for GP and then
> > +      * add path to the list.  Why?  Since we add path with
> > +      * !CONNECTED state explanation is similar to what has
> > +      * been written in ibtrs_clt_remove_path_from_arr().
> > +      */
> > +     synchronize_rcu();
> > +
> > +     list_add_tail_rcu(&sess->s.entry, &clt->paths_list);
> > +     mutex_unlock(&clt->paths_mutex);
> > +}
>
> synchronize_rcu() while a mutex is being held? Really?
The construct around our multipath implementation has been checked
https://lkml.org/lkml/2018/5/18/659 and then validated (is "validated"
the right word for this?): https://lkml.org/lkml/2018/5/28/2080.

> > +static void ibtrs_clt_close_work(struct work_struct *work)
> > +{
> > +     struct ibtrs_clt_sess *sess;
> > +
> > +     sess = container_of(work, struct ibtrs_clt_sess, close_work);
> > +
> > +     cancel_delayed_work_sync(&sess->reconnect_dwork);
> > +     ibtrs_clt_stop_and_destroy_conns(sess);
> > +     /*
> > +      * Sounds stupid, huh?  No, it is not.  Consider this sequence:
> > +      *
> > +      *   #CPU0                              #CPU1
> > +      *   1.  CONNECTED->RECONNECTING
> > +      *   2.                                 RECONNECTING->CLOSING
> > +      *   3.  queue_work(&reconnect_dwork)
> > +      *   4.                                 queue_work(&close_work);
> > +      *   5.  reconnect_work();              close_work();
> > +      *
> > +      * To avoid that case do cancel twice: before and after.
> > +      */
> > +     cancel_delayed_work_sync(&sess->reconnect_dwork);
> > +     ibtrs_clt_change_state(sess, IBTRS_CLT_CLOSED);
> > +}
>
> The above code looks suspicious to me. I think there should be an
> additional state change at the start of this function to prevent that
> reconnect_dwork gets requeued after having been canceled
Will look into it again, thanks.

>
> > +static void ibtrs_clt_dev_release(struct device *dev)
> > +{
> > +     /* Nobody plays with device references, so nop */
> > +}
>
> That comment sounds wrong. Have you reviewed all of the device driver
> core code and checked that there is no code in there that manipulates
> struct device refcounts? I think the code that frees struct ibtrs_clt
> should be moved from free_clt() into the above function.

We only use the device to create an entry under /sys/class. free_clt()
destroys the sysfs entries first and unregisters the device afterwards.
I don't really see the need to free from the callback instead... Will
make it clear in the comment.

Thanks a lot,
Danil

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 17:36     ` Danil Kipnis
@ 2019-09-25 18:55       ` Bart Van Assche
  2019-09-25 20:50         ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-25 18:55 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/25/19 10:36 AM, Danil Kipnis wrote:
> On Mon, Sep 23, 2019 at 11:51 PM Bart Van Assche <bvanassche@acm.org> wrote:
>> On 6/20/19 8:03 AM, Jack Wang wrote:
>>> +static void ibtrs_clt_dev_release(struct device *dev)
>>> +{
>>> +     /* Nobody plays with device references, so nop */
>>> +}
>>
>> That comment sounds wrong. Have you reviewed all of the device driver
>> core code and checked that there is no code in there that manipulates
>> struct device refcounts? I think the code that frees struct ibtrs_clt
>> should be moved from free_clt() into the above function.
> 
> We only use the device to create an entry under /sys/class. free_clt()
> is destroying sysfs first and unregisters the device afterwards. I
> don't really see the need to free from the callback instead... Will
> make it clear in the comment.

There is plenty of code under drivers/base that calls get_device() and
put_device(). Are you sure that none of the code under drivers/base will
ever call get_device() and put_device() for the ibtrs client device?

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 18:55       ` Bart Van Assche
@ 2019-09-25 20:50         ` Danil Kipnis
  2019-09-25 21:08           ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 20:50 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Wed, Sep 25, 2019 at 8:55 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/25/19 10:36 AM, Danil Kipnis wrote:
> > On Mon, Sep 23, 2019 at 11:51 PM Bart Van Assche <bvanassche@acm.org> wrote:
> >> On 6/20/19 8:03 AM, Jack Wang wrote:
> >>> +static void ibtrs_clt_dev_release(struct device *dev)
> >>> +{
> >>> +     /* Nobody plays with device references, so nop */
> >>> +}
> >>
> >> That comment sounds wrong. Have you reviewed all of the device driver
> >> core code and checked that there is no code in there that manipulates
> >> struct device refcounts? I think the code that frees struct ibtrs_clt
> >> should be moved from free_clt() into the above function.
> >
> > We only use the device to create an entry under /sys/class. free_clt()
> > is destroying sysfs first and unregisters the device afterwards. I
> > don't really see the need to free from the callback instead... Will
> > make it clear in the comment.
>
> There is plenty of code under drivers/base that calls get_device() and
> put_device(). Are you sure that none of the code under drivers/base will
> ever call get_device() and put_device() for the ibtrs client device?
You mean how could multiple kernel modules share the same ibtrs
session...? I really never thought that far...

> Thanks,
>
> Bart.
>

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 20:50         ` Danil Kipnis
@ 2019-09-25 21:08           ` Bart Van Assche
  2019-09-25 21:16             ` Bart Van Assche
  2019-09-25 22:53             ` Danil Kipnis
  0 siblings, 2 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-25 21:08 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/25/19 1:50 PM, Danil Kipnis wrote:
> On Wed, Sep 25, 2019 at 8:55 PM Bart Van Assche <bvanassche@acm.org> wrote:
>> There is plenty of code under drivers/base that calls get_device() and
>> put_device(). Are you sure that none of the code under drivers/base will
>> ever call get_device() and put_device() for the ibtrs client device?
>
> You mean how could multiple kernel modules share the same ibtrs
> session...? I really never thought that far...

I meant something else: device_register() registers struct device
instances in multiple lists. The driver core may decide to iterate over
these lists and to call get_device() / put_device() on the devices it
finds in these lists.

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 21:08           ` Bart Van Assche
@ 2019-09-25 21:16             ` Bart Van Assche
  2019-09-25 22:53             ` Danil Kipnis
  1 sibling, 0 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-25 21:16 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/25/19 2:08 PM, Bart Van Assche wrote:
> On 9/25/19 1:50 PM, Danil Kipnis wrote:
>> On Wed, Sep 25, 2019 at 8:55 PM Bart Van Assche <bvanassche@acm.org> wrote:
>>> There is plenty of code under drivers/base that calls get_device() and
>>> put_device(). Are you sure that none of the code under drivers/base will
>>> ever call get_device() and put_device() for the ibtrs client device?
>>
>> You mean how could multiple kernel modules share the same ibtrs
>> session...? I really never thought that far...
> 
> I meant something else: device_register() registers struct device
> instances in multiple lists. The driver core may decide to iterate over
> these lists and to call get_device() / put_device() on the devices it
> finds in these lists.

Examples of such functions are device_pm_add() (which is called
indirectly by device_register()) and dpm_prepare(). Although it is
unlikely that this code will be used in combination with suspend/resume,
I don't think these drivers should be written such that they are
incompatible with the runtime power management code.

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers
  2019-09-23 22:50   ` Bart Van Assche
@ 2019-09-25 21:45     ` Danil Kipnis
  2019-09-25 21:57       ` Bart Van Assche
  2019-09-27  8:56     ` Jinpu Wang
  1 sibling, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 21:45 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Roman Pen, Jack Wang

On Tue, Sep 24, 2019 at 12:50 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +#define P1 )
> > +#define P2 ))
> > +#define P3 )))
> > +#define P4 ))))
> > +#define P(N) P ## N
> > +
> > +#define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)
> > +#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__
> > +
> > +#define LIST(...)                                            \
> > +     __VA_ARGS__,                                            \
> > +     ({ unknown_type(); NULL; })                             \
> > +     CAT(P, COUNT_ARGS(__VA_ARGS__))                         \
> > +
> > +#define EMPTY()
> > +#define DEFER(id) id EMPTY()
> > +
> > +#define _CASE(obj, type, member)                             \
> > +     __builtin_choose_expr(                                  \
> > +     __builtin_types_compatible_p(                           \
> > +             typeof(obj), type),                             \
> > +             ((type)obj)->member
> > +#define CASE(o, t, m) DEFER(_CASE)(o, t, m)
> > +
> > +/*
> > + * Below we define retrieving of sessname from common IBTRS types.
> > + * Client or server related types have to be defined by special
> > + * TYPES_TO_SESSNAME macro.
> > + */
> > +
> > +void unknown_type(void);
> > +
> > +#ifndef TYPES_TO_SESSNAME
> > +#define TYPES_TO_SESSNAME(...) ({ unknown_type(); NULL; })
> > +#endif
> > +
> > +#define ibtrs_prefix(obj)                                    \
> > +     _CASE(obj, struct ibtrs_con *,  sess->sessname),        \
> > +     _CASE(obj, struct ibtrs_sess *, sessname),              \
> > +     TYPES_TO_SESSNAME(obj)                                  \
> > +     ))
>
> No preprocessor voodoo please. Please remove all of the above and modify
> the logging statements such that these pass the proper name string as
> first argument to logging macros.

Hi Bart,

do you think it would make sense if we first submit a new patchset for
IBTRS (with the changes you suggested plus the security problem closed)
and later submit a separate one for IBNBD only?

Thank you,
Danil

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers
  2019-09-25 21:45     ` Danil Kipnis
@ 2019-09-25 21:57       ` Bart Van Assche
  0 siblings, 0 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-25 21:57 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Roman Pen, Jack Wang

On 9/25/19 2:45 PM, Danil Kipnis wrote:
> do you think it would make sense we first submit a new patchset for
> IBTRS (with the changes you suggested plus closed security problem)
> and later submit a separate one for IBNBD only?

I'm not sure what others prefer. Personally I prefer to see all the 
code, that means IBTRS and IBNBD.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-18 15:47             ` Bart Van Assche
  2019-09-20  8:29               ` Danil Kipnis
@ 2019-09-25 22:26               ` Danil Kipnis
  2019-09-26  9:55                 ` Roman Penyaev
  1 sibling, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 22:26 UTC (permalink / raw)
  To: Bart Van Assche, Roman Pen
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Wed, Sep 18, 2019 at 5:47 PM Bart Van Assche <bvanassche@acm.org> wrote:
> Combining multiple queues (a) into a single queue (b) that is smaller
> than the combined source queues without sacrificing performance is
> tricky. We already have one such implementation in the block layer core
> and it took considerable time to get that implementation right. See e.g.
> blk_mq_sched_mark_restart_hctx() and blk_mq_sched_restart().

Roma, can you please estimate the performance impact in case we switch to it?

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 21:08           ` Bart Van Assche
  2019-09-25 21:16             ` Bart Van Assche
@ 2019-09-25 22:53             ` Danil Kipnis
  2019-09-25 23:21               ` Bart Van Assche
  1 sibling, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 22:53 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Wed, Sep 25, 2019 at 11:08 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/25/19 1:50 PM, Danil Kipnis wrote:
> > On Wed, Sep 25, 2019 at 8:55 PM Bart Van Assche <bvanassche@acm.org> wrote:
> >> There is plenty of code under drivers/base that calls get_device() and
> >> put_device(). Are you sure that none of the code under drivers/base will
> >> ever call get_device() and put_device() for the ibtrs client device?
> >
> > You mean how could multiple kernel modules share the same ibtrs
> > session...? I really never thought that far...
>
> I meant something else: device_register() registers struct device
> instances in multiple lists. The driver core may decide to iterate over
> these lists and to call get_device() / put_device() on the devices it
> finds in these lists.
Oh, you mean we just need stub functions for those, so that nobody
steps on a null?

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 22:53             ` Danil Kipnis
@ 2019-09-25 23:21               ` Bart Van Assche
  2019-09-26  9:16                 ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-25 23:21 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/25/19 3:53 PM, Danil Kipnis wrote:
> Oh, you mean we just need stub functions for those, so that nobody
> steps on a null?

What I meant is that the memory that is backing a device must not be 
freed until the reference count of a device has dropped to zero. If a 
struct device is embedded in a larger structure that means signaling a 
completion from inside the release function (ibtrs_clt_dev_release()) 
and not freeing the struct device memory (kfree(clt) in free_clt()) 
before that completion has been triggered.
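
Concretely, something along these lines (sketch only; the exact struct
layout in ibtrs is an assumption):

  /* release callback only signals that the last reference is gone */
  static void ibtrs_clt_dev_release(struct device *dev)
  {
          struct ibtrs_clt *clt = container_of(dev, struct ibtrs_clt, dev);

          complete(&clt->dev_released);
  }

  static void free_clt(struct ibtrs_clt *clt)
  {
          device_unregister(&clt->dev);
          /* wait until the driver core has dropped its last reference */
          wait_for_completion(&clt->dev_released);
          kfree(clt);
  }

This assumes struct ibtrs_clt embeds "struct device dev" and a
"struct completion dev_released" that is initialized before
device_register() is called.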

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 16/25] ibnbd: client: private header with client structs and functions
  2019-09-17 16:36     ` Jinpu Wang
@ 2019-09-25 23:43       ` Danil Kipnis
  2019-09-26 10:00         ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-25 23:43 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev

On Tue, Sep 17, 2019 at 6:36 PM Jinpu Wang <jinpu.wang@cloud.ionos.com> wrote:
>
> On Sat, Sep 14, 2019 at 12:25 AM Bart Van Assche <bvanassche@acm.org> wrote:
> >
> > On 6/20/19 8:03 AM, Jack Wang wrote:
> > > +     char                    pathname[NAME_MAX];
> > [ ... ]
> >  > +    char                    blk_symlink_name[NAME_MAX];
> >
> > Please allocate path names dynamically instead of hard-coding the upper
> > length for a path.
Those strings are used to name directories and files under sysfs,
which I think makes NAME_MAX a natural limitation for them. Client and
server only exchange those strings on connection establishment, not in
the IO path. We do not really need to save 256K on a server with 1000
devices mapped in parallel. A patch to allocate those strings makes
the code longer, introduces new error paths and in my opinion doesn't
bring any benefits.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 06/25] ibtrs: client: main functionality
  2019-09-25 23:21               ` Bart Van Assche
@ 2019-09-26  9:16                 ` Danil Kipnis
  0 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-09-26  9:16 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Thu, Sep 26, 2019 at 1:21 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/25/19 3:53 PM, Danil Kipnis wrote:
> > Oh, you mean we just need stub functions for those, so that nobody
> > steps on a null?
>
> What I meant is that the memory that is backing a device must not be
> freed until the reference count of a device has dropped to zero. If a
> struct device is embedded in a larger structure that means signaling a
> completion from inside the release function (ibtrs_clt_dev_release())
> and not freeing the struct device memory (kfree(clt) in free_clt())
> before that completion has been triggered.

Got it, thank you. Will move free_clt into the release function.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-25 22:26               ` Danil Kipnis
@ 2019-09-26  9:55                 ` Roman Penyaev
  2019-09-26 15:01                   ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Roman Penyaev @ 2019-09-26  9:55 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Thu, Sep 26, 2019 at 12:26 AM Danil Kipnis
<danil.kipnis@cloud.ionos.com> wrote:
>
> On Wed, Sep 18, 2019 at 5:47 PM Bart Van Assche <bvanassche@acm.org> wrote:
> > Combining multiple queues (a) into a single queue (b) that is smaller
> > than the combined source queues without sacrificing performance is
> > tricky. We already have one such implementation in the block layer core
> > and it took considerable time to get that implementation right. See e.g.
> > blk_mq_sched_mark_restart_hctx() and blk_mq_sched_restart().
>
> Roma, can you please estimate the performance impact in case we switch to it?

If I remember correctly, I could not reuse the whole machinery with those
restarts from the block core because shared tags are shared only between
hardware queues, i.e. different hardware queues share different tag sets.
IBTRS has many hardware queues (independent RDMA connections) but only one
tag set, which is equally shared between block devices.  What I dreamed
about is something like BLK_MQ_F_TAG_GLOBALLY_SHARED support in the block
layer.

--
Roman

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 16/25] ibnbd: client: private header with client structs and functions
  2019-09-25 23:43       ` Danil Kipnis
@ 2019-09-26 10:00         ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-26 10:00 UTC (permalink / raw)
  To: Danil Kipnis, Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev

On Thu, Sep 26, 2019 at 1:43 AM Danil Kipnis
<danil.kipnis@cloud.ionos.com> wrote:
>
> On Tue, Sep 17, 2019 at 6:36 PM Jinpu Wang <jinpu.wang@cloud.ionos.com> wrote:
> >
> > On Sat, Sep 14, 2019 at 12:25 AM Bart Van Assche <bvanassche@acm.org> wrote:
> > >
> > > On 6/20/19 8:03 AM, Jack Wang wrote:
> > > > +     char                    pathname[NAME_MAX];
> > > [ ... ]
> > >  > +    char                    blk_symlink_name[NAME_MAX];
> > >
> > > Please allocate path names dynamically instead of hard-coding the upper
> > > length for a path.
> Those strings are used to name directories and files under sysfs,
> which I think makes NAME_MAX a natural limitation for them. Client and
> server only exchange those strings on connection establishment, not in
> the IO path. We do not really need to save 256K on a server with 1000
> devices mapped in parallel. A patch to allocate those strings makes
> the code longer, introduces new error paths and in my opinion doesn't
> bring any benefits.
Hi Bart,

We have a draft patch, but it looks ugly. After discussing it in house,
due to the reason Danil mentioned, we dropped the patch.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-09-18 21:46   ` Bart Van Assche
@ 2019-09-26 14:04     ` Jinpu Wang
  2019-09-26 15:11       ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-09-26 14:04 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

Sorry for the slow reply.

On Wed, Sep 18, 2019 at 11:46 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +#undef pr_fmt
> > +#define pr_fmt(fmt) KBUILD_MODNAME " L" __stringify(__LINE__) ": " fmt
>
> Same comment as for a previous patch: please do not include line number
> information in pr_fmt().
Ok, will be removed.

>
> > +static int ibnbd_dev_vfs_open(struct ibnbd_dev *dev, const char *path,
> > +                           fmode_t flags)
> > +{
> > +     int oflags = O_DSYNC; /* enable write-through */
> > +
> > +     if (flags & FMODE_WRITE)
> > +             oflags |= O_RDWR;
> > +     else if (flags & FMODE_READ)
> > +             oflags |= O_RDONLY;
> > +     else
> > +             return -EINVAL;
> > +
> > +     dev->file = filp_open(path, oflags, 0);
> > +     return PTR_ERR_OR_ZERO(dev->file);
> > +}
>
> Isn't the use of O_DSYNC something that should be configurable?
I know scst allows O_DSYNC to be configured, but in our production we
only use O_DSYNC. We sure can add an option to make it configurable,
but we don't have a need for it yet.
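
If it ever needs to be configurable, a minimal sketch would be a module
parameter like this (the name is an assumption; a per-device sysfs
attribute may fit a general-purpose driver better):

  static bool write_through = true;
  module_param(write_through, bool, 0444);
  MODULE_PARM_DESC(write_through,
                   "Open exported files with O_DSYNC (default: true)");

  /* then, in ibnbd_dev_vfs_open(): */
  int oflags = write_through ? O_DSYNC : 0;
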
>
> > +struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
> > +                              enum ibnbd_io_mode mode, struct bio_set *bs,
> > +                              ibnbd_dev_io_fn io_cb)
> > +{
> > +     struct ibnbd_dev *dev;
> > +     int ret;
> > +
> > +     dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> > +     if (!dev)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     if (mode == IBNBD_BLOCKIO) {
> > +             dev->blk_open_flags = flags;
> > +             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
> > +             if (ret)
> > +                     goto err;
> > +     } else if (mode == IBNBD_FILEIO) {
> > +             dev->blk_open_flags = FMODE_READ;
> > +             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
> > +             if (ret)
> > +                     goto err;
> > +
> > +             ret = ibnbd_dev_vfs_open(dev, path, flags);
> > +             if (ret)
> > +                     goto blk_put;
>
> This looks really weird. Why to call ibnbd_dev_blk_open() first for file
> I/O mode? Why to set dev->blk_open_flags to FMODE_READ in file I/O mode?

The reason behind it is that we want to be able to symlink to the block
device. And for file IO mode, we only allow exporting block devices.


>
> > +static int ibnbd_dev_blk_submit_io(struct ibnbd_dev *dev, sector_t sector,
> > +                                void *data, size_t len, u32 bi_size,
> > +                                enum ibnbd_io_flags flags, short prio,
> > +                                void *priv)
> > +{
> > +     struct request_queue *q = bdev_get_queue(dev->bdev);
> > +     struct ibnbd_dev_blk_io *io;
> > +     struct bio *bio;
> > +
> > +     /* check if the buffer is suitable for bdev */
> > +     if (unlikely(WARN_ON(!blk_rq_aligned(q, (unsigned long)data, len))))
> > +             return -EINVAL;
> > +
> > +     /* Generate bio with pages pointing to the rdma buffer */
> > +     bio = ibnbd_bio_map_kern(q, data, dev->ibd_bio_set, len, GFP_KERNEL);
> > +     if (unlikely(IS_ERR(bio)))
> > +             return PTR_ERR(bio);
> > +
> > +     io = kmalloc(sizeof(*io), GFP_KERNEL);
> > +     if (unlikely(!io)) {
> > +             bio_put(bio);
> > +             return -ENOMEM;
> > +     }
> > +
> > +     io->dev         = dev;
> > +     io->priv        = priv;
> > +
> > +     bio->bi_end_io          = ibnbd_dev_bi_end_io;
> > +     bio->bi_private         = io;
> > +     bio->bi_opf             = ibnbd_to_bio_flags(flags);
> > +     bio->bi_iter.bi_sector  = sector;
> > +     bio->bi_iter.bi_size    = bi_size;
> > +     bio_set_prio(bio, prio);
> > +     bio_set_dev(bio, dev->bdev);
> > +
> > +     submit_bio(bio);
> > +
> > +     return 0;
> > +}
>
> Can struct bio and struct ibnbd_dev_blk_io be combined into a single
> data structure by passing the size of the latter data structure as the
> front_pad argument to bioset_init()?
Thanks for the suggestion, will look into it;
looks like we can embed struct bio into struct ibnbd_dev_blk_io.
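
A rough sketch of what that could look like (struct layout assumed),
using the front_pad idiom so the io container sits right in front of
every bio allocated from the bioset:

  struct ibnbd_dev_blk_io {
          struct ibnbd_dev *dev;
          void             *priv;
          struct bio        bio;        /* must stay the last member */
  };

  /* at init time: reserve room for the container in front of each bio */
  ret = bioset_init(bs, BIO_POOL_SIZE,
                    offsetof(struct ibnbd_dev_blk_io, bio),
                    BIOSET_NEED_BVECS);

  /* in the submit path: no separate kmalloc() needed any more */
  bio = bio_alloc_bioset(GFP_KERNEL, nr_vecs, bs);
  io  = container_of(bio, struct ibnbd_dev_blk_io, bio);
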
>
> > +static void ibnbd_dev_file_submit_io_worker(struct work_struct *w)
> > +{
> > +     struct ibnbd_dev_file_io_work *dev_work;
> > +     struct file *f;
> > +     int ret, len;
> > +     loff_t off;
> > +
> > +     dev_work = container_of(w, struct ibnbd_dev_file_io_work, work);
> > +     off = dev_work->sector * ibnbd_dev_get_logical_bsize(dev_work->dev);
> > +     f = dev_work->dev->file;
> > +     len = dev_work->bi_size;
> > +
> > +     if (ibnbd_op(dev_work->flags) == IBNBD_OP_FLUSH) {
> > +             ret = ibnbd_dev_file_handle_flush(dev_work, off);
> > +             if (unlikely(ret))
> > +                     goto out;
> > +     }
> > +
> > +     if (ibnbd_op(dev_work->flags) == IBNBD_OP_WRITE_SAME) {
> > +             ret = ibnbd_dev_file_handle_write_same(dev_work);
> > +             if (unlikely(ret))
> > +                     goto out;
> > +     }
> > +
> > +     /* TODO Implement support for DIRECT */
> > +     if (dev_work->bi_size) {
> > +             loff_t off_tmp = off;
> > +
> > +             if (ibnbd_op(dev_work->flags) == IBNBD_OP_WRITE)
> > +                     ret = kernel_write(f, dev_work->data, dev_work->bi_size,
> > +                                        &off_tmp);
> > +             else
> > +                     ret = kernel_read(f, dev_work->data, dev_work->bi_size,
> > +                                       &off_tmp);
> > +
> > +             if (unlikely(ret < 0)) {
> > +                     goto out;
> > +             } else if (unlikely(ret != dev_work->bi_size)) {
> > +                     /* TODO implement support for partial completions */
> > +                     ret = -EIO;
> > +                     goto out;
> > +             } else {
> > +                     ret = 0;
> > +             }
> > +     }
> > +
> > +     if (dev_work->flags & IBNBD_F_FUA)
> > +             ret = ibnbd_dev_file_handle_fua(dev_work, off);
> > +out:
> > +     dev_work->dev->io_cb(dev_work->priv, ret);
> > +     kfree(dev_work);
> > +}
> > +
> > +static int ibnbd_dev_file_submit_io(struct ibnbd_dev *dev, sector_t sector,
> > +                                 void *data, size_t len, size_t bi_size,
> > +                                 enum ibnbd_io_flags flags, void *priv)
> > +{
> > +     struct ibnbd_dev_file_io_work *w;
> > +
> > +     if (!ibnbd_flags_supported(flags)) {
> > +             pr_info_ratelimited("Unsupported I/O flags: 0x%x on device "
> > +                                 "%s\n", flags, dev->name);
> > +             return -ENOTSUPP;
> > +     }
> > +
> > +     w = kmalloc(sizeof(*w), GFP_KERNEL);
> > +     if (!w)
> > +             return -ENOMEM;
> > +
> > +     w->dev          = dev;
> > +     w->priv         = priv;
> > +     w->sector       = sector;
> > +     w->data         = data;
> > +     w->len          = len;
> > +     w->bi_size      = bi_size;
> > +     w->flags        = flags;
> > +     INIT_WORK(&w->work, ibnbd_dev_file_submit_io_worker);
> > +
> > +     if (unlikely(!queue_work(fileio_wq, &w->work))) {
> > +             kfree(w);
> > +             return -EEXIST;
> > +     }
> > +
> > +     return 0;
> > +}
>
> Please use the in-kernel asynchronous I/O API instead of kernel_read()
> and kernel_write() and remove the fileio_wq workqueue. Examples of how
> to use call_read_iter() and call_write_iter() are available in the loop
> driver and also in drivers/target/target_core_file.c.
What are the benefits of using call_read_iter/call_write_iter, does it
offer better performance?

>
> > +/** ibnbd_dev_init() - Initialize ibnbd_dev
> > + *
> > + * This functions initialized the ibnbd-dev component.
> > + * It has to be called 1x time before ibnbd_dev_open() is used
> > + */
> > +int ibnbd_dev_init(void);
>
> It is great so see kernel-doc headers above functions but I'm not sure
> these should be in .h files. I think most kernel developers prefer to
> see kernel-doc headers for functions in .c files because that makes it
> more likely that the implementation and the documentation stay in sync.
>
Ok, will move the kernel-doc to the source code.
I feel that for exported functions it's more common to do it in header
files, but for this case I think it's fine to move the kernel-doc to
the .c file.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-26  9:55                 ` Roman Penyaev
@ 2019-09-26 15:01                   ` Bart Van Assche
  2019-09-27  8:52                     ` Roman Penyaev
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-26 15:01 UTC (permalink / raw)
  To: Roman Penyaev, Danil Kipnis
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/26/19 2:55 AM, Roman Penyaev wrote:
> As I remember correctly I could not reuse the whole machinery with those
> restarts from block core because shared tags are shared only between
> hardware queues, i.e. different hardware queues share different tags sets.
> IBTRS has many hardware queues (independent RDMA connections) but only one
> tags set, which is equally shared between block devices.  What I dreamed
> about is something like BLK_MQ_F_TAG_GLOBALLY_SHARED support in block
> layer.

A patch series that adds support for sharing tag sets across hardware 
queues is pending. See also "[PATCH V3 0/8] blk-mq & scsi: fix reply 
queue selection and improve host wide tagset" 
(https://lore.kernel.org/linux-block/20180227100750.32299-1-ming.lei@redhat.com/). 
Would that patch series make it possible to remove the queue management
code from ibnbd?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-09-26 14:04     ` Jinpu Wang
@ 2019-09-26 15:11       ` Bart Van Assche
  2019-09-26 15:25         ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-26 15:11 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On 9/26/19 7:04 AM, Jinpu Wang wrote:
> On Wed, Sep 18, 2019 at 11:46 PM Bart Van Assche <bvanassche@acm.org> wrote:
>> On 6/20/19 8:03 AM, Jack Wang wrote:
>> Isn't the use of O_DSYNC something that should be configurable?
> I know scst allow O_DSYNC to be configured, but in our production, we
> only use with O_DSYNC,
>   we sure can add options to allow it to configure it, but we don't
> have a need yet.

Shouldn't upstream code be general purpose instead of only satisfying 
the need of a single user?

>>> +struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
>>> +                              enum ibnbd_io_mode mode, struct bio_set *bs,
>>> +                              ibnbd_dev_io_fn io_cb)
>>> +{
>>> +     struct ibnbd_dev *dev;
>>> +     int ret;
>>> +
>>> +     dev = kzalloc(sizeof(*dev), GFP_KERNEL);
>>> +     if (!dev)
>>> +             return ERR_PTR(-ENOMEM);
>>> +
>>> +     if (mode == IBNBD_BLOCKIO) {
>>> +             dev->blk_open_flags = flags;
>>> +             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
>>> +             if (ret)
>>> +                     goto err;
>>> +     } else if (mode == IBNBD_FILEIO) {
>>> +             dev->blk_open_flags = FMODE_READ;
>>> +             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
>>> +             if (ret)
>>> +                     goto err;
>>> +
>>> +             ret = ibnbd_dev_vfs_open(dev, path, flags);
>>> +             if (ret)
>>> +                     goto blk_put;
>>
>> This looks really weird. Why to call ibnbd_dev_blk_open() first for file
>> I/O mode? Why to set dev->blk_open_flags to FMODE_READ in file I/O mode?
> 
> The reason behind is we want to be able to symlink to the block device.
> And for File io mode, we only allow exporting block device.

This sounds weird to me ...

>> Please use the in-kernel asynchronous I/O API instead of kernel_read()
>> and kernel_write() and remove the fileio_wq workqueue. Examples of how
>> to use call_read_iter() and call_write_iter() are available in the loop
>> driver and also in drivers/target/target_core_file.c.
>
> What the benefits of using call_read_iter/call_write_iter, does it
> offer better performance?

The benefits of using in-kernel asynchronous I/O I know of are:
* Better performance due to fewer context switches. For the posted code 
as many kernel threads will be active as the queue depth. So more 
context switches will be triggered than necessary.
* Removal of the file I/O workqueue and hence a reduction of the number of 
kernel threads.
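
For reference, the submission path in the loop driver looks roughly
like this when transplanted onto the ibnbd server side (heavily
simplified sketch; the ibnbd-side struct and function names are
assumptions):

  static void io_aio_complete(struct kiocb *iocb, long ret, long ret2)
  {
          struct ibnbd_dev_file_io *io =
                  container_of(iocb, struct ibnbd_dev_file_io, iocb);

          io->dev->io_cb(io->priv, ret >= 0 ? 0 : (int)ret);
          kfree(io);
  }

  static int submit_aio(struct ibnbd_dev_file_io *io, struct bio_vec *bvec,
                        unsigned int nr_segs, size_t len, loff_t pos, int rw)
  {
          struct iov_iter iter;
          ssize_t ret;

          iov_iter_bvec(&iter, rw, bvec, nr_segs, len);

          io->iocb.ki_pos      = pos;
          io->iocb.ki_filp     = io->dev->file;
          io->iocb.ki_complete = io_aio_complete;
          io->iocb.ki_flags    = IOCB_DIRECT;

          if (rw == WRITE)
                  ret = call_write_iter(io->dev->file, &io->iocb, &iter);
          else
                  ret = call_read_iter(io->dev->file, &io->iocb, &iter);

          /* anything other than -EIOCBQUEUED completed synchronously */
          if (ret != -EIOCBQUEUED)
                  io_aio_complete(&io->iocb, ret, 0);
          return 0;
  }

IOCB_DIRECT assumes the usual direct I/O alignment constraints, similar
to the existing blk_rq_aligned() check in the block IO path.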

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-09-26 15:11       ` Bart Van Assche
@ 2019-09-26 15:25         ` Danil Kipnis
  2019-09-26 15:29           ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-26 15:25 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jinpu Wang, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev

On Thu, Sep 26, 2019 at 5:11 PM Bart Van Assche <bvanassche@acm.org> wrote:
> >>> +struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
> >>> +                              enum ibnbd_io_mode mode, struct bio_set *bs,
> >>> +                              ibnbd_dev_io_fn io_cb)
> >>> +{
> >>> +     struct ibnbd_dev *dev;
> >>> +     int ret;
> >>> +
> >>> +     dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> >>> +     if (!dev)
> >>> +             return ERR_PTR(-ENOMEM);
> >>> +
> >>> +     if (mode == IBNBD_BLOCKIO) {
> >>> +             dev->blk_open_flags = flags;
> >>> +             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
> >>> +             if (ret)
> >>> +                     goto err;
> >>> +     } else if (mode == IBNBD_FILEIO) {
> >>> +             dev->blk_open_flags = FMODE_READ;
> >>> +             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
> >>> +             if (ret)
> >>> +                     goto err;
> >>> +
> >>> +             ret = ibnbd_dev_vfs_open(dev, path, flags);
> >>> +             if (ret)
> >>> +                     goto blk_put;
> >>
> >> This looks really weird. Why to call ibnbd_dev_blk_open() first for file
> >> I/O mode? Why to set dev->blk_open_flags to FMODE_READ in file I/O mode?

Bart, would it in your opinion be OK to drop the file_io support in
IBNBD entirely? We implemented this feature in the beginning of the
project to see whether it could be beneficial in some use cases, but
never actually found any.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-09-26 15:25         ` Danil Kipnis
@ 2019-09-26 15:29           ` Bart Van Assche
  2019-09-26 15:38             ` Danil Kipnis
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-26 15:29 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Jinpu Wang, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev

On 9/26/19 8:25 AM, Danil Kipnis wrote:
> On Thu, Sep 26, 2019 at 5:11 PM Bart Van Assche <bvanassche@acm.org> wrote:
>>>> This looks really weird. Why to call ibnbd_dev_blk_open() first for file
>>>> I/O mode? Why to set dev->blk_open_flags to FMODE_READ in file I/O mode?
> 
> Bart, would it in your opinion be OK to drop the file_io support in
> IBNBD entirely? We implemented this feature in the beginning of the
> project to see whether it could be beneficial in some use cases, but
> never actually found any.

I think that's reasonable since the loop driver can be used to convert a 
file into a block device.

Bart.



^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-09-26 15:29           ` Bart Van Assche
@ 2019-09-26 15:38             ` Danil Kipnis
  2019-09-26 15:42               ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-26 15:38 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jinpu Wang, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev

> > Bart, would it in your opinion be OK to drop the file_io support in
> > IBNBD entirely? We implemented this feature in the beginning of the
> > project to see whether it could be beneficial in some use cases, but
> > never actually found any.
>
> I think that's reasonable since the loop driver can be used to convert a
> file into a block device.
Jack, shall we drop it?

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev
  2019-09-26 15:38             ` Danil Kipnis
@ 2019-09-26 15:42               ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-26 15:42 UTC (permalink / raw)
  To: Danil Kipnis
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev

On Thu, Sep 26, 2019 at 5:38 PM Danil Kipnis
<danil.kipnis@cloud.ionos.com> wrote:
>
> > > Bart, would it in your opinion be OK to drop the file_io support in
> > > IBNBD entirely? We implemented this feature in the beginning of the
> > > project to see whether it could be beneficial in some use cases, but
> > > never actually found any.
> >
> > I think that's reasonable since the loop driver can be used to convert a
> > file into a block device.
> Jack, shall we drop it?

Yes, we should drop it in the next round.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-26 15:01                   ` Bart Van Assche
@ 2019-09-27  8:52                     ` Roman Penyaev
  2019-09-27  9:32                       ` Danil Kipnis
  2019-09-27 16:37                       ` Bart Van Assche
  0 siblings, 2 replies; 148+ messages in thread
From: Roman Penyaev @ 2019-09-27  8:52 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Thu, Sep 26, 2019 at 5:01 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/26/19 2:55 AM, Roman Penyaev wrote:
> > As I remember correctly I could not reuse the whole machinery with those
> > restarts from block core because shared tags are shared only between
> > hardware queues, i.e. different hardware queues share different tags sets.
> > IBTRS has many hardware queues (independent RDMA connections) but only one
> > tags set, which is equally shared between block devices.  What I dreamed
> > about is something like BLK_MQ_F_TAG_GLOBALLY_SHARED support in block
> > layer.
>
> A patch series that adds support for sharing tag sets across hardware
> queues is pending. See also "[PATCH V3 0/8] blk-mq & scsi: fix reply
> queue selection and improve host wide tagset"
> (https://lore.kernel.org/linux-block/20180227100750.32299-1-ming.lei@redhat.com/).
> Would that patch series allow to remove the queue management code from
> ibnbd?

Hi Bart,

No, it seems this thingy is a bit different.  According to my
understanding, patches 3 and 4 from this patchset do the
following: 1# split the whole queue depth equally over the number
of hardware queues and 2# return a tag number which is unique
host-wide (more or less similar to unique_tag, right?).

2# is not needed for ibtrs, and 1# can easily be done by dividing
queue_depth by the number of hw queues at tag set allocation, e.g.
something like the following:

    ...
    tags->nr_hw_queues = num_online_cpus();
    tags->queue_depth  = sess->queue_depth / tags->nr_hw_queues;

    blk_mq_alloc_tag_set(tags);


And this trick won't work out for performance.  The ibtrs client
has a single resource: a set of buffer chunks received from the
server side.  These buffers should be dynamically distributed
between IO producers according to the load.  With a hard split
of the whole queue depth between hw queues we can forget about
dynamic load distribution; here is an example:

   - say the server shares 1024 buffer chunks for a session (I do
     not remember the actual number).

   - 1024 buffers are equally divided between hw queues, let's
     say 64 (the number of CPUs), so each queue is 16 requests deep.

   - only several CPUs produce IO, and instead of occupying the
     whole "bandwidth" of a session, i.e. 1024 buffer chunks,
     we limit ourselves to the small queue depth of each hw
     queue.

And performance drops significantly when the number of IO producers
is smaller than the number of hw queues (CPUs); this can be easily
tested and proved.

So for this particular ibtrs case the tags should be globally shared,
and it seems (unfortunately) there are no similar requirements from
other block devices.
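
For illustration only, a minimal sketch of the session-wide permit pool idea
described above (all names here are made up, nothing is taken from the ibtrs
code):

    struct sess_permits {
            atomic_t                free;   /* buffer chunks left, session-wide */
            wait_queue_head_t       wq;     /* producers wait here when exhausted */
    };

    static bool sess_try_get_permit(struct sess_permits *p)
    {
            int old = atomic_read(&p->free);

            while (old > 0)
                    if (atomic_try_cmpxchg(&p->free, &old, old - 1))
                            return true;
            return false;
    }

    static void sess_put_permit(struct sess_permits *p)
    {
            atomic_inc(&p->free);
            wake_up(&p->wq);        /* let a waiting producer retry */
    }

Every hw queue draws from the same pool, so a single busy CPU can consume
all 1024 chunks instead of being capped at 1024 / nr_hw_queues.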

--
Roman

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 03/25] ibtrs: private headers with IBTRS protocol structs and helpers
  2019-09-23 22:50   ` Bart Van Assche
  2019-09-25 21:45     ` Danil Kipnis
@ 2019-09-27  8:56     ` Jinpu Wang
  1 sibling, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27  8:56 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 24, 2019 at 12:50 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +#define P1 )
> > +#define P2 ))
> > +#define P3 )))
> > +#define P4 ))))
> > +#define P(N) P ## N
> > +
> > +#define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)
> > +#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__
> > +
> > +#define LIST(...)                                            \
> > +     __VA_ARGS__,                                            \
> > +     ({ unknown_type(); NULL; })                             \
> > +     CAT(P, COUNT_ARGS(__VA_ARGS__))                         \
> > +
> > +#define EMPTY()
> > +#define DEFER(id) id EMPTY()
> > +
> > +#define _CASE(obj, type, member)                             \
> > +     __builtin_choose_expr(                                  \
> > +     __builtin_types_compatible_p(                           \
> > +             typeof(obj), type),                             \
> > +             ((type)obj)->member
> > +#define CASE(o, t, m) DEFER(_CASE)(o, t, m)
> > +
> > +/*
> > + * Below we define retrieving of sessname from common IBTRS types.
> > + * Client or server related types have to be defined by special
> > + * TYPES_TO_SESSNAME macro.
> > + */
> > +
> > +void unknown_type(void);
> > +
> > +#ifndef TYPES_TO_SESSNAME
> > +#define TYPES_TO_SESSNAME(...) ({ unknown_type(); NULL; })
> > +#endif
> > +
> > +#define ibtrs_prefix(obj)                                    \
> > +     _CASE(obj, struct ibtrs_con *,  sess->sessname),        \
> > +     _CASE(obj, struct ibtrs_sess *, sessname),              \
> > +     TYPES_TO_SESSNAME(obj)                                  \
> > +     ))
>
> No preprocessor voodoo please. Please remove all of the above and modify
> the logging statements such that these pass the proper name string as
> first argument to logging macros.
Sure, will do.
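
As a rough illustration of that style (the macro name and the message are
made up; only the sessname member comes from the quoted code):

    #define ibtrs_err_sess(sessname, fmt, ...) \
            pr_err("ibtrs: %s: " fmt, sessname, ##__VA_ARGS__)

    /* a call site would then read: */
    ibtrs_err_sess(sess->sessname, "Posting send failed, err: %d\n", err);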
>
> > +struct ibtrs_msg_conn_req {
> > +     u8              __cma_version; /* Is set to 0 by cma.c in case of
> > +                                     * AF_IB, do not touch that. */
> > +     u8              __ip_version;  /* On sender side that should be
> > +                                     * set to 0, or cma_save_ip_info()
> > +                                     * extract garbage and will fail. */
> > +     __le16          magic;
> > +     __le16          version;
> > +     __le16          cid;
> > +     __le16          cid_num;
> > +     __le16          recon_cnt;
> > +     uuid_t          sess_uuid;
> > +     uuid_t          paths_uuid;
> > +     u8              reserved[12];
> > +};
>
> Please remove the reserved[] array and check private_data_len in the
> code that receives the login request.
We already check private_data_len on the server side, see ibtrs_rdma_connect,
and keeping some reserved fields for the future seems to be common practice
for a protocol, IMO.
Also, since we are already running the code in production, we want to keep
the protocol compatible, so a future transition can be smooth.
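
For reference, such a length check on the connect path could look roughly
like this (the handler name and the magic constant are illustrative; only
the message struct comes from the quoted code):

    static int validate_conn_req(const struct rdma_conn_param *param)
    {
            const struct ibtrs_msg_conn_req *req = param->private_data;

            if (param->private_data_len < sizeof(*req))
                    return -ECONNRESET;
            if (le16_to_cpu(req->magic) != IBTRS_MAGIC)     /* hypothetical constant */
                    return -ECONNRESET;

            return 0;
    }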
>
> > +/**
> > + * struct ibtrs_msg_conn_rsp - Server connection response to the client
> > + * @magic:      IBTRS magic
> > + * @version:    IBTRS protocol version
> > + * @errno:      If rdma_accept() then 0, if rdma_reject() indicates error
> > + * @queue_depth:   max inflight messages (queue-depth) in this session
> > + * @max_io_size:   max io size server supports
> > + * @max_hdr_size:  max msg header size server supports
> > + *
> > + * NOTE: size is 56 bytes, max possible is 136 bytes, see man rdma_accept().
> > + */
> > +struct ibtrs_msg_conn_rsp {
> > +     __le16          magic;
> > +     __le16          version;
> > +     __le16          errno;
> > +     __le16          queue_depth;
> > +     __le32          max_io_size;
> > +     __le32          max_hdr_size;
> > +     u8              reserved[40];
> > +};
>
> Same comment here: please remove the reserved[] array and check
> private_data_len in the code that processes this data structure.
Ditto.
>
> > +static inline int sockaddr_cmp(const struct sockaddr *a,
> > +                            const struct sockaddr *b)
> > +{
> > +     switch (a->sa_family) {
> > +     case AF_IB:
> > +             return memcmp(&((struct sockaddr_ib *)a)->sib_addr,
> > +                           &((struct sockaddr_ib *)b)->sib_addr,
> > +                           sizeof(struct ib_addr));
> > +     case AF_INET:
> > +             return memcmp(&((struct sockaddr_in *)a)->sin_addr,
> > +                           &((struct sockaddr_in *)b)->sin_addr,
> > +                           sizeof(struct in_addr));
> > +     case AF_INET6:
> > +             return memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > +                           &((struct sockaddr_in6 *)b)->sin6_addr,
> > +                           sizeof(struct in6_addr));
> > +     default:
> > +             return -ENOENT;
> > +     }
> > +}
> > +
> > +static inline int sockaddr_to_str(const struct sockaddr *addr,
> > +                                char *buf, size_t len)
> > +{
> > +     int cnt;
> > +
> > +     switch (addr->sa_family) {
> > +     case AF_IB:
> > +             cnt = scnprintf(buf, len, "gid:%pI6",
> > +                     &((struct sockaddr_ib *)addr)->sib_addr.sib_raw);
> > +             return cnt;
> > +     case AF_INET:
> > +             cnt = scnprintf(buf, len, "ip:%pI4",
> > +                     &((struct sockaddr_in *)addr)->sin_addr);
> > +             return cnt;
> > +     case AF_INET6:
> > +             cnt = scnprintf(buf, len, "ip:%pI6c",
> > +                       &((struct sockaddr_in6 *)addr)->sin6_addr);
> > +             return cnt;
> > +     }
> > +     cnt = scnprintf(buf, len, "<invalid address family>");
> > +     pr_err("Invalid address family\n");
> > +     return cnt;
> > +}
>
> Since these functions are not in the hot path, please move these into a
> .c file.
ok.
>
> > +/**
> > + * ibtrs_invalidate_flag() - returns proper flags for invalidation
> > + *
> > + * NOTE: This function is needed for compat layer, so think twice before
> > + *       rename or remove.
> > + */
> > +static inline u32 ibtrs_invalidate_flag(void)
> > +{
> > +     return IBTRS_MSG_NEED_INVAL_F;
> > +}
>
> An inline function that does nothing else than returning a compile-time
> constant? That does not look useful to me. How about inlining this function?
This is needed for the compat layer; we redefine some FR functions to
use FMR for our ConnectX-2/X-3 HCAs.
https://github.com/ionos-enterprise/ibnbd/tree/master/ibtrs/compat
It will eventually fade out, but it will take time.


Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-27  8:52                     ` Roman Penyaev
@ 2019-09-27  9:32                       ` Danil Kipnis
  2019-09-27 12:18                         ` Danil Kipnis
  2019-09-27 16:37                       ` Bart Van Assche
  1 sibling, 1 reply; 148+ messages in thread
From: Danil Kipnis @ 2019-09-27  9:32 UTC (permalink / raw)
  To: Roman Penyaev
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Fri, Sep 27, 2019 at 10:52 AM Roman Penyaev <r.peniaev@gmail.com> wrote:
>
> No, it seems this thingy is a bit different.  According to my
> understanding patches 3 and 4 from this patchset do the
> following: 1# split equally the whole queue depth on number
> of hardware queues and 2# return tag number which is unique
> host-wide (more or less similar to unique_tag, right?).
>
> 2# is not needed for ibtrs, and 1# can be easy done by dividing
> queue_depth on number of hw queues on tag set allocation, e.g.
> something like the following:
>
>     ...
>     tags->nr_hw_queues = num_online_cpus();
>     tags->queue_depth  = sess->queue_deph / tags->nr_hw_queues;
>
>     blk_mq_alloc_tag_set(tags);
>
>
> And this trick won't work out for the performance.  ibtrs client
> has a single resource: set of buffer chunks received from a
> server side.  And these buffers should be dynamically distributed
> between IO producers according to the load.  Having a hard split
> of the whole queue depth between hw queues we can forget about a
> dynamic load distribution, here is an example:
>
>    - say server shares 1024 buffer chunks for a session (do not
>      remember what is the actual number).
>
>    - 1024 buffers are equally divided between hw queues, let's
>      say 64 (number of cpus), so each queue is 16 requests depth.
>
>    - only several CPUs produce IO, and instead of occupying the
>      whole "bandwidth" of a session, i.e. 1024 buffer chunks,
>      we limit ourselves to a small queue depth of an each hw
>      queue.
>
> And performance drops significantly when number of IO producers
> is smaller than number of hw queues (CPUs), and it can be easily
> tested and proved.
>
> So for this particular ibtrs case tags should be globally shared,
> and seems (unfortunately) there is no any other similar requirements
> for other block devices.
I don't see any difference between what you describe here and 100 dm
volumes sitting on top of a single NVME device.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 04/25] ibtrs: core: lib functions shared between client and server modules
  2019-09-23 23:03   ` Bart Van Assche
@ 2019-09-27 10:13     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27 10:13 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Tue, Sep 24, 2019 at 1:03 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +static int ibtrs_str_gid_to_sockaddr(const char *addr, size_t len,
> > +                                  short port, struct sockaddr_storage *dst)
> > +{
> > +     struct sockaddr_ib *dst_ib = (struct sockaddr_ib *)dst;
> > +     int ret;
> > +
> > +     /*
> > +      * We can use some of the I6 functions since GID is a valid
> > +      * IPv6 address format
> > +      */
> > +     ret = in6_pton(addr, len, dst_ib->sib_addr.sib_raw, '\0', NULL);
> > +     if (ret == 0)
> > +             return -EINVAL;
> > +
> > +     dst_ib->sib_family = AF_IB;
> > +     /*
> > +      * Use the same TCP server port number as the IB service ID
> > +      * on the IB port space range
> > +      */
> > +     dst_ib->sib_sid = cpu_to_be64(RDMA_IB_IP_PS_IB | port);
> > +     dst_ib->sib_sid_mask = cpu_to_be64(0xffffffffffffffffULL);
> > +     dst_ib->sib_pkey = cpu_to_be16(0xffff);
> > +
> > +     return 0;
> > +}
> > +
> > +/**
> > + * ibtrs_str_to_sockaddr() - Convert ibtrs address string to sockaddr
> > + * @addr     String representation of an addr (IPv4, IPv6 or IB GID):
> > + *              - "ip:192.168.1.1"
> > + *              - "ip:fe80::200:5aee:feaa:20a2"
> > + *              - "gid:fe80::200:5aee:feaa:20a2"
> > + * @len         String address length
> > + * @port     Destination port
> > + * @dst              Destination sockaddr structure
> > + *
> > + * Returns 0 if conversion successful. Non-zero on error.
> > + */
> > +static int ibtrs_str_to_sockaddr(const char *addr, size_t len,
> > +                              short port, struct sockaddr_storage *dst)
> > +{
> > +     if (strncmp(addr, "gid:", 4) == 0) {
> > +             return ibtrs_str_gid_to_sockaddr(addr + 4, len - 4, port, dst);
> > +     } else if (strncmp(addr, "ip:", 3) == 0) {
> > +             char port_str[8];
> > +             char *cpy;
> > +             int err;
> > +
> > +             snprintf(port_str, sizeof(port_str), "%u", port);
> > +             cpy = kstrndup(addr + 3, len - 3, GFP_KERNEL);
> > +             err = cpy ? inet_pton_with_scope(&init_net, AF_UNSPEC,
> > +                                              cpy, port_str, dst) : -ENOMEM;
> > +             kfree(cpy);
> > +
> > +             return err;
> > +     }
> > +     return -EPROTONOSUPPORT;
> > +}
>
> A considerable amount of code is required to support the IB/CM. Does
> supporting the IB/CM add any value? If that code would be left out,
> would anything break? Is it really useful to support IB networks where
> no IP address has been assigned to each IB port?

We had quite some problems with IPoIB in the past, especially with neighbor
discovery: from time to time we encountered that some IPs were not reachable
from other hosts.

That's why we want to have AF_IB support, which doesn't rely on IPoIB.

Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 05/25] ibtrs: client: private header with client structs and functions
  2019-09-23 23:05   ` Bart Van Assche
@ 2019-09-27 10:18     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27 10:18 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 24, 2019 at 1:05 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +static inline const char *ibtrs_clt_state_str(enum ibtrs_clt_state state)
> > +{
> > +     switch (state) {
> > +     case IBTRS_CLT_CONNECTING:
> > +             return "IBTRS_CLT_CONNECTING";
> > +     case IBTRS_CLT_CONNECTING_ERR:
> > +             return "IBTRS_CLT_CONNECTING_ERR";
> > +     case IBTRS_CLT_RECONNECTING:
> > +             return "IBTRS_CLT_RECONNECTING";
> > +     case IBTRS_CLT_CONNECTED:
> > +             return "IBTRS_CLT_CONNECTED";
> > +     case IBTRS_CLT_CLOSING:
> > +             return "IBTRS_CLT_CLOSING";
> > +     case IBTRS_CLT_CLOSED:
> > +             return "IBTRS_CLT_CLOSED";
> > +     case IBTRS_CLT_DEAD:
> > +             return "IBTRS_CLT_DEAD";
> > +     default:
> > +             return "UNKNOWN";
> > +     }
> > +}
>
> Since this code is not in the hot path, please move it from a .h into a
> .c file.
ok.
>
> > +static inline struct ibtrs_clt_con *to_clt_con(struct ibtrs_con *c)
> > +{
> > +     return container_of(c, struct ibtrs_clt_con, c);
> > +}
> > +
> > +static inline struct ibtrs_clt_sess *to_clt_sess(struct ibtrs_sess *s)
> > +{
> > +     return container_of(s, struct ibtrs_clt_sess, s);
> > +}
>
> Is it really useful to define functions for these conversions? Has it
> been considered to inline these functions?
We use them in quite a few places; it does make the code shorter.
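
For illustration, the difference at a call site (a sketch, not a quote from
the driver):

    /* with the helper: */
    struct ibtrs_clt_con *con = to_clt_con(c);

    /* open-coded at every call site otherwise: */
    struct ibtrs_clt_con *con = container_of(c, struct ibtrs_clt_con, c);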


Thanks
Jinpu Wang

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 07/25] ibtrs: client: statistics functions
  2019-09-23 23:15   ` Bart Van Assche
@ 2019-09-27 12:00     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27 12:00 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Tue, Sep 24, 2019 at 1:15 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +void ibtrs_clt_update_rdma_lat(struct ibtrs_clt_stats *stats, bool read,
> > +                            unsigned long ms)
> > +{
> > +     struct ibtrs_clt_stats_pcpu *s;
> > +     int id;
> > +
> > +     id = ibtrs_clt_ms_to_id(ms);
> > +     s = this_cpu_ptr(stats->pcpu_stats);
> > +     if (read) {
> > +             s->rdma_lat_distr[id].read++;
> > +             if (s->rdma_lat_max.read < ms)
> > +                     s->rdma_lat_max.read = ms;
> > +     } else {
> > +             s->rdma_lat_distr[id].write++;
> > +             if (s->rdma_lat_max.write < ms)
> > +                     s->rdma_lat_max.write = ms;
> > +     }
> > +}
>
> Can it happen that this function is called simultaneously from thread
> context and from interrupt context?
This can't happen: we only call the function from complete_rdma_req, and
complete_rdma_req is called from the CQ callback, except for
fail_all_outstanding_reqs. The CQ callback context is softirq, while
fail_all_outstanding_reqs runs in process context, but we disconnect and
drain the QP before calling into fail_all_outstanding_reqs.

>
> > +void ibtrs_clt_update_wc_stats(struct ibtrs_clt_con *con)
> > +{
> > +     struct ibtrs_clt_sess *sess = to_clt_sess(con->c.sess);
> > +     struct ibtrs_clt_stats *stats = &sess->stats;
> > +     struct ibtrs_clt_stats_pcpu *s;
> > +     int cpu;
> > +
> > +     cpu = raw_smp_processor_id();
> > +     s = this_cpu_ptr(stats->pcpu_stats);
> > +     s->wc_comp.cnt++;
> > +     s->wc_comp.total_cnt++;
> > +     if (unlikely(con->cpu != cpu)) {
> > +             s->cpu_migr.to++;
> > +
> > +             /* Careful here, override s pointer */
> > +             s = per_cpu_ptr(stats->pcpu_stats, con->cpu);
> > +             atomic_inc(&s->cpu_migr.from);
> > +     }
> > +}
>
> Same question here.
The function is only called from the CQ done callback.
>
> > +void ibtrs_clt_inc_failover_cnt(struct ibtrs_clt_stats *stats)
> > +{
> > +     struct ibtrs_clt_stats_pcpu *s;
> > +
> > +     s = this_cpu_ptr(stats->pcpu_stats);
> > +     s->rdma.failover_cnt++;
> > +}
>
> And here ...
This function is only called from process context.

>
> Thanks,
>
> Bart.
Thanks,
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 09/25] ibtrs: server: private header with server structs and functions
  2019-09-23 23:21   ` Bart Van Assche
@ 2019-09-27 12:04     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27 12:04 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 24, 2019 at 1:21 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +static inline const char *ibtrs_srv_state_str(enum ibtrs_srv_state state)
> > +{
> > +     switch (state) {
> > +     case IBTRS_SRV_CONNECTING:
> > +             return "IBTRS_SRV_CONNECTING";
> > +     case IBTRS_SRV_CONNECTED:
> > +             return "IBTRS_SRV_CONNECTED";
> > +     case IBTRS_SRV_CLOSING:
> > +             return "IBTRS_SRV_CLOSING";
> > +     case IBTRS_SRV_CLOSED:
> > +             return "IBTRS_SRV_CLOSED";
> > +     default:
> > +             return "UNKNOWN";
> > +     }
> > +}
>
> Since this function is not in the hot path, please move it into a .c file.
Ok.
>
> > +/* See ibtrs-log.h */
> > +#define TYPES_TO_SESSNAME(obj)                                               \
> > +     LIST(CASE(obj, struct ibtrs_srv_sess *, s.sessname))
>
> Please remove this macro and pass 'sessname' explicitly to logging
> functions.
Ok.
>
> Thanks,
>
> Bart.
Thanks!

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-27  9:32                       ` Danil Kipnis
@ 2019-09-27 12:18                         ` Danil Kipnis
  0 siblings, 0 replies; 148+ messages in thread
From: Danil Kipnis @ 2019-09-27 12:18 UTC (permalink / raw)
  To: Roman Penyaev, Christoph Hellwig
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Sagi Grimberg, Jason Gunthorpe, Doug Ledford, rpenyaev,
	Jack Wang


On 27.09.19 11:32, Danil Kipnis wrote:
> On Fri, Sep 27, 2019 at 10:52 AM Roman Penyaev <r.peniaev@gmail.com> wrote:
>> No, it seems this thingy is a bit different.  According to my
>> understanding patches 3 and 4 from this patchset do the
>> following: 1# split equally the whole queue depth on number
>> of hardware queues and 2# return tag number which is unique
>> host-wide (more or less similar to unique_tag, right?).
>>
>> 2# is not needed for ibtrs, and 1# can be easy done by dividing
>> queue_depth on number of hw queues on tag set allocation, e.g.
>> something like the following:
>>
>>      ...
>>      tags->nr_hw_queues = num_online_cpus();
>>      tags->queue_depth  = sess->queue_deph / tags->nr_hw_queues;
>>
>>      blk_mq_alloc_tag_set(tags);
>>
>>
>> And this trick won't work out for the performance.  ibtrs client
>> has a single resource: set of buffer chunks received from a
>> server side.  And these buffers should be dynamically distributed
>> between IO producers according to the load.  Having a hard split
>> of the whole queue depth between hw queues we can forget about a
>> dynamic load distribution, here is an example:
>>
>>     - say server shares 1024 buffer chunks for a session (do not
>>       remember what is the actual number).
>>
>>     - 1024 buffers are equally divided between hw queues, let's
>>       say 64 (number of cpus), so each queue is 16 requests depth.
>>
>>     - only several CPUs produce IO, and instead of occupying the
>>       whole "bandwidth" of a session, i.e. 1024 buffer chunks,
>>       we limit ourselves to a small queue depth of an each hw
>>       queue.
>>
>> And performance drops significantly when number of IO producers
>> is smaller than number of hw queues (CPUs), and it can be easily
>> tested and proved.
>>
>> So for this particular ibtrs case tags should be globally shared,
>> and seems (unfortunately) there is no any other similar requirements
>> for other block devices.
> I don't see any difference between what you describe here and 100 dm
> volumes sitting on top of a single NVME device.

Hello Christoph,

am I wrong?

Thank you,

Danil.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 10/25] ibtrs: server: main functionality
  2019-09-23 23:49   ` Bart Van Assche
@ 2019-09-27 15:03     ` Jinpu Wang
  2019-09-27 15:11       ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27 15:03 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 24, 2019 at 1:49 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +module_param_named(max_chunk_size, max_chunk_size, int, 0444);
> > +MODULE_PARM_DESC(max_chunk_size,
> > +              "Max size for each IO request, when change the unit is in byte"
> > +              " (default: " __stringify(DEFAULT_MAX_CHUNK_SIZE_KB) "KB)");
>
> Where can I find the definition of DEFAULT_MAX_CHUNK_SIZE_KB?
Oh, it's a typo; it should be DEFAULT_MAX_CHUNK_SIZE.
>
> > +static char cq_affinity_list[256] = "";
>
> No empty initializers for file-scope variables please.
Is it guaranteed by the compiler that file-scope variables will be
zero-initialized?
>
> > +     pr_info("cq_affinity_list changed to %*pbl\n",
> > +             cpumask_pr_args(&cq_affinity_mask));
>
> Should this pr_info() call perhaps be changed into pr_debug()?
Because the setting could lead to a performance drop, pr_info seems more
appropriate.

>
> > +static bool __ibtrs_srv_change_state(struct ibtrs_srv_sess *sess,
> > +                                  enum ibtrs_srv_state new_state)
> > +{
> > +     enum ibtrs_srv_state old_state;
> > +     bool changed = false;
> > +
> > +     old_state = sess->state;
> > +     switch (new_state) {
>
> Please add a lockdep_assert_held() statement that checks whether calls
> of this function are serialized properly.
will look into it.
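
A minimal sketch of what that could look like, assuming the state
transitions are serialized by a per-session lock (the lock member name is
hypothetical):

    static bool __ibtrs_srv_change_state(struct ibtrs_srv_sess *sess,
                                         enum ibtrs_srv_state new_state)
    {
            lockdep_assert_held(&sess->state_lock);  /* hypothetical member */

            /* ... existing switch on new_state ... */
    }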
>
> > +/**
> > + * rdma_write_sg() - response on successful READ request
> > + */
> > +static int rdma_write_sg(struct ibtrs_srv_op *id)
> > +{
> > +     struct ibtrs_srv_sess *sess = to_srv_sess(id->con->c.sess);
> > +     dma_addr_t dma_addr = sess->dma_addr[id->msg_id];
> > +     struct ibtrs_srv *srv = sess->srv;
> > +     struct ib_send_wr inv_wr, imm_wr;
> > +     struct ib_rdma_wr *wr = NULL;
> > +     const struct ib_send_wr *bad_wr;
> > +     enum ib_send_flags flags;
> > +     size_t sg_cnt;
> > +     int err, i, offset;
> > +     bool need_inval;
> > +     u32 rkey = 0;
> > +
> > +     sg_cnt = le16_to_cpu(id->rd_msg->sg_cnt);
> > +     need_inval = le16_to_cpu(id->rd_msg->flags) & IBTRS_MSG_NEED_INVAL_F;
> > +     if (unlikely(!sg_cnt))
> > +             return -EINVAL;
> > +
> > +     offset = 0;
> > +     for (i = 0; i < sg_cnt; i++) {
> > +             struct ib_sge *list;
> > +
> > +             wr              = &id->tx_wr[i];
> > +             list            = &id->tx_sg[i];
> > +             list->addr      = dma_addr + offset;
> > +             list->length    = le32_to_cpu(id->rd_msg->desc[i].len);
> > +
> > +             /* WR will fail with length error
> > +              * if this is 0
> > +              */
> > +             if (unlikely(list->length == 0)) {
> > +                     ibtrs_err(sess, "Invalid RDMA-Write sg list length 0\n");
> > +                     return -EINVAL;
> > +             }
> > +
> > +             list->lkey = sess->s.dev->ib_pd->local_dma_lkey;
> > +             offset += list->length;
> > +
> > +             wr->wr.wr_cqe   = &io_comp_cqe;
> > +             wr->wr.sg_list  = list;
> > +             wr->wr.num_sge  = 1;
> > +             wr->remote_addr = le64_to_cpu(id->rd_msg->desc[i].addr);
> > +             wr->rkey        = le32_to_cpu(id->rd_msg->desc[i].key);
> > +             if (rkey == 0)
> > +                     rkey = wr->rkey;
> > +             else
> > +                     /* Only one key is actually used */
> > +                     WARN_ON_ONCE(rkey != wr->rkey);
> > +
> > +             if (i < (sg_cnt - 1))
> > +                     wr->wr.next = &id->tx_wr[i + 1].wr;
> > +             else if (need_inval)
> > +                     wr->wr.next = &inv_wr;
> > +             else
> > +                     wr->wr.next = &imm_wr;
> > +
> > +             wr->wr.opcode = IB_WR_RDMA_WRITE;
> > +             wr->wr.ex.imm_data = 0;
> > +             wr->wr.send_flags  = 0;
> > +     }
> > +     /*
> > +      * From time to time we have to post signalled sends,
> > +      * or send queue will fill up and only QP reset can help.
> > +      */
> > +     flags = atomic_inc_return(&id->con->wr_cnt) % srv->queue_depth ?
> > +                     0 : IB_SEND_SIGNALED;
> > +
> > +     if (need_inval) {
> > +             inv_wr.next = &imm_wr;
> > +             inv_wr.wr_cqe = &io_comp_cqe;
> > +             inv_wr.sg_list = NULL;
> > +             inv_wr.num_sge = 0;
> > +             inv_wr.opcode = IB_WR_SEND_WITH_INV;
> > +             inv_wr.send_flags = 0;
> > +             inv_wr.ex.invalidate_rkey = rkey;
> > +     }
> > +     imm_wr.next = NULL;
> > +     imm_wr.wr_cqe = &io_comp_cqe;
> > +     imm_wr.sg_list = NULL;
> > +     imm_wr.num_sge = 0;
> > +     imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM;
> > +     imm_wr.send_flags = flags;
> > +     imm_wr.ex.imm_data = cpu_to_be32(ibtrs_to_io_rsp_imm(id->msg_id,
> > +                                                          0, need_inval));
> > +
> > +     ib_dma_sync_single_for_device(sess->s.dev->ib_dev, dma_addr,
> > +                                   offset, DMA_BIDIRECTIONAL);
> > +
> > +     err = ib_post_send(id->con->c.qp, &id->tx_wr[0].wr, &bad_wr);
> > +     if (unlikely(err))
> > +             ibtrs_err(sess,
> > +                       "Posting RDMA-Write-Request to QP failed, err: %d\n",
> > +                       err);
> > +
> > +     return err;
> > +}
>
> All other RDMA server implementations use rdma_rw_ctx_init() and
> rdma_rw_ctx_wrs(). Please use these functions in IBTRS too.
The rdma_rw_ctx_* API doesn't support RDMA_WRITE_WITH_IMM, and
ibtrs mainly uses RDMA_WRITE_WITH_IMM.
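
For context, if that were ever revisited, one conceivable shape would be to
append a zero-length WRITE_WITH_IMM behind the rdma_rw-generated WRs via the
chain_wr argument of rdma_rw_ctx_wrs(); the sketch below is illustrative,
not from the patch set, and whether it covers all ibtrs cases is an open
question:

    static int rdma_write_sg_rw_api(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
                                    u8 port_num, struct scatterlist *sg,
                                    u32 sg_cnt, u64 remote_addr, u32 rkey,
                                    u32 imm_data, struct ib_cqe *cqe)
    {
            struct ib_send_wr imm_wr = {
                    .opcode         = IB_WR_RDMA_WRITE_WITH_IMM,
                    .send_flags     = IB_SEND_SIGNALED,
                    .ex.imm_data    = cpu_to_be32(imm_data),
                    .wr_cqe         = cqe,
            };
            const struct ib_send_wr *bad_wr;
            struct ib_send_wr *first;
            int ret;

            ret = rdma_rw_ctx_init(ctx, qp, port_num, sg, sg_cnt, 0,
                                   remote_addr, rkey, DMA_TO_DEVICE);
            if (ret < 0)
                    return ret;

            /* rdma_rw_ctx_wrs() returns the head of the WR chain with
             * imm_wr appended at its end, so one ib_post_send() posts
             * the data transfer and the immediate notification together.
             */
            first = rdma_rw_ctx_wrs(ctx, qp, port_num, cqe, &imm_wr);
            return ib_post_send(qp, first, &bad_wr);
    }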

>
> > +static void ibtrs_srv_hb_err_handler(struct ibtrs_con *c, int err)
> > +{
> > +     (void)err;
> > +     close_sess(to_srv_sess(c->sess));
> > +}
>
> Is the (void)err statement really necessary?
No, will be removed.
>
> > +static int ibtrs_srv_rdma_init(struct ibtrs_srv_ctx *ctx, unsigned int port)
> > +{
> > +     struct sockaddr_in6 sin = {
> > +             .sin6_family    = AF_INET6,
> > +             .sin6_addr      = IN6ADDR_ANY_INIT,
> > +             .sin6_port      = htons(port),
> > +     };
> > +     struct sockaddr_ib sib = {
> > +             .sib_family                     = AF_IB,
> > +             .sib_addr.sib_subnet_prefix     = 0ULL,
> > +             .sib_addr.sib_interface_id      = 0ULL,
> > +             .sib_sid        = cpu_to_be64(RDMA_IB_IP_PS_IB | port),
> > +             .sib_sid_mask   = cpu_to_be64(0xffffffffffffffffULL),
> > +             .sib_pkey       = cpu_to_be16(0xffff),
> > +     };
> > +     struct rdma_cm_id *cm_ip, *cm_ib;
> > +     int ret;
> > +
> > +     /*
> > +      * We accept both IPoIB and IB connections, so we need to keep
> > +      * two cm id's, one for each socket type and port space.
> > +      * If the cm initialization of one of the id's fails, we abort
> > +      * everything.
> > +      */
> > +     cm_ip = ibtrs_srv_cm_init(ctx, (struct sockaddr *)&sin, RDMA_PS_TCP);
> > +     if (unlikely(IS_ERR(cm_ip)))
> > +             return PTR_ERR(cm_ip);
> > +
> > +     cm_ib = ibtrs_srv_cm_init(ctx, (struct sockaddr *)&sib, RDMA_PS_IB);
> > +     if (unlikely(IS_ERR(cm_ib))) {
> > +             ret = PTR_ERR(cm_ib);
> > +             goto free_cm_ip;
> > +     }
> > +
> > +     ctx->cm_id_ip = cm_ip;
> > +     ctx->cm_id_ib = cm_ib;
> > +
> > +     return 0;
> > +
> > +free_cm_ip:
> > +     rdma_destroy_id(cm_ip);
> > +
> > +     return ret;
> > +}
>
> Will the above work if CONFIG_IPV6=n?
I tested with CONFIG_IPV6=n, it compiles.
>
> > +static int __init ibtrs_server_init(void)
> > +{
> > +     int err;
> > +
> > +     if (!strlen(cq_affinity_list))
> > +             init_cq_affinity();
>
> Is the above if-test useful? Can that if-test be left out?
You're right, will remove.
>
> Thanks,
>
> Bart.
Thanks!

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 10/25] ibtrs: server: main functionality
  2019-09-27 15:03     ` Jinpu Wang
@ 2019-09-27 15:11       ` Bart Van Assche
  2019-09-27 15:19         ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-27 15:11 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On 9/27/19 8:03 AM, Jinpu Wang wrote:
> On Tue, Sep 24, 2019 at 1:49 AM Bart Van Assche <bvanassche@acm.org> wrote:
>> On 6/20/19 8:03 AM, Jack Wang wrote:
>>> +static char cq_affinity_list[256] = "";
>>
>> No empty initializers for file-scope variables please.
 >
> Is it guaranteed by the compiler, the file-scope variables will be
> empty initialized?

That is guaranteed by the C standard. See also 
https://stackoverflow.com/questions/3373108/why-are-static-variables-auto-initialized-to-zero.
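
A tiny standalone example of that guarantee (objects with static storage
duration are zero-initialized when no initializer is given):

    #include <stdio.h>

    static char cq_affinity_list[256];  /* all bytes are 0, no "" needed */
    static int threshold;               /* 0 as well */

    int main(void)
    {
            printf("%d %d\n", cq_affinity_list[0], threshold);  /* prints: 0 0 */
            return 0;
    }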

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 10/25] ibtrs: server: main functionality
  2019-09-27 15:11       ` Bart Van Assche
@ 2019-09-27 15:19         ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-09-27 15:19 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Fri, Sep 27, 2019 at 5:11 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/27/19 8:03 AM, Jinpu Wang wrote:
> > On Tue, Sep 24, 2019 at 1:49 AM Bart Van Assche <bvanassche@acm.org> wrote:
> >> On 6/20/19 8:03 AM, Jack Wang wrote:
> >>> +static char cq_affinity_list[256] = "";
> >>
> >> No empty initializers for file-scope variables please.
>  >
> > Is it guaranteed by the compiler, the file-scope variables will be
> > empty initialized?
>
> That is guaranteed by the C standard. See also
> https://stackoverflow.com/questions/3373108/why-are-static-variables-auto-initialized-to-zero.
>
> Bart.
Thanks, will remove the initializer.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-27  8:52                     ` Roman Penyaev
  2019-09-27  9:32                       ` Danil Kipnis
@ 2019-09-27 16:37                       ` Bart Van Assche
  2019-09-27 16:50                         ` Roman Penyaev
  1 sibling, 1 reply; 148+ messages in thread
From: Bart Van Assche @ 2019-09-27 16:37 UTC (permalink / raw)
  To: Roman Penyaev
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/27/19 1:52 AM, Roman Penyaev wrote:
> No, it seems this thingy is a bit different.  According to my
> understanding patches 3 and 4 from this patchset do the
> following: 1# split equally the whole queue depth on number
> of hardware queues and 2# return tag number which is unique
> host-wide (more or less similar to unique_tag, right?).
> 
> 2# is not needed for ibtrs, and 1# can be easy done by dividing
> queue_depth on number of hw queues on tag set allocation, e.g.
> something like the following:
> 
>      ...
>      tags->nr_hw_queues = num_online_cpus();
>      tags->queue_depth  = sess->queue_deph / tags->nr_hw_queues;
> 
>      blk_mq_alloc_tag_set(tags);
> 
> 
> And this trick won't work out for the performance.  ibtrs client
> has a single resource: set of buffer chunks received from a
> server side.  And these buffers should be dynamically distributed
> between IO producers according to the load.  Having a hard split
> of the whole queue depth between hw queues we can forget about a
> dynamic load distribution, here is an example:
> 
>     - say server shares 1024 buffer chunks for a session (do not
>       remember what is the actual number).
> 
>     - 1024 buffers are equally divided between hw queues, let's
>       say 64 (number of cpus), so each queue is 16 requests depth.
> 
>     - only several CPUs produce IO, and instead of occupying the
>       whole "bandwidth" of a session, i.e. 1024 buffer chunks,
>       we limit ourselves to a small queue depth of an each hw
>       queue.
> 
> And performance drops significantly when number of IO producers
> is smaller than number of hw queues (CPUs), and it can be easily
> tested and proved.
> 
> So for this particular ibtrs case tags should be globally shared,
> and seems (unfortunately) there is no any other similar requirements
> for other block devices.

Hi Roman,

I agree that BLK_MQ_F_HOST_TAGS partitions a tag set across hardware 
queues while ibnbd shares a single tag set across multiple hardware 
queues. Since such sharing may be useful for other block drivers, isn't 
that something that should be implemented in the block layer core 
instead of in the ibnbd driver? If that logic would be moved into the 
block layer core, would that allow to reuse the queue restarting logic 
that already exists in the block layer core?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-27 16:37                       ` Bart Van Assche
@ 2019-09-27 16:50                         ` Roman Penyaev
  2019-09-27 17:16                           ` Bart Van Assche
  0 siblings, 1 reply; 148+ messages in thread
From: Roman Penyaev @ 2019-09-27 16:50 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On Fri, Sep 27, 2019 at 6:37 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 9/27/19 1:52 AM, Roman Penyaev wrote:
> > No, it seems this thingy is a bit different.  According to my
> > understanding patches 3 and 4 from this patchset do the
> > following: 1# split equally the whole queue depth on number
> > of hardware queues and 2# return tag number which is unique
> > host-wide (more or less similar to unique_tag, right?).
> >
> > 2# is not needed for ibtrs, and 1# can be easy done by dividing
> > queue_depth on number of hw queues on tag set allocation, e.g.
> > something like the following:
> >
> >      ...
> >      tags->nr_hw_queues = num_online_cpus();
> >      tags->queue_depth  = sess->queue_deph / tags->nr_hw_queues;
> >
> >      blk_mq_alloc_tag_set(tags);
> >
> >
> > And this trick won't work out for the performance.  ibtrs client
> > has a single resource: set of buffer chunks received from a
> > server side.  And these buffers should be dynamically distributed
> > between IO producers according to the load.  Having a hard split
> > of the whole queue depth between hw queues we can forget about a
> > dynamic load distribution, here is an example:
> >
> >     - say server shares 1024 buffer chunks for a session (do not
> >       remember what is the actual number).
> >
> >     - 1024 buffers are equally divided between hw queues, let's
> >       say 64 (number of cpus), so each queue is 16 requests depth.
> >
> >     - only several CPUs produce IO, and instead of occupying the
> >       whole "bandwidth" of a session, i.e. 1024 buffer chunks,
> >       we limit ourselves to a small queue depth of an each hw
> >       queue.
> >
> > And performance drops significantly when number of IO producers
> > is smaller than number of hw queues (CPUs), and it can be easily
> > tested and proved.
> >
> > So for this particular ibtrs case tags should be globally shared,
> > and seems (unfortunately) there is no any other similar requirements
> > for other block devices.
>
> Hi Roman,
>
> I agree that BLK_MQ_F_HOST_TAGS partitions a tag set across hardware
> queues while ibnbd shares a single tag set across multiple hardware
> queues. Since such sharing may be useful for other block drivers, isn't
> that something that should be implemented in the block layer core
> instead of in the ibnbd driver? If that logic would be moved into the
> block layer core, would that allow to reuse the queue restarting logic
> that already exists in the block layer core?

Definitely yes, but what other block drivers do you have in mind?

--
Roman

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 17/25] ibnbd: client: main functionality
  2019-09-27 16:50                         ` Roman Penyaev
@ 2019-09-27 17:16                           ` Bart Van Assche
  0 siblings, 0 replies; 148+ messages in thread
From: Bart Van Assche @ 2019-09-27 17:16 UTC (permalink / raw)
  To: Roman Penyaev
  Cc: Danil Kipnis, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	rpenyaev, Jack Wang

On 9/27/19 9:50 AM, Roman Penyaev wrote:
> On Fri, Sep 27, 2019 at 6:37 PM Bart Van Assche <bvanassche@acm.org> wrote:
>> I agree that BLK_MQ_F_HOST_TAGS partitions a tag set across hardware
>> queues while ibnbd shares a single tag set across multiple hardware
>> queues. Since such sharing may be useful for other block drivers, isn't
>> that something that should be implemented in the block layer core
>> instead of in the ibnbd driver? If that logic would be moved into the
>> block layer core, would that allow to reuse the queue restarting logic
>> that already exists in the block layer core?
> 
> Definitely yes, but what other block drivers you have in mind?

I'd like to hear the opinion of Jens and Christoph about this topic. My 
concern is that if the code for sharing a tag set across hwqs stays in 
the ibnbd driver and another block driver that needs the same logic is 
submitted in the future, then, in order to end up with a single 
implementation of the tag set sharing code, the authors of the new 
driver would have to be asked to modify the ibnbd driver. I think it 
would be inappropriate to ask the authors of such a new driver to modify 
the ibnbd driver.
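
Purely as a sketch of what such a core-level opt-in might look like from the
driver side (BLK_MQ_F_TAG_GLOBALLY_SHARED is the name Roman wished for
earlier; it does not exist today, and the ops/queue-depth names are made up):

    struct blk_mq_tag_set tag_set = {
            .ops            = &ibnbd_mq_ops,        /* hypothetical */
            .nr_hw_queues   = num_online_cpus(),
            .queue_depth    = sess_queue_depth,     /* full session depth, not split */
            .numa_node      = NUMA_NO_NODE,
            .flags          = BLK_MQ_F_SHOULD_MERGE |
                              BLK_MQ_F_TAG_GLOBALLY_SHARED,  /* hypothetical flag */
    };

    err = blk_mq_alloc_tag_set(&tag_set);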

Bart.


^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 12/25] ibtrs: server: sysfs interface functions
  2019-09-24  0:00   ` Bart Van Assche
@ 2019-10-02 15:11     ` Jinpu Wang
  0 siblings, 0 replies; 148+ messages in thread
From: Jinpu Wang @ 2019-10-02 15:11 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev, Roman Pen

On Tue, Sep 24, 2019 at 2:00 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +static void ibtrs_srv_dev_release(struct device *dev)
> > +{
> > +     /* Nobody plays with device references, so nop */
> > +}
>
> I doubt that the above comment is correct.
>
> Thanks,
>
> Bart.
will fix it,

Thank you, Bart!

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 11/25] ibtrs: server: statistics functions
  2019-09-23 23:56   ` Bart Van Assche
@ 2019-10-02 15:15     ` Jinpu Wang
  2019-10-02 15:42       ` Leon Romanovsky
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-10-02 15:15 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Tue, Sep 24, 2019 at 1:56 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/20/19 8:03 AM, Jack Wang wrote:
> > +ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
> > +                                 char *page, size_t len)
> > +{
> > +     struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
> > +     struct ibtrs_srv_sess *sess;
> > +
> > +     sess = container_of(stats, typeof(*sess), stats);
> > +
> > +     return scnprintf(page, len, "%lld %lld %lld %lld %u\n",
> > +                      (s64)atomic64_read(&r->dir[READ].cnt),
> > +                      (s64)atomic64_read(&r->dir[READ].size_total),
> > +                      (s64)atomic64_read(&r->dir[WRITE].cnt),
> > +                      (s64)atomic64_read(&r->dir[WRITE].size_total),
> > +                      atomic_read(&sess->ids_inflight));
> > +}
>
> Does this follow the sysfs one-value-per-file rule? See also
> Documentation/filesystems/sysfs.txt.
>
> Thanks,
>
> Bart.
It looks like overkill to me to create one file for each value, and there
are enough stats in sysfs that contain multiple values.

Thanks
Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 11/25] ibtrs: server: statistics functions
  2019-10-02 15:15     ` Jinpu Wang
@ 2019-10-02 15:42       ` Leon Romanovsky
  2019-10-02 15:45         ` Jinpu Wang
  0 siblings, 1 reply; 148+ messages in thread
From: Leon Romanovsky @ 2019-10-02 15:42 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Wed, Oct 02, 2019 at 05:15:10PM +0200, Jinpu Wang wrote:
> On Tue, Sep 24, 2019 at 1:56 AM Bart Van Assche <bvanassche@acm.org> wrote:
> >
> > On 6/20/19 8:03 AM, Jack Wang wrote:
> > > +ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
> > > +                                 char *page, size_t len)
> > > +{
> > > +     struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
> > > +     struct ibtrs_srv_sess *sess;
> > > +
> > > +     sess = container_of(stats, typeof(*sess), stats);
> > > +
> > > +     return scnprintf(page, len, "%lld %lld %lld %lld %u\n",
> > > +                      (s64)atomic64_read(&r->dir[READ].cnt),
> > > +                      (s64)atomic64_read(&r->dir[READ].size_total),
> > > +                      (s64)atomic64_read(&r->dir[WRITE].cnt),
> > > +                      (s64)atomic64_read(&r->dir[WRITE].size_total),
> > > +                      atomic_read(&sess->ids_inflight));
> > > +}
> >
> > Does this follow the sysfs one-value-per-file rule? See also
> > Documentation/filesystems/sysfs.txt.
> >
> > Thanks,
> >
> > Bart.
> It looks overkill to create one file for each value to me, and there
> are enough stats in sysfs contain multiple values.

Not for statistics.

Thanks

>
> Thanks
> Jinpu

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 11/25] ibtrs: server: statistics functions
  2019-10-02 15:42       ` Leon Romanovsky
@ 2019-10-02 15:45         ` Jinpu Wang
  2019-10-02 16:00           ` Leon Romanovsky
  0 siblings, 1 reply; 148+ messages in thread
From: Jinpu Wang @ 2019-10-02 15:45 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Wed, Oct 2, 2019 at 5:42 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Wed, Oct 02, 2019 at 05:15:10PM +0200, Jinpu Wang wrote:
> > On Tue, Sep 24, 2019 at 1:56 AM Bart Van Assche <bvanassche@acm.org> wrote:
> > >
> > > On 6/20/19 8:03 AM, Jack Wang wrote:
> > > > +ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
> > > > +                                 char *page, size_t len)
> > > > +{
> > > > +     struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
> > > > +     struct ibtrs_srv_sess *sess;
> > > > +
> > > > +     sess = container_of(stats, typeof(*sess), stats);
> > > > +
> > > > +     return scnprintf(page, len, "%lld %lld %lld %lld %u\n",
> > > > +                      (s64)atomic64_read(&r->dir[READ].cnt),
> > > > +                      (s64)atomic64_read(&r->dir[READ].size_total),
> > > > +                      (s64)atomic64_read(&r->dir[WRITE].cnt),
> > > > +                      (s64)atomic64_read(&r->dir[WRITE].size_total),
> > > > +                      atomic_read(&sess->ids_inflight));
> > > > +}
> > >
> > > Does this follow the sysfs one-value-per-file rule? See also
> > > Documentation/filesystems/sysfs.txt.
> > >
> > > Thanks,
> > >
> > > Bart.
> > It looks overkill to create one file for each value to me, and there
> > are enough stats in sysfs contain multiple values.
>
> Not for statistics.
2 examples:
cat /sys/block/nvme0n1/inflight
       0        0
cat /sys/block/nvme0n1/stat
 1267566       53 85396638   927624  4790532  3076340 198306930
19413605        0  2459788 17013620    74392        0 397606816
6864

Thanks

^ permalink raw reply	[flat|nested] 148+ messages in thread

* Re: [PATCH v4 11/25] ibtrs: server: statistics functions
  2019-10-02 15:45         ` Jinpu Wang
@ 2019-10-02 16:00           ` Leon Romanovsky
  0 siblings, 0 replies; 148+ messages in thread
From: Leon Romanovsky @ 2019-10-02 16:00 UTC (permalink / raw)
  To: Jinpu Wang
  Cc: Bart Van Assche, Jack Wang, linux-block, linux-rdma, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Jason Gunthorpe, Doug Ledford,
	Danil Kipnis, rpenyaev

On Wed, Oct 02, 2019 at 05:45:04PM +0200, Jinpu Wang wrote:
> On Wed, Oct 2, 2019 at 5:42 PM Leon Romanovsky <leon@kernel.org> wrote:
> >
> > On Wed, Oct 02, 2019 at 05:15:10PM +0200, Jinpu Wang wrote:
> > > On Tue, Sep 24, 2019 at 1:56 AM Bart Van Assche <bvanassche@acm.org> wrote:
> > > >
> > > > On 6/20/19 8:03 AM, Jack Wang wrote:
> > > > > +ssize_t ibtrs_srv_stats_rdma_to_str(struct ibtrs_srv_stats *stats,
> > > > > +                                 char *page, size_t len)
> > > > > +{
> > > > > +     struct ibtrs_srv_stats_rdma_stats *r = &stats->rdma_stats;
> > > > > +     struct ibtrs_srv_sess *sess;
> > > > > +
> > > > > +     sess = container_of(stats, typeof(*sess), stats);
> > > > > +
> > > > > +     return scnprintf(page, len, "%lld %lld %lld %lld %u\n",
> > > > > +                      (s64)atomic64_read(&r->dir[READ].cnt),
> > > > > +                      (s64)atomic64_read(&r->dir[READ].size_total),
> > > > > +                      (s64)atomic64_read(&r->dir[WRITE].cnt),
> > > > > +                      (s64)atomic64_read(&r->dir[WRITE].size_total),
> > > > > +                      atomic_read(&sess->ids_inflight));
> > > > > +}
> > > >
> > > > Does this follow the sysfs one-value-per-file rule? See also
> > > > Documentation/filesystems/sysfs.txt.
> > > >
> > > > Thanks,
> > > >
> > > > Bart.
> > > It looks overkill to create one file for each value to me, and there
> > > are enough stats in sysfs contain multiple values.
> >
> > Not for statistics.
> 2 examples:
> cat /sys/block/nvme0n1/inflight
>        0        0
> cat /sys/block/nvme0n1/stat
>  1267566       53 85396638   927624  4790532  3076340 198306930
> 19413605        0  2459788 17013620    74392        0 397606816
> 6864

OMG, I feel sorry for the users who now have to go and read the code to see
what column 3 in the second row means.

We respect our users; please don't do it like they did.
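
For reference, the one-value-per-file layout would look roughly like this
per counter (the helper and attribute names are illustrative; only the
counter itself comes from the quoted code):

    static ssize_t rdma_read_cnt_show(struct device *dev,
                                      struct device_attribute *attr, char *page)
    {
            struct ibtrs_srv_sess *sess = dev_to_sess(dev);  /* hypothetical helper */

            return scnprintf(page, PAGE_SIZE, "%lld\n",
                             (s64)atomic64_read(&sess->stats.rdma_stats.dir[READ].cnt));
    }
    static DEVICE_ATTR_RO(rdma_read_cnt);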

Thanks

>
> Thanks

^ permalink raw reply	[flat|nested] 148+ messages in thread

2019-09-27 16:37                       ` Bart Van Assche
2019-09-27 16:50                         ` Roman Penyaev
2019-09-27 17:16                           ` Bart Van Assche
2019-09-17 13:09     ` Jinpu Wang
2019-09-17 16:46       ` Bart Van Assche
2019-09-18 12:02         ` Jinpu Wang
2019-09-18 16:05     ` Jinpu Wang
2019-09-14  0:00   ` Bart Van Assche
2019-06-20 15:03 ` [PATCH v4 18/25] ibnbd: client: sysfs interface functions Jack Wang
2019-09-18 16:28   ` Bart Van Assche
2019-09-19 15:55     ` Jinpu Wang
2019-06-20 15:03 ` [PATCH v4 19/25] ibnbd: server: private header with server structs and functions Jack Wang
2019-06-20 15:03 ` [PATCH v4 20/25] ibnbd: server: main functionality Jack Wang
2019-09-18 17:41   ` Bart Van Assche
2019-09-20  7:36     ` Danil Kipnis
2019-09-20 15:42       ` Bart Van Assche
2019-09-23 15:19         ` Danil Kipnis
2019-06-20 15:03 ` [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev Jack Wang
2019-09-18 21:46   ` Bart Van Assche
2019-09-26 14:04     ` Jinpu Wang
2019-09-26 15:11       ` Bart Van Assche
2019-09-26 15:25         ` Danil Kipnis
2019-09-26 15:29           ` Bart Van Assche
2019-09-26 15:38             ` Danil Kipnis
2019-09-26 15:42               ` Jinpu Wang
2019-06-20 15:03 ` [PATCH v4 22/25] ibnbd: server: sysfs interface functions Jack Wang
2019-06-20 15:03 ` [PATCH v4 23/25] ibnbd: include client and server modules into kernel compilation Jack Wang
2019-06-20 15:03 ` [PATCH v4 24/25] ibnbd: a bit of documentation Jack Wang
2019-09-13 23:58   ` Bart Van Assche
2019-09-18 12:22     ` Jinpu Wang
2019-06-20 15:03 ` [PATCH v4 25/25] MAINTAINERS: Add maintainer for IBNBD/IBTRS modules Jack Wang
2019-07-09 15:10   ` Leon Romanovsky
2019-07-09 15:18     ` Jinpu Wang
2019-07-09 15:51       ` Leon Romanovsky
2019-09-13 23:56   ` Bart Van Assche
2019-09-19 10:30     ` Jinpu Wang
2019-07-09  9:55 ` [PATCH v4 00/25] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD) Danil Kipnis
2019-07-09 11:00   ` Leon Romanovsky
2019-07-09 11:17     ` Greg KH
2019-07-09 11:57       ` Jinpu Wang
2019-07-09 13:32       ` Leon Romanovsky
2019-07-09 15:39       ` Bart Van Assche
2019-07-09 11:37     ` Jinpu Wang
2019-07-09 12:06       ` Jason Gunthorpe
2019-07-09 13:15         ` Jinpu Wang
2019-07-09 13:19           ` Jason Gunthorpe
2019-07-09 14:17             ` Jinpu Wang
2019-07-09 21:27             ` Sagi Grimberg
2019-07-19 13:12               ` Danil Kipnis
2019-07-10 14:55     ` Danil Kipnis
2019-07-09 12:04   ` Jason Gunthorpe
2019-07-09 19:45   ` Sagi Grimberg
2019-07-10 13:55     ` Jason Gunthorpe
2019-07-10 16:25       ` Sagi Grimberg
2019-07-10 17:25         ` Jason Gunthorpe
2019-07-10 19:11           ` Sagi Grimberg
2019-07-11  7:27             ` Danil Kipnis
2019-07-11  8:54     ` Danil Kipnis
2019-07-12  0:22       ` Sagi Grimberg
2019-07-12  7:57         ` Jinpu Wang
2019-07-12 19:40           ` Sagi Grimberg
2019-07-15 11:21             ` Jinpu Wang
2019-07-12 10:58         ` Danil Kipnis
