Linux-RDMA Archive on lore.kernel.org
From: Bart Van Assche <bvanassche@acm.org>
To: Jack Wang <jinpuwang@gmail.com>,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: axboe@kernel.dk, hch@infradead.org, sagi@grimberg.me,
	jgg@mellanox.com, dledford@redhat.com,
	danil.kipnis@cloud.ionos.com, rpenyaev@suse.de,
	Roman Pen <roman.penyaev@profitbricks.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>
Subject: Re: [PATCH v4 17/25] ibnbd: client: main functionality
Date: Fri, 13 Sep 2019 16:46:15 -0700
Message-ID: <bd8963e2-d186-dbd0-fe39-7f4a518f4177@acm.org> (raw)
In-Reply-To: <20190620150337.7847-18-jinpuwang@gmail.com>

On 6/20/19 8:03 AM, Jack Wang wrote:
> +MODULE_VERSION(IBNBD_VER_STRING);

No version numbers in upstream code please.

> +/*
> + * This is for closing devices when unloading the module:
> + * we might be closing a lot (>256) of devices in parallel
> + * and it is better not to use the system_wq.
> + */
> +static struct workqueue_struct *unload_wq;

I think that a better motivation is needed for the introduction of a new 
workqueue.

> +#define KERNEL_SECTOR_SIZE      512

Please use SECTOR_SIZE instead of redefining it.
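
Concretely, something like this (untested, using the field names from 
the patch) is what I have in mind:

```c
/* <linux/blkdev.h> already defines SECTOR_SIZE as 512 */
set_capacity(dev->gd,
	     dev->nsectors * (dev->logical_block_size / SECTOR_SIZE));
```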

> +static int ibnbd_clt_revalidate_disk(struct ibnbd_clt_dev *dev,
> +				     size_t new_nsectors)
> +{
> +	int err = 0;
> +
> +	ibnbd_info(dev, "Device size changed from %zu to %zu sectors\n",
> +		   dev->nsectors, new_nsectors);
> +	dev->nsectors = new_nsectors;
> +	set_capacity(dev->gd,
> +		     dev->nsectors * (dev->logical_block_size /
> +				      KERNEL_SECTOR_SIZE));
> +	err = revalidate_disk(dev->gd);
> +	if (err)
> +		ibnbd_err(dev, "Failed to change device size from"
> +			  " %zu to %zu, err: %d\n", dev->nsectors,
> +			  new_nsectors, err);
> +	return err;
> +}

Since this function changes the block device size, I think that the name 
ibnbd_clt_revalidate_disk() is confusing. Please rename this function.

> +/**
> + * ibnbd_get_cpu_qlist() - finds a list with HW queues to be requeued
> + *
> + * Description:
> + *     Each CPU has a list of HW queues, which needs to be requeed.  If a list
> + *     is not empty - it is marked with a bit.  This function finds first
> + *     set bit in a bitmap and returns corresponding CPU list.
> + */

What does it mean to requeue a queue? Queue elements can be requeued, 
but a queue in its entirety cannot. Please make this comment clearer.

> +/**
> + * ibnbd_requeue_if_needed() - requeue if CPU queue is marked as non empty
> + *
> + * Description:
> + *     Each CPU has it's own list of HW queues, which should be requeued.
> + *     Function finds such list with HW queues, takes a list lock, picks up
> + *     the first HW queue out of the list and requeues it.
> + *
> + * Return:
> + *     True if the queue was requeued, false otherwise.
> + *
> + * Context:
> + *     Does not matter.
> + */

Same comment here.

> +/**
> + * ibnbd_requeue_all_if_idle() - requeue all queues left in the list if
> + *     session is idling (there are no requests in-flight).
> + *
> + * Description:
> + *     This function tries to rerun all stopped queues if there are no
> + *     requests in-flight anymore.  This function tries to solve an obvious
> + *     problem, when number of tags < than number of queues (hctx), which
> + *     are stopped and put to sleep.  If last tag, which has been just put,
> + *     does not wake up all left queues (hctxs), IO requests hang forever.
> + *
> + *     That can happen when all number of tags, say N, have been exhausted
> + *     from one CPU, and we have many block devices per session, say M.
> + *     Each block device has it's own queue (hctx) for each CPU, so eventually
> + *     we can put that number of queues (hctxs) to sleep: M x nr_cpu_ids.
> + *     If number of tags N < M x nr_cpu_ids finally we will get an IO hang.
> + *
> + *     To avoid this hang last caller of ibnbd_put_tag() (last caller is the
> + *     one who observes sess->busy == 0) must wake up all remaining queues.
> + *
> + * Context:
> + *     Does not matter.
> + */

Same comment here.

A more general question: why does ibnbd need its own queue management 
when no other block driver needs it?

> +static void ibnbd_softirq_done_fn(struct request *rq)
> +{
> +	struct ibnbd_clt_dev *dev	= rq->rq_disk->private_data;
> +	struct ibnbd_clt_session *sess	= dev->sess;
> +	struct ibnbd_iu *iu;
> +
> +	iu = blk_mq_rq_to_pdu(rq);
> +	ibnbd_put_tag(sess, iu->tag);
> +	blk_mq_end_request(rq, iu->status);
> +}
> +
> +static void msg_io_conf(void *priv, int errno)
> +{
> +	struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;
> +	struct ibnbd_clt_dev *dev = iu->dev;
> +	struct request *rq = iu->rq;
> +
> +	iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
> +
> +	if (softirq_enable) {
> +		blk_mq_complete_request(rq);
> +	} else {
> +		ibnbd_put_tag(dev->sess, iu->tag);
> +		blk_mq_end_request(rq, iu->status);
> +	}

Block drivers must call blk_mq_complete_request() instead of 
blk_mq_end_request() to complete a request after processing of the 
request has been started. Calling blk_mq_end_request() to complete a 
request is racy in case a timeout occurs while blk_mq_end_request() is 
in progress.
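
In other words, something like this (untested sketch, reusing the 
patch's field names): let the callback always go through 
blk_mq_complete_request() and leave the tag release and 
blk_mq_end_request() to the softirq done handler:

```c
static void msg_io_conf(void *priv, int errno)
{
	struct ibnbd_iu *iu = priv;

	iu->status = errno ? BLK_STS_IOERR : BLK_STS_OK;
	/* always complete via the block layer so a timeout cannot race */
	blk_mq_complete_request(iu->rq);
}
```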

> +static void msg_conf(void *priv, int errno)
> +{
> +	struct ibnbd_iu *iu = (struct ibnbd_iu *)priv;

The kernel code I'm familiar with does not cast void pointers explicitly 
to another type. Please follow that convention and leave the cast out of 
the statement above and of similar statements.

> +static int send_usr_msg(struct ibtrs_clt *ibtrs, int dir,
> +			struct ibnbd_iu *iu, struct kvec *vec, size_t nr,
> +			size_t len, struct scatterlist *sg, unsigned int sg_len,
> +			void (*conf)(struct work_struct *work),
> +			int *errno, bool wait)
> +{
> +	int err;
> +
> +	INIT_WORK(&iu->work, conf);
> +	err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
> +				iu, vec, nr, len, sg, sg_len);
> +	if (!err && wait) {
> +		wait_event(iu->comp.wait, iu->comp.errno != INT_MAX);

This looks weird. Why is this a wait_event() call instead of a 
wait_for_completion() call?
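
A sketch of what I would expect instead (untested, keeping the patch's 
iu->comp member but turning it into a struct completion):

```c
/* in struct ibnbd_iu: */
struct {
	struct completion done;
	int errno;
} comp;

/* submit side: */
init_completion(&iu->comp.done);
err = ibtrs_clt_request(dir, msg_conf, ibtrs, iu->tag,
			iu, vec, nr, len, sg, sg_len);
if (!err && wait) {
	wait_for_completion(&iu->comp.done);
	*errno = iu->comp.errno;
}

/* completion side: */
iu->comp.errno = errno;
complete(&iu->comp.done);
```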

> +static struct blk_mq_ops ibnbd_mq_ops;
> +static int setup_mq_tags(struct ibnbd_clt_session *sess)
> +{
> +	struct blk_mq_tag_set *tags = &sess->tag_set;
> +
> +	memset(tags, 0, sizeof(*tags));
> +	tags->ops		= &ibnbd_mq_ops;
> +	tags->queue_depth	= sess->queue_depth;
> +	tags->numa_node		= NUMA_NO_NODE;
> +	tags->flags		= BLK_MQ_F_SHOULD_MERGE |
> +				  BLK_MQ_F_TAG_SHARED;
> +	tags->cmd_size		= sizeof(struct ibnbd_iu);
> +	tags->nr_hw_queues	= num_online_cpus();
> +
> +	return blk_mq_alloc_tag_set(tags);
> +}

Forward declarations should be avoided when possible. Can the forward 
declaration of ibnbd_mq_ops be avoided by moving the definition of 
setup_mq_tags() down?

> +static inline void wake_up_ibtrs_waiters(struct ibnbd_clt_session *sess)
> +{
> +	/* paired with rmb() in wait_for_ibtrs_connection() */
> +	smp_wmb();
> +	sess->ibtrs_ready = true;
> +	wake_up_all(&sess->ibtrs_waitq);
> +}

The placement of the smp_wmb() call looks wrong to me. Since 
wake_up_all() and wait_event() already guarantee acquire/release 
behavior, I think that the explicit barriers can be left out from this 
function and also from wait_for_ibtrs_connection().
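
In other words (untested, assuming wait_for_ibtrs_connection() uses 
wait_event() on ibtrs_ready), the following should be sufficient, with 
the flag set before the wake-up:

```c
static inline void wake_up_ibtrs_waiters(struct ibnbd_clt_session *sess)
{
	sess->ibtrs_ready = true;
	wake_up_all(&sess->ibtrs_waitq);
}

/* waiting side: */
wait_event(sess->ibtrs_waitq, sess->ibtrs_ready);
```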

> +static void wait_for_ibtrs_disconnection(struct ibnbd_clt_session *sess)
> +__releases(&sess_lock)
> +__acquires(&sess_lock)
> +{
> +	DEFINE_WAIT_FUNC(wait, autoremove_wake_function);
> +
> +	prepare_to_wait(&sess->ibtrs_waitq, &wait, TASK_UNINTERRUPTIBLE);
> +	if (IS_ERR_OR_NULL(sess->ibtrs)) {
> +		finish_wait(&sess->ibtrs_waitq, &wait);
> +		return;
> +	}
> +	mutex_unlock(&sess_lock);
> +	/* After unlock session can be freed, so careful */
> +	schedule();
> +	mutex_lock(&sess_lock);
> +}

This doesn't look right: any random wake_up() call can wake up this 
function. Shouldn't there be a loop in this function that causes the 
schedule() call to be repeated until the disconnect has happened?
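
What I would expect is a loop that rechecks the condition after every 
wakeup, e.g. (untested sketch that ignores the session-lifetime issue 
mentioned in the comment above schedule()):

```c
static void wait_for_ibtrs_disconnection(struct ibnbd_clt_session *sess)
__releases(&sess_lock)
__acquires(&sess_lock)
{
	DEFINE_WAIT_FUNC(wait, autoremove_wake_function);

	while (!IS_ERR_OR_NULL(sess->ibtrs)) {
		prepare_to_wait(&sess->ibtrs_waitq, &wait,
				TASK_UNINTERRUPTIBLE);
		mutex_unlock(&sess_lock);
		schedule();
		mutex_lock(&sess_lock);
	}
	finish_wait(&sess->ibtrs_waitq, &wait);
}
```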

> +
> +static struct ibnbd_clt_session *__find_and_get_sess(const char *sessname)
> +__releases(&sess_lock)
> +__acquires(&sess_lock)
> +{
> +	struct ibnbd_clt_session *sess;
> +	int err;
> +
> +again:
> +	list_for_each_entry(sess, &sess_list, list) {
> +		if (strcmp(sessname, sess->sessname))
> +			continue;
> +
> +		if (unlikely(sess->ibtrs_ready && IS_ERR_OR_NULL(sess->ibtrs)))
> +			/*
> +			 * No IBTRS connection, session is dying.
> +			 */
> +			continue;
> +
> +		if (likely(ibnbd_clt_get_sess(sess))) {
> +			/*
> +			 * Alive session is found, wait for IBTRS connection.
> +			 */
> +			mutex_unlock(&sess_lock);
> +			err = wait_for_ibtrs_connection(sess);
> +			if (unlikely(err))
> +				ibnbd_clt_put_sess(sess);
> +			mutex_lock(&sess_lock);
> +
> +			if (unlikely(err))
> +				/* Session is dying, repeat the loop */
> +				goto again;
> +
> +			return sess;
> +		}
> +		/*
> +		 * Ref is 0, session is dying, wait for IBTRS disconnect
> +		 * in order to avoid session names clashes.
> +		 */
> +		wait_for_ibtrs_disconnection(sess);
> +		/*
> +		 * IBTRS is disconnected and soon session will be freed,
> +		 * so repeat a loop.
> +		 */
> +		goto again;
> +	}
> +
> +	return NULL;
> +}
> +
> +static struct ibnbd_clt_session *find_and_get_sess(const char *sessname)
> +{
> +	struct ibnbd_clt_session *sess;
> +
> +	mutex_lock(&sess_lock);
> +	sess = __find_and_get_sess(sessname);
> +	mutex_unlock(&sess_lock);
> +
> +	return sess;
> +}

Shouldn't __find_and_get_sess() increase the reference count of sess 
before it returns? In other words, what prevents the session from being 
freed by another thread before find_and_get_sess() returns?

> +/*
> + * Get iorio of current task
> + */
> +static short ibnbd_current_ioprio(void)
> +{
> +	struct task_struct *tsp = current;
> +	unsigned short prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
> +
> +	if (likely(tsp->io_context))
> +		prio = tsp->io_context->ioprio;
> +	return prio;
> +}

ibnbd should use req_get_ioprio() and should not look at 
current->io_context->ioprio. I think it is the responsibility of the 
block layer to extract the I/O priority from the task context. As an 
example, here is how the aio code does this:

		req->ki_ioprio = get_current_ioprio();
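
For the request path itself, that would mean something like the 
following (untested; the prio field in the I/O message is my assumption 
about where the value ends up):

```c
/* take the priority from the request, not from current */
msg.prio = cpu_to_le16(req_get_ioprio(rq));
```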

> +static blk_status_t ibnbd_queue_rq(struct blk_mq_hw_ctx *hctx,
> +				   const struct blk_mq_queue_data *bd)
> +{
> +	struct request *rq = bd->rq;
> +	struct ibnbd_clt_dev *dev = rq->rq_disk->private_data;
> +	struct ibnbd_iu *iu = blk_mq_rq_to_pdu(rq);
> +	int err;
> +
> +	if (unlikely(!ibnbd_clt_dev_is_mapped(dev)))
> +		return BLK_STS_IOERR;
> +
> +	iu->tag = ibnbd_get_tag(dev->sess, IBTRS_IO_CON, IBTRS_TAG_NOWAIT);
> +	if (unlikely(!iu->tag)) {
> +		ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_IFBUSY);
> +		return BLK_STS_RESOURCE;
> +	}
> +
> +	blk_mq_start_request(rq);
> +	err = ibnbd_client_xfer_request(dev, rq, iu);
> +	if (likely(err == 0))
> +		return BLK_STS_OK;
> +	if (unlikely(err == -EAGAIN || err == -ENOMEM)) {
> +		ibnbd_clt_dev_kick_mq_queue(dev, hctx, IBNBD_DELAY_10ms);
> +		ibnbd_put_tag(dev->sess, iu->tag);
> +		return BLK_STS_RESOURCE;
> +	}
> +
> +	ibnbd_put_tag(dev->sess, iu->tag);
> +	return BLK_STS_IOERR;
> +}

Every other block driver relies on the block layer core for tag 
allocation. Why does ibnbd need its own tag management?

> +static void setup_request_queue(struct ibnbd_clt_dev *dev)
> +{
> +	blk_queue_logical_block_size(dev->queue, dev->logical_block_size);
> +	blk_queue_physical_block_size(dev->queue, dev->physical_block_size);
> +	blk_queue_max_hw_sectors(dev->queue, dev->max_hw_sectors);
> +	blk_queue_max_write_same_sectors(dev->queue,
> +					 dev->max_write_same_sectors);
> +
> +	/*
> +	 * we don't support discards to "discontiguous" segments
> +	 * in on request
               ^^
               one?
> +	 */
> +	blk_queue_max_discard_segments(dev->queue, 1);
> +
> +	blk_queue_max_discard_sectors(dev->queue, dev->max_discard_sectors);
> +	dev->queue->limits.discard_granularity	= dev->discard_granularity;
> +	dev->queue->limits.discard_alignment	= dev->discard_alignment;
> +	if (dev->max_discard_sectors)
> +		blk_queue_flag_set(QUEUE_FLAG_DISCARD, dev->queue);
> +	if (dev->secure_discard)
> +		blk_queue_flag_set(QUEUE_FLAG_SECERASE, dev->queue);
> +
> +	blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
> +	blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
> +	blk_queue_max_segments(dev->queue, dev->max_segments);
> +	blk_queue_io_opt(dev->queue, dev->sess->max_io_size);
> +	blk_queue_virt_boundary(dev->queue, 4095);
> +	blk_queue_write_cache(dev->queue, true, true);
> +	dev->queue->queuedata = dev;
> +}

> +static void destroy_gen_disk(struct ibnbd_clt_dev *dev)
> +{
> +	del_gendisk(dev->gd);

> +	/*
> +	 * Before marking queue as dying (blk_cleanup_queue() does that)
> +	 * we have to be sure that everything in-flight has gone.
> +	 * Blink with freeze/unfreeze.
> +	 */
> +	blk_mq_freeze_queue(dev->queue);
> +	blk_mq_unfreeze_queue(dev->queue);

Please remove the above seven lines. blk_cleanup_queue() calls 
blk_set_queue_dying() and the second call in blk_set_queue_dying() is 
blk_freeze_queue_start().

> +	blk_cleanup_queue(dev->queue);
> +	put_disk(dev->gd);
> +}

> +
> +static void destroy_sysfs(struct ibnbd_clt_dev *dev,
> +			  const struct attribute *sysfs_self)
> +{
> +	ibnbd_clt_remove_dev_symlink(dev);
> +	if (dev->kobj.state_initialized) {
> +		if (sysfs_self)
> +			/* To avoid deadlock firstly commit suicide */
                                                             ^^^^^^^
Please choose terminology that is more appropriate for a professional 
context.

> +			sysfs_remove_file_self(&dev->kobj, sysfs_self);
> +		kobject_del(&dev->kobj);
> +		kobject_put(&dev->kobj);
> +	}
> +}

Bart.

