From: Christoph Hellwig <hch@infradead.org>
To: Steve Wise <swise@opengridcomputing.com>
Cc: 'Sagi Grimberg' <sagi@lightbits.io>,
	'Christoph Hellwig' <hch@lst.de>,
	axboe@kernel.dk, keith.busch@intel.com,
	'Ming Lin' <ming.l@ssi.samsung.com>,
	linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	'Jay Freyensee' <james.p.freyensee@intel.com>,
	'Armen Baloyan' <armenx.baloyan@intel.com>
Subject: Re: [PATCH 4/5] nvmet-rdma: add a NVMe over Fabrics RDMA target driver
Date: Tue, 14 Jun 2016 07:32:48 -0700	[thread overview]
Message-ID: <20160614143248.GB17800@infradead.org> (raw)
In-Reply-To: <051801d1c297$c7d8a7d0$5789f770$@opengridcomputing.com>

On Thu, Jun 09, 2016 at 04:42:11PM -0500, Steve Wise wrote:
> 
> <snip>
> 
> > > +
> > > +static struct nvmet_rdma_queue *
> > > +nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
> > > +		struct rdma_cm_id *cm_id,
> > > +		struct rdma_cm_event *event)
> > > +{
> > > +	struct nvmet_rdma_queue *queue;
> > > +	int ret;
> > > +
> > > +	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> > > +	if (!queue) {
> > > +		ret = NVME_RDMA_CM_NO_RSC;
> > > +		goto out_reject;
> > > +	}
> > > +
> > > +	ret = nvmet_sq_init(&queue->nvme_sq);
> > > +	if (ret)
> > > +		goto out_free_queue;
> > > +
> > > +	ret = nvmet_rdma_parse_cm_connect_req(&event->param.conn, queue);
> > > +	if (ret)
> > > +		goto out_destroy_sq;
> > > +
> > > +	/*
> > > +	 * Schedules the actual release because calling rdma_destroy_id from
> > > +	 * inside a CM callback would trigger a deadlock. (great API design..)
> > > +	 */
> > > +	INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
> > > +	queue->dev = ndev;
> > > +	queue->cm_id = cm_id;
> > > +
> > > +	spin_lock_init(&queue->state_lock);
> > > +	queue->state = NVMET_RDMA_Q_CONNECTING;
> > > +	INIT_LIST_HEAD(&queue->rsp_wait_list);
> > > +	INIT_LIST_HEAD(&queue->rsp_wr_wait_list);
> > > +	spin_lock_init(&queue->rsp_wr_wait_lock);
> > > +	INIT_LIST_HEAD(&queue->free_rsps);
> > > +	spin_lock_init(&queue->rsps_lock);
> > > +
> > > +	queue->idx = ida_simple_get(&nvmet_rdma_queue_ida, 0, 0, GFP_KERNEL);
> > > +	if (queue->idx < 0) {
> > > +		ret = NVME_RDMA_CM_NO_RSC;
> > > +		goto out_free_queue;
> > > +	}
> > > +
> > > +	ret = nvmet_rdma_alloc_rsps(queue);
> > > +	if (ret) {
> > > +		ret = NVME_RDMA_CM_NO_RSC;
> > > +		goto out_ida_remove;
> > > +	}
> > > +
> > > +	if (!ndev->srq) {
> > > +		queue->cmds = nvmet_rdma_alloc_cmds(ndev,
> > > +				queue->recv_queue_size,
> > > +				!queue->host_qid);
> > > +		if (IS_ERR(queue->cmds)) {
> > > +			ret = NVME_RDMA_CM_NO_RSC;
> > > +			goto out_free_cmds;
> > > +		}
> > > +	}
> > > +
> 
> Should the above error path actually goto a block that frees the rsps?  Like
> this?

Yes, this looks good.  Thanks a lot, I'll include it when reposting.
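
For reference, since the suggested diff was snipped above: the failure
branch after nvmet_rdma_alloc_cmds() jumps to out_free_cmds, but no cmds
have been allocated at that point -- what actually needs unwinding are
the rsps allocated just before.  A minimal sketch of the corrected
unwind ladder (the out_free_responses label and the cleanup helper names
are assumptions based on the surrounding code, not necessarily what the
repost will use):

	if (!ndev->srq) {
		queue->cmds = nvmet_rdma_alloc_cmds(ndev,
				queue->recv_queue_size,
				!queue->host_qid);
		if (IS_ERR(queue->cmds)) {
			ret = NVME_RDMA_CM_NO_RSC;
			goto out_free_responses;	/* was: out_free_cmds */
		}
	}
	...
	return queue;

out_free_responses:
	nvmet_rdma_free_rsps(queue);	/* undo nvmet_rdma_alloc_rsps() */
out_ida_remove:
	ida_simple_remove(&nvmet_rdma_queue_ida, queue->idx);
out_destroy_sq:
	nvmet_sq_destroy(&queue->nvme_sq);	/* undo nvmet_sq_init() */
out_free_queue:
	kfree(queue);
out_reject:
	nvmet_rdma_cm_reject(cm_id, ret);	/* reject the connect request */
	return NULL;

Each label frees exactly what was set up before its corresponding
failure point, so falling through the ladder releases everything
allocated so far and nothing more.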
