From: kbuild test robot <lkp@intel.com>
To: Greg Thelen <gthelen@google.com>
Cc: kbuild-all@01.org, dennis.dalessandro@intel.com,
	Bart Van Assche <Bart.VanAssche@wdc.com>,
	Doug Ledford <dledford@redhat.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
	Greg Thelen <gthelen@google.com>,
	Tarick Bedeir <tarick@google.com>
Subject: Re: [PATCH v3] IB: make INFINIBAND_ADDR_TRANS configurable
Date: Sun, 15 Apr 2018 02:22:24 +0800	[thread overview]
Message-ID: <201804150206.RwJP5RUY%fengguang.wu@intel.com> (raw)
In-Reply-To: <20180414153642.28178-1-gthelen@google.com>

[-- Attachment #1: Type: text/plain, Size: 12785 bytes --]

Hi Greg,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16 next-20180413]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Greg-Thelen/IB-make-INFINIBAND_ADDR_TRANS-configurable/20180414-234042
config: x86_64-randconfig-x011-201815 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   drivers/nvme/host/rdma.o: In function `nvme_rdma_stop_queue':
>> drivers/nvme/host/rdma.c:554: undefined reference to `rdma_disconnect'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_create_qp':
>> drivers/nvme/host/rdma.c:258: undefined reference to `rdma_create_qp'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_free_queue':
>> drivers/nvme/host/rdma.c:570: undefined reference to `rdma_destroy_id'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_alloc_queue':
>> drivers/nvme/host/rdma.c:511: undefined reference to `__rdma_create_id'
>> drivers/nvme/host/rdma.c:523: undefined reference to `rdma_resolve_addr'
   drivers/nvme/host/rdma.c:544: undefined reference to `rdma_destroy_id'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_addr_resolved':
>> drivers/nvme/host/rdma.c:1461: undefined reference to `rdma_resolve_route'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_create_queue_ib':
>> drivers/nvme/host/rdma.c:485: undefined reference to `rdma_destroy_qp'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_route_resolved':
>> drivers/nvme/host/rdma.c:1512: undefined reference to `rdma_connect'
   drivers/nvme/host/rdma.o: In function `nvme_rdma_conn_rejected':
>> drivers/nvme/host/rdma.c:1436: undefined reference to `rdma_reject_msg'
>> drivers/nvme/host/rdma.c:1437: undefined reference to `rdma_consumer_reject_data'
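
[Editor's note: every unresolved symbol above lives in the RDMA connection manager (rdma_cm), which the patch under test makes conditional on INFINIBAND_ADDR_TRANS; this randconfig evidently has NVME_RDMA enabled with INFINIBAND_ADDR_TRANS=n. A plausible, untested fix is to add the dependency to the consumer's Kconfig entry, sketched here for drivers/nvme/host/Kconfig — the real entry's other depends/select lines are omitted:]

```
config NVME_RDMA
	tristate "NVM Express over Fabrics RDMA host driver"
	# Sketch only: requiring INFINIBAND_ADDR_TRANS keeps NVME_RDMA
	# unselectable whenever the RDMA CM is compiled out, which would
	# avoid the undefined references above.
	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK
```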

vim +554 drivers/nvme/host/rdma.c

f41725bb Israel Rukshin    2017-11-26  423  
ca6e95bb Sagi Grimberg     2017-05-04  424  static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
71102307 Christoph Hellwig 2016-07-06  425  {
ca6e95bb Sagi Grimberg     2017-05-04  426  	struct ib_device *ibdev;
71102307 Christoph Hellwig 2016-07-06  427  	const int send_wr_factor = 3;			/* MR, SEND, INV */
71102307 Christoph Hellwig 2016-07-06  428  	const int cq_factor = send_wr_factor + 1;	/* + RECV */
71102307 Christoph Hellwig 2016-07-06  429  	int comp_vector, idx = nvme_rdma_queue_idx(queue);
71102307 Christoph Hellwig 2016-07-06  430  	int ret;
71102307 Christoph Hellwig 2016-07-06  431  
ca6e95bb Sagi Grimberg     2017-05-04  432  	queue->device = nvme_rdma_find_get_device(queue->cm_id);
ca6e95bb Sagi Grimberg     2017-05-04  433  	if (!queue->device) {
ca6e95bb Sagi Grimberg     2017-05-04  434  		dev_err(queue->cm_id->device->dev.parent,
ca6e95bb Sagi Grimberg     2017-05-04  435  			"no client data found!\n");
ca6e95bb Sagi Grimberg     2017-05-04  436  		return -ECONNREFUSED;
ca6e95bb Sagi Grimberg     2017-05-04  437  	}
ca6e95bb Sagi Grimberg     2017-05-04  438  	ibdev = queue->device->dev;
71102307 Christoph Hellwig 2016-07-06  439  
71102307 Christoph Hellwig 2016-07-06  440  	/*
0b36658c Sagi Grimberg     2017-07-13  441  	 * Spread I/O queues completion vectors according their queue index.
0b36658c Sagi Grimberg     2017-07-13  442  	 * Admin queues can always go on completion vector 0.
71102307 Christoph Hellwig 2016-07-06  443  	 */
0b36658c Sagi Grimberg     2017-07-13  444  	comp_vector = idx == 0 ? idx : idx - 1;
71102307 Christoph Hellwig 2016-07-06  445  
71102307 Christoph Hellwig 2016-07-06  446  	/* +1 for ib_stop_cq */
ca6e95bb Sagi Grimberg     2017-05-04  447  	queue->ib_cq = ib_alloc_cq(ibdev, queue,
ca6e95bb Sagi Grimberg     2017-05-04  448  				cq_factor * queue->queue_size + 1,
ca6e95bb Sagi Grimberg     2017-05-04  449  				comp_vector, IB_POLL_SOFTIRQ);
71102307 Christoph Hellwig 2016-07-06  450  	if (IS_ERR(queue->ib_cq)) {
71102307 Christoph Hellwig 2016-07-06  451  		ret = PTR_ERR(queue->ib_cq);
ca6e95bb Sagi Grimberg     2017-05-04  452  		goto out_put_dev;
71102307 Christoph Hellwig 2016-07-06  453  	}
71102307 Christoph Hellwig 2016-07-06  454  
71102307 Christoph Hellwig 2016-07-06  455  	ret = nvme_rdma_create_qp(queue, send_wr_factor);
71102307 Christoph Hellwig 2016-07-06  456  	if (ret)
71102307 Christoph Hellwig 2016-07-06  457  		goto out_destroy_ib_cq;
71102307 Christoph Hellwig 2016-07-06  458  
71102307 Christoph Hellwig 2016-07-06  459  	queue->rsp_ring = nvme_rdma_alloc_ring(ibdev, queue->queue_size,
71102307 Christoph Hellwig 2016-07-06  460  			sizeof(struct nvme_completion), DMA_FROM_DEVICE);
71102307 Christoph Hellwig 2016-07-06  461  	if (!queue->rsp_ring) {
71102307 Christoph Hellwig 2016-07-06  462  		ret = -ENOMEM;
71102307 Christoph Hellwig 2016-07-06  463  		goto out_destroy_qp;
71102307 Christoph Hellwig 2016-07-06  464  	}
71102307 Christoph Hellwig 2016-07-06  465  
f41725bb Israel Rukshin    2017-11-26  466  	ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs,
f41725bb Israel Rukshin    2017-11-26  467  			      queue->queue_size,
f41725bb Israel Rukshin    2017-11-26  468  			      IB_MR_TYPE_MEM_REG,
f41725bb Israel Rukshin    2017-11-26  469  			      nvme_rdma_get_max_fr_pages(ibdev));
f41725bb Israel Rukshin    2017-11-26  470  	if (ret) {
f41725bb Israel Rukshin    2017-11-26  471  		dev_err(queue->ctrl->ctrl.device,
f41725bb Israel Rukshin    2017-11-26  472  			"failed to initialize MR pool sized %d for QID %d\n",
f41725bb Israel Rukshin    2017-11-26  473  			queue->queue_size, idx);
f41725bb Israel Rukshin    2017-11-26  474  		goto out_destroy_ring;
f41725bb Israel Rukshin    2017-11-26  475  	}
f41725bb Israel Rukshin    2017-11-26  476  
eb1bd249 Max Gurtovoy      2017-11-28  477  	set_bit(NVME_RDMA_Q_TR_READY, &queue->flags);
eb1bd249 Max Gurtovoy      2017-11-28  478  
71102307 Christoph Hellwig 2016-07-06  479  	return 0;
71102307 Christoph Hellwig 2016-07-06  480  
f41725bb Israel Rukshin    2017-11-26  481  out_destroy_ring:
f41725bb Israel Rukshin    2017-11-26  482  	nvme_rdma_free_ring(ibdev, queue->rsp_ring, queue->queue_size,
f41725bb Israel Rukshin    2017-11-26  483  			    sizeof(struct nvme_completion), DMA_FROM_DEVICE);
71102307 Christoph Hellwig 2016-07-06  484  out_destroy_qp:
1f61def9 Max Gurtovoy      2017-11-06 @485  	rdma_destroy_qp(queue->cm_id);
71102307 Christoph Hellwig 2016-07-06  486  out_destroy_ib_cq:
71102307 Christoph Hellwig 2016-07-06  487  	ib_free_cq(queue->ib_cq);
ca6e95bb Sagi Grimberg     2017-05-04  488  out_put_dev:
ca6e95bb Sagi Grimberg     2017-05-04  489  	nvme_rdma_dev_put(queue->device);
71102307 Christoph Hellwig 2016-07-06  490  	return ret;
71102307 Christoph Hellwig 2016-07-06  491  }
71102307 Christoph Hellwig 2016-07-06  492  
41e8cfa1 Sagi Grimberg     2017-07-10  493  static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
71102307 Christoph Hellwig 2016-07-06  494  		int idx, size_t queue_size)
71102307 Christoph Hellwig 2016-07-06  495  {
71102307 Christoph Hellwig 2016-07-06  496  	struct nvme_rdma_queue *queue;
8f4e8dac Max Gurtovoy      2017-02-19  497  	struct sockaddr *src_addr = NULL;
71102307 Christoph Hellwig 2016-07-06  498  	int ret;
71102307 Christoph Hellwig 2016-07-06  499  
71102307 Christoph Hellwig 2016-07-06  500  	queue = &ctrl->queues[idx];
71102307 Christoph Hellwig 2016-07-06  501  	queue->ctrl = ctrl;
71102307 Christoph Hellwig 2016-07-06  502  	init_completion(&queue->cm_done);
71102307 Christoph Hellwig 2016-07-06  503  
71102307 Christoph Hellwig 2016-07-06  504  	if (idx > 0)
71102307 Christoph Hellwig 2016-07-06  505  		queue->cmnd_capsule_len = ctrl->ctrl.ioccsz * 16;
71102307 Christoph Hellwig 2016-07-06  506  	else
71102307 Christoph Hellwig 2016-07-06  507  		queue->cmnd_capsule_len = sizeof(struct nvme_command);
71102307 Christoph Hellwig 2016-07-06  508  
71102307 Christoph Hellwig 2016-07-06  509  	queue->queue_size = queue_size;
71102307 Christoph Hellwig 2016-07-06  510  
71102307 Christoph Hellwig 2016-07-06 @511  	queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler, queue,
71102307 Christoph Hellwig 2016-07-06  512  			RDMA_PS_TCP, IB_QPT_RC);
71102307 Christoph Hellwig 2016-07-06  513  	if (IS_ERR(queue->cm_id)) {
71102307 Christoph Hellwig 2016-07-06  514  		dev_info(ctrl->ctrl.device,
71102307 Christoph Hellwig 2016-07-06  515  			"failed to create CM ID: %ld\n", PTR_ERR(queue->cm_id));
71102307 Christoph Hellwig 2016-07-06  516  		return PTR_ERR(queue->cm_id);
71102307 Christoph Hellwig 2016-07-06  517  	}
71102307 Christoph Hellwig 2016-07-06  518  
8f4e8dac Max Gurtovoy      2017-02-19  519  	if (ctrl->ctrl.opts->mask & NVMF_OPT_HOST_TRADDR)
0928f9b4 Sagi Grimberg     2017-02-05  520  		src_addr = (struct sockaddr *)&ctrl->src_addr;
8f4e8dac Max Gurtovoy      2017-02-19  521  
0928f9b4 Sagi Grimberg     2017-02-05  522  	queue->cm_error = -ETIMEDOUT;
0928f9b4 Sagi Grimberg     2017-02-05 @523  	ret = rdma_resolve_addr(queue->cm_id, src_addr,
0928f9b4 Sagi Grimberg     2017-02-05  524  			(struct sockaddr *)&ctrl->addr,
71102307 Christoph Hellwig 2016-07-06  525  			NVME_RDMA_CONNECT_TIMEOUT_MS);
71102307 Christoph Hellwig 2016-07-06  526  	if (ret) {
71102307 Christoph Hellwig 2016-07-06  527  		dev_info(ctrl->ctrl.device,
71102307 Christoph Hellwig 2016-07-06  528  			"rdma_resolve_addr failed (%d).\n", ret);
71102307 Christoph Hellwig 2016-07-06  529  		goto out_destroy_cm_id;
71102307 Christoph Hellwig 2016-07-06  530  	}
71102307 Christoph Hellwig 2016-07-06  531  
71102307 Christoph Hellwig 2016-07-06  532  	ret = nvme_rdma_wait_for_cm(queue);
71102307 Christoph Hellwig 2016-07-06  533  	if (ret) {
71102307 Christoph Hellwig 2016-07-06  534  		dev_info(ctrl->ctrl.device,
d8bfceeb Sagi Grimberg     2017-10-11  535  			"rdma connection establishment failed (%d)\n", ret);
71102307 Christoph Hellwig 2016-07-06  536  		goto out_destroy_cm_id;
71102307 Christoph Hellwig 2016-07-06  537  	}
71102307 Christoph Hellwig 2016-07-06  538  
5013e98b Sagi Grimberg     2017-10-11  539  	set_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags);
71102307 Christoph Hellwig 2016-07-06  540  
71102307 Christoph Hellwig 2016-07-06  541  	return 0;
71102307 Christoph Hellwig 2016-07-06  542  
71102307 Christoph Hellwig 2016-07-06  543  out_destroy_cm_id:
71102307 Christoph Hellwig 2016-07-06 @544  	rdma_destroy_id(queue->cm_id);
eb1bd249 Max Gurtovoy      2017-11-28  545  	nvme_rdma_destroy_queue_ib(queue);
71102307 Christoph Hellwig 2016-07-06  546  	return ret;
71102307 Christoph Hellwig 2016-07-06  547  }
71102307 Christoph Hellwig 2016-07-06  548  
71102307 Christoph Hellwig 2016-07-06  549  static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
71102307 Christoph Hellwig 2016-07-06  550  {
a57bd541 Sagi Grimberg     2017-08-28  551  	if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
a57bd541 Sagi Grimberg     2017-08-28  552  		return;
a57bd541 Sagi Grimberg     2017-08-28  553  
71102307 Christoph Hellwig 2016-07-06 @554  	rdma_disconnect(queue->cm_id);
71102307 Christoph Hellwig 2016-07-06  555  	ib_drain_qp(queue->qp);
71102307 Christoph Hellwig 2016-07-06  556  }
71102307 Christoph Hellwig 2016-07-06  557  
71102307 Christoph Hellwig 2016-07-06  558  static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
71102307 Christoph Hellwig 2016-07-06  559  {
5013e98b Sagi Grimberg     2017-10-11  560  	if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
a57bd541 Sagi Grimberg     2017-08-28  561  		return;
a57bd541 Sagi Grimberg     2017-08-28  562  
bd9f0759 Sagi Grimberg     2017-10-19  563  	if (nvme_rdma_queue_idx(queue) == 0) {
bd9f0759 Sagi Grimberg     2017-10-19  564  		nvme_rdma_free_qe(queue->device->dev,
bd9f0759 Sagi Grimberg     2017-10-19  565  			&queue->ctrl->async_event_sqe,
bd9f0759 Sagi Grimberg     2017-10-19  566  			sizeof(struct nvme_command), DMA_TO_DEVICE);
bd9f0759 Sagi Grimberg     2017-10-19  567  	}
bd9f0759 Sagi Grimberg     2017-10-19  568  
71102307 Christoph Hellwig 2016-07-06  569  	nvme_rdma_destroy_queue_ib(queue);
71102307 Christoph Hellwig 2016-07-06 @570  	rdma_destroy_id(queue->cm_id);
71102307 Christoph Hellwig 2016-07-06  571  }
71102307 Christoph Hellwig 2016-07-06  572  

:::::: The code at line 554 was first introduced by commit
:::::: 7110230719602852481c2793d054f866b2bf4a2b nvme-rdma: add a NVMe over Fabrics RDMA host driver

:::::: TO: Christoph Hellwig <hch@lst.de>
:::::: CC: Jens Axboe <axboe@fb.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 27526 bytes --]

Thread overview: 17+ messages
2018-04-13  7:06 [PATCH] IB: make INFINIBAND_ADDR_TRANS configurable Greg Thelen
2018-04-13 12:47 ` Bart Van Assche
2018-04-13 17:27   ` [PATCH v2] " Greg Thelen
2018-04-14 15:13     ` Dennis Dalessandro
2018-04-14 15:34       ` Greg Thelen
2018-04-14 15:36         ` [PATCH v3] " Greg Thelen
2018-04-14 18:22           ` kbuild test robot [this message]
2018-04-14 18:22             ` kbuild test robot
2018-04-14 18:47           ` kbuild test robot
2018-04-14 18:47             ` kbuild test robot
2018-04-14 16:05         ` [PATCH v2] " Joe Perches
2018-04-16  9:03   ` [PATCH] " oulijun
2018-04-16  9:03     ` oulijun
2018-04-15 12:06 ` Christoph Hellwig
2018-04-16  4:02   ` Greg Thelen
2018-04-16  8:56     ` Christoph Hellwig
2018-04-16 14:51       ` Jason Gunthorpe
