* [PATCH 00/11] zfcp patches for 2.6.39 merge window
@ 2011-02-22 18:54 Steffen Maier
  2011-02-22 18:54 ` [PATCH 01/11] zfcp: Remove redundant unlikely() Steffen Maier
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley; +Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens

James,

here is a series of zfcp updates for the 2.6.39 merge window.
The patches apply on top of the current scsi-misc tree.

Steffen

Linux on System z Development

IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martin Jetter
Geschaeftsfuehrung: Dirk Wittkopp
Sitz der Gesellschaft: Boeblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294


* [PATCH 01/11] zfcp: Remove redundant unlikely()
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 02/11] zfcp: Remove unused flag ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT Steffen Maier
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
	Tobias Klauser, Christof Schmitt

[-- Attachment #1: 701-zfcp-remove-unlikely.diff --]
[-- Type: text/plain, Size: 900 bytes --]

From: Tobias Klauser <tklauser@distanz.ch>

IS_ERR() already implies unlikely(), so it can be omitted here.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---
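For reference: the redundancy comes from the definition of IS_ERR() itself,
which in include/linux/err.h boils down to roughly the following, so the
branch hint is already part of the helper:

#include <linux/compiler.h>

#define MAX_ERRNO	4095

#define IS_ERR_VALUE(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)

static inline long __must_check IS_ERR(const void *ptr)
{
	return IS_ERR_VALUE((unsigned long)ptr);
}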


 drivers/s390/scsi/zfcp_fsf.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -1552,7 +1552,7 @@ int zfcp_fsf_open_wka_port(struct zfcp_f
 				  SBAL_FLAGS0_TYPE_READ,
 				  qdio->adapter->pool.erp_req);
 
-	if (unlikely(IS_ERR(req))) {
+	if (IS_ERR(req)) {
 		retval = PTR_ERR(req);
 		goto out;
 	}
@@ -1605,7 +1605,7 @@ int zfcp_fsf_close_wka_port(struct zfcp_
 				  SBAL_FLAGS0_TYPE_READ,
 				  qdio->adapter->pool.erp_req);
 
-	if (unlikely(IS_ERR(req))) {
+	if (IS_ERR(req)) {
 		retval = PTR_ERR(req);
 		goto out;
 	}


* [PATCH 02/11] zfcp: Remove unused flag ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
  2011-02-22 18:54 ` [PATCH 01/11] zfcp: Remove redundant unlikely() Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 03/11] zfcp: Replace kmem_cache for "status read" data Steffen Maier
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 702-zfcp-unused-flag.diff --]
[-- Type: text/plain, Size: 1105 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

Remove the now unused flag ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---


 drivers/s390/scsi/zfcp_def.h |    1 -
 drivers/s390/scsi/zfcp_fsf.c |    1 -
 2 files changed, 2 deletions(-)
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -89,7 +89,6 @@ struct zfcp_reqlist;
 #define ZFCP_STATUS_LUN_READONLY		0x00000008
 
 /* FSF request status (this does not have a common part) */
-#define ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT	0x00000002
 #define ZFCP_STATUS_FSFREQ_ERROR		0x00000008
 #define ZFCP_STATUS_FSFREQ_CLEANUP		0x00000010
 #define ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED	0x00000040
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -2284,7 +2284,6 @@ struct zfcp_fsf_req *zfcp_fsf_fcp_task_m
 		goto out;
 	}
 
-	req->status |= ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT;
 	req->data = scmnd;
 	req->handler = zfcp_fsf_fcp_task_mgmt_handler;
 	req->qtcb->header.lun_handle = zfcp_sdev->lun_handle;


* [PATCH 03/11] zfcp: Replace kmem_cache for "status read" data
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
  2011-02-22 18:54 ` [PATCH 01/11] zfcp: Remove redundant unlikely() Steffen Maier
  2011-02-22 18:54 ` [PATCH 02/11] zfcp: Remove unused flag ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 04/11] zfcp: Introduce new kmem_cache for FC request and response data Steffen Maier
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 703-zfcp-kmem-cache-status.diff --]
[-- Type: text/plain, Size: 5419 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

zfcp requires a mempool for the status read data blocks so that "status
read" requests can be resubmitted at any time. Each status read data
block is the size of a page (4096 bytes) and must fit within a single
page.

Instead of keeping a kmem_cache for allocating page-sized chunks, use
mempool_create_page_pool to create a mempool that returns pages, and
remove the zfcp kmem_cache.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---
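A minimal sketch of the new allocation pattern (the helper names here are
made up; the real code below does this inline in zfcp_fsf_status_read() and
zfcp_fsf_status_read_handler()): the page pool hands out struct page
pointers, so allocation goes through page_address() and freeing through
virt_to_page().

#include <linux/mempool.h>
#include <linux/mm.h>
#include <linux/string.h>

/* pool set up once per adapter, e.g.
 * mempool_create_page_pool(FSF_STATUS_READS_RECOM, 0) for order-0 pages */

static struct fsf_status_read_buffer *sr_buf_get(mempool_t *sr_data)
{
	struct page *page = mempool_alloc(sr_data, GFP_ATOMIC);

	if (!page)
		return NULL;
	return memset(page_address(page), 0,
		      sizeof(struct fsf_status_read_buffer));
}

static void sr_buf_put(mempool_t *sr_data,
		       struct fsf_status_read_buffer *sr_buf)
{
	mempool_free(virt_to_page(sr_buf), sr_data);
}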


 drivers/s390/scsi/zfcp_aux.c |   20 ++++++--------------
 drivers/s390/scsi/zfcp_def.h |    3 +--
 drivers/s390/scsi/zfcp_erp.c |    2 +-
 drivers/s390/scsi/zfcp_fsf.c |   12 +++++++-----
 4 files changed, 15 insertions(+), 22 deletions(-)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -132,11 +132,6 @@ static int __init zfcp_module_init(void)
 	if (!zfcp_data.qtcb_cache)
 		goto out_qtcb_cache;
 
-	zfcp_data.sr_buffer_cache = zfcp_cache_hw_align("zfcp_sr",
-					sizeof(struct fsf_status_read_buffer));
-	if (!zfcp_data.sr_buffer_cache)
-		goto out_sr_cache;
-
 	zfcp_data.gid_pn_cache = zfcp_cache_hw_align("zfcp_gid",
 					sizeof(struct zfcp_fc_gid_pn));
 	if (!zfcp_data.gid_pn_cache)
@@ -181,8 +176,6 @@ out_transport:
 out_adisc_cache:
 	kmem_cache_destroy(zfcp_data.gid_pn_cache);
 out_gid_cache:
-	kmem_cache_destroy(zfcp_data.sr_buffer_cache);
-out_sr_cache:
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
 out_qtcb_cache:
 	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
@@ -199,7 +192,6 @@ static void __exit zfcp_module_exit(void
 	fc_release_transport(zfcp_data.scsi_transport_template);
 	kmem_cache_destroy(zfcp_data.adisc_cache);
 	kmem_cache_destroy(zfcp_data.gid_pn_cache);
-	kmem_cache_destroy(zfcp_data.sr_buffer_cache);
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
 	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
 }
@@ -264,10 +256,10 @@ static int zfcp_allocate_low_mem_buffers
 	if (!adapter->pool.qtcb_pool)
 		return -ENOMEM;
 
-	adapter->pool.status_read_data =
-		mempool_create_slab_pool(FSF_STATUS_READS_RECOM,
-					 zfcp_data.sr_buffer_cache);
-	if (!adapter->pool.status_read_data)
+	BUILD_BUG_ON(sizeof(struct fsf_status_read_buffer) > PAGE_SIZE);
+	adapter->pool.sr_data =
+		mempool_create_page_pool(FSF_STATUS_READS_RECOM, 0);
+	if (!adapter->pool.sr_data)
 		return -ENOMEM;
 
 	adapter->pool.gid_pn =
@@ -290,8 +282,8 @@ static void zfcp_free_low_mem_buffers(st
 		mempool_destroy(adapter->pool.qtcb_pool);
 	if (adapter->pool.status_read_req)
 		mempool_destroy(adapter->pool.status_read_req);
-	if (adapter->pool.status_read_data)
-		mempool_destroy(adapter->pool.status_read_data);
+	if (adapter->pool.sr_data)
+		mempool_destroy(adapter->pool.sr_data);
 	if (adapter->pool.gid_pn)
 		mempool_destroy(adapter->pool.gid_pn);
 }
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -107,7 +107,7 @@ struct zfcp_adapter_mempool {
 	mempool_t *scsi_req;
 	mempool_t *scsi_abort;
 	mempool_t *status_read_req;
-	mempool_t *status_read_data;
+	mempool_t *sr_data;
 	mempool_t *gid_pn;
 	mempool_t *qtcb_pool;
 };
@@ -319,7 +319,6 @@ struct zfcp_data {
 	struct scsi_transport_template *scsi_transport_template;
 	struct kmem_cache	*gpn_ft_cache;
 	struct kmem_cache	*qtcb_cache;
-	struct kmem_cache	*sr_buffer_cache;
 	struct kmem_cache	*gid_pn_cache;
 	struct kmem_cache	*adisc_cache;
 };
--- a/drivers/s390/scsi/zfcp_erp.c
+++ b/drivers/s390/scsi/zfcp_erp.c
@@ -732,7 +732,7 @@ static int zfcp_erp_adapter_strategy_ope
 	if (zfcp_erp_adapter_strategy_open_fsf_xport(act) == ZFCP_ERP_FAILED)
 		return ZFCP_ERP_FAILED;
 
-	if (mempool_resize(act->adapter->pool.status_read_data,
+	if (mempool_resize(act->adapter->pool.sr_data,
 			   act->adapter->stat_read_buf_num, GFP_KERNEL))
 		return ZFCP_ERP_FAILED;
 
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -212,7 +212,7 @@ static void zfcp_fsf_status_read_handler
 
 	if (req->status & ZFCP_STATUS_FSFREQ_DISMISSED) {
 		zfcp_dbf_hba_fsf_uss("fssrh_1", req);
-		mempool_free(sr_buf, adapter->pool.status_read_data);
+		mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data);
 		zfcp_fsf_req_free(req);
 		return;
 	}
@@ -265,7 +265,7 @@ static void zfcp_fsf_status_read_handler
 		break;
 	}
 
-	mempool_free(sr_buf, adapter->pool.status_read_data);
+	mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data);
 	zfcp_fsf_req_free(req);
 
 	atomic_inc(&adapter->stat_miss);
@@ -723,6 +723,7 @@ int zfcp_fsf_status_read(struct zfcp_qdi
 	struct zfcp_adapter *adapter = qdio->adapter;
 	struct zfcp_fsf_req *req;
 	struct fsf_status_read_buffer *sr_buf;
+	struct page *page;
 	int retval = -EIO;
 
 	spin_lock_irq(&qdio->req_q_lock);
@@ -736,11 +737,12 @@ int zfcp_fsf_status_read(struct zfcp_qdi
 		goto out;
 	}
 
-	sr_buf = mempool_alloc(adapter->pool.status_read_data, GFP_ATOMIC);
-	if (!sr_buf) {
+	page = mempool_alloc(adapter->pool.sr_data, GFP_ATOMIC);
+	if (!page) {
 		retval = -ENOMEM;
 		goto failed_buf;
 	}
+	sr_buf = page_address(page);
 	memset(sr_buf, 0, sizeof(*sr_buf));
 	req->data = sr_buf;
 
@@ -755,7 +757,7 @@ int zfcp_fsf_status_read(struct zfcp_qdi
 
 failed_req_send:
 	req->data = NULL;
-	mempool_free(sr_buf, adapter->pool.status_read_data);
+	mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data);
 failed_buf:
 	zfcp_dbf_hba_fsf_uss("fssr__1", req);
 	zfcp_fsf_req_free(req);


* [PATCH 04/11] zfcp: Introduce new kmem_cache for FC request and response data
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (2 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 03/11] zfcp: Replace kmem_cache for "status read" data Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 05/11] zfcp: Allocate GID_PN data through new FC kmem_cache Steffen Maier
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 704-zfcp-kmem-cache-fc.diff --]
[-- Type: text/plain, Size: 7127 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

A data buffer that is passed to the hardware must not cross a page
boundary, so zfcp uses a series of kmem_caches to align the data
accordingly. Introduce a new kmem_cache for the FC requests sent from
the zfcp driver and use it for the ELS ADISC data. The goal is to
migrate the remaining request types to the FC kmem_cache in later
patches and remove the request-specific kmem_caches.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---
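The page-boundary guarantee rests on the cache alignment:
zfcp_cache_hw_align(), already used for the existing zfcp caches and reused
here for zfcp_fc_req_cache, is approximately the wrapper sketched below.
Aligning each object to the next power of two of its size means an object
that is no larger than a page can never straddle a page boundary.

#include <linux/slab.h>
#include <linux/log2.h>

static inline struct kmem_cache *zfcp_cache_hw_align(const char *name,
						     unsigned long size)
{
	return kmem_cache_create(name, size, roundup_pow_of_two(size), 0, NULL);
}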


 drivers/s390/scsi/zfcp_aux.c |   14 ++++++-------
 drivers/s390/scsi/zfcp_def.h |    1 
 drivers/s390/scsi/zfcp_ext.h |    1 
 drivers/s390/scsi/zfcp_fc.c  |   46 ++++++++++++++++++++++---------------------
 drivers/s390/scsi/zfcp_fc.h  |   27 ++++++++++++++-----------
 5 files changed, 47 insertions(+), 42 deletions(-)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -137,10 +137,10 @@ static int __init zfcp_module_init(void)
 	if (!zfcp_data.gid_pn_cache)
 		goto out_gid_cache;
 
-	zfcp_data.adisc_cache = zfcp_cache_hw_align("zfcp_adisc",
-					sizeof(struct zfcp_fc_els_adisc));
-	if (!zfcp_data.adisc_cache)
-		goto out_adisc_cache;
+	zfcp_fc_req_cache = zfcp_cache_hw_align("zfcp_fc_req",
+						sizeof(struct zfcp_fc_req));
+	if (!zfcp_fc_req_cache)
+		goto out_fc_cache;
 
 	zfcp_data.scsi_transport_template =
 		fc_attach_transport(&zfcp_transport_functions);
@@ -172,8 +172,8 @@ out_ccw_register:
 out_misc:
 	fc_release_transport(zfcp_data.scsi_transport_template);
 out_transport:
-	kmem_cache_destroy(zfcp_data.adisc_cache);
-out_adisc_cache:
+	kmem_cache_destroy(zfcp_fc_req_cache);
+out_fc_cache:
 	kmem_cache_destroy(zfcp_data.gid_pn_cache);
 out_gid_cache:
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
@@ -190,7 +190,7 @@ static void __exit zfcp_module_exit(void
 	ccw_driver_unregister(&zfcp_ccw_driver);
 	misc_deregister(&zfcp_cfdc_misc);
 	fc_release_transport(zfcp_data.scsi_transport_template);
-	kmem_cache_destroy(zfcp_data.adisc_cache);
+	kmem_cache_destroy(zfcp_fc_req_cache);
 	kmem_cache_destroy(zfcp_data.gid_pn_cache);
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
 	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -320,7 +320,6 @@ struct zfcp_data {
 	struct kmem_cache	*gpn_ft_cache;
 	struct kmem_cache	*qtcb_cache;
 	struct kmem_cache	*gid_pn_cache;
-	struct kmem_cache	*adisc_cache;
 };
 
 #endif /* ZFCP_DEF_H */
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -80,6 +80,7 @@ extern void zfcp_erp_notify(struct zfcp_
 extern void zfcp_erp_timeout_handler(unsigned long);
 
 /* zfcp_fc.c */
+extern struct kmem_cache *zfcp_fc_req_cache;
 extern void zfcp_fc_enqueue_event(struct zfcp_adapter *,
 				enum fc_host_event_code event_code, u32);
 extern void zfcp_fc_post_event(struct work_struct *);
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -16,6 +16,8 @@
 #include "zfcp_ext.h"
 #include "zfcp_fc.h"
 
+struct kmem_cache *zfcp_fc_req_cache;
+
 static u32 zfcp_fc_rscn_range_mask[] = {
 	[ELS_ADDR_FMT_PORT]		= 0xFFFFFF,
 	[ELS_ADDR_FMT_AREA]		= 0xFFFF00,
@@ -419,11 +421,11 @@ void zfcp_fc_plogi_evaluate(struct zfcp_
 
 static void zfcp_fc_adisc_handler(void *data)
 {
-	struct zfcp_fc_els_adisc *adisc = data;
-	struct zfcp_port *port = adisc->els.port;
-	struct fc_els_adisc *adisc_resp = &adisc->adisc_resp;
+	struct zfcp_fc_req *fc_req = data;
+	struct zfcp_port *port = fc_req->ct_els.port;
+	struct fc_els_adisc *adisc_resp = &fc_req->u.adisc.rsp;
 
-	if (adisc->els.status) {
+	if (fc_req->ct_els.status) {
 		/* request rejected or timed out */
 		zfcp_erp_port_forced_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED,
 					    "fcadh_1");
@@ -445,42 +447,42 @@ static void zfcp_fc_adisc_handler(void *
  out:
 	atomic_clear_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
 	put_device(&port->dev);
-	kmem_cache_free(zfcp_data.adisc_cache, adisc);
+	kmem_cache_free(zfcp_fc_req_cache, fc_req);
 }
 
 static int zfcp_fc_adisc(struct zfcp_port *port)
 {
-	struct zfcp_fc_els_adisc *adisc;
+	struct zfcp_fc_req *fc_req;
 	struct zfcp_adapter *adapter = port->adapter;
+	struct Scsi_Host *shost = adapter->scsi_host;
 	int ret;
 
-	adisc = kmem_cache_zalloc(zfcp_data.adisc_cache, GFP_ATOMIC);
-	if (!adisc)
+	fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_ATOMIC);
+	if (!fc_req)
 		return -ENOMEM;
 
-	adisc->els.port = port;
-	adisc->els.req = &adisc->req;
-	adisc->els.resp = &adisc->resp;
-	sg_init_one(adisc->els.req, &adisc->adisc_req,
+	fc_req->ct_els.port = port;
+	fc_req->ct_els.req = &fc_req->sg_req;
+	fc_req->ct_els.resp = &fc_req->sg_rsp;
+	sg_init_one(&fc_req->sg_req, &fc_req->u.adisc.req,
 		    sizeof(struct fc_els_adisc));
-	sg_init_one(adisc->els.resp, &adisc->adisc_resp,
+	sg_init_one(&fc_req->sg_rsp, &fc_req->u.adisc.rsp,
 		    sizeof(struct fc_els_adisc));
 
-	adisc->els.handler = zfcp_fc_adisc_handler;
-	adisc->els.handler_data = adisc;
+	fc_req->ct_els.handler = zfcp_fc_adisc_handler;
+	fc_req->ct_els.handler_data = fc_req;
 
 	/* acc. to FC-FS, hard_nport_id in ADISC should not be set for ports
 	   without FC-AL-2 capability, so we don't set it */
-	adisc->adisc_req.adisc_wwpn = fc_host_port_name(adapter->scsi_host);
-	adisc->adisc_req.adisc_wwnn = fc_host_node_name(adapter->scsi_host);
-	adisc->adisc_req.adisc_cmd = ELS_ADISC;
-	hton24(adisc->adisc_req.adisc_port_id,
-	       fc_host_port_id(adapter->scsi_host));
+	fc_req->u.adisc.req.adisc_wwpn = fc_host_port_name(shost);
+	fc_req->u.adisc.req.adisc_wwnn = fc_host_node_name(shost);
+	fc_req->u.adisc.req.adisc_cmd = ELS_ADISC;
+	hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost));
 
-	ret = zfcp_fsf_send_els(adapter, port->d_id, &adisc->els,
+	ret = zfcp_fsf_send_els(adapter, port->d_id, &fc_req->ct_els,
 				ZFCP_FC_CTELS_TMO);
 	if (ret)
-		kmem_cache_free(zfcp_data.adisc_cache, adisc);
+		kmem_cache_free(zfcp_fc_req_cache, fc_req);
 
 	return ret;
 }
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -123,19 +123,22 @@ struct zfcp_fc_gpn_ft {
 };
 
 /**
- * struct zfcp_fc_els_adisc - everything required in zfcp for issuing ELS ADISC
- * @els: data required for issuing els fsf command
- * @req: scatterlist entry for ELS ADISC request
- * @resp: scatterlist entry for ELS ADISC response
- * @adisc_req: ELS ADISC request data
- * @adisc_resp: ELS ADISC response data
+ * struct zfcp_fc_req - Container for FC ELS and CT requests sent from zfcp
+ * @ct_els: data required for issuing fsf command
+ * @sg_req: scatterlist entry for request data
+ * @sg_rsp: scatterlist entry for response data
+ * @u: request specific data
  */
-struct zfcp_fc_els_adisc {
-	struct zfcp_fsf_ct_els els;
-	struct scatterlist req;
-	struct scatterlist resp;
-	struct fc_els_adisc adisc_req;
-	struct fc_els_adisc adisc_resp;
+struct zfcp_fc_req {
+	struct zfcp_fsf_ct_els				ct_els;
+	struct scatterlist				sg_req;
+	struct scatterlist				sg_rsp;
+	union {
+		struct {
+			struct fc_els_adisc		req;
+			struct fc_els_adisc		rsp;
+		} adisc;
+	} u;
 };
 
 /**


* [PATCH 05/11] zfcp: Allocate GID_PN data through new FC kmem_cache
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (3 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 04/11] zfcp: Introduce new kmem_cache for FC request and response data Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 06/11] zfcp: Use common FC kmem_cache for GPN_FT request Steffen Maier
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 705-zfcp-alloc-gid-pn.diff --]
[-- Type: text/plain, Size: 8173 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

Allocate the data for the GID_PN request through the new FC
kmem_cache. While updating the GID_PN code, also introduce a helper
function for initializing the CT header for FC nameserver requests.
Remove the "paranoia" check as well; the GID_PN request data does not
suddenly change.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---


 drivers/s390/scsi/zfcp_aux.c |   10 -----
 drivers/s390/scsi/zfcp_def.h |    1 
 drivers/s390/scsi/zfcp_fc.c  |   78 ++++++++++++++++++++-----------------------
 drivers/s390/scsi/zfcp_fc.h  |   25 +++----------
 4 files changed, 45 insertions(+), 69 deletions(-)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -132,11 +132,6 @@ static int __init zfcp_module_init(void)
 	if (!zfcp_data.qtcb_cache)
 		goto out_qtcb_cache;
 
-	zfcp_data.gid_pn_cache = zfcp_cache_hw_align("zfcp_gid",
-					sizeof(struct zfcp_fc_gid_pn));
-	if (!zfcp_data.gid_pn_cache)
-		goto out_gid_cache;
-
 	zfcp_fc_req_cache = zfcp_cache_hw_align("zfcp_fc_req",
 						sizeof(struct zfcp_fc_req));
 	if (!zfcp_fc_req_cache)
@@ -174,8 +169,6 @@ out_misc:
 out_transport:
 	kmem_cache_destroy(zfcp_fc_req_cache);
 out_fc_cache:
-	kmem_cache_destroy(zfcp_data.gid_pn_cache);
-out_gid_cache:
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
 out_qtcb_cache:
 	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
@@ -191,7 +184,6 @@ static void __exit zfcp_module_exit(void
 	misc_deregister(&zfcp_cfdc_misc);
 	fc_release_transport(zfcp_data.scsi_transport_template);
 	kmem_cache_destroy(zfcp_fc_req_cache);
-	kmem_cache_destroy(zfcp_data.gid_pn_cache);
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
 	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
 }
@@ -263,7 +255,7 @@ static int zfcp_allocate_low_mem_buffers
 		return -ENOMEM;
 
 	adapter->pool.gid_pn =
-		mempool_create_slab_pool(1, zfcp_data.gid_pn_cache);
+		mempool_create_slab_pool(1, zfcp_fc_req_cache);
 	if (!adapter->pool.gid_pn)
 		return -ENOMEM;
 
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -319,7 +319,6 @@ struct zfcp_data {
 	struct scsi_transport_template *scsi_transport_template;
 	struct kmem_cache	*gpn_ft_cache;
 	struct kmem_cache	*qtcb_cache;
-	struct kmem_cache	*gid_pn_cache;
 };
 
 #endif /* ZFCP_DEF_H */
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -262,24 +262,18 @@ void zfcp_fc_incoming_els(struct zfcp_fs
 		zfcp_fc_incoming_rscn(fsf_req);
 }
 
-static void zfcp_fc_ns_gid_pn_eval(void *data)
+static void zfcp_fc_ns_gid_pn_eval(struct zfcp_fc_req *fc_req)
 {
-	struct zfcp_fc_gid_pn *gid_pn = data;
-	struct zfcp_fsf_ct_els *ct = &gid_pn->ct;
-	struct zfcp_fc_gid_pn_req *gid_pn_req = sg_virt(ct->req);
-	struct zfcp_fc_gid_pn_resp *gid_pn_resp = sg_virt(ct->resp);
-	struct zfcp_port *port = gid_pn->port;
+	struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els;
+	struct zfcp_fc_gid_pn_rsp *gid_pn_rsp = &fc_req->u.gid_pn.rsp;
 
-	if (ct->status)
+	if (ct_els->status)
 		return;
-	if (gid_pn_resp->ct_hdr.ct_cmd != FC_FS_ACC)
+	if (gid_pn_rsp->ct_hdr.ct_cmd != FC_FS_ACC)
 		return;
 
-	/* paranoia */
-	if (gid_pn_req->gid_pn.fn_wwpn != port->wwpn)
-		return;
 	/* looks like a valid d_id */
-	port->d_id = ntoh24(gid_pn_resp->gid_pn.fp_fid);
+	ct_els->port->d_id = ntoh24(gid_pn_rsp->gid_pn.fp_fid);
 }
 
 static void zfcp_fc_complete(void *data)
@@ -287,69 +281,73 @@ static void zfcp_fc_complete(void *data)
 	complete(data);
 }
 
+static void zfcp_fc_ct_ns_init(struct fc_ct_hdr *ct_hdr, u16 cmd, u16 mr_size)
+{
+	ct_hdr->ct_rev = FC_CT_REV;
+	ct_hdr->ct_fs_type = FC_FST_DIR;
+	ct_hdr->ct_fs_subtype = FC_NS_SUBTYPE;
+	ct_hdr->ct_cmd = cmd;
+	ct_hdr->ct_mr_size = mr_size / 4;
+}
+
 static int zfcp_fc_ns_gid_pn_request(struct zfcp_port *port,
-				     struct zfcp_fc_gid_pn *gid_pn)
+				     struct zfcp_fc_req *fc_req)
 {
 	struct zfcp_adapter *adapter = port->adapter;
 	DECLARE_COMPLETION_ONSTACK(completion);
+	struct zfcp_fc_gid_pn_req *gid_pn_req = &fc_req->u.gid_pn.req;
+	struct zfcp_fc_gid_pn_rsp *gid_pn_rsp = &fc_req->u.gid_pn.rsp;
 	int ret;
 
 	/* setup parameters for send generic command */
-	gid_pn->port = port;
-	gid_pn->ct.handler = zfcp_fc_complete;
-	gid_pn->ct.handler_data = &completion;
-	gid_pn->ct.req = &gid_pn->sg_req;
-	gid_pn->ct.resp = &gid_pn->sg_resp;
-	sg_init_one(&gid_pn->sg_req, &gid_pn->gid_pn_req,
-		    sizeof(struct zfcp_fc_gid_pn_req));
-	sg_init_one(&gid_pn->sg_resp, &gid_pn->gid_pn_resp,
-		    sizeof(struct zfcp_fc_gid_pn_resp));
-
-	/* setup nameserver request */
-	gid_pn->gid_pn_req.ct_hdr.ct_rev = FC_CT_REV;
-	gid_pn->gid_pn_req.ct_hdr.ct_fs_type = FC_FST_DIR;
-	gid_pn->gid_pn_req.ct_hdr.ct_fs_subtype = FC_NS_SUBTYPE;
-	gid_pn->gid_pn_req.ct_hdr.ct_options = 0;
-	gid_pn->gid_pn_req.ct_hdr.ct_cmd = FC_NS_GID_PN;
-	gid_pn->gid_pn_req.ct_hdr.ct_mr_size = ZFCP_FC_CT_SIZE_PAGE / 4;
-	gid_pn->gid_pn_req.gid_pn.fn_wwpn = port->wwpn;
+	fc_req->ct_els.port = port;
+	fc_req->ct_els.handler = zfcp_fc_complete;
+	fc_req->ct_els.handler_data = &completion;
+	fc_req->ct_els.req = &fc_req->sg_req;
+	fc_req->ct_els.resp = &fc_req->sg_rsp;
+	sg_init_one(&fc_req->sg_req, gid_pn_req, sizeof(*gid_pn_req));
+	sg_init_one(&fc_req->sg_rsp, gid_pn_rsp, sizeof(*gid_pn_rsp));
+
+	zfcp_fc_ct_ns_init(&gid_pn_req->ct_hdr,
+			   FC_NS_GID_PN, ZFCP_FC_CT_SIZE_PAGE);
+	gid_pn_req->gid_pn.fn_wwpn = port->wwpn;
 
-	ret = zfcp_fsf_send_ct(&adapter->gs->ds, &gid_pn->ct,
+	ret = zfcp_fsf_send_ct(&adapter->gs->ds, &fc_req->ct_els,
 			       adapter->pool.gid_pn_req,
 			       ZFCP_FC_CTELS_TMO);
 	if (!ret) {
 		wait_for_completion(&completion);
-		zfcp_fc_ns_gid_pn_eval(gid_pn);
+		zfcp_fc_ns_gid_pn_eval(fc_req);
 	}
 	return ret;
 }
 
 /**
- * zfcp_fc_ns_gid_pn_request - initiate GID_PN nameserver request
+ * zfcp_fc_ns_gid_pn - initiate GID_PN nameserver request
  * @port: port where GID_PN request is needed
  * return: -ENOMEM on error, 0 otherwise
  */
 static int zfcp_fc_ns_gid_pn(struct zfcp_port *port)
 {
 	int ret;
-	struct zfcp_fc_gid_pn *gid_pn;
+	struct zfcp_fc_req *fc_req;
 	struct zfcp_adapter *adapter = port->adapter;
 
-	gid_pn = mempool_alloc(adapter->pool.gid_pn, GFP_ATOMIC);
-	if (!gid_pn)
+	fc_req = mempool_alloc(adapter->pool.gid_pn, GFP_ATOMIC);
+	if (!fc_req)
 		return -ENOMEM;
 
-	memset(gid_pn, 0, sizeof(*gid_pn));
+	memset(fc_req, 0, sizeof(*fc_req));
 
 	ret = zfcp_fc_wka_port_get(&adapter->gs->ds);
 	if (ret)
 		goto out;
 
-	ret = zfcp_fc_ns_gid_pn_request(port, gid_pn);
+	ret = zfcp_fc_ns_gid_pn_request(port, fc_req);
 
 	zfcp_fc_wka_port_put(&adapter->gs->ds);
 out:
-	mempool_free(gid_pn, adapter->pool.gid_pn);
+	mempool_free(fc_req, adapter->pool.gid_pn);
 	return ret;
 }
 
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -64,33 +64,16 @@ struct zfcp_fc_gid_pn_req {
 } __packed;
 
 /**
- * struct zfcp_fc_gid_pn_resp - container for ct header plus gid_pn response
+ * struct zfcp_fc_gid_pn_rsp - container for ct header plus gid_pn response
  * @ct_hdr: FC GS common transport header
  * @gid_pn: GID_PN response
  */
-struct zfcp_fc_gid_pn_resp {
+struct zfcp_fc_gid_pn_rsp {
 	struct fc_ct_hdr	ct_hdr;
 	struct fc_gid_pn_resp	gid_pn;
 } __packed;
 
 /**
- * struct zfcp_fc_gid_pn - everything required in zfcp for gid_pn request
- * @ct: data passed to zfcp_fsf for issuing fsf request
- * @sg_req: scatterlist entry for request data
- * @sg_resp: scatterlist entry for response data
- * @gid_pn_req: GID_PN request data
- * @gid_pn_resp: GID_PN response data
- */
-struct zfcp_fc_gid_pn {
-	struct zfcp_fsf_ct_els ct;
-	struct scatterlist sg_req;
-	struct scatterlist sg_resp;
-	struct zfcp_fc_gid_pn_req gid_pn_req;
-	struct zfcp_fc_gid_pn_resp gid_pn_resp;
-	struct zfcp_port *port;
-};
-
-/**
  * struct zfcp_fc_gpn_ft - container for ct header plus gpn_ft request
  * @ct_hdr: FC GS common transport header
  * @gpn_ft: GPN_FT request
@@ -138,6 +121,10 @@ struct zfcp_fc_req {
 			struct fc_els_adisc		req;
 			struct fc_els_adisc		rsp;
 		} adisc;
+		struct {
+			struct zfcp_fc_gid_pn_req	req;
+			struct zfcp_fc_gid_pn_rsp	rsp;
+		} gid_pn;
 	} u;
 };
 


* [PATCH 06/11] zfcp: Use common FC kmem_cache for GPN_FT request
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (4 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 05/11] zfcp: Allocate GID_PN data through new FC kmem_cache Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 07/11] zfcp: Move qtcb kmem_cache to zfcp_fsf.c Steffen Maier
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 706-zfcp-alloc-gpn-ft.diff --]
[-- Type: text/plain, Size: 7701 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

Switch the allocation of the GPN_FT request data to the FC kmem_cache
and remove the zfcp_gpn kmem_cache.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---


 drivers/s390/scsi/zfcp_aux.c |    8 ---
 drivers/s390/scsi/zfcp_def.h |    1 
 drivers/s390/scsi/zfcp_fc.c  |   87 +++++++++++++++----------------------------
 drivers/s390/scsi/zfcp_fc.h  |   26 +-----------
 4 files changed, 35 insertions(+), 87 deletions(-)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -122,11 +122,6 @@ static int __init zfcp_module_init(void)
 {
 	int retval = -ENOMEM;
 
-	zfcp_data.gpn_ft_cache = zfcp_cache_hw_align("zfcp_gpn",
-					sizeof(struct zfcp_fc_gpn_ft_req));
-	if (!zfcp_data.gpn_ft_cache)
-		goto out;
-
 	zfcp_data.qtcb_cache = zfcp_cache_hw_align("zfcp_qtcb",
 					sizeof(struct fsf_qtcb));
 	if (!zfcp_data.qtcb_cache)
@@ -171,8 +166,6 @@ out_transport:
 out_fc_cache:
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
 out_qtcb_cache:
-	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
-out:
 	return retval;
 }
 
@@ -185,7 +178,6 @@ static void __exit zfcp_module_exit(void
 	fc_release_transport(zfcp_data.scsi_transport_template);
 	kmem_cache_destroy(zfcp_fc_req_cache);
 	kmem_cache_destroy(zfcp_data.qtcb_cache);
-	kmem_cache_destroy(zfcp_data.gpn_ft_cache);
 }
 
 module_exit(zfcp_module_exit);
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -317,7 +317,6 @@ struct zfcp_fsf_req {
 struct zfcp_data {
 	struct scsi_host_template scsi_host_template;
 	struct scsi_transport_template *scsi_transport_template;
-	struct kmem_cache	*gpn_ft_cache;
 	struct kmem_cache	*qtcb_cache;
 };
 
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -528,68 +528,42 @@ void zfcp_fc_test_link(struct zfcp_port
 		put_device(&port->dev);
 }
 
-static void zfcp_free_sg_env(struct zfcp_fc_gpn_ft *gpn_ft, int buf_num)
+static struct zfcp_fc_req *zfcp_alloc_sg_env(int buf_num)
 {
-	struct scatterlist *sg = &gpn_ft->sg_req;
+	struct zfcp_fc_req *fc_req;
 
-	kmem_cache_free(zfcp_data.gpn_ft_cache, sg_virt(sg));
-	zfcp_sg_free_table(gpn_ft->sg_resp, buf_num);
-
-	kfree(gpn_ft);
-}
-
-static struct zfcp_fc_gpn_ft *zfcp_alloc_sg_env(int buf_num)
-{
-	struct zfcp_fc_gpn_ft *gpn_ft;
-	struct zfcp_fc_gpn_ft_req *req;
-
-	gpn_ft = kzalloc(sizeof(*gpn_ft), GFP_KERNEL);
-	if (!gpn_ft)
+	fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_KERNEL);
+	if (!fc_req)
 		return NULL;
 
-	req = kmem_cache_zalloc(zfcp_data.gpn_ft_cache, GFP_KERNEL);
-	if (!req) {
-		kfree(gpn_ft);
-		gpn_ft = NULL;
-		goto out;
+	if (zfcp_sg_setup_table(&fc_req->sg_rsp, buf_num)) {
+		kmem_cache_free(zfcp_fc_req_cache, fc_req);
+		return NULL;
 	}
-	sg_init_one(&gpn_ft->sg_req, req, sizeof(*req));
 
-	if (zfcp_sg_setup_table(gpn_ft->sg_resp, buf_num)) {
-		zfcp_free_sg_env(gpn_ft, buf_num);
-		gpn_ft = NULL;
-	}
-out:
-	return gpn_ft;
-}
+	sg_init_one(&fc_req->sg_req, &fc_req->u.gpn_ft.req,
+		    sizeof(struct zfcp_fc_gpn_ft_req));
 
+	return fc_req;
+}
 
-static int zfcp_fc_send_gpn_ft(struct zfcp_fc_gpn_ft *gpn_ft,
+static int zfcp_fc_send_gpn_ft(struct zfcp_fc_req *fc_req,
 			       struct zfcp_adapter *adapter, int max_bytes)
 {
-	struct zfcp_fsf_ct_els *ct = &gpn_ft->ct;
-	struct zfcp_fc_gpn_ft_req *req = sg_virt(&gpn_ft->sg_req);
+	struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els;
+	struct zfcp_fc_gpn_ft_req *req = &fc_req->u.gpn_ft.req;
 	DECLARE_COMPLETION_ONSTACK(completion);
 	int ret;
 
-	/* prepare CT IU for GPN_FT */
-	req->ct_hdr.ct_rev = FC_CT_REV;
-	req->ct_hdr.ct_fs_type = FC_FST_DIR;
-	req->ct_hdr.ct_fs_subtype = FC_NS_SUBTYPE;
-	req->ct_hdr.ct_options = 0;
-	req->ct_hdr.ct_cmd = FC_NS_GPN_FT;
-	req->ct_hdr.ct_mr_size = max_bytes / 4;
-	req->gpn_ft.fn_domain_id_scope = 0;
-	req->gpn_ft.fn_area_id_scope = 0;
+	zfcp_fc_ct_ns_init(&req->ct_hdr, FC_NS_GPN_FT, max_bytes);
 	req->gpn_ft.fn_fc4_type = FC_TYPE_FCP;
 
-	/* prepare zfcp_send_ct */
-	ct->handler = zfcp_fc_complete;
-	ct->handler_data = &completion;
-	ct->req = &gpn_ft->sg_req;
-	ct->resp = gpn_ft->sg_resp;
+	ct_els->handler = zfcp_fc_complete;
+	ct_els->handler_data = &completion;
+	ct_els->req = &fc_req->sg_req;
+	ct_els->resp = &fc_req->sg_rsp;
 
-	ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct, NULL,
+	ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct_els, NULL,
 			       ZFCP_FC_CTELS_TMO);
 	if (!ret)
 		wait_for_completion(&completion);
@@ -610,11 +584,11 @@ static void zfcp_fc_validate_port(struct
 	list_move_tail(&port->list, lh);
 }
 
-static int zfcp_fc_eval_gpn_ft(struct zfcp_fc_gpn_ft *gpn_ft,
+static int zfcp_fc_eval_gpn_ft(struct zfcp_fc_req *fc_req,
 			       struct zfcp_adapter *adapter, int max_entries)
 {
-	struct zfcp_fsf_ct_els *ct = &gpn_ft->ct;
-	struct scatterlist *sg = gpn_ft->sg_resp;
+	struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els;
+	struct scatterlist *sg = &fc_req->sg_rsp;
 	struct fc_ct_hdr *hdr = sg_virt(sg);
 	struct fc_gpn_ft_resp *acc = sg_virt(sg);
 	struct zfcp_port *port, *tmp;
@@ -623,7 +597,7 @@ static int zfcp_fc_eval_gpn_ft(struct zf
 	u32 d_id;
 	int ret = 0, x, last = 0;
 
-	if (ct->status)
+	if (ct_els->status)
 		return -EIO;
 
 	if (hdr->ct_cmd != FC_FS_ACC) {
@@ -687,7 +661,7 @@ void zfcp_fc_scan_ports(struct work_stru
 	struct zfcp_adapter *adapter = container_of(work, struct zfcp_adapter,
 						    scan_work);
 	int ret, i;
-	struct zfcp_fc_gpn_ft *gpn_ft;
+	struct zfcp_fc_req *fc_req;
 	int chain, max_entries, buf_num, max_bytes;
 
 	chain = adapter->adapter_features & FSF_FEATURE_ELS_CT_CHAINED_SBALS;
@@ -702,21 +676,22 @@ void zfcp_fc_scan_ports(struct work_stru
 	if (zfcp_fc_wka_port_get(&adapter->gs->ds))
 		return;
 
-	gpn_ft = zfcp_alloc_sg_env(buf_num);
-	if (!gpn_ft)
+	fc_req = zfcp_alloc_sg_env(buf_num);
+	if (!fc_req)
 		goto out;
 
 	for (i = 0; i < 3; i++) {
-		ret = zfcp_fc_send_gpn_ft(gpn_ft, adapter, max_bytes);
+		ret = zfcp_fc_send_gpn_ft(fc_req, adapter, max_bytes);
 		if (!ret) {
-			ret = zfcp_fc_eval_gpn_ft(gpn_ft, adapter, max_entries);
+			ret = zfcp_fc_eval_gpn_ft(fc_req, adapter, max_entries);
 			if (ret == -EAGAIN)
 				ssleep(1);
 			else
 				break;
 		}
 	}
-	zfcp_free_sg_env(gpn_ft, buf_num);
+	zfcp_sg_free_table(&fc_req->sg_rsp, buf_num);
+	kmem_cache_free(zfcp_fc_req_cache, fc_req);
 out:
 	zfcp_fc_wka_port_put(&adapter->gs->ds);
 }
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -84,28 +84,6 @@ struct zfcp_fc_gpn_ft_req {
 } __packed;
 
 /**
- * struct zfcp_fc_gpn_ft_resp - container for ct header plus gpn_ft response
- * @ct_hdr: FC GS common transport header
- * @gpn_ft: Array of gpn_ft response data to fill one memory page
- */
-struct zfcp_fc_gpn_ft_resp {
-	struct fc_ct_hdr	ct_hdr;
-	struct fc_gpn_ft_resp	gpn_ft[ZFCP_FC_GPN_FT_ENT_PAGE];
-} __packed;
-
-/**
- * struct zfcp_fc_gpn_ft - zfcp data for gpn_ft request
- * @ct: data passed to zfcp_fsf for issuing fsf request
- * @sg_req: scatter list entry for gpn_ft request
- * @sg_resp: scatter list entries for gpn_ft responses (per memory page)
- */
-struct zfcp_fc_gpn_ft {
-	struct zfcp_fsf_ct_els ct;
-	struct scatterlist sg_req;
-	struct scatterlist sg_resp[ZFCP_FC_GPN_FT_NUM_BUFS];
-};
-
-/**
  * struct zfcp_fc_req - Container for FC ELS and CT requests sent from zfcp
  * @ct_els: data required for issuing fsf command
  * @sg_req: scatterlist entry for request data
@@ -125,6 +103,10 @@ struct zfcp_fc_req {
 			struct zfcp_fc_gid_pn_req	req;
 			struct zfcp_fc_gid_pn_rsp	rsp;
 		} gid_pn;
+		struct {
+			struct scatterlist sg_rsp2[ZFCP_FC_GPN_FT_NUM_BUFS - 1];
+			struct zfcp_fc_gpn_ft_req	req;
+		} gpn_ft;
 	} u;
 };
 


* [PATCH 07/11] zfcp: Move qtcb kmem_cache to zfcp_fsf.c
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (5 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 06/11] zfcp: Use common FC kmem_cache for GPN_FT request Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 08/11] zfcp: Merge FCP task management setup with regular FCP command setup Steffen Maier
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 707-zfcp-kmem-cache-qtcb.diff --]
[-- Type: text/plain, Size: 3358 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

Move the kmem_cache for allocating the qtcb to zfcp_fsf.c and rename
it accordingly.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---


 drivers/s390/scsi/zfcp_aux.c |   12 ++++++------
 drivers/s390/scsi/zfcp_def.h |    1 -
 drivers/s390/scsi/zfcp_ext.h |    1 +
 drivers/s390/scsi/zfcp_fsf.c |    6 ++++--
 4 files changed, 11 insertions(+), 9 deletions(-)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -122,9 +122,9 @@ static int __init zfcp_module_init(void)
 {
 	int retval = -ENOMEM;
 
-	zfcp_data.qtcb_cache = zfcp_cache_hw_align("zfcp_qtcb",
-					sizeof(struct fsf_qtcb));
-	if (!zfcp_data.qtcb_cache)
+	zfcp_fsf_qtcb_cache = zfcp_cache_hw_align("zfcp_fsf_qtcb",
+						  sizeof(struct fsf_qtcb));
+	if (!zfcp_fsf_qtcb_cache)
 		goto out_qtcb_cache;
 
 	zfcp_fc_req_cache = zfcp_cache_hw_align("zfcp_fc_req",
@@ -164,7 +164,7 @@ out_misc:
 out_transport:
 	kmem_cache_destroy(zfcp_fc_req_cache);
 out_fc_cache:
-	kmem_cache_destroy(zfcp_data.qtcb_cache);
+	kmem_cache_destroy(zfcp_fsf_qtcb_cache);
 out_qtcb_cache:
 	return retval;
 }
@@ -177,7 +177,7 @@ static void __exit zfcp_module_exit(void
 	misc_deregister(&zfcp_cfdc_misc);
 	fc_release_transport(zfcp_data.scsi_transport_template);
 	kmem_cache_destroy(zfcp_fc_req_cache);
-	kmem_cache_destroy(zfcp_data.qtcb_cache);
+	kmem_cache_destroy(zfcp_fsf_qtcb_cache);
 }
 
 module_exit(zfcp_module_exit);
@@ -236,7 +236,7 @@ static int zfcp_allocate_low_mem_buffers
 		return -ENOMEM;
 
 	adapter->pool.qtcb_pool =
-		mempool_create_slab_pool(4, zfcp_data.qtcb_cache);
+		mempool_create_slab_pool(4, zfcp_fsf_qtcb_cache);
 	if (!adapter->pool.qtcb_pool)
 		return -ENOMEM;
 
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -317,7 +317,6 @@ struct zfcp_fsf_req {
 struct zfcp_data {
 	struct scsi_host_template scsi_host_template;
 	struct scsi_transport_template *scsi_transport_template;
-	struct kmem_cache	*qtcb_cache;
 };
 
 #endif /* ZFCP_DEF_H */
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -98,6 +98,7 @@ extern int zfcp_fc_exec_bsg_job(struct f
 extern int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *);
 
 /* zfcp_fsf.c */
+extern struct kmem_cache *zfcp_fsf_qtcb_cache;
 extern int zfcp_fsf_open_port(struct zfcp_erp_action *);
 extern int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *);
 extern int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *);
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -18,6 +18,8 @@
 #include "zfcp_qdio.h"
 #include "zfcp_reqlist.h"
 
+struct kmem_cache *zfcp_fsf_qtcb_cache;
+
 static void zfcp_fsf_request_timeout_handler(unsigned long data)
 {
 	struct zfcp_adapter *adapter = (struct zfcp_adapter *) data;
@@ -83,7 +85,7 @@ void zfcp_fsf_req_free(struct zfcp_fsf_r
 	}
 
 	if (likely(req->qtcb))
-		kmem_cache_free(zfcp_data.qtcb_cache, req->qtcb);
+		kmem_cache_free(zfcp_fsf_qtcb_cache, req->qtcb);
 	kfree(req);
 }
 
@@ -628,7 +630,7 @@ static struct fsf_qtcb *zfcp_qtcb_alloc(
 	if (likely(pool))
 		qtcb = mempool_alloc(pool, GFP_ATOMIC);
 	else
-		qtcb = kmem_cache_alloc(zfcp_data.qtcb_cache, GFP_ATOMIC);
+		qtcb = kmem_cache_alloc(zfcp_fsf_qtcb_cache, GFP_ATOMIC);
 
 	if (unlikely(!qtcb))
 		return NULL;


* [PATCH 08/11] zfcp: Merge FCP task management setup with regular FCP command setup
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (6 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 07/11] zfcp: Move qtcb kmem_cache to zfcp_fsf.c Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 09/11] zfcp: Move SCSI host and transport templates out of struct zfcp_data Steffen Maier
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 708-zfcp-fcp-task-mgmnt.diff --]
[-- Type: text/plain, Size: 2786 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

For task management commands, only the LUN and the task management
flags are required. The regular FCP setup already sets the LUN in the
fcp_cmnd, so all that is required for merging the two functions is
setting up the TM flags.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---
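A short sketch of the two call patterns after the merge (variable names as in
the hunks below, TM flag from include/scsi/fc/fc_fcp.h):

	/* regular FCP command: LUN, task attributes and CDB taken from the
	 * scsi_cmnd */
	zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd, 0);

	/* task management function: LUN from the scsi_cmnd plus the TM flag,
	 * e.g. a LUN reset; no CDB or task attributes are needed */
	zfcp_fc_scsi_to_fcp(fcp_cmnd, scmnd, FCP_TMF_LUN_RESET);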


 drivers/s390/scsi/zfcp_fc.h  |   22 ++++++++--------------
 drivers/s390/scsi/zfcp_fsf.c |    4 ++--
 2 files changed, 10 insertions(+), 16 deletions(-)
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -164,14 +164,21 @@ struct zfcp_fc_wka_ports {
  * zfcp_fc_scsi_to_fcp - setup FCP command with data from scsi_cmnd
  * @fcp: fcp_cmnd to setup
  * @scsi: scsi_cmnd where to get LUN, task attributes/flags and CDB
+ * @tm: task management flags to setup task management command
  */
 static inline
-void zfcp_fc_scsi_to_fcp(struct fcp_cmnd *fcp, struct scsi_cmnd *scsi)
+void zfcp_fc_scsi_to_fcp(struct fcp_cmnd *fcp, struct scsi_cmnd *scsi,
+			 u8 tm_flags)
 {
 	char tag[2];
 
 	int_to_scsilun(scsi->device->lun, (struct scsi_lun *) &fcp->fc_lun);
 
+	if (unlikely(tm_flags)) {
+		fcp->fc_tm_flags = tm_flags;
+		return;
+	}
+
 	if (scsi_populate_tag_msg(scsi, tag)) {
 		switch (tag[0]) {
 		case MSG_ORDERED_TAG:
@@ -198,19 +205,6 @@ void zfcp_fc_scsi_to_fcp(struct fcp_cmnd
 }
 
 /**
- * zfcp_fc_fcp_tm - setup FCP command as task management command
- * @fcp: fcp_cmnd to setup
- * @dev: scsi_device where to send the task management command
- * @tm: task management flags to setup tm command
- */
-static inline
-void zfcp_fc_fcp_tm(struct fcp_cmnd *fcp, struct scsi_device *dev, u8 tm_flags)
-{
-	int_to_scsilun(dev->lun, (struct scsi_lun *) &fcp->fc_lun);
-	fcp->fc_tm_flags |= tm_flags;
-}
-
-/**
  * zfcp_fc_evap_fcp_rsp - evaluate FCP RSP IU and update scsi_cmnd accordingly
  * @fcp_rsp: FCP RSP IU to evaluate
  * @scsi: SCSI command where to update status and sense buffer
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -2210,7 +2210,7 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *
 	zfcp_fsf_set_data_dir(scsi_cmnd, &io->data_direction);
 
 	fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd;
-	zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd);
+	zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd, 0);
 
 	if (scsi_prot_sg_count(scsi_cmnd)) {
 		zfcp_qdio_set_data_div(qdio, &req->qdio_req,
@@ -2299,7 +2299,7 @@ struct zfcp_fsf_req *zfcp_fsf_fcp_task_m
 	zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
 
 	fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd;
-	zfcp_fc_fcp_tm(fcp_cmnd, scmnd->device, tm_flags);
+	zfcp_fc_scsi_to_fcp(fcp_cmnd, scmnd, tm_flags);
 
 	zfcp_fsf_start_timer(req, ZFCP_SCSI_ER_TIMEOUT);
 	if (!zfcp_fsf_req_send(req))


* [PATCH 09/11] zfcp: Move SCSI host and transport templates out of struct zfcp_data
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (7 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 08/11] zfcp: Merge FCP task management setup with regular FCP command setup Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 10/11] fc: Add GSPN_ID request to header file Steffen Maier
  2011-02-22 18:54 ` [PATCH 11/11] zfcp: Add information to symbolic port name when running in NPIV mode Steffen Maier
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 709-zfcp-move-templates.diff --]
[-- Type: text/plain, Size: 7671 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

The SCSI host and transport templates are the only members left in the
global zfcp_data struct. Move them out of zfcp_data and remove the
now unused zfcp_data struct. Also update the names of the register and
unregister functions to use the zfcp_scsi prefix.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---


 drivers/s390/scsi/zfcp_aux.c  |   14 ++++----
 drivers/s390/scsi/zfcp_def.h  |    6 ---
 drivers/s390/scsi/zfcp_ext.h  |    6 +--
 drivers/s390/scsi/zfcp_scsi.c |   70 ++++++++++++++++++++++--------------------
 4 files changed, 48 insertions(+), 48 deletions(-)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -132,11 +132,11 @@ static int __init zfcp_module_init(void)
 	if (!zfcp_fc_req_cache)
 		goto out_fc_cache;
 
-	zfcp_data.scsi_transport_template =
+	zfcp_scsi_transport_template =
 		fc_attach_transport(&zfcp_transport_functions);
-	if (!zfcp_data.scsi_transport_template)
+	if (!zfcp_scsi_transport_template)
 		goto out_transport;
-	scsi_transport_reserve_device(zfcp_data.scsi_transport_template,
+	scsi_transport_reserve_device(zfcp_scsi_transport_template,
 				      sizeof(struct zfcp_scsi_dev));
 
 
@@ -160,7 +160,7 @@ static int __init zfcp_module_init(void)
 out_ccw_register:
 	misc_deregister(&zfcp_cfdc_misc);
 out_misc:
-	fc_release_transport(zfcp_data.scsi_transport_template);
+	fc_release_transport(zfcp_scsi_transport_template);
 out_transport:
 	kmem_cache_destroy(zfcp_fc_req_cache);
 out_fc_cache:
@@ -175,7 +175,7 @@ static void __exit zfcp_module_exit(void
 {
 	ccw_driver_unregister(&zfcp_ccw_driver);
 	misc_deregister(&zfcp_cfdc_misc);
-	fc_release_transport(zfcp_data.scsi_transport_template);
+	fc_release_transport(zfcp_scsi_transport_template);
 	kmem_cache_destroy(zfcp_fc_req_cache);
 	kmem_cache_destroy(zfcp_fsf_qtcb_cache);
 }
@@ -413,7 +413,7 @@ struct zfcp_adapter *zfcp_adapter_enqueu
 	adapter->dma_parms.max_segment_size = ZFCP_QDIO_SBALE_LEN;
 	adapter->ccw_device->dev.dma_parms = &adapter->dma_parms;
 
-	if (!zfcp_adapter_scsi_register(adapter))
+	if (!zfcp_scsi_adapter_register(adapter))
 		return adapter;
 
 failed:
@@ -430,7 +430,7 @@ void zfcp_adapter_unregister(struct zfcp
 	zfcp_destroy_adapter_work_queue(adapter);
 
 	zfcp_fc_wka_ports_force_offline(adapter->gs);
-	zfcp_adapter_scsi_unregister(adapter);
+	zfcp_scsi_adapter_unregister(adapter);
 	sysfs_remove_group(&cdev->dev.kobj, &zfcp_sysfs_adapter_attrs);
 
 	zfcp_erp_thread_kill(adapter);
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -313,10 +313,4 @@ struct zfcp_fsf_req {
 	void			(*handler)(struct zfcp_fsf_req *);
 };
 
-/* driver data */
-struct zfcp_data {
-	struct scsi_host_template scsi_host_template;
-	struct scsi_transport_template *scsi_transport_template;
-};
-
 #endif /* ZFCP_DEF_H */
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -141,9 +141,9 @@ extern struct zfcp_fsf_req *zfcp_fsf_get
 					     struct qdio_buffer *);
 
 /* zfcp_scsi.c */
-extern struct zfcp_data zfcp_data;
-extern int zfcp_adapter_scsi_register(struct zfcp_adapter *);
-extern void zfcp_adapter_scsi_unregister(struct zfcp_adapter *);
+extern struct scsi_transport_template *zfcp_scsi_transport_template;
+extern int zfcp_scsi_adapter_register(struct zfcp_adapter *);
+extern void zfcp_scsi_adapter_unregister(struct zfcp_adapter *);
 extern struct fc_function_template zfcp_transport_functions;
 extern void zfcp_scsi_rport_work(struct work_struct *);
 extern void zfcp_scsi_schedule_rport_register(struct zfcp_port *);
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -292,7 +292,37 @@ static int zfcp_scsi_eh_host_reset_handl
 	return SUCCESS;
 }
 
-int zfcp_adapter_scsi_register(struct zfcp_adapter *adapter)
+struct scsi_transport_template *zfcp_scsi_transport_template;
+
+static struct scsi_host_template zfcp_scsi_host_template = {
+	.module			 = THIS_MODULE,
+	.name			 = "zfcp",
+	.queuecommand		 = zfcp_scsi_queuecommand,
+	.eh_abort_handler	 = zfcp_scsi_eh_abort_handler,
+	.eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler,
+	.eh_target_reset_handler = zfcp_scsi_eh_target_reset_handler,
+	.eh_host_reset_handler	 = zfcp_scsi_eh_host_reset_handler,
+	.slave_alloc		 = zfcp_scsi_slave_alloc,
+	.slave_configure	 = zfcp_scsi_slave_configure,
+	.slave_destroy		 = zfcp_scsi_slave_destroy,
+	.change_queue_depth	 = zfcp_scsi_change_queue_depth,
+	.proc_name		 = "zfcp",
+	.can_queue		 = 4096,
+	.this_id		 = -1,
+	.sg_tablesize		 = ZFCP_QDIO_MAX_SBALES_PER_REQ,
+	.max_sectors		 = (ZFCP_QDIO_MAX_SBALES_PER_REQ * 8),
+	.dma_boundary		 = ZFCP_QDIO_SBALE_LEN - 1,
+	.cmd_per_lun		 = 1,
+	.use_clustering		 = 1,
+	.shost_attrs		 = zfcp_sysfs_shost_attrs,
+	.sdev_attrs		 = zfcp_sysfs_sdev_attrs,
+};
+
+/**
+ * zfcp_scsi_adapter_register - Register SCSI and FC host with SCSI midlayer
+ * @adapter: The zfcp adapter to register with the SCSI midlayer
+ */
+int zfcp_scsi_adapter_register(struct zfcp_adapter *adapter)
 {
 	struct ccw_dev_id dev_id;
 
@@ -301,7 +331,7 @@ int zfcp_adapter_scsi_register(struct zf
 
 	ccw_device_get_id(adapter->ccw_device, &dev_id);
 	/* register adapter as SCSI host with mid layer of SCSI stack */
-	adapter->scsi_host = scsi_host_alloc(&zfcp_data.scsi_host_template,
+	adapter->scsi_host = scsi_host_alloc(&zfcp_scsi_host_template,
 					     sizeof (struct zfcp_adapter *));
 	if (!adapter->scsi_host) {
 		dev_err(&adapter->ccw_device->dev,
@@ -316,7 +346,7 @@ int zfcp_adapter_scsi_register(struct zf
 	adapter->scsi_host->max_channel = 0;
 	adapter->scsi_host->unique_id = dev_id.devno;
 	adapter->scsi_host->max_cmd_len = 16; /* in struct fcp_cmnd */
-	adapter->scsi_host->transportt = zfcp_data.scsi_transport_template;
+	adapter->scsi_host->transportt = zfcp_scsi_transport_template;
 
 	adapter->scsi_host->hostdata[0] = (unsigned long) adapter;
 
@@ -328,7 +358,11 @@ int zfcp_adapter_scsi_register(struct zf
 	return 0;
 }
 
-void zfcp_adapter_scsi_unregister(struct zfcp_adapter *adapter)
+/**
+ * zfcp_scsi_adapter_unregister - Unregister SCSI and FC host from SCSI midlayer
+ * @adapter: The zfcp adapter to unregister.
+ */
+void zfcp_scsi_adapter_unregister(struct zfcp_adapter *adapter)
 {
 	struct Scsi_Host *shost;
 	struct zfcp_port *port;
@@ -346,8 +380,6 @@ void zfcp_adapter_scsi_unregister(struct
 	scsi_remove_host(shost);
 	scsi_host_put(shost);
 	adapter->scsi_host = NULL;
-
-	return;
 }
 
 static struct fc_host_statistics*
@@ -692,29 +724,3 @@ struct fc_function_template zfcp_transpo
 	.show_host_port_id = 1,
 	.dd_bsg_size = sizeof(struct zfcp_fsf_ct_els),
 };
-
-struct zfcp_data zfcp_data = {
-	.scsi_host_template = {
-		.name			 = "zfcp",
-		.module			 = THIS_MODULE,
-		.proc_name		 = "zfcp",
-		.change_queue_depth	 = zfcp_scsi_change_queue_depth,
-		.slave_alloc		 = zfcp_scsi_slave_alloc,
-		.slave_configure	 = zfcp_scsi_slave_configure,
-		.slave_destroy		 = zfcp_scsi_slave_destroy,
-		.queuecommand		 = zfcp_scsi_queuecommand,
-		.eh_abort_handler	 = zfcp_scsi_eh_abort_handler,
-		.eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler,
-		.eh_target_reset_handler = zfcp_scsi_eh_target_reset_handler,
-		.eh_host_reset_handler	 = zfcp_scsi_eh_host_reset_handler,
-		.can_queue		 = 4096,
-		.this_id		 = -1,
-		.sg_tablesize		 = ZFCP_QDIO_MAX_SBALES_PER_REQ,
-		.cmd_per_lun		 = 1,
-		.use_clustering		 = 1,
-		.sdev_attrs		 = zfcp_sysfs_sdev_attrs,
-		.max_sectors		 = (ZFCP_QDIO_MAX_SBALES_PER_REQ * 8),
-		.dma_boundary		 = ZFCP_QDIO_SBALE_LEN - 1,
-		.shost_attrs		 = zfcp_sysfs_shost_attrs,
-	},
-};


* [PATCH 10/11] fc: Add GSPN_ID request to header file
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (8 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 09/11] zfcp: Move SCSI host and transport templates out of struct zfcp_data Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  2011-02-22 18:54 ` [PATCH 11/11] zfcp: Add information to symbolic port name when running in NPIV mode Steffen Maier
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 710-fc-gspn-id.diff --]
[-- Type: text/plain, Size: 1223 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

Add the request code and the corresponding structs for the "Get
Symbolic Port Name" (GSPN_ID) request.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---


 include/scsi/fc/fc_ns.h |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
--- a/include/scsi/fc/fc_ns.h
+++ b/include/scsi/fc/fc_ns.h
@@ -41,6 +41,7 @@ enum fc_ns_req {
 	FC_NS_GI_A =	0x0101,		/* get identifiers - scope */
 	FC_NS_GPN_ID =	0x0112,		/* get port name by ID */
 	FC_NS_GNN_ID =	0x0113,		/* get node name by ID */
+	FC_NS_GSPN_ID = 0x0118,		/* get symbolic port name */
 	FC_NS_GID_PN =	0x0121,		/* get ID for port name */
 	FC_NS_GID_NN =	0x0131,		/* get IDs for node name */
 	FC_NS_GID_FT =	0x0171,		/* get IDs by FC4 type */
@@ -144,7 +145,7 @@ struct fc_ns_gid_pn {
 };
 
 /*
- * GID_PN response
+ * GID_PN response or GSPN_ID request
  */
 struct fc_gid_pn_resp {
 	__u8      fp_resvd;
@@ -152,6 +153,14 @@ struct fc_gid_pn_resp {
 };
 
 /*
+ * GSPN_ID response
+ */
+struct fc_gspn_resp {
+	__u8	fp_name_len;
+	char	fp_name[];
+};
+
+/*
  * RFT_ID request - register FC-4 types for ID.
  */
 struct fc_ns_rft_id {


* [PATCH 11/11] zfcp: Add information to symbolic port name when running in NPIV mode
  2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
                   ` (9 preceding siblings ...)
  2011-02-22 18:54 ` [PATCH 10/11] fc: Add GSPN_ID request to header file Steffen Maier
@ 2011-02-22 18:54 ` Steffen Maier
  10 siblings, 0 replies; 12+ messages in thread
From: Steffen Maier @ 2011-02-22 18:54 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Christof Schmitt

[-- Attachment #1: 711-zfcp-port-name-info.diff --]
[-- Type: text/plain, Size: 9014 bytes --]

From: Christof Schmitt <christof.schmitt@de.ibm.com>

Query the FC symbolic port name so it can be reported through the
fc_host sysfs, and enable the symbolic_name fc_host attribute. When
running in NPIV mode, extend the symbolic port name with the devno and
the hostname. This allows SAN and storage administrators to better
identify Linux systems.

Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
---
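As an illustration only (the bus ID and hostname are made up): in NPIV mode
the name reported by GSPN_ID gets something like "DEVNO: 0.0.1900 NAME:
lnxlpar1" appended in fc_host symbolic_name, which lets a SAN or storage
administrator map the virtual port back to a specific FCP device and Linux
instance.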


 drivers/s390/scsi/zfcp_aux.c  |    2 
 drivers/s390/scsi/zfcp_def.h  |    1 
 drivers/s390/scsi/zfcp_erp.c  |    2 
 drivers/s390/scsi/zfcp_ext.h  |    1 
 drivers/s390/scsi/zfcp_fc.c   |  120 ++++++++++++++++++++++++++++++++++++++++++
 drivers/s390/scsi/zfcp_fc.h   |   42 ++++++++++++++
 drivers/s390/scsi/zfcp_scsi.c |    1 
 7 files changed, 169 insertions(+)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -362,6 +362,7 @@ struct zfcp_adapter *zfcp_adapter_enqueu
 
 	INIT_WORK(&adapter->stat_work, _zfcp_status_read_scheduler);
 	INIT_WORK(&adapter->scan_work, zfcp_fc_scan_ports);
+	INIT_WORK(&adapter->ns_up_work, zfcp_fc_sym_name_update);
 
 	if (zfcp_qdio_setup(adapter))
 		goto failed;
@@ -427,6 +428,7 @@ void zfcp_adapter_unregister(struct zfcp
 
 	cancel_work_sync(&adapter->scan_work);
 	cancel_work_sync(&adapter->stat_work);
+	cancel_work_sync(&adapter->ns_up_work);
 	zfcp_destroy_adapter_work_queue(adapter);
 
 	zfcp_fc_wka_ports_force_offline(adapter->gs);
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -189,6 +189,7 @@ struct zfcp_adapter {
 	struct fsf_qtcb_bottom_port *stats_reset_data;
 	unsigned long		stats_reset;
 	struct work_struct	scan_work;
+	struct work_struct	ns_up_work;
 	struct service_level	service_level;
 	struct workqueue_struct	*work_queue;
 	struct device_dma_parameters dma_parms;
--- a/drivers/s390/scsi/zfcp_erp.c
+++ b/drivers/s390/scsi/zfcp_erp.c
@@ -1231,8 +1231,10 @@ static void zfcp_erp_action_cleanup(stru
 		if (result == ZFCP_ERP_SUCCEEDED) {
 			register_service_level(&adapter->service_level);
 			queue_work(adapter->work_queue, &adapter->scan_work);
+			queue_work(adapter->work_queue, &adapter->ns_up_work);
 		} else
 			unregister_service_level(&adapter->service_level);
+
 		kref_put(&adapter->ref, zfcp_adapter_release);
 		break;
 	}
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -96,6 +96,7 @@ extern int zfcp_fc_gs_setup(struct zfcp_
 extern void zfcp_fc_gs_destroy(struct zfcp_adapter *);
 extern int zfcp_fc_exec_bsg_job(struct fc_bsg_job *);
 extern int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *);
+extern void zfcp_fc_sym_name_update(struct work_struct *);
 
 /* zfcp_fsf.c */
 extern struct kmem_cache *zfcp_fsf_qtcb_cache;
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -11,6 +11,7 @@
 
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <linux/utsname.h>
 #include <scsi/fc/fc_els.h>
 #include <scsi/libfc.h>
 #include "zfcp_ext.h"
@@ -696,6 +697,125 @@ out:
 	zfcp_fc_wka_port_put(&adapter->gs->ds);
 }
 
+static int zfcp_fc_gspn(struct zfcp_adapter *adapter,
+			struct zfcp_fc_req *fc_req)
+{
+	DECLARE_COMPLETION_ONSTACK(completion);
+	char devno[] = "DEVNO:";
+	struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els;
+	struct zfcp_fc_gspn_req *gspn_req = &fc_req->u.gspn.req;
+	struct zfcp_fc_gspn_rsp *gspn_rsp = &fc_req->u.gspn.rsp;
+	int ret;
+
+	zfcp_fc_ct_ns_init(&gspn_req->ct_hdr, FC_NS_GSPN_ID,
+			   FC_SYMBOLIC_NAME_SIZE);
+	hton24(gspn_req->gspn.fp_fid, fc_host_port_id(adapter->scsi_host));
+
+	sg_init_one(&fc_req->sg_req, gspn_req, sizeof(*gspn_req));
+	sg_init_one(&fc_req->sg_rsp, gspn_rsp, sizeof(*gspn_rsp));
+
+	ct_els->handler = zfcp_fc_complete;
+	ct_els->handler_data = &completion;
+	ct_els->req = &fc_req->sg_req;
+	ct_els->resp = &fc_req->sg_rsp;
+
+	ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct_els, NULL,
+			       ZFCP_FC_CTELS_TMO);
+	if (ret)
+		return ret;
+
+	wait_for_completion(&completion);
+	if (ct_els->status)
+		return ct_els->status;
+
+	if (fc_host_port_type(adapter->scsi_host) == FC_PORTTYPE_NPIV &&
+	    !(strstr(gspn_rsp->gspn.fp_name, devno)))
+		snprintf(fc_host_symbolic_name(adapter->scsi_host),
+			 FC_SYMBOLIC_NAME_SIZE, "%s%s %s NAME: %s",
+			 gspn_rsp->gspn.fp_name, devno,
+			 dev_name(&adapter->ccw_device->dev),
+			 init_utsname()->nodename);
+	else
+		strlcpy(fc_host_symbolic_name(adapter->scsi_host),
+			gspn_rsp->gspn.fp_name, FC_SYMBOLIC_NAME_SIZE);
+
+	return 0;
+}
+
+static void zfcp_fc_rspn(struct zfcp_adapter *adapter,
+			 struct zfcp_fc_req *fc_req)
+{
+	DECLARE_COMPLETION_ONSTACK(completion);
+	struct Scsi_Host *shost = adapter->scsi_host;
+	struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els;
+	struct zfcp_fc_rspn_req *rspn_req = &fc_req->u.rspn.req;
+	struct fc_ct_hdr *rspn_rsp = &fc_req->u.rspn.rsp;
+	int ret, len;
+
+	zfcp_fc_ct_ns_init(&rspn_req->ct_hdr, FC_NS_RSPN_ID,
+			   FC_SYMBOLIC_NAME_SIZE);
+	hton24(rspn_req->rspn.fr_fid.fp_fid, fc_host_port_id(shost));
+	len = strlcpy(rspn_req->rspn.fr_name, fc_host_symbolic_name(shost),
+		      FC_SYMBOLIC_NAME_SIZE);
+	rspn_req->rspn.fr_name_len = len;
+
+	sg_init_one(&fc_req->sg_req, rspn_req, sizeof(*rspn_req));
+	sg_init_one(&fc_req->sg_rsp, rspn_rsp, sizeof(*rspn_rsp));
+
+	ct_els->handler = zfcp_fc_complete;
+	ct_els->handler_data = &completion;
+	ct_els->req = &fc_req->sg_req;
+	ct_els->resp = &fc_req->sg_rsp;
+
+	ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct_els, NULL,
+			       ZFCP_FC_CTELS_TMO);
+	if (!ret)
+		wait_for_completion(&completion);
+}
+
+/**
+ * zfcp_fc_sym_name_update - Retrieve and update the symbolic port name
+ * @work: ns_up_work of the adapter where to update the symbolic port name
+ *
+ * Retrieve the current symbolic port name that may have been set by
+ * the hardware using the GSPN request and update the fc_host
+ * symbolic_name sysfs attribute. When running in NPIV mode (and hence
+ * the port name is unique for this system), update the symbolic port
+ * name to add Linux specific information and update the FC nameserver
+ * using the RSPN request.
+ */
+void zfcp_fc_sym_name_update(struct work_struct *work)
+{
+	struct zfcp_adapter *adapter = container_of(work, struct zfcp_adapter,
+						    ns_up_work);
+	int ret;
+	struct zfcp_fc_req *fc_req;
+
+	if (fc_host_port_type(adapter->scsi_host) != FC_PORTTYPE_NPORT &&
+	    fc_host_port_type(adapter->scsi_host) != FC_PORTTYPE_NPIV)
+		return;
+
+	fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_KERNEL);
+	if (!fc_req)
+		return;
+
+	ret = zfcp_fc_wka_port_get(&adapter->gs->ds);
+	if (ret)
+		goto out_free;
+
+	ret = zfcp_fc_gspn(adapter, fc_req);
+	if (ret || fc_host_port_type(adapter->scsi_host) != FC_PORTTYPE_NPIV)
+		goto out_ds_put;
+
+	memset(fc_req, 0, sizeof(*fc_req));
+	zfcp_fc_rspn(adapter, fc_req);
+
+out_ds_put:
+	zfcp_fc_wka_port_put(&adapter->gs->ds);
+out_free:
+	kmem_cache_free(zfcp_fc_req_cache, fc_req);
+}
+
 static void zfcp_fc_ct_els_job_handler(void *data)
 {
 	struct fc_bsg_job *job = data;
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -84,6 +84,40 @@ struct zfcp_fc_gpn_ft_req {
 } __packed;
 
 /**
+ * struct zfcp_fc_gspn_req - container for ct header plus GSPN_ID request
+ * @ct_hdr: FC GS common transport header
+ * @gspn: GSPN_ID request
+ */
+struct zfcp_fc_gspn_req {
+	struct fc_ct_hdr	ct_hdr;
+	struct fc_gid_pn_resp	gspn;
+} __packed;
+
+/**
+ * struct zfcp_fc_gspn_rsp - container for ct header plus GSPN_ID response
+ * @ct_hdr: FC GS common transport header
+ * @gspn: GSPN_ID response
+ * @name: The name string of the GSPN_ID response
+ */
+struct zfcp_fc_gspn_rsp {
+	struct fc_ct_hdr	ct_hdr;
+	struct fc_gspn_resp	gspn;
+	char			name[FC_SYMBOLIC_NAME_SIZE];
+} __packed;
+
+/**
+ * struct zfcp_fc_rspn_req - container for ct header plus RSPN_ID request
+ * @ct_hdr: FC GS common transport header
+ * @rspn: RSPN_ID request
+ * @name: The name string of the RSPN_ID request
+ */
+struct zfcp_fc_rspn_req {
+	struct fc_ct_hdr	ct_hdr;
+	struct fc_ns_rspn	rspn;
+	char			name[FC_SYMBOLIC_NAME_SIZE];
+} __packed;
+
+/**
  * struct zfcp_fc_req - Container for FC ELS and CT requests sent from zfcp
  * @ct_els: data required for issuing fsf command
  * @sg_req: scatterlist entry for request data
@@ -107,6 +141,14 @@ struct zfcp_fc_req {
 			struct scatterlist sg_rsp2[ZFCP_FC_GPN_FT_NUM_BUFS - 1];
 			struct zfcp_fc_gpn_ft_req	req;
 		} gpn_ft;
+		struct {
+			struct zfcp_fc_gspn_req		req;
+			struct zfcp_fc_gspn_rsp		rsp;
+		} gspn;
+		struct {
+			struct zfcp_fc_rspn_req		req;
+			struct fc_ct_hdr		rsp;
+		} rspn;
 	} u;
 };
 
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -720,6 +720,7 @@ struct fc_function_template zfcp_transpo
 	/* no functions registered for following dynamic attributes but
 	   directly set by LLDD */
 	.show_host_port_type = 1,
+	.show_host_symbolic_name = 1,
 	.show_host_speed = 1,
 	.show_host_port_id = 1,
 	.dd_bsg_size = sizeof(struct zfcp_fsf_ct_els),

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2011-02-22 18:54 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-02-22 18:54 [PATCH 00/11] zfcp patches for 2.6.39 merge window Steffen Maier
2011-02-22 18:54 ` [PATCH 01/11] zfcp: Remove redundant unlikely() Steffen Maier
2011-02-22 18:54 ` [PATCH 02/11] zfcp: Remove unused flag ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT Steffen Maier
2011-02-22 18:54 ` [PATCH 03/11] zfcp: Replace kmem_cache for "status read" data Steffen Maier
2011-02-22 18:54 ` [PATCH 04/11] zfcp: Introduce new kmem_cache for FC request and response data Steffen Maier
2011-02-22 18:54 ` [PATCH 05/11] zfcp: Allocate GID_PN data through new FC kmem_cache Steffen Maier
2011-02-22 18:54 ` [PATCH 06/11] zfcp: Use common FC kmem_cache for GPN_FT request Steffen Maier
2011-02-22 18:54 ` [PATCH 07/11] zfcp: Move qtcb kmem_cache to zfcp_fsf.c Steffen Maier
2011-02-22 18:54 ` [PATCH 08/11] zfcp: Merge FCP task management setup with regular FCP command setup Steffen Maier
2011-02-22 18:54 ` [PATCH 09/11] zfcp: Move SCSI host and transport templates out of struct zfcp_data Steffen Maier
2011-02-22 18:54 ` [PATCH 10/11] fc: Add GSPN_ID request to header file Steffen Maier
2011-02-22 18:54 ` [PATCH 11/11] zfcp: Add information to symbolic port name when running in NPIV mode Steffen Maier
