* [PATCHv3 0/8] Referrals support
@ 2013-12-17  8:18 Hannes Reinecke
  2013-12-17  8:18 ` [PATCH 1/8] target_core_alua: validate ALUA state transition Hannes Reinecke
                   ` (7 more replies)
  0 siblings, 8 replies; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

Hi Nic,

as discussed, here is the third version of the referrals
infrastructure. All of the configfs layout changes have been
removed as requested.

Main features of this patchset:

- Asynchronous transitioning: I've switched 'transitioning'
  handling to use a workqueue, which should allow us to simulate
  asynchronous transitioning modes, i.e. TCM should now be capable
  of handling requests while a transition is in progress, and of
  terminating them with the correct sense code.
- Include target device descriptor in VPD page 83:
  For the ALUA device handler we need to identify the target device
  to which a given target port belongs. So include the respective
  values in the VPD page.
- Referrals support: Adding a new configfs attribute 'lba_map'
  which contains the referrals access mapping. The 'LBA Dependent'
  ALUA state can only be selected if an lba_map is present, and
  explicit ALUA has to be disabled.

Hannes Reinecke (8):
  target_core_alua: validate ALUA state transition
  target_core_alua: Allocate ALUA metadata on demand
  target_core_alua: store old and pending ALUA state
  target_core_alua: Use workqueue for ALUA transitioning
  target_core: simplify scsi_name_len calculation
  target_core_spc: Include target device descriptor in VPD page 83
  target_core_alua: Referrals infrastructure
  target_core_alua: Referrals configfs integration

 drivers/target/target_core_alua.c      | 546 +++++++++++++++++++++++++++------
 drivers/target/target_core_alua.h      |  15 +-
 drivers/target/target_core_configfs.c  | 180 ++++++++++-
 drivers/target/target_core_device.c    |   3 +
 drivers/target/target_core_sbc.c       |   5 +-
 drivers/target/target_core_spc.c       |  76 ++++-
 drivers/target/target_core_transport.c |  28 +-
 include/scsi/scsi.h                    |   1 +
 include/target/target_core_base.h      |  27 +-
 9 files changed, 770 insertions(+), 111 deletions(-)

-- 
1.7.12.4

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 1/8] target_core_alua: validate ALUA state transition
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 19:32   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 2/8] target_core_alua: Allocate ALUA metadata on demand Hannes Reinecke
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

As we can now modify the list of supported states, we need to
validate the requested ALUA state when doing a state transition.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_alua.c | 55 ++++++++++++++++++++++++++++++++-------
 1 file changed, 45 insertions(+), 10 deletions(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index fdcee32..292ecce 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -41,11 +41,14 @@
 #include "target_core_alua.h"
 #include "target_core_ua.h"
 
-static sense_reason_t core_alua_check_transition(int state, int *primary);
+static sense_reason_t core_alua_check_transition(int state, int valid,
+						 int *primary);
 static int core_alua_set_tg_pt_secondary_state(
 		struct t10_alua_tg_pt_gp_member *tg_pt_gp_mem,
 		struct se_port *port, int explicit, int offline);
 
+static char *core_alua_dump_state(int state);
+
 static u16 alua_lu_gps_counter;
 static u32 alua_lu_gps_count;
 
@@ -210,7 +213,7 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
 	unsigned char *ptr;
 	sense_reason_t rc = TCM_NO_SENSE;
 	u32 len = 4; /* Skip over RESERVED area in header */
-	int alua_access_state, primary = 0;
+	int alua_access_state, primary = 0, valid_states;
 	u16 tg_pt_id, rtpi;
 
 	if (!l_port)
@@ -252,6 +255,7 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
 		rc = TCM_UNSUPPORTED_SCSI_OPCODE;
 		goto out;
 	}
+	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
 
 	ptr = &buf[4]; /* Skip over RESERVED area in header */
 
@@ -263,7 +267,8 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
 		 * the state is a primary or secondary target port asymmetric
 		 * access state.
 		 */
-		rc = core_alua_check_transition(alua_access_state, &primary);
+		rc = core_alua_check_transition(alua_access_state,
+						valid_states, &primary);
 		if (rc) {
 			/*
 			 * If the SET TARGET PORT GROUPS attempts to establish
@@ -618,17 +623,31 @@ out:
  * Check implicit and explicit ALUA state change request.
  */
 static sense_reason_t
-core_alua_check_transition(int state, int *primary)
+core_alua_check_transition(int state, int valid, int *primary)
 {
+	/*
+	 * OPTIMIZED, NON-OPTIMIZED, STANDBY and UNAVAILABLE are
+	 * defined as primary target port asymmetric access states.
+	 */
 	switch (state) {
 	case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
+		if (!(valid & ALUA_AO_SUP))
+			goto not_supported;
+		*primary = 1;
+		break;
 	case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
+		if (!(valid & ALUA_AN_SUP))
+			goto not_supported;
+		*primary = 1;
+		break;
 	case ALUA_ACCESS_STATE_STANDBY:
+		if (!(valid & ALUA_S_SUP))
+			goto not_supported;
+		*primary = 1;
+		break;
 	case ALUA_ACCESS_STATE_UNAVAILABLE:
-		/*
-		 * OPTIMIZED, NON-OPTIMIZED, STANDBY and UNAVAILABLE are
-		 * defined as primary target port asymmetric access states.
-		 */
+		if (!(valid & ALUA_U_SUP))
+			goto not_supported;
 		*primary = 1;
 		break;
 	case ALUA_ACCESS_STATE_OFFLINE:
@@ -636,14 +655,27 @@ core_alua_check_transition(int state, int *primary)
 		 * OFFLINE state is defined as a secondary target port
 		 * asymmetric access state.
 		 */
+		if (!(valid & ALUA_O_SUP))
+			goto not_supported;
 		*primary = 0;
 		break;
+	case ALUA_ACCESS_STATE_TRANSITION:
+		/*
+		 * Transitioning is set internally, and
+		 * cannot be selected manually.
+		 */
+		goto not_supported;
 	default:
 		pr_err("Unknown ALUA access state: 0x%02x\n", state);
 		return TCM_INVALID_PARAMETER_LIST;
 	}
 
 	return 0;
+
+not_supported:
+	pr_err("ALUA access state %s not supported",
+	       core_alua_dump_state(state));
+	return TCM_INVALID_PARAMETER_LIST;
 }
 
 static char *core_alua_dump_state(int state)
@@ -659,6 +691,8 @@ static char *core_alua_dump_state(int state)
 		return "Unavailable";
 	case ALUA_ACCESS_STATE_OFFLINE:
 		return "Offline";
+	case ALUA_ACCESS_STATE_TRANSITION:
+		return "Transitioning";
 	default:
 		return "Unknown";
 	}
@@ -884,9 +918,10 @@ int core_alua_do_port_transition(
 	struct t10_alua_lu_gp_member *lu_gp_mem, *local_lu_gp_mem;
 	struct t10_alua_tg_pt_gp *tg_pt_gp;
 	unsigned char *md_buf;
-	int primary;
+	int primary, valid_states;
 
-	if (core_alua_check_transition(new_state, &primary) != 0)
+	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
+	if (core_alua_check_transition(new_state, valid_states, &primary) != 0)
 		return -EINVAL;
 
 	md_buf = kzalloc(l_tg_pt_gp->tg_pt_gp_md_buf_len, GFP_KERNEL);
-- 
1.7.12.4


* [PATCH 2/8] target_core_alua: Allocate ALUA metadata on demand
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
  2013-12-17  8:18 ` [PATCH 1/8] target_core_alua: validate ALUA state transition Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 19:32   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 3/8] target_core_alua: store old and pending ALUA state Hannes Reinecke
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

We should only allocate the ALUA metadata buffer if we're actually
going to write it.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_alua.c | 70 +++++++++++++++++----------------------
 drivers/target/target_core_alua.h |  3 ++
 include/target/target_core_base.h |  3 --
 3 files changed, 34 insertions(+), 42 deletions(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index 292ecce..738244b 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -770,16 +770,22 @@ static int core_alua_write_tpg_metadata(
  */
 static int core_alua_update_tpg_primary_metadata(
 	struct t10_alua_tg_pt_gp *tg_pt_gp,
-	int primary_state,
-	unsigned char *md_buf)
+	int primary_state)
 {
+	unsigned char *md_buf;
 	struct t10_wwn *wwn = &tg_pt_gp->tg_pt_gp_dev->t10_wwn;
 	char path[ALUA_METADATA_PATH_LEN];
-	int len;
+	int len, rc;
+
+	md_buf = kzalloc(ALUA_MD_BUF_LEN, GFP_KERNEL);
+	if (!md_buf) {
+		pr_err("Unable to allocate buf for ALUA metadata\n");
+		return -ENOMEM;
+	}
 
 	memset(path, 0, ALUA_METADATA_PATH_LEN);
 
-	len = snprintf(md_buf, tg_pt_gp->tg_pt_gp_md_buf_len,
+	len = snprintf(md_buf, ALUA_MD_BUF_LEN,
 			"tg_pt_gp_id=%hu\n"
 			"alua_access_state=0x%02x\n"
 			"alua_access_status=0x%02x\n",
@@ -790,14 +796,15 @@ static int core_alua_update_tpg_primary_metadata(
 		"/var/target/alua/tpgs_%s/%s", &wwn->unit_serial[0],
 		config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item));
 
-	return core_alua_write_tpg_metadata(path, md_buf, len);
+	rc = core_alua_write_tpg_metadata(path, md_buf, len);
+	kfree(md_buf);
+	return rc;
 }
 
 static int core_alua_do_transition_tg_pt(
 	struct t10_alua_tg_pt_gp *tg_pt_gp,
 	struct se_port *l_port,
 	struct se_node_acl *nacl,
-	unsigned char *md_buf,
 	int new_state,
 	int explicit)
 {
@@ -885,8 +892,7 @@ static int core_alua_do_transition_tg_pt(
 	 */
 	if (tg_pt_gp->tg_pt_gp_write_metadata) {
 		mutex_lock(&tg_pt_gp->tg_pt_gp_md_mutex);
-		core_alua_update_tpg_primary_metadata(tg_pt_gp,
-					new_state, md_buf);
+		core_alua_update_tpg_primary_metadata(tg_pt_gp, new_state);
 		mutex_unlock(&tg_pt_gp->tg_pt_gp_md_mutex);
 	}
 	/*
@@ -917,19 +923,12 @@ int core_alua_do_port_transition(
 	struct t10_alua_lu_gp *lu_gp;
 	struct t10_alua_lu_gp_member *lu_gp_mem, *local_lu_gp_mem;
 	struct t10_alua_tg_pt_gp *tg_pt_gp;
-	unsigned char *md_buf;
 	int primary, valid_states;
 
 	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
 	if (core_alua_check_transition(new_state, valid_states, &primary) != 0)
 		return -EINVAL;
 
-	md_buf = kzalloc(l_tg_pt_gp->tg_pt_gp_md_buf_len, GFP_KERNEL);
-	if (!md_buf) {
-		pr_err("Unable to allocate buf for ALUA metadata\n");
-		return -ENOMEM;
-	}
-
 	local_lu_gp_mem = l_dev->dev_alua_lu_gp_mem;
 	spin_lock(&local_lu_gp_mem->lu_gp_mem_lock);
 	lu_gp = local_lu_gp_mem->lu_gp;
@@ -947,10 +946,9 @@ int core_alua_do_port_transition(
 		 * success.
 		 */
 		core_alua_do_transition_tg_pt(l_tg_pt_gp, l_port, l_nacl,
-					md_buf, new_state, explicit);
+					new_state, explicit);
 		atomic_dec(&lu_gp->lu_gp_ref_cnt);
 		smp_mb__after_atomic_dec();
-		kfree(md_buf);
 		return 0;
 	}
 	/*
@@ -1000,7 +998,7 @@ int core_alua_do_port_transition(
 			 * success.
 			 */
 			core_alua_do_transition_tg_pt(tg_pt_gp, port,
-					nacl, md_buf, new_state, explicit);
+					nacl, new_state, explicit);
 
 			spin_lock(&dev->t10_alua.tg_pt_gps_lock);
 			atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
@@ -1022,7 +1020,6 @@ int core_alua_do_port_transition(
 
 	atomic_dec(&lu_gp->lu_gp_ref_cnt);
 	smp_mb__after_atomic_dec();
-	kfree(md_buf);
 	return 0;
 }
 
@@ -1031,13 +1028,18 @@ int core_alua_do_port_transition(
  */
 static int core_alua_update_tpg_secondary_metadata(
 	struct t10_alua_tg_pt_gp_member *tg_pt_gp_mem,
-	struct se_port *port,
-	unsigned char *md_buf,
-	u32 md_buf_len)
+	struct se_port *port)
 {
+	unsigned char *md_buf;
 	struct se_portal_group *se_tpg = port->sep_tpg;
 	char path[ALUA_METADATA_PATH_LEN], wwn[ALUA_SECONDARY_METADATA_WWN_LEN];
-	int len;
+	int len, rc;
+
+	md_buf = kzalloc(ALUA_MD_BUF_LEN, GFP_KERNEL);
+	if (!md_buf) {
+		pr_err("Unable to allocate buf for ALUA metadata\n");
+		return -ENOMEM;
+	}
 
 	memset(path, 0, ALUA_METADATA_PATH_LEN);
 	memset(wwn, 0, ALUA_SECONDARY_METADATA_WWN_LEN);
@@ -1049,7 +1051,7 @@ static int core_alua_update_tpg_secondary_metadata(
 		snprintf(wwn+len, ALUA_SECONDARY_METADATA_WWN_LEN-len, "+%hu",
 				se_tpg->se_tpg_tfo->tpg_get_tag(se_tpg));
 
-	len = snprintf(md_buf, md_buf_len, "alua_tg_pt_offline=%d\n"
+	len = snprintf(md_buf, ALUA_MD_BUF_LEN, "alua_tg_pt_offline=%d\n"
 			"alua_tg_pt_status=0x%02x\n",
 			atomic_read(&port->sep_tg_pt_secondary_offline),
 			port->sep_tg_pt_secondary_stat);
@@ -1058,7 +1060,10 @@ static int core_alua_update_tpg_secondary_metadata(
 			se_tpg->se_tpg_tfo->get_fabric_name(), wwn,
 			port->sep_lun->unpacked_lun);
 
-	return core_alua_write_tpg_metadata(path, md_buf, len);
+	rc = core_alua_write_tpg_metadata(path, md_buf, len);
+	kfree(md_buf);
+
+	return rc;
 }
 
 static int core_alua_set_tg_pt_secondary_state(
@@ -1068,8 +1073,6 @@ static int core_alua_set_tg_pt_secondary_state(
 	int offline)
 {
 	struct t10_alua_tg_pt_gp *tg_pt_gp;
-	unsigned char *md_buf;
-	u32 md_buf_len;
 	int trans_delay_msecs;
 
 	spin_lock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
@@ -1090,7 +1093,6 @@ static int core_alua_set_tg_pt_secondary_state(
 	else
 		atomic_set(&port->sep_tg_pt_secondary_offline, 0);
 
-	md_buf_len = tg_pt_gp->tg_pt_gp_md_buf_len;
 	port->sep_tg_pt_secondary_stat = (explicit) ?
 			ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
 			ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
@@ -1112,18 +1114,9 @@ static int core_alua_set_tg_pt_secondary_state(
 	 * secondary state and status
 	 */
 	if (port->sep_tg_pt_secondary_write_md) {
-		md_buf = kzalloc(md_buf_len, GFP_KERNEL);
-		if (!md_buf) {
-			pr_err("Unable to allocate md_buf for"
-				" secondary ALUA access metadata\n");
-			return -ENOMEM;
-		}
 		mutex_lock(&port->sep_tg_pt_md_mutex);
-		core_alua_update_tpg_secondary_metadata(tg_pt_gp_mem, port,
-				md_buf, md_buf_len);
+		core_alua_update_tpg_secondary_metadata(tg_pt_gp_mem, port);
 		mutex_unlock(&port->sep_tg_pt_md_mutex);
-
-		kfree(md_buf);
 	}
 
 	return 0;
@@ -1382,7 +1375,6 @@ struct t10_alua_tg_pt_gp *core_alua_allocate_tg_pt_gp(struct se_device *dev,
 	spin_lock_init(&tg_pt_gp->tg_pt_gp_lock);
 	atomic_set(&tg_pt_gp->tg_pt_gp_ref_cnt, 0);
 	tg_pt_gp->tg_pt_gp_dev = dev;
-	tg_pt_gp->tg_pt_gp_md_buf_len = ALUA_MD_BUF_LEN;
 	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
 		ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED);
 	/*
diff --git a/drivers/target/target_core_alua.h b/drivers/target/target_core_alua.h
index 88e2e83..1a152cd 100644
--- a/drivers/target/target_core_alua.h
+++ b/drivers/target/target_core_alua.h
@@ -78,6 +78,9 @@
  */
 #define ALUA_SECONDARY_METADATA_WWN_LEN			256
 
+/* Used by core_alua_update_tpg_(primary,secondary)_metadata */
+#define ALUA_MD_BUF_LEN					1024
+
 extern struct kmem_cache *t10_alua_lu_gp_cache;
 extern struct kmem_cache *t10_alua_lu_gp_mem_cache;
 extern struct kmem_cache *t10_alua_tg_pt_gp_cache;
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 45412a6..6e95281 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -291,9 +291,6 @@ struct t10_alua_tg_pt_gp {
 	int	tg_pt_gp_implicit_trans_secs;
 	int	tg_pt_gp_pref;
 	int	tg_pt_gp_write_metadata;
-	/* Used by struct t10_alua_tg_pt_gp->tg_pt_gp_md_buf_len */
-#define ALUA_MD_BUF_LEN				1024
-	u32	tg_pt_gp_md_buf_len;
 	u32	tg_pt_gp_members;
 	atomic_t tg_pt_gp_alua_access_state;
 	atomic_t tg_pt_gp_ref_cnt;
-- 
1.7.12.4



* [PATCH 3/8] target_core_alua: store old and pending ALUA state
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
  2013-12-17  8:18 ` [PATCH 1/8] target_core_alua: validate ALUA state transition Hannes Reinecke
  2013-12-17  8:18 ` [PATCH 2/8] target_core_alua: Allocate ALUA metadata on demand Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 19:32   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 4/8] target_core_alua: Use workqueue for ALUA transitioning Hannes Reinecke
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

During a state transition we should store both the original
and the pending state.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_alua.c | 15 ++++++++++-----
 include/target/target_core_base.h |  2 ++
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index 738244b..4805e97 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -812,12 +812,15 @@ static int core_alua_do_transition_tg_pt(
 	struct se_lun_acl *lacl;
 	struct se_port *port;
 	struct t10_alua_tg_pt_gp_member *mem;
-	int old_state = 0;
+
 	/*
 	 * Save the old primary ALUA access state, and set the current state
 	 * to ALUA_ACCESS_STATE_TRANSITION.
 	 */
-	old_state = atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
+	tg_pt_gp->tg_pt_gp_alua_previous_state =
+		atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
+	tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
+
 	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
 			ALUA_ACCESS_STATE_TRANSITION);
 	tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
@@ -898,13 +901,15 @@ static int core_alua_do_transition_tg_pt(
 	/*
 	 * Set the current primary ALUA access state to the requested new state
 	 */
-	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state, new_state);
+	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
+		   tg_pt_gp->tg_pt_gp_alua_pending_state);
 
 	pr_debug("Successful %s ALUA transition TG PT Group: %s ID: %hu"
 		" from primary access state %s to %s\n", (explicit) ? "explicit" :
 		"implicit", config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item),
-		tg_pt_gp->tg_pt_gp_id, core_alua_dump_state(old_state),
-		core_alua_dump_state(new_state));
+		tg_pt_gp->tg_pt_gp_id,
+		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_previous_state),
+		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_pending_state));
 
 	return 0;
 }
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 6e95281..6758e81 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -284,6 +284,8 @@ struct t10_alua_tg_pt_gp {
 	u16	tg_pt_gp_id;
 	int	tg_pt_gp_valid_id;
 	int	tg_pt_gp_alua_supported_states;
+	int	tg_pt_gp_alua_pending_state;
+	int	tg_pt_gp_alua_previous_state;
 	int	tg_pt_gp_alua_access_status;
 	int	tg_pt_gp_alua_access_type;
 	int	tg_pt_gp_nonop_delay_msecs;
-- 
1.7.12.4



* [PATCH 4/8] target_core_alua: Use workqueue for ALUA transitioning
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
                   ` (2 preceding siblings ...)
  2013-12-17  8:18 ` [PATCH 3/8] target_core_alua: store old and pending ALUA state Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 19:32   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 5/8] target_core: simplify scsi_name_len calculation Hannes Reinecke
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

Use a workqueue for processing ALUA state transitions; this allows
us to implement the implicit transition delay properly.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_alua.c | 174 +++++++++++++++++++++++++++-----------
 include/target/target_core_base.h |   4 +
 2 files changed, 128 insertions(+), 50 deletions(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index 4805e97..01f0c71 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -769,8 +769,7 @@ static int core_alua_write_tpg_metadata(
  * Called with tg_pt_gp->tg_pt_gp_md_mutex held
  */
 static int core_alua_update_tpg_primary_metadata(
-	struct t10_alua_tg_pt_gp *tg_pt_gp,
-	int primary_state)
+	struct t10_alua_tg_pt_gp *tg_pt_gp)
 {
 	unsigned char *md_buf;
 	struct t10_wwn *wwn = &tg_pt_gp->tg_pt_gp_dev->t10_wwn;
@@ -789,7 +788,8 @@ static int core_alua_update_tpg_primary_metadata(
 			"tg_pt_gp_id=%hu\n"
 			"alua_access_state=0x%02x\n"
 			"alua_access_status=0x%02x\n",
-			tg_pt_gp->tg_pt_gp_id, primary_state,
+			tg_pt_gp->tg_pt_gp_id,
+			tg_pt_gp->tg_pt_gp_alua_pending_state,
 			tg_pt_gp->tg_pt_gp_alua_access_status);
 
 	snprintf(path, ALUA_METADATA_PATH_LEN,
@@ -801,36 +801,17 @@ static int core_alua_update_tpg_primary_metadata(
 	return rc;
 }
 
-static int core_alua_do_transition_tg_pt(
-	struct t10_alua_tg_pt_gp *tg_pt_gp,
-	struct se_port *l_port,
-	struct se_node_acl *nacl,
-	int new_state,
-	int explicit)
+static void core_alua_do_transition_tg_pt_work(struct work_struct *work)
 {
+	struct t10_alua_tg_pt_gp *tg_pt_gp = container_of(work,
+		struct t10_alua_tg_pt_gp, tg_pt_gp_transition_work.work);
+	struct se_device *dev = tg_pt_gp->tg_pt_gp_dev;
 	struct se_dev_entry *se_deve;
 	struct se_lun_acl *lacl;
 	struct se_port *port;
 	struct t10_alua_tg_pt_gp_member *mem;
-
-	/*
-	 * Save the old primary ALUA access state, and set the current state
-	 * to ALUA_ACCESS_STATE_TRANSITION.
-	 */
-	tg_pt_gp->tg_pt_gp_alua_previous_state =
-		atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
-	tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
-
-	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
-			ALUA_ACCESS_STATE_TRANSITION);
-	tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
-				ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
-				ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
-	/*
-	 * Check for the optional ALUA primary state transition delay
-	 */
-	if (tg_pt_gp->tg_pt_gp_trans_delay_msecs != 0)
-		msleep_interruptible(tg_pt_gp->tg_pt_gp_trans_delay_msecs);
+	bool explicit = (tg_pt_gp->tg_pt_gp_alua_access_status ==
+			 ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG);
 
 	spin_lock(&tg_pt_gp->tg_pt_gp_lock);
 	list_for_each_entry(mem, &tg_pt_gp->tg_pt_gp_mem_list,
@@ -865,9 +846,12 @@ static int core_alua_do_transition_tg_pt(
 			if (!lacl)
 				continue;
 
-			if (explicit &&
-			   (nacl != NULL) && (nacl == lacl->se_lun_nacl) &&
-			   (l_port != NULL) && (l_port == port))
+			if ((tg_pt_gp->tg_pt_gp_alua_access_status ==
+			     ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG) &&
+			   (tg_pt_gp->tg_pt_gp_alua_nacl != NULL) &&
+			    (tg_pt_gp->tg_pt_gp_alua_nacl == lacl->se_lun_nacl) &&
+			   (tg_pt_gp->tg_pt_gp_alua_port != NULL) &&
+			    (tg_pt_gp->tg_pt_gp_alua_port == port))
 				continue;
 
 			core_scsi3_ua_allocate(lacl->se_lun_nacl,
@@ -895,7 +879,7 @@ static int core_alua_do_transition_tg_pt(
 	 */
 	if (tg_pt_gp->tg_pt_gp_write_metadata) {
 		mutex_lock(&tg_pt_gp->tg_pt_gp_md_mutex);
-		core_alua_update_tpg_primary_metadata(tg_pt_gp, new_state);
+		core_alua_update_tpg_primary_metadata(tg_pt_gp);
 		mutex_unlock(&tg_pt_gp->tg_pt_gp_md_mutex);
 	}
 	/*
@@ -910,6 +894,87 @@ static int core_alua_do_transition_tg_pt(
 		tg_pt_gp->tg_pt_gp_id,
 		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_previous_state),
 		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_pending_state));
+	spin_lock(&dev->t10_alua.tg_pt_gps_lock);
+	atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
+	smp_mb__after_atomic_dec();
+	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
+
+	if (tg_pt_gp->tg_pt_gp_transition_complete)
+		complete(tg_pt_gp->tg_pt_gp_transition_complete);
+}
+
+static int core_alua_do_transition_tg_pt(
+	struct t10_alua_tg_pt_gp *tg_pt_gp,
+	int new_state,
+	int explicit)
+{
+	struct se_device *dev = tg_pt_gp->tg_pt_gp_dev;
+	DECLARE_COMPLETION_ONSTACK(wait);
+
+	/* Nothing to be done here */
+	if (atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state) == new_state)
+		return 0;
+
+	if (new_state == ALUA_ACCESS_STATE_TRANSITION)
+		return -EAGAIN;
+
+	/*
+	 * Flush any pending transitions
+	 */
+	if (!explicit && tg_pt_gp->tg_pt_gp_implicit_trans_secs &&
+	    atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state) ==
+	    ALUA_ACCESS_STATE_TRANSITION) {
+		/* Just in case */
+		tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
+		tg_pt_gp->tg_pt_gp_transition_complete = &wait;
+		flush_delayed_work(&tg_pt_gp->tg_pt_gp_transition_work);
+		wait_for_completion(&wait);
+		tg_pt_gp->tg_pt_gp_transition_complete = NULL;
+		return 0;
+	}
+
+	/*
+	 * Save the old primary ALUA access state, and set the current state
+	 * to ALUA_ACCESS_STATE_TRANSITION.
+	 */
+	tg_pt_gp->tg_pt_gp_alua_previous_state =
+		atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
+	tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
+
+	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
+			ALUA_ACCESS_STATE_TRANSITION);
+	tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
+				ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
+				ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
+
+	/*
+	 * Check for the optional ALUA primary state transition delay
+	 */
+	if (tg_pt_gp->tg_pt_gp_trans_delay_msecs != 0)
+		msleep_interruptible(tg_pt_gp->tg_pt_gp_trans_delay_msecs);
+
+	/*
+	 * Take a reference for workqueue item
+	 */
+	spin_lock(&dev->t10_alua.tg_pt_gps_lock);
+	atomic_inc(&tg_pt_gp->tg_pt_gp_ref_cnt);
+	smp_mb__after_atomic_inc();
+	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
+
+	if (!explicit && tg_pt_gp->tg_pt_gp_implicit_trans_secs) {
+		unsigned long transition_tmo;
+
+		transition_tmo = tg_pt_gp->tg_pt_gp_implicit_trans_secs * HZ;
+		queue_delayed_work(tg_pt_gp->tg_pt_gp_dev->tmr_wq,
+				   &tg_pt_gp->tg_pt_gp_transition_work,
+				   transition_tmo);
+	} else {
+		tg_pt_gp->tg_pt_gp_transition_complete = &wait;
+		queue_delayed_work(tg_pt_gp->tg_pt_gp_dev->tmr_wq,
+				   &tg_pt_gp->tg_pt_gp_transition_work, 0);
+		wait_for_completion(&wait);
+		tg_pt_gp->tg_pt_gp_transition_complete = NULL;
+	}
 
 	return 0;
 }
@@ -923,12 +988,10 @@ int core_alua_do_port_transition(
 	int explicit)
 {
 	struct se_device *dev;
-	struct se_port *port;
-	struct se_node_acl *nacl;
 	struct t10_alua_lu_gp *lu_gp;
 	struct t10_alua_lu_gp_member *lu_gp_mem, *local_lu_gp_mem;
 	struct t10_alua_tg_pt_gp *tg_pt_gp;
-	int primary, valid_states;
+	int primary, valid_states, rc = 0;
 
 	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
 	if (core_alua_check_transition(new_state, valid_states, &primary) != 0)
@@ -950,11 +1013,13 @@ int core_alua_do_port_transition(
 		 * core_alua_do_transition_tg_pt() will always return
 		 * success.
 		 */
-		core_alua_do_transition_tg_pt(l_tg_pt_gp, l_port, l_nacl,
-					new_state, explicit);
+		l_tg_pt_gp->tg_pt_gp_alua_port = l_port;
+		l_tg_pt_gp->tg_pt_gp_alua_nacl = l_nacl;
+		rc = core_alua_do_transition_tg_pt(l_tg_pt_gp,
+						   new_state, explicit);
 		atomic_dec(&lu_gp->lu_gp_ref_cnt);
 		smp_mb__after_atomic_dec();
-		return 0;
+		return rc;
 	}
 	/*
 	 * For all other LU groups aside from 'default_lu_gp', walk all of
@@ -989,11 +1054,11 @@ int core_alua_do_port_transition(
 				continue;
 
 			if (l_tg_pt_gp == tg_pt_gp) {
-				port = l_port;
-				nacl = l_nacl;
+				tg_pt_gp->tg_pt_gp_alua_port = l_port;
+				tg_pt_gp->tg_pt_gp_alua_nacl = l_nacl;
 			} else {
-				port = NULL;
-				nacl = NULL;
+				tg_pt_gp->tg_pt_gp_alua_port = NULL;
+				tg_pt_gp->tg_pt_gp_alua_nacl = NULL;
 			}
 			atomic_inc(&tg_pt_gp->tg_pt_gp_ref_cnt);
 			smp_mb__after_atomic_inc();
@@ -1002,12 +1067,14 @@ int core_alua_do_port_transition(
 			 * core_alua_do_transition_tg_pt() will always return
 			 * success.
 			 */
-			core_alua_do_transition_tg_pt(tg_pt_gp, port,
-					nacl, new_state, explicit);
+			rc = core_alua_do_transition_tg_pt(tg_pt_gp,
+					new_state, explicit);
 
 			spin_lock(&dev->t10_alua.tg_pt_gps_lock);
 			atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
 			smp_mb__after_atomic_dec();
+			if (rc)
+				break;
 		}
 		spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
 
@@ -1017,15 +1084,18 @@ int core_alua_do_port_transition(
 	}
 	spin_unlock(&lu_gp->lu_gp_lock);
 
-	pr_debug("Successfully processed LU Group: %s all ALUA TG PT"
-		" Group IDs: %hu %s transition to primary state: %s\n",
-		config_item_name(&lu_gp->lu_gp_group.cg_item),
-		l_tg_pt_gp->tg_pt_gp_id, (explicit) ? "explicit" : "implicit",
-		core_alua_dump_state(new_state));
+	if (!rc) {
+		pr_debug("Successfully processed LU Group: %s all ALUA TG PT"
+			 " Group IDs: %hu %s transition to primary state: %s\n",
+			 config_item_name(&lu_gp->lu_gp_group.cg_item),
+			 l_tg_pt_gp->tg_pt_gp_id,
+			 (explicit) ? "explicit" : "implicit",
+			 core_alua_dump_state(new_state));
+	}
 
 	atomic_dec(&lu_gp->lu_gp_ref_cnt);
 	smp_mb__after_atomic_dec();
-	return 0;
+	return rc;
 }
 
 /*
@@ -1379,6 +1449,8 @@ struct t10_alua_tg_pt_gp *core_alua_allocate_tg_pt_gp(struct se_device *dev,
 	mutex_init(&tg_pt_gp->tg_pt_gp_md_mutex);
 	spin_lock_init(&tg_pt_gp->tg_pt_gp_lock);
 	atomic_set(&tg_pt_gp->tg_pt_gp_ref_cnt, 0);
+	INIT_DELAYED_WORK(&tg_pt_gp->tg_pt_gp_transition_work,
+			  core_alua_do_transition_tg_pt_work);
 	tg_pt_gp->tg_pt_gp_dev = dev;
 	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
 		ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED);
@@ -1507,6 +1579,8 @@ void core_alua_free_tg_pt_gp(
 	dev->t10_alua.alua_tg_pt_gps_counter--;
 	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
 
+	flush_delayed_work(&tg_pt_gp->tg_pt_gp_transition_work);
+
 	/*
 	 * Allow a struct t10_alua_tg_pt_gp_member * referenced by
 	 * core_alua_get_tg_pt_gp_by_name() in
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 6758e81..65390f6 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -302,6 +302,10 @@ struct t10_alua_tg_pt_gp {
 	struct config_group tg_pt_gp_group;
 	struct list_head tg_pt_gp_list;
 	struct list_head tg_pt_gp_mem_list;
+	struct se_port *tg_pt_gp_alua_port;
+	struct se_node_acl *tg_pt_gp_alua_nacl;
+	struct delayed_work tg_pt_gp_transition_work;
+	struct completion *tg_pt_gp_transition_complete;
 };
 
 struct t10_alua_tg_pt_gp_member {
-- 
1.7.12.4


* [PATCH 5/8] target_core: simplify scsi_name_len calculation
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
                   ` (3 preceding siblings ...)
  2013-12-17  8:18 ` [PATCH 4/8] target_core_alua: Use workqueue for ALUA transitioning Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 19:32   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83 Hannes Reinecke
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

scsi_name_len in spc_emulate_evpd_83 is calculated twice, and the
result of the first calculation is discarded, so remove the first
calculation. Also check against the maximum allowed length.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_spc.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index 021c3f4..603c411 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -365,16 +365,6 @@ check_lu_gp:
 		 * section 7.5.1 Table 362
 		 */
 check_scsi_name:
-		scsi_name_len = strlen(tpg->se_tpg_tfo->tpg_get_wwn(tpg));
-		/* UTF-8 ",t,0x<16-bit TPGT>" + NULL Terminator */
-		scsi_name_len += 10;
-		/* Check for 4-byte padding */
-		padding = ((-scsi_name_len) & 3);
-		if (padding != 0)
-			scsi_name_len += padding;
-		/* Header size + Designation descriptor */
-		scsi_name_len += 4;
-
 		buf[off] =
 			(tpg->se_tpg_tfo->get_fabric_proto_ident(tpg) << 4);
 		buf[off++] |= 0x3; /* CODE SET == UTF-8 */
@@ -402,8 +392,11 @@ check_scsi_name:
 		 * shall be no larger than 256 and shall be a multiple
 		 * of four.
 		 */
+		padding = ((-scsi_name_len) & 3);
 		if (padding)
 			scsi_name_len += padding;
+		if (scsi_name_len > 256)
+			scsi_name_len = 256;
 
 		buf[off-1] = scsi_name_len;
 		off += scsi_name_len;
-- 
1.7.12.4


* [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
                   ` (4 preceding siblings ...)
  2013-12-17  8:18 ` [PATCH 5/8] target_core: simplify scsi_name_len calculation Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 19:50   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 7/8] target_core_alua: Referrals infrastructure Hannes Reinecke
  2013-12-17  8:18 ` [PATCH 8/8] target_core_alua: Referrals configfs integration Hannes Reinecke
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

We should be including a descriptor referring to the target device
to allow identification of different TCM instances.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_spc.c | 43 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 42 insertions(+), 1 deletion(-)

diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index 603c411..39054d9 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -267,7 +267,7 @@ check_t10_vend_desc:
 	port = lun->lun_sep;
 	if (port) {
 		struct t10_alua_lu_gp *lu_gp;
-		u32 padding, scsi_name_len;
+		u32 padding, scsi_name_len, scsi_target_len;
 		u16 lu_gp_id = 0;
 		u16 tg_pt_gp_id = 0;
 		u16 tpgt;
@@ -402,6 +402,47 @@ check_scsi_name:
 		off += scsi_name_len;
 		/* Header size + Designation descriptor */
 		len += (scsi_name_len + 4);
+
+		/*
+		 * Target device designator
+		 */
+		buf[off] =
+			(tpg->se_tpg_tfo->get_fabric_proto_ident(tpg) << 4);
+		buf[off++] |= 0x3; /* CODE SET == UTF-8 */
+		buf[off] = 0x80; /* Set PIV=1 */
+		/* Set ASSOCIATION == target device: 10b */
+		buf[off] |= 0x20;
+		/* DESIGNATOR TYPE == SCSI name string */
+		buf[off++] |= 0x8;
+		off += 2; /* Skip over Reserved and length */
+		/*
+		 * SCSI name string identifier containing $FABRIC_MOD
+		 * dependent information.  For LIO-Target and iSCSI
+		 * Target Port, this means "<iSCSI name>" in
+		 * UTF-8 encoding.
+		 */
+		scsi_target_len = sprintf(&buf[off], "%s",
+					  tpg->se_tpg_tfo->tpg_get_wwn(tpg));
+		scsi_target_len += 1 /* Include NULL terminator */;
+		/*
+		 * The null-terminated, null-padded (see 4.4.2) SCSI
+		 * NAME STRING field contains a UTF-8 format string.
+		 * The number of bytes in the SCSI NAME STRING field
+		 * (i.e., the value in the DESIGNATOR LENGTH field)
+		 * shall be no larger than 256 and shall be a multiple
+		 * of four.
+		 */
+		padding = ((-scsi_target_len) & 3);
+		if (padding)
+			scsi_target_len += padding;
+		if (scsi_target_len > 256)
+			scsi_target_len = 256;
+
+		buf[off-1] = scsi_target_len;
+		off += scsi_target_len;
+
+		/* Header size + Designation descriptor */
+		len += (scsi_target_len + 4);
 	}
 	buf[2] = ((len >> 8) & 0xff);
 	buf[3] = (len & 0xff); /* Page Length for VPD 0x83 */
-- 
1.7.12.4


* [PATCH 7/8] target_core_alua: Referrals infrastructure
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
                   ` (5 preceding siblings ...)
  2013-12-17  8:18 ` [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83 Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 20:06   ` Nicholas A. Bellinger
  2013-12-17  8:18 ` [PATCH 8/8] target_core_alua: Referrals configfs integration Hannes Reinecke
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

Add infrastructure for referrals.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_alua.c     | 153 ++++++++++++++++++++++++++++++++++
 drivers/target/target_core_alua.h     |   4 +-
 drivers/target/target_core_configfs.c |   9 +-
 drivers/target/target_core_device.c   |   2 +
 drivers/target/target_core_sbc.c      |   5 +-
 drivers/target/target_core_spc.c      |  20 +++++
 include/scsi/scsi.h                   |   1 +
 include/target/target_core_base.h     |  18 ++++
 8 files changed, 209 insertions(+), 3 deletions(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index 01f0c71..dbfbf14 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -58,6 +58,75 @@ static LIST_HEAD(lu_gps_list);
 struct t10_alua_lu_gp *default_lu_gp;
 
 /*
+ * REPORT REFERRALS
+ *
+ * See sbc3r35 section 5.23
+ */
+sense_reason_t
+target_emulate_report_referrals(struct se_cmd *cmd)
+{
+	struct se_device *dev = cmd->se_dev;
+	struct t10_alua_lba_map *map;
+	struct t10_alua_lba_map_member *map_mem;
+	unsigned char *buf;
+	u32 rd_len = 0, off;
+
+	if (cmd->data_length < 4) {
+		pr_warn("REPORT REFERRALS allocation length %u too"
+			" small\n", cmd->data_length);
+		return TCM_INVALID_CDB_FIELD;
+	}
+
+	buf = transport_kmap_data_sg(cmd);
+	if (!buf)
+		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+
+	off = 4;
+	spin_lock(&dev->t10_alua.lba_map_lock);
+	if (list_empty(&dev->t10_alua.lba_map_list)) {
+		spin_unlock(&dev->t10_alua.lba_map_lock);
+		transport_kunmap_data_sg(cmd);
+
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	list_for_each_entry(map, &dev->t10_alua.lba_map_list,
+			    lba_map_list) {
+		int desc_num = off + 3;
+		int pg_num;
+
+		off += 4;
+		put_unaligned_be64(map->lba_map_first_lba, &buf[off]);
+		off += 8;
+		put_unaligned_be64(map->lba_map_last_lba, &buf[off]);
+		off += 8;
+		rd_len += 20;
+		pg_num = 0;
+		list_for_each_entry(map_mem, &map->lba_map_mem_list,
+				    lba_map_mem_list) {
+			buf[off++] = map_mem->lba_map_mem_alua_state & 0x0f;
+			off++;
+			buf[off++] = (map_mem->lba_map_mem_alua_pg_id >> 8) & 0xff;
+			buf[off++] = (map_mem->lba_map_mem_alua_pg_id & 0xff);
+			rd_len += 4;
+			pg_num++;
+		}
+		buf[desc_num] = pg_num;
+	}
+	spin_unlock(&dev->t10_alua.lba_map_lock);
+
+	/*
+	 * Set the RETURN DATA LENGTH set in the header of the DataIN Payload
+	 */
+	put_unaligned_be16(rd_len, &buf[2]);
+
+	transport_kunmap_data_sg(cmd);
+
+	target_complete_cmd(cmd, GOOD);
+	return 0;
+}
+
+/*
  * REPORT_TARGET_PORT_GROUPS
  *
  * See spc4r17 section 6.27
@@ -391,6 +460,80 @@ static inline int core_alua_state_nonoptimized(
 	return 0;
 }
 
+static inline int core_alua_state_lba_dependent(
+	struct se_cmd *cmd,
+	struct t10_alua_tg_pt_gp *tg_pt_gp,
+	u8 *alua_ascq)
+{
+	struct se_device *dev = cmd->se_dev;
+	u32 segment_size, segment_mult, sectors;
+	u64 lba;
+
+	/* Only need to check for cdb actually containing LBAs */
+	if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB))
+		return 0;
+
+	spin_lock(&dev->t10_alua.lba_map_lock);
+	segment_size = dev->t10_alua.lba_map_segment_size;
+	segment_mult = dev->t10_alua.lba_map_segment_multiplier;
+	sectors = cmd->data_length / dev->dev_attrib.block_size;
+
+	lba = cmd->t_task_lba;
+	while (lba < cmd->t_task_lba + sectors) {
+		struct t10_alua_lba_map *cur_map = NULL, *map;
+		struct t10_alua_lba_map_member *map_mem;
+
+		list_for_each_entry(map, &dev->t10_alua.lba_map_list,
+				    lba_map_list) {
+			u64 start_lba, last_lba;
+			u64 first_lba = map->lba_map_first_lba;
+
+			if (segment_mult) {
+				start_lba = lba % (segment_size * segment_mult);
+				last_lba = first_lba + segment_size - 1;
+				if (start_lba >= first_lba &&
+				    start_lba <= last_lba) {
+					lba += segment_size;
+					cur_map = map;
+					break;
+				}
+			} else {
+				last_lba = map->lba_map_last_lba;
+				if (lba >= first_lba && lba <= last_lba) {
+					lba = last_lba + 1;
+					cur_map = map;
+					break;
+				}
+			}
+		}
+		if (!cur_map) {
+			spin_unlock(&dev->t10_alua.lba_map_lock);
+			*alua_ascq = ASCQ_04H_ALUA_TG_PT_UNAVAILABLE;
+			return 1;
+		}
+		list_for_each_entry(map_mem, &cur_map->lba_map_mem_list,
+				    lba_map_mem_list) {
+			if (map_mem->lba_map_mem_alua_pg_id !=
+			    tg_pt_gp->tg_pt_gp_id)
+				continue;
+			switch (map_mem->lba_map_mem_alua_state) {
+			case ALUA_ACCESS_STATE_STANDBY:
+				spin_unlock(&dev->t10_alua.lba_map_lock);
+				*alua_ascq = ASCQ_04H_ALUA_TG_PT_STANDBY;
+				return 1;
+			case ALUA_ACCESS_STATE_UNAVAILABLE:
+				spin_unlock(&dev->t10_alua.lba_map_lock);
+				*alua_ascq = ASCQ_04H_ALUA_TG_PT_UNAVAILABLE;
+				return 1;
+			default:
+				break;
+			}
+		}
+	}
+	spin_unlock(&dev->t10_alua.lba_map_lock);
+	return 0;
+}
+
 static inline int core_alua_state_standby(
 	struct se_cmd *cmd,
 	unsigned char *cdb,
@@ -588,6 +731,9 @@ target_alua_state_check(struct se_cmd *cmd)
 	case ALUA_ACCESS_STATE_TRANSITION:
 		ret = core_alua_state_transition(cmd, cdb, &alua_ascq);
 		break;
+	case ALUA_ACCESS_STATE_LBA_DEPENDENT:
+		ret = core_alua_state_lba_dependent(cmd, tg_pt_gp, &alua_ascq);
+		break;
 	/*
 	 * OFFLINE is a secondary ALUA target port group access state, that is
 	 * handled above with struct se_port->sep_tg_pt_secondary_offline=1
@@ -650,6 +796,11 @@ core_alua_check_transition(int state, int valid, int *primary)
 			goto not_supported;
 		*primary = 1;
 		break;
+	case ALUA_ACCESS_STATE_LBA_DEPENDENT:
+		if (!(valid & ALUA_LBD_SUP))
+			goto not_supported;
+		*primary = 1;
+		break;
 	case ALUA_ACCESS_STATE_OFFLINE:
 		/*
 		 * OFFLINE state is defined as a secondary target port
@@ -685,6 +836,8 @@ static char *core_alua_dump_state(int state)
 		return "Active/Optimized";
 	case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
 		return "Active/NonOptimized";
+	case ALUA_ACCESS_STATE_LBA_DEPENDENT:
+		return "LBA Dependent";
 	case ALUA_ACCESS_STATE_STANDBY:
 		return "Standby";
 	case ALUA_ACCESS_STATE_UNAVAILABLE:
diff --git a/drivers/target/target_core_alua.h b/drivers/target/target_core_alua.h
index 1a152cd..47950cd 100644
--- a/drivers/target/target_core_alua.h
+++ b/drivers/target/target_core_alua.h
@@ -13,12 +13,13 @@
 /*
  * ASYMMETRIC ACCESS STATE field
  *
- * from spc4r17 section 6.27 Table 245
+ * from spc4r36j section 6.37 Table 307
  */
 #define ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED	0x0
 #define ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED	0x1
 #define ALUA_ACCESS_STATE_STANDBY		0x2
 #define ALUA_ACCESS_STATE_UNAVAILABLE		0x3
+#define ALUA_ACCESS_STATE_LBA_DEPENDENT		0x4
 #define ALUA_ACCESS_STATE_OFFLINE		0xe
 #define ALUA_ACCESS_STATE_TRANSITION		0xf
 
@@ -88,6 +89,7 @@ extern struct kmem_cache *t10_alua_tg_pt_gp_mem_cache;
 
 extern sense_reason_t target_emulate_report_target_port_groups(struct se_cmd *);
 extern sense_reason_t target_emulate_set_target_port_groups(struct se_cmd *);
+extern sense_reason_t target_emulate_report_referrals(struct se_cmd *);
 extern int core_alua_check_nonop_delay(struct se_cmd *);
 extern int core_alua_do_port_transition(struct t10_alua_tg_pt_gp *,
 				struct se_device *, struct se_port *,
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index 272755d..e74ef8c 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -2054,6 +2054,13 @@ static ssize_t target_core_alua_tg_pt_gp_store_attr_alua_access_state(
 			" transition while TPGS_IMPLICIT_ALUA is disabled\n");
 		return -EINVAL;
 	}
+	if (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICIT_ALUA &&
+	    new_state == ALUA_ACCESS_STATE_LBA_DEPENDENT) {
+		/* LBA DEPENDENT is only allowed with implicit ALUA */
+		pr_err("Unable to process implicit configfs ALUA transition"
+		       " while explicit ALUA management is enabled\n");
+		return -EINVAL;
+	}
 
 	ret = core_alua_do_port_transition(tg_pt_gp, dev,
 					NULL, NULL, new_state, 0);
@@ -2188,7 +2195,7 @@ SE_DEV_ALUA_SUPPORT_STATE_SHOW(lba_dependent,
 			       tg_pt_gp_alua_supported_states, ALUA_LBD_SUP);
 SE_DEV_ALUA_SUPPORT_STATE_STORE(lba_dependent,
 				tg_pt_gp_alua_supported_states, ALUA_LBD_SUP);
-SE_DEV_ALUA_TG_PT_ATTR(alua_support_lba_dependent, S_IRUGO | S_IWUSR);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_lba_dependent, S_IRUGO);
 
 SE_DEV_ALUA_SUPPORT_STATE_SHOW(unavailable,
 			       tg_pt_gp_alua_supported_states, ALUA_U_SUP);
diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
index 207b340..3c08f99 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -1439,6 +1439,8 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
 	spin_lock_init(&dev->t10_pr.aptpl_reg_lock);
 	INIT_LIST_HEAD(&dev->t10_alua.tg_pt_gps_list);
 	spin_lock_init(&dev->t10_alua.tg_pt_gps_lock);
+	INIT_LIST_HEAD(&dev->t10_alua.lba_map_list);
+	spin_lock_init(&dev->t10_alua.lba_map_lock);
 
 	dev->t10_wwn.t10_dev = dev;
 	dev->t10_alua.t10_dev = dev;
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 52ae54e..6863dbe 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -33,7 +33,7 @@
 
 #include "target_core_internal.h"
 #include "target_core_ua.h"
-
+#include "target_core_alua.h"
 
 static sense_reason_t
 sbc_emulate_readcapacity(struct se_cmd *cmd)
@@ -731,6 +731,9 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		case SAI_READ_CAPACITY_16:
 			cmd->execute_cmd = sbc_emulate_readcapacity_16;
 			break;
+		case SAI_REPORT_REFERRALS:
+			cmd->execute_cmd = target_emulate_report_referrals;
+			break;
 		default:
 			pr_err("Unsupported SA: 0x%02x\n",
 				cmd->t_task_cdb[1] & 0x1f);
diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index 39054d9..f9889fd 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -476,6 +476,11 @@ spc_emulate_evpd_86(struct se_cmd *cmd, unsigned char *buf)
 	/* If WriteCache emulation is enabled, set V_SUP */
 	if (spc_check_dev_wce(dev))
 		buf[6] = 0x01;
+	/* If an LBA map is present set R_SUP */
+	spin_lock(&cmd->se_dev->t10_alua.lba_map_lock);
+	if (!list_empty(&dev->t10_alua.lba_map_list))
+		buf[8] = 0x10;
+	spin_unlock(&cmd->se_dev->t10_alua.lba_map_lock);
 	return 0;
 }
 
@@ -634,6 +639,20 @@ spc_emulate_evpd_b2(struct se_cmd *cmd, unsigned char *buf)
 	return 0;
 }
 
+/* Referrals VPD page */
+static sense_reason_t
+spc_emulate_evpd_b3(struct se_cmd *cmd, unsigned char *buf)
+{
+	struct se_device *dev = cmd->se_dev;
+
+	buf[0] = dev->transport->get_device_type(dev);
+	buf[3] = 0x0c;
+	put_unaligned_be32(dev->t10_alua.lba_map_segment_size, &buf[8]);
+	put_unaligned_be32(dev->t10_alua.lba_map_segment_multiplier, &buf[12]);
+
+	return 0;
+}
+
 static sense_reason_t
 spc_emulate_evpd_00(struct se_cmd *cmd, unsigned char *buf);
 
@@ -648,6 +667,7 @@ static struct {
 	{ .page = 0xb0, .emulate = spc_emulate_evpd_b0 },
 	{ .page = 0xb1, .emulate = spc_emulate_evpd_b1 },
 	{ .page = 0xb2, .emulate = spc_emulate_evpd_b2 },
+	{ .page = 0xb3, .emulate = spc_emulate_evpd_b3 },
 };
 
 /* supported vital product data pages */
diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
index 66d42ed..0a4edfe 100644
--- a/include/scsi/scsi.h
+++ b/include/scsi/scsi.h
@@ -155,6 +155,7 @@ enum scsi_timeouts {
 /* values for service action in */
 #define	SAI_READ_CAPACITY_16  0x10
 #define SAI_GET_LBA_STATUS    0x12
+#define SAI_REPORT_REFERRALS  0x13
 /* values for VARIABLE_LENGTH_CMD service action codes
  * see spc4r17 Section D.3.5, table D.7 and D.8 */
 #define VLC_SA_RECEIVE_CREDENTIAL 0x1800
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 65390f6..766421b 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -247,10 +247,28 @@ typedef enum {
 
 struct se_cmd;
 
+struct t10_alua_lba_map_member {
+	struct list_head lba_map_mem_list;
+	int lba_map_mem_alua_state;
+	int lba_map_mem_alua_pg_id;
+};
+
+struct t10_alua_lba_map {
+	u64 lba_map_first_lba;
+	u64 lba_map_last_lba;
+	struct list_head lba_map_list;
+	struct list_head lba_map_mem_list;
+};
+
 struct t10_alua {
 	/* ALUA Target Port Group ID */
 	u16	alua_tg_pt_gps_counter;
 	u32	alua_tg_pt_gps_count;
+	/* Referrals support */
+	spinlock_t lba_map_lock;
+	u32     lba_map_segment_size;
+	u32     lba_map_segment_multiplier;
+	struct list_head lba_map_list;
 	spinlock_t tg_pt_gps_lock;
 	struct se_device *t10_dev;
 	/* Used for default ALUA Target Port Group */
-- 
1.7.12.4



* [PATCH 8/8] target_core_alua: Referrals configfs integration
  2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
                   ` (6 preceding siblings ...)
  2013-12-17  8:18 ` [PATCH 7/8] target_core_alua: Referrals infrastructure Hannes Reinecke
@ 2013-12-17  8:18 ` Hannes Reinecke
  2013-12-17 20:49   ` Nicholas A. Bellinger
  7 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-17  8:18 UTC (permalink / raw)
  To: Nic Bellinger; +Cc: target-devel, linux-scsi, Hannes Reinecke

Referrals need an LBA map, which must be kept consistent
across all target port groups. So instead of tying the map
to individual target port groups I've implemented a single
device attribute containing the entire map.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/target/target_core_alua.c      | 101 +++++++++++++++++++
 drivers/target/target_core_alua.h      |   8 ++
 drivers/target/target_core_configfs.c  | 171 +++++++++++++++++++++++++++++++++
 drivers/target/target_core_device.c    |   1 +
 drivers/target/target_core_transport.c |  28 +++++-
 5 files changed, 308 insertions(+), 1 deletion(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index dbfbf14..dc0d399 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -1350,6 +1350,107 @@ static int core_alua_set_tg_pt_secondary_state(
 	return 0;
 }
 
+struct t10_alua_lba_map *
+core_alua_allocate_lba_map(struct list_head *list,
+			   u64 first_lba, u64 last_lba)
+{
+	struct t10_alua_lba_map *lba_map;
+
+	lba_map = kmem_cache_zalloc(t10_alua_lba_map_cache, GFP_KERNEL);
+	if (!lba_map) {
+		pr_err("Unable to allocate struct t10_alua_lba_map\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	INIT_LIST_HEAD(&lba_map->lba_map_mem_list);
+	lba_map->lba_map_first_lba = first_lba;
+	lba_map->lba_map_last_lba = last_lba;
+
+	list_add_tail(&lba_map->lba_map_list, list);
+	return lba_map;
+}
+
+int
+core_alua_allocate_lba_map_mem(struct t10_alua_lba_map *lba_map,
+			       int pg_id, int state)
+{
+	struct t10_alua_lba_map_member *lba_map_mem;
+
+	list_for_each_entry(lba_map_mem, &lba_map->lba_map_mem_list,
+			    lba_map_mem_list) {
+		if (lba_map_mem->lba_map_mem_alua_pg_id == pg_id) {
+			pr_err("Duplicate pg_id %d in lba_map\n", pg_id);
+			return -EINVAL;
+		}
+	}
+
+	lba_map_mem = kmem_cache_zalloc(t10_alua_lba_map_mem_cache, GFP_KERNEL);
+	if (!lba_map_mem) {
+		pr_err("Unable to allocate struct t10_alua_lba_map_mem\n");
+		return -ENOMEM;
+	}
+	lba_map_mem->lba_map_mem_alua_state = state;
+	lba_map_mem->lba_map_mem_alua_pg_id = pg_id;
+
+	list_add_tail(&lba_map_mem->lba_map_mem_list,
+		      &lba_map->lba_map_mem_list);
+	return 0;
+}
+
+void
+core_alua_free_lba_map(struct list_head *lba_list)
+{
+	struct t10_alua_lba_map *lba_map, *lba_map_tmp;
+	struct t10_alua_lba_map_member *lba_map_mem, *lba_map_mem_tmp;
+
+	list_for_each_entry_safe(lba_map, lba_map_tmp, lba_list,
+				 lba_map_list) {
+		list_for_each_entry_safe(lba_map_mem, lba_map_mem_tmp,
+					 &lba_map->lba_map_mem_list,
+					 lba_map_mem_list) {
+			list_del(&lba_map_mem->lba_map_mem_list);
+			kmem_cache_free(t10_alua_lba_map_mem_cache,
+					lba_map_mem);
+		}
+		list_del(&lba_map->lba_map_list);
+		kmem_cache_free(t10_alua_lba_map_cache, lba_map);
+	}
+}
+
+void
+core_alua_set_lba_map(struct se_device *dev, struct list_head *lba_map_list,
+		      int segment_size, int segment_mult)
+{
+	struct list_head old_lba_map_list;
+	struct t10_alua_tg_pt_gp *tg_pt_gp;
+	int activate = 0, supported;
+
+	INIT_LIST_HEAD(&old_lba_map_list);
+	spin_lock(&dev->t10_alua.lba_map_lock);
+	dev->t10_alua.lba_map_segment_size = segment_size;
+	dev->t10_alua.lba_map_segment_multiplier = segment_mult;
+	list_splice_init(&dev->t10_alua.lba_map_list, &old_lba_map_list);
+	if (lba_map_list) {
+		list_splice_init(lba_map_list, &dev->t10_alua.lba_map_list);
+		activate = 1;
+	}
+	spin_unlock(&dev->t10_alua.lba_map_lock);
+	spin_lock(&dev->t10_alua.tg_pt_gps_lock);
+	list_for_each_entry(tg_pt_gp, &dev->t10_alua.tg_pt_gps_list,
+			    tg_pt_gp_list) {
+
+		if (!tg_pt_gp->tg_pt_gp_valid_id)
+			continue;
+		supported = tg_pt_gp->tg_pt_gp_alua_supported_states;
+		if (activate)
+			supported |= ALUA_LBD_SUP;
+		else
+			supported &= ~ALUA_LBD_SUP;
+		tg_pt_gp->tg_pt_gp_alua_supported_states = supported;
+	}
+	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
+	core_alua_free_lba_map(&old_lba_map_list);
+}
+
 struct t10_alua_lu_gp *
 core_alua_allocate_lu_gp(const char *name, int def_group)
 {
diff --git a/drivers/target/target_core_alua.h b/drivers/target/target_core_alua.h
index 47950cd..0a7d65e 100644
--- a/drivers/target/target_core_alua.h
+++ b/drivers/target/target_core_alua.h
@@ -86,6 +86,8 @@ extern struct kmem_cache *t10_alua_lu_gp_cache;
 extern struct kmem_cache *t10_alua_lu_gp_mem_cache;
 extern struct kmem_cache *t10_alua_tg_pt_gp_cache;
 extern struct kmem_cache *t10_alua_tg_pt_gp_mem_cache;
+extern struct kmem_cache *t10_alua_lba_map_cache;
+extern struct kmem_cache *t10_alua_lba_map_mem_cache;
 
 extern sense_reason_t target_emulate_report_target_port_groups(struct se_cmd *);
 extern sense_reason_t target_emulate_set_target_port_groups(struct se_cmd *);
@@ -95,6 +97,12 @@ extern int core_alua_do_port_transition(struct t10_alua_tg_pt_gp *,
 				struct se_device *, struct se_port *,
 				struct se_node_acl *, int, int);
 extern char *core_alua_dump_status(int);
+extern struct t10_alua_lba_map *core_alua_allocate_lba_map(
+				struct list_head *, u64, u64);
+extern int core_alua_allocate_lba_map_mem(struct t10_alua_lba_map *, int, int);
+extern void core_alua_free_lba_map(struct list_head *);
+extern void core_alua_set_lba_map(struct se_device *, struct list_head *,
+				int, int);
 extern struct t10_alua_lu_gp *core_alua_allocate_lu_gp(const char *, int);
 extern int core_alua_set_lu_gp_id(struct t10_alua_lu_gp *, u16);
 extern void core_alua_free_lu_gp(struct t10_alua_lu_gp *);
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index e74ef8c..1dbc1bc 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -1741,6 +1741,176 @@ static struct target_core_configfs_attribute target_core_attr_dev_alua_lu_gp = {
 	.store	= target_core_store_alua_lu_gp,
 };
 
+static ssize_t target_core_show_dev_lba_map(void *p, char *page)
+{
+	struct se_device *dev = p;
+	struct t10_alua_lba_map *map;
+	struct t10_alua_lba_map_member *mem;
+	char *b = page;
+	int bl = 0;
+	char state;
+
+	spin_lock(&dev->t10_alua.lba_map_lock);
+	if (!list_empty(&dev->t10_alua.lba_map_list))
+		bl += sprintf(b + bl, "%u %u\n",
+			      dev->t10_alua.lba_map_segment_size,
+			      dev->t10_alua.lba_map_segment_multiplier);
+	list_for_each_entry(map, &dev->t10_alua.lba_map_list, lba_map_list) {
+		bl += sprintf(b + bl, "%llu %llu",
+			      map->lba_map_first_lba, map->lba_map_last_lba);
+		list_for_each_entry(mem, &map->lba_map_mem_list,
+				    lba_map_mem_list) {
+			switch (mem->lba_map_mem_alua_state) {
+			case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
+				state = 'O';
+				break;
+			case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
+				state = 'A';
+				break;
+			case ALUA_ACCESS_STATE_STANDBY:
+				state = 'S';
+				break;
+			case ALUA_ACCESS_STATE_UNAVAILABLE:
+				state = 'U';
+				break;
+			default:
+				state = '.';
+				break;
+			}
+			bl += sprintf(b + bl, " %d:%c",
+				      mem->lba_map_mem_alua_pg_id, state);
+		}
+		bl += sprintf(b + bl, "\n");
+	}
+	spin_unlock(&dev->t10_alua.lba_map_lock);
+	return bl;
+}
+
+static ssize_t target_core_store_dev_lba_map(
+	void *p,
+	const char *page,
+	size_t count)
+{
+	struct se_device *dev = p;
+	struct t10_alua_lba_map *lba_map = NULL;
+	struct list_head lba_list;
+	char *map_entries, *ptr;
+	char state;
+	int pg_num = -1, pg;
+	int ret = 0, num = 0, pg_id, alua_state;
+	unsigned long start_lba = -1, end_lba = -1;
+	unsigned long segment_size = -1, segment_mult = -1;
+
+	map_entries = kstrdup(page, GFP_KERNEL);
+	if (!map_entries)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&lba_list);
+	while ((ptr = strsep(&map_entries, "\n")) != NULL) {
+		if (!*ptr)
+			continue;
+
+		if (num == 0) {
+			if (sscanf(ptr, "%lu %lu\n",
+				   &segment_size, &segment_mult) != 2) {
+				pr_err("Invalid line %d\n", num);
+				ret = -EINVAL;
+				break;
+			}
+			num++;
+			continue;
+		}
+		if (sscanf(ptr, "%lu %lu", &start_lba, &end_lba) != 2) {
+			pr_err("Invalid line %d\n", num);
+			ret = -EINVAL;
+			break;
+		}
+		ptr = strchr(ptr, ' ');
+		if (!ptr) {
+			pr_err("Invalid line %d, missing end lba\n", num);
+			ret = -EINVAL;
+			break;
+		}
+		ptr++;
+		ptr = strchr(ptr, ' ');
+		if (!ptr) {
+			pr_err("Invalid line %d, missing state definitions\n",
+			       num);
+			ret = -EINVAL;
+			break;
+		}
+		ptr++;
+		lba_map = core_alua_allocate_lba_map(&lba_list,
+						     start_lba, end_lba);
+		if (IS_ERR(lba_map)) {
+			ret = PTR_ERR(lba_map);
+			break;
+		}
+		pg = 0;
+		while (sscanf(ptr, "%d:%c", &pg_id, &state) == 2) {
+			switch (state) {
+			case 'O':
+				alua_state = ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED;
+				break;
+			case 'A':
+				alua_state = ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED;
+				break;
+			case 'S':
+				alua_state = ALUA_ACCESS_STATE_STANDBY;
+				break;
+			case 'U':
+				alua_state = ALUA_ACCESS_STATE_UNAVAILABLE;
+				break;
+			default:
+				pr_err("Invalid ALUA state '%c'\n", state);
+				ret = -EINVAL;
+				goto out;
+			}
+
+			ret = core_alua_allocate_lba_map_mem(lba_map,
+							     pg_id, alua_state);
+			if (ret) {
+				pr_err("Invalid target descriptor %d:%c "
+				       "at line %d\n",
+				       pg_id, state, num);
+				break;
+			}
+			pg++;
+			ptr = strchr(ptr, ' ');
+			if (ptr)
+				ptr++;
+			else
+				break;
+		}
+		if (pg_num == -1)
+			pg_num = pg;
+		else if (pg != pg_num) {
+			pr_err("Only %d from %d port groups definitions "
+			       "at line %d\n", pg, pg_num, num);
+			ret = -EINVAL;
+			break;
+		}
+		num++;
+	}
+out:
+	if (ret) {
+		core_alua_free_lba_map(&lba_list);
+		count = ret;
+	} else
+		core_alua_set_lba_map(dev, &lba_list,
+				      segment_size, segment_mult);
+	kfree(map_entries);
+	return count;
+}
+
+static struct target_core_configfs_attribute target_core_attr_dev_lba_map = {
+	.attr	= { .ca_owner = THIS_MODULE,
+		    .ca_name = "lba_map",
+		    .ca_mode = S_IRUGO | S_IWUSR },
+	.show	= target_core_show_dev_lba_map,
+	.store	= target_core_store_dev_lba_map,
+};
+
 static struct configfs_attribute *lio_core_dev_attrs[] = {
 	&target_core_attr_dev_info.attr,
 	&target_core_attr_dev_control.attr,
@@ -1748,6 +1918,7 @@ static struct configfs_attribute *lio_core_dev_attrs[] = {
 	&target_core_attr_dev_udev_path.attr,
 	&target_core_attr_dev_enable.attr,
 	&target_core_attr_dev_alua_lu_gp.attr,
+	&target_core_attr_dev_lba_map.attr,
 	NULL,
 };
 
diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
index 3c08f99..376a4d3 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -1585,6 +1585,7 @@ void target_free_device(struct se_device *dev)
 	}
 
 	core_alua_free_lu_gp_mem(dev);
+	core_alua_set_lba_map(dev, NULL, 0, 0);
 	core_scsi3_free_all_registrations(dev);
 	se_release_vpd_for_dev(dev);
 
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 91953da..18c828d 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -62,6 +62,8 @@ struct kmem_cache *t10_alua_lu_gp_cache;
 struct kmem_cache *t10_alua_lu_gp_mem_cache;
 struct kmem_cache *t10_alua_tg_pt_gp_cache;
 struct kmem_cache *t10_alua_tg_pt_gp_mem_cache;
+struct kmem_cache *t10_alua_lba_map_cache;
+struct kmem_cache *t10_alua_lba_map_mem_cache;
 
 static void transport_complete_task_attr(struct se_cmd *cmd);
 static void transport_handle_queue_full(struct se_cmd *cmd,
@@ -128,14 +130,36 @@ int init_se_kmem_caches(void)
 				"mem_t failed\n");
 		goto out_free_tg_pt_gp_cache;
 	}
+	t10_alua_lba_map_cache = kmem_cache_create(
+			"t10_alua_lba_map_cache",
+			sizeof(struct t10_alua_lba_map),
+			__alignof__(struct t10_alua_lba_map), 0, NULL);
+	if (!t10_alua_lba_map_cache) {
+		pr_err("kmem_cache_create() for t10_alua_lba_map_"
+				"cache failed\n");
+		goto out_free_tg_pt_gp_mem_cache;
+	}
+	t10_alua_lba_map_mem_cache = kmem_cache_create(
+			"t10_alua_lba_map_mem_cache",
+			sizeof(struct t10_alua_lba_map_member),
+			__alignof__(struct t10_alua_lba_map_member), 0, NULL);
+	if (!t10_alua_lba_map_mem_cache) {
+		pr_err("kmem_cache_create() for t10_alua_lba_map_mem_"
+				"cache failed\n");
+		goto out_free_lba_map_cache;
+	}
 
 	target_completion_wq = alloc_workqueue("target_completion",
 					       WQ_MEM_RECLAIM, 0);
 	if (!target_completion_wq)
-		goto out_free_tg_pt_gp_mem_cache;
+		goto out_free_lba_map_mem_cache;
 
 	return 0;
 
+out_free_lba_map_mem_cache:
+	kmem_cache_destroy(t10_alua_lba_map_mem_cache);
+out_free_lba_map_cache:
+	kmem_cache_destroy(t10_alua_lba_map_cache);
 out_free_tg_pt_gp_mem_cache:
 	kmem_cache_destroy(t10_alua_tg_pt_gp_mem_cache);
 out_free_tg_pt_gp_cache:
@@ -164,6 +188,8 @@ void release_se_kmem_caches(void)
 	kmem_cache_destroy(t10_alua_lu_gp_mem_cache);
 	kmem_cache_destroy(t10_alua_tg_pt_gp_cache);
 	kmem_cache_destroy(t10_alua_tg_pt_gp_mem_cache);
+	kmem_cache_destroy(t10_alua_lba_map_cache);
+	kmem_cache_destroy(t10_alua_lba_map_mem_cache);
 }
 
 /* This code ensures unique mib indexes are handed out. */
-- 
1.7.12.4



* Re: [PATCH 1/8] target_core_alua: validate ALUA state transition
  2013-12-17  8:18 ` [PATCH 1/8] target_core_alua: validate ALUA state transition Hannes Reinecke
@ 2013-12-17 19:32   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 19:32 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> As we now can modify the list of supported states we need to
> validate the requested ALUA state when doing a state transition.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_alua.c | 55 ++++++++++++++++++++++++++++++++-------
>  1 file changed, 45 insertions(+), 10 deletions(-)
> 

Applied to for-next.

Thanks Hannes!

--nab

> diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
> index fdcee32..292ecce 100644
> --- a/drivers/target/target_core_alua.c
> +++ b/drivers/target/target_core_alua.c
> @@ -41,11 +41,14 @@
>  #include "target_core_alua.h"
>  #include "target_core_ua.h"
>  
> -static sense_reason_t core_alua_check_transition(int state, int *primary);
> +static sense_reason_t core_alua_check_transition(int state, int valid,
> +						 int *primary);
>  static int core_alua_set_tg_pt_secondary_state(
>  		struct t10_alua_tg_pt_gp_member *tg_pt_gp_mem,
>  		struct se_port *port, int explicit, int offline);
>  
> +static char *core_alua_dump_state(int state);
> +
>  static u16 alua_lu_gps_counter;
>  static u32 alua_lu_gps_count;
>  
> @@ -210,7 +213,7 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
>  	unsigned char *ptr;
>  	sense_reason_t rc = TCM_NO_SENSE;
>  	u32 len = 4; /* Skip over RESERVED area in header */
> -	int alua_access_state, primary = 0;
> +	int alua_access_state, primary = 0, valid_states;
>  	u16 tg_pt_id, rtpi;
>  
>  	if (!l_port)
> @@ -252,6 +255,7 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
>  		rc = TCM_UNSUPPORTED_SCSI_OPCODE;
>  		goto out;
>  	}
> +	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
>  
>  	ptr = &buf[4]; /* Skip over RESERVED area in header */
>  
> @@ -263,7 +267,8 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
>  		 * the state is a primary or secondary target port asymmetric
>  		 * access state.
>  		 */
> -		rc = core_alua_check_transition(alua_access_state, &primary);
> +		rc = core_alua_check_transition(alua_access_state,
> +						valid_states, &primary);
>  		if (rc) {
>  			/*
>  			 * If the SET TARGET PORT GROUPS attempts to establish
> @@ -618,17 +623,31 @@ out:
>   * Check implicit and explicit ALUA state change request.
>   */
>  static sense_reason_t
> -core_alua_check_transition(int state, int *primary)
> +core_alua_check_transition(int state, int valid, int *primary)
>  {
> +	/*
> +	 * OPTIMIZED, NON-OPTIMIZED, STANDBY and UNAVAILABLE are
> +	 * defined as primary target port asymmetric access states.
> +	 */
>  	switch (state) {
>  	case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
> +		if (!(valid & ALUA_AO_SUP))
> +			goto not_supported;
> +		*primary = 1;
> +		break;
>  	case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
> +		if (!(valid & ALUA_AN_SUP))
> +			goto not_supported;
> +		*primary = 1;
> +		break;
>  	case ALUA_ACCESS_STATE_STANDBY:
> +		if (!(valid & ALUA_S_SUP))
> +			goto not_supported;
> +		*primary = 1;
> +		break;
>  	case ALUA_ACCESS_STATE_UNAVAILABLE:
> -		/*
> -		 * OPTIMIZED, NON-OPTIMIZED, STANDBY and UNAVAILABLE are
> -		 * defined as primary target port asymmetric access states.
> -		 */
> +		if (!(valid & ALUA_U_SUP))
> +			goto not_supported;
>  		*primary = 1;
>  		break;
>  	case ALUA_ACCESS_STATE_OFFLINE:
> @@ -636,14 +655,27 @@ core_alua_check_transition(int state, int *primary)
>  		 * OFFLINE state is defined as a secondary target port
>  		 * asymmetric access state.
>  		 */
> +		if (!(valid & ALUA_O_SUP))
> +			goto not_supported;
>  		*primary = 0;
>  		break;
> +	case ALUA_ACCESS_STATE_TRANSITION:
> +		/*
> +		 * Transitioning is set internally, and
> +		 * cannot be selected manually.
> +		 */
> +		goto not_supported;
>  	default:
>  		pr_err("Unknown ALUA access state: 0x%02x\n", state);
>  		return TCM_INVALID_PARAMETER_LIST;
>  	}
>  
>  	return 0;
> +
> +not_supported:
> +	pr_err("ALUA access state %s not supported",
> +	       core_alua_dump_state(state));
> +	return TCM_INVALID_PARAMETER_LIST;
>  }
>  
>  static char *core_alua_dump_state(int state)
> @@ -659,6 +691,8 @@ static char *core_alua_dump_state(int state)
>  		return "Unavailable";
>  	case ALUA_ACCESS_STATE_OFFLINE:
>  		return "Offline";
> +	case ALUA_ACCESS_STATE_TRANSITION:
> +		return "Transitioning";
>  	default:
>  		return "Unknown";
>  	}
> @@ -884,9 +918,10 @@ int core_alua_do_port_transition(
>  	struct t10_alua_lu_gp_member *lu_gp_mem, *local_lu_gp_mem;
>  	struct t10_alua_tg_pt_gp *tg_pt_gp;
>  	unsigned char *md_buf;
> -	int primary;
> +	int primary, valid_states;
>  
> -	if (core_alua_check_transition(new_state, &primary) != 0)
> +	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
> +	if (core_alua_check_transition(new_state, valid_states, &primary) != 0)
>  		return -EINVAL;
>  
>  	md_buf = kzalloc(l_tg_pt_gp->tg_pt_gp_md_buf_len, GFP_KERNEL);

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 2/8] target_core_alua: Allocate ALUA metadata on demand
  2013-12-17  8:18 ` [PATCH 2/8] target_core_alua: Allocate ALUA metadata on demand Hannes Reinecke
@ 2013-12-17 19:32   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 19:32 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> We should only allocate ALUA metadata if we're actually going
> to write it.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_alua.c | 70 +++++++++++++++++----------------------
>  drivers/target/target_core_alua.h |  3 ++
>  include/target/target_core_base.h |  3 --
>  3 files changed, 34 insertions(+), 42 deletions(-)
> 

Applied to for-next.

--nab

> diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
> index 292ecce..738244b 100644
> --- a/drivers/target/target_core_alua.c
> +++ b/drivers/target/target_core_alua.c
> @@ -770,16 +770,22 @@ static int core_alua_write_tpg_metadata(
>   */
>  static int core_alua_update_tpg_primary_metadata(
>  	struct t10_alua_tg_pt_gp *tg_pt_gp,
> -	int primary_state,
> -	unsigned char *md_buf)
> +	int primary_state)
>  {
> +	unsigned char *md_buf;
>  	struct t10_wwn *wwn = &tg_pt_gp->tg_pt_gp_dev->t10_wwn;
>  	char path[ALUA_METADATA_PATH_LEN];
> -	int len;
> +	int len, rc;
> +
> +	md_buf = kzalloc(ALUA_MD_BUF_LEN, GFP_KERNEL);
> +	if (!md_buf) {
> +		pr_err("Unable to allocate buf for ALUA metadata\n");
> +		return -ENOMEM;
> +	}
>  
>  	memset(path, 0, ALUA_METADATA_PATH_LEN);
>  
> -	len = snprintf(md_buf, tg_pt_gp->tg_pt_gp_md_buf_len,
> +	len = snprintf(md_buf, ALUA_MD_BUF_LEN,
>  			"tg_pt_gp_id=%hu\n"
>  			"alua_access_state=0x%02x\n"
>  			"alua_access_status=0x%02x\n",
> @@ -790,14 +796,15 @@ static int core_alua_update_tpg_primary_metadata(
>  		"/var/target/alua/tpgs_%s/%s", &wwn->unit_serial[0],
>  		config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item));
>  
> -	return core_alua_write_tpg_metadata(path, md_buf, len);
> +	rc = core_alua_write_tpg_metadata(path, md_buf, len);
> +	kfree(md_buf);
> +	return rc;
>  }
>  
>  static int core_alua_do_transition_tg_pt(
>  	struct t10_alua_tg_pt_gp *tg_pt_gp,
>  	struct se_port *l_port,
>  	struct se_node_acl *nacl,
> -	unsigned char *md_buf,
>  	int new_state,
>  	int explicit)
>  {
> @@ -885,8 +892,7 @@ static int core_alua_do_transition_tg_pt(
>  	 */
>  	if (tg_pt_gp->tg_pt_gp_write_metadata) {
>  		mutex_lock(&tg_pt_gp->tg_pt_gp_md_mutex);
> -		core_alua_update_tpg_primary_metadata(tg_pt_gp,
> -					new_state, md_buf);
> +		core_alua_update_tpg_primary_metadata(tg_pt_gp, new_state);
>  		mutex_unlock(&tg_pt_gp->tg_pt_gp_md_mutex);
>  	}
>  	/*
> @@ -917,19 +923,12 @@ int core_alua_do_port_transition(
>  	struct t10_alua_lu_gp *lu_gp;
>  	struct t10_alua_lu_gp_member *lu_gp_mem, *local_lu_gp_mem;
>  	struct t10_alua_tg_pt_gp *tg_pt_gp;
> -	unsigned char *md_buf;
>  	int primary, valid_states;
>  
>  	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
>  	if (core_alua_check_transition(new_state, valid_states, &primary) != 0)
>  		return -EINVAL;
>  
> -	md_buf = kzalloc(l_tg_pt_gp->tg_pt_gp_md_buf_len, GFP_KERNEL);
> -	if (!md_buf) {
> -		pr_err("Unable to allocate buf for ALUA metadata\n");
> -		return -ENOMEM;
> -	}
> -
>  	local_lu_gp_mem = l_dev->dev_alua_lu_gp_mem;
>  	spin_lock(&local_lu_gp_mem->lu_gp_mem_lock);
>  	lu_gp = local_lu_gp_mem->lu_gp;
> @@ -947,10 +946,9 @@ int core_alua_do_port_transition(
>  		 * success.
>  		 */
>  		core_alua_do_transition_tg_pt(l_tg_pt_gp, l_port, l_nacl,
> -					md_buf, new_state, explicit);
> +					new_state, explicit);
>  		atomic_dec(&lu_gp->lu_gp_ref_cnt);
>  		smp_mb__after_atomic_dec();
> -		kfree(md_buf);
>  		return 0;
>  	}
>  	/*
> @@ -1000,7 +998,7 @@ int core_alua_do_port_transition(
>  			 * success.
>  			 */
>  			core_alua_do_transition_tg_pt(tg_pt_gp, port,
> -					nacl, md_buf, new_state, explicit);
> +					nacl, new_state, explicit);
>  
>  			spin_lock(&dev->t10_alua.tg_pt_gps_lock);
>  			atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
> @@ -1022,7 +1020,6 @@ int core_alua_do_port_transition(
>  
>  	atomic_dec(&lu_gp->lu_gp_ref_cnt);
>  	smp_mb__after_atomic_dec();
> -	kfree(md_buf);
>  	return 0;
>  }
>  
> @@ -1031,13 +1028,18 @@ int core_alua_do_port_transition(
>   */
>  static int core_alua_update_tpg_secondary_metadata(
>  	struct t10_alua_tg_pt_gp_member *tg_pt_gp_mem,
> -	struct se_port *port,
> -	unsigned char *md_buf,
> -	u32 md_buf_len)
> +	struct se_port *port)
>  {
> +	unsigned char *md_buf;
>  	struct se_portal_group *se_tpg = port->sep_tpg;
>  	char path[ALUA_METADATA_PATH_LEN], wwn[ALUA_SECONDARY_METADATA_WWN_LEN];
> -	int len;
> +	int len, rc;
> +
> +	md_buf = kzalloc(ALUA_MD_BUF_LEN, GFP_KERNEL);
> +	if (!md_buf) {
> +		pr_err("Unable to allocate buf for ALUA metadata\n");
> +		return -ENOMEM;
> +	}
>  
>  	memset(path, 0, ALUA_METADATA_PATH_LEN);
>  	memset(wwn, 0, ALUA_SECONDARY_METADATA_WWN_LEN);
> @@ -1049,7 +1051,7 @@ static int core_alua_update_tpg_secondary_metadata(
>  		snprintf(wwn+len, ALUA_SECONDARY_METADATA_WWN_LEN-len, "+%hu",
>  				se_tpg->se_tpg_tfo->tpg_get_tag(se_tpg));
>  
> -	len = snprintf(md_buf, md_buf_len, "alua_tg_pt_offline=%d\n"
> +	len = snprintf(md_buf, ALUA_MD_BUF_LEN, "alua_tg_pt_offline=%d\n"
>  			"alua_tg_pt_status=0x%02x\n",
>  			atomic_read(&port->sep_tg_pt_secondary_offline),
>  			port->sep_tg_pt_secondary_stat);
> @@ -1058,7 +1060,10 @@ static int core_alua_update_tpg_secondary_metadata(
>  			se_tpg->se_tpg_tfo->get_fabric_name(), wwn,
>  			port->sep_lun->unpacked_lun);
>  
> -	return core_alua_write_tpg_metadata(path, md_buf, len);
> +	rc = core_alua_write_tpg_metadata(path, md_buf, len);
> +	kfree(md_buf);
> +
> +	return rc;
>  }
>  
>  static int core_alua_set_tg_pt_secondary_state(
> @@ -1068,8 +1073,6 @@ static int core_alua_set_tg_pt_secondary_state(
>  	int offline)
>  {
>  	struct t10_alua_tg_pt_gp *tg_pt_gp;
> -	unsigned char *md_buf;
> -	u32 md_buf_len;
>  	int trans_delay_msecs;
>  
>  	spin_lock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
> @@ -1090,7 +1093,6 @@ static int core_alua_set_tg_pt_secondary_state(
>  	else
>  		atomic_set(&port->sep_tg_pt_secondary_offline, 0);
>  
> -	md_buf_len = tg_pt_gp->tg_pt_gp_md_buf_len;
>  	port->sep_tg_pt_secondary_stat = (explicit) ?
>  			ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
>  			ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
> @@ -1112,18 +1114,9 @@ static int core_alua_set_tg_pt_secondary_state(
>  	 * secondary state and status
>  	 */
>  	if (port->sep_tg_pt_secondary_write_md) {
> -		md_buf = kzalloc(md_buf_len, GFP_KERNEL);
> -		if (!md_buf) {
> -			pr_err("Unable to allocate md_buf for"
> -				" secondary ALUA access metadata\n");
> -			return -ENOMEM;
> -		}
>  		mutex_lock(&port->sep_tg_pt_md_mutex);
> -		core_alua_update_tpg_secondary_metadata(tg_pt_gp_mem, port,
> -				md_buf, md_buf_len);
> +		core_alua_update_tpg_secondary_metadata(tg_pt_gp_mem, port);
>  		mutex_unlock(&port->sep_tg_pt_md_mutex);
> -
> -		kfree(md_buf);
>  	}
>  
>  	return 0;
> @@ -1382,7 +1375,6 @@ struct t10_alua_tg_pt_gp *core_alua_allocate_tg_pt_gp(struct se_device *dev,
>  	spin_lock_init(&tg_pt_gp->tg_pt_gp_lock);
>  	atomic_set(&tg_pt_gp->tg_pt_gp_ref_cnt, 0);
>  	tg_pt_gp->tg_pt_gp_dev = dev;
> -	tg_pt_gp->tg_pt_gp_md_buf_len = ALUA_MD_BUF_LEN;
>  	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
>  		ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED);
>  	/*
> diff --git a/drivers/target/target_core_alua.h b/drivers/target/target_core_alua.h
> index 88e2e83..1a152cd 100644
> --- a/drivers/target/target_core_alua.h
> +++ b/drivers/target/target_core_alua.h
> @@ -78,6 +78,9 @@
>   */
>  #define ALUA_SECONDARY_METADATA_WWN_LEN			256
>  
> +/* Used by core_alua_update_tpg_(primary,secondary)_metadata */
> +#define ALUA_MD_BUF_LEN					1024
> +
>  extern struct kmem_cache *t10_alua_lu_gp_cache;
>  extern struct kmem_cache *t10_alua_lu_gp_mem_cache;
>  extern struct kmem_cache *t10_alua_tg_pt_gp_cache;
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index 45412a6..6e95281 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -291,9 +291,6 @@ struct t10_alua_tg_pt_gp {
>  	int	tg_pt_gp_implicit_trans_secs;
>  	int	tg_pt_gp_pref;
>  	int	tg_pt_gp_write_metadata;
> -	/* Used by struct t10_alua_tg_pt_gp->tg_pt_gp_md_buf_len */
> -#define ALUA_MD_BUF_LEN				1024
> -	u32	tg_pt_gp_md_buf_len;
>  	u32	tg_pt_gp_members;
>  	atomic_t tg_pt_gp_alua_access_state;
>  	atomic_t tg_pt_gp_ref_cnt;

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/8] target_core_alua: store old and pending ALUA state
  2013-12-17  8:18 ` [PATCH 3/8] target_core_alua: store old and pending ALUA state Hannes Reinecke
@ 2013-12-17 19:32   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 19:32 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> During state transition we should be storing both the original
> and the pending state.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_alua.c | 15 ++++++++++-----
>  include/target/target_core_base.h |  2 ++
>  2 files changed, 12 insertions(+), 5 deletions(-)
> 

Applied to for-next.

--nab

> diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
> index 738244b..4805e97 100644
> --- a/drivers/target/target_core_alua.c
> +++ b/drivers/target/target_core_alua.c
> @@ -812,12 +812,15 @@ static int core_alua_do_transition_tg_pt(
>  	struct se_lun_acl *lacl;
>  	struct se_port *port;
>  	struct t10_alua_tg_pt_gp_member *mem;
> -	int old_state = 0;
> +
>  	/*
>  	 * Save the old primary ALUA access state, and set the current state
>  	 * to ALUA_ACCESS_STATE_TRANSITION.
>  	 */
> -	old_state = atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
> +	tg_pt_gp->tg_pt_gp_alua_previous_state =
> +		atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
> +	tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
> +
>  	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
>  			ALUA_ACCESS_STATE_TRANSITION);
>  	tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
> @@ -898,13 +901,15 @@ static int core_alua_do_transition_tg_pt(
>  	/*
>  	 * Set the current primary ALUA access state to the requested new state
>  	 */
> -	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state, new_state);
> +	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
> +		   tg_pt_gp->tg_pt_gp_alua_pending_state);
>  
>  	pr_debug("Successful %s ALUA transition TG PT Group: %s ID: %hu"
>  		" from primary access state %s to %s\n", (explicit) ? "explicit" :
>  		"implicit", config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item),
> -		tg_pt_gp->tg_pt_gp_id, core_alua_dump_state(old_state),
> -		core_alua_dump_state(new_state));
> +		tg_pt_gp->tg_pt_gp_id,
> +		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_previous_state),
> +		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_pending_state));
>  
>  	return 0;
>  }
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index 6e95281..6758e81 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -284,6 +284,8 @@ struct t10_alua_tg_pt_gp {
>  	u16	tg_pt_gp_id;
>  	int	tg_pt_gp_valid_id;
>  	int	tg_pt_gp_alua_supported_states;
> +	int	tg_pt_gp_alua_pending_state;
> +	int	tg_pt_gp_alua_previous_state;
>  	int	tg_pt_gp_alua_access_status;
>  	int	tg_pt_gp_alua_access_type;
>  	int	tg_pt_gp_nonop_delay_msecs;



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 4/8] target_core_alua: Use workqueue for ALUA transitioning
  2013-12-17  8:18 ` [PATCH 4/8] target_core_alua: Use workqueue for ALUA transitioning Hannes Reinecke
@ 2013-12-17 19:32   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 19:32 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> Use a workqueue for processing ALUA state transitions; this allows
> us to handle the implicit transition delay properly.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_alua.c | 174 +++++++++++++++++++++++++++-----------
>  include/target/target_core_base.h |   4 +
>  2 files changed, 128 insertions(+), 50 deletions(-)
> 

Applied to for-next.

--nab

> diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
> index 4805e97..01f0c71 100644
> --- a/drivers/target/target_core_alua.c
> +++ b/drivers/target/target_core_alua.c
> @@ -769,8 +769,7 @@ static int core_alua_write_tpg_metadata(
>   * Called with tg_pt_gp->tg_pt_gp_md_mutex held
>   */
>  static int core_alua_update_tpg_primary_metadata(
> -	struct t10_alua_tg_pt_gp *tg_pt_gp,
> -	int primary_state)
> +	struct t10_alua_tg_pt_gp *tg_pt_gp)
>  {
>  	unsigned char *md_buf;
>  	struct t10_wwn *wwn = &tg_pt_gp->tg_pt_gp_dev->t10_wwn;
> @@ -789,7 +788,8 @@ static int core_alua_update_tpg_primary_metadata(
>  			"tg_pt_gp_id=%hu\n"
>  			"alua_access_state=0x%02x\n"
>  			"alua_access_status=0x%02x\n",
> -			tg_pt_gp->tg_pt_gp_id, primary_state,
> +			tg_pt_gp->tg_pt_gp_id,
> +			tg_pt_gp->tg_pt_gp_alua_pending_state,
>  			tg_pt_gp->tg_pt_gp_alua_access_status);
>  
>  	snprintf(path, ALUA_METADATA_PATH_LEN,
> @@ -801,36 +801,17 @@ static int core_alua_update_tpg_primary_metadata(
>  	return rc;
>  }
>  
> -static int core_alua_do_transition_tg_pt(
> -	struct t10_alua_tg_pt_gp *tg_pt_gp,
> -	struct se_port *l_port,
> -	struct se_node_acl *nacl,
> -	int new_state,
> -	int explicit)
> +static void core_alua_do_transition_tg_pt_work(struct work_struct *work)
>  {
> +	struct t10_alua_tg_pt_gp *tg_pt_gp = container_of(work,
> +		struct t10_alua_tg_pt_gp, tg_pt_gp_transition_work.work);
> +	struct se_device *dev = tg_pt_gp->tg_pt_gp_dev;
>  	struct se_dev_entry *se_deve;
>  	struct se_lun_acl *lacl;
>  	struct se_port *port;
>  	struct t10_alua_tg_pt_gp_member *mem;
> -
> -	/*
> -	 * Save the old primary ALUA access state, and set the current state
> -	 * to ALUA_ACCESS_STATE_TRANSITION.
> -	 */
> -	tg_pt_gp->tg_pt_gp_alua_previous_state =
> -		atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
> -	tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
> -
> -	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
> -			ALUA_ACCESS_STATE_TRANSITION);
> -	tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
> -				ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
> -				ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
> -	/*
> -	 * Check for the optional ALUA primary state transition delay
> -	 */
> -	if (tg_pt_gp->tg_pt_gp_trans_delay_msecs != 0)
> -		msleep_interruptible(tg_pt_gp->tg_pt_gp_trans_delay_msecs);
> +	bool explicit = (tg_pt_gp->tg_pt_gp_alua_access_status ==
> +			 ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG);
>  
>  	spin_lock(&tg_pt_gp->tg_pt_gp_lock);
>  	list_for_each_entry(mem, &tg_pt_gp->tg_pt_gp_mem_list,
> @@ -865,9 +846,12 @@ static int core_alua_do_transition_tg_pt(
>  			if (!lacl)
>  				continue;
>  
> -			if (explicit &&
> -			   (nacl != NULL) && (nacl == lacl->se_lun_nacl) &&
> -			   (l_port != NULL) && (l_port == port))
> +			if ((tg_pt_gp->tg_pt_gp_alua_access_status ==
> +			     ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG) &&
> +			   (tg_pt_gp->tg_pt_gp_alua_nacl != NULL) &&
> +			    (tg_pt_gp->tg_pt_gp_alua_nacl == lacl->se_lun_nacl) &&
> +			   (tg_pt_gp->tg_pt_gp_alua_port != NULL) &&
> +			    (tg_pt_gp->tg_pt_gp_alua_port == port))
>  				continue;
>  
>  			core_scsi3_ua_allocate(lacl->se_lun_nacl,
> @@ -895,7 +879,7 @@ static int core_alua_do_transition_tg_pt(
>  	 */
>  	if (tg_pt_gp->tg_pt_gp_write_metadata) {
>  		mutex_lock(&tg_pt_gp->tg_pt_gp_md_mutex);
> -		core_alua_update_tpg_primary_metadata(tg_pt_gp, new_state);
> +		core_alua_update_tpg_primary_metadata(tg_pt_gp);
>  		mutex_unlock(&tg_pt_gp->tg_pt_gp_md_mutex);
>  	}
>  	/*
> @@ -910,6 +894,87 @@ static int core_alua_do_transition_tg_pt(
>  		tg_pt_gp->tg_pt_gp_id,
>  		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_previous_state),
>  		core_alua_dump_state(tg_pt_gp->tg_pt_gp_alua_pending_state));
> +	spin_lock(&dev->t10_alua.tg_pt_gps_lock);
> +	atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
> +	smp_mb__after_atomic_dec();
> +	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
> +
> +	if (tg_pt_gp->tg_pt_gp_transition_complete)
> +		complete(tg_pt_gp->tg_pt_gp_transition_complete);
> +}
> +
> +static int core_alua_do_transition_tg_pt(
> +	struct t10_alua_tg_pt_gp *tg_pt_gp,
> +	int new_state,
> +	int explicit)
> +{
> +	struct se_device *dev = tg_pt_gp->tg_pt_gp_dev;
> +	DECLARE_COMPLETION_ONSTACK(wait);
> +
> +	/* Nothing to be done here */
> +	if (atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state) == new_state)
> +		return 0;
> +
> +	if (new_state == ALUA_ACCESS_STATE_TRANSITION)
> +		return -EAGAIN;
> +
> +	/*
> +	 * Flush any pending transitions
> +	 */
> +	if (!explicit && tg_pt_gp->tg_pt_gp_implicit_trans_secs &&
> +	    atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state) ==
> +	    ALUA_ACCESS_STATE_TRANSITION) {
> +		/* Just in case */
> +		tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
> +		tg_pt_gp->tg_pt_gp_transition_complete = &wait;
> +		flush_delayed_work(&tg_pt_gp->tg_pt_gp_transition_work);
> +		wait_for_completion(&wait);
> +		tg_pt_gp->tg_pt_gp_transition_complete = NULL;
> +		return 0;
> +	}
> +
> +	/*
> +	 * Save the old primary ALUA access state, and set the current state
> +	 * to ALUA_ACCESS_STATE_TRANSITION.
> +	 */
> +	tg_pt_gp->tg_pt_gp_alua_previous_state =
> +		atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
> +	tg_pt_gp->tg_pt_gp_alua_pending_state = new_state;
> +
> +	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
> +			ALUA_ACCESS_STATE_TRANSITION);
> +	tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
> +				ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
> +				ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
> +
> +	/*
> +	 * Check for the optional ALUA primary state transition delay
> +	 */
> +	if (tg_pt_gp->tg_pt_gp_trans_delay_msecs != 0)
> +		msleep_interruptible(tg_pt_gp->tg_pt_gp_trans_delay_msecs);
> +
> +	/*
> +	 * Take a reference for workqueue item
> +	 */
> +	spin_lock(&dev->t10_alua.tg_pt_gps_lock);
> +	atomic_inc(&tg_pt_gp->tg_pt_gp_ref_cnt);
> +	smp_mb__after_atomic_inc();
> +	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
> +
> +	if (!explicit && tg_pt_gp->tg_pt_gp_implicit_trans_secs) {
> +		unsigned long transition_tmo;
> +
> +		transition_tmo = tg_pt_gp->tg_pt_gp_implicit_trans_secs * HZ;
> +		queue_delayed_work(tg_pt_gp->tg_pt_gp_dev->tmr_wq,
> +				   &tg_pt_gp->tg_pt_gp_transition_work,
> +				   transition_tmo);
> +	} else {
> +		tg_pt_gp->tg_pt_gp_transition_complete = &wait;
> +		queue_delayed_work(tg_pt_gp->tg_pt_gp_dev->tmr_wq,
> +				   &tg_pt_gp->tg_pt_gp_transition_work, 0);
> +		wait_for_completion(&wait);
> +		tg_pt_gp->tg_pt_gp_transition_complete = NULL;
> +	}
>  
>  	return 0;
>  }
> @@ -923,12 +988,10 @@ int core_alua_do_port_transition(
>  	int explicit)
>  {
>  	struct se_device *dev;
> -	struct se_port *port;
> -	struct se_node_acl *nacl;
>  	struct t10_alua_lu_gp *lu_gp;
>  	struct t10_alua_lu_gp_member *lu_gp_mem, *local_lu_gp_mem;
>  	struct t10_alua_tg_pt_gp *tg_pt_gp;
> -	int primary, valid_states;
> +	int primary, valid_states, rc = 0;
>  
>  	valid_states = l_tg_pt_gp->tg_pt_gp_alua_supported_states;
>  	if (core_alua_check_transition(new_state, valid_states, &primary) != 0)
> @@ -950,11 +1013,13 @@ int core_alua_do_port_transition(
>  		 * core_alua_do_transition_tg_pt() will always return
>  		 * success.
>  		 */
> -		core_alua_do_transition_tg_pt(l_tg_pt_gp, l_port, l_nacl,
> -					new_state, explicit);
> +		l_tg_pt_gp->tg_pt_gp_alua_port = l_port;
> +		l_tg_pt_gp->tg_pt_gp_alua_nacl = l_nacl;
> +		rc = core_alua_do_transition_tg_pt(l_tg_pt_gp,
> +						   new_state, explicit);
>  		atomic_dec(&lu_gp->lu_gp_ref_cnt);
>  		smp_mb__after_atomic_dec();
> -		return 0;
> +		return rc;
>  	}
>  	/*
>  	 * For all other LU groups aside from 'default_lu_gp', walk all of
> @@ -989,11 +1054,11 @@ int core_alua_do_port_transition(
>  				continue;
>  
>  			if (l_tg_pt_gp == tg_pt_gp) {
> -				port = l_port;
> -				nacl = l_nacl;
> +				tg_pt_gp->tg_pt_gp_alua_port = l_port;
> +				tg_pt_gp->tg_pt_gp_alua_nacl = l_nacl;
>  			} else {
> -				port = NULL;
> -				nacl = NULL;
> +				tg_pt_gp->tg_pt_gp_alua_port = NULL;
> +				tg_pt_gp->tg_pt_gp_alua_nacl = NULL;
>  			}
>  			atomic_inc(&tg_pt_gp->tg_pt_gp_ref_cnt);
>  			smp_mb__after_atomic_inc();
> @@ -1002,12 +1067,14 @@ int core_alua_do_port_transition(
>  			 * core_alua_do_transition_tg_pt() will always return
>  			 * success.
>  			 */
> -			core_alua_do_transition_tg_pt(tg_pt_gp, port,
> -					nacl, new_state, explicit);
> +			rc = core_alua_do_transition_tg_pt(tg_pt_gp,
> +					new_state, explicit);
>  
>  			spin_lock(&dev->t10_alua.tg_pt_gps_lock);
>  			atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
>  			smp_mb__after_atomic_dec();
> +			if (rc)
> +				break;
>  		}
>  		spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
>  
> @@ -1017,15 +1084,18 @@ int core_alua_do_port_transition(
>  	}
>  	spin_unlock(&lu_gp->lu_gp_lock);
>  
> -	pr_debug("Successfully processed LU Group: %s all ALUA TG PT"
> -		" Group IDs: %hu %s transition to primary state: %s\n",
> -		config_item_name(&lu_gp->lu_gp_group.cg_item),
> -		l_tg_pt_gp->tg_pt_gp_id, (explicit) ? "explicit" : "implicit",
> -		core_alua_dump_state(new_state));
> +	if (!rc) {
> +		pr_debug("Successfully processed LU Group: %s all ALUA TG PT"
> +			 " Group IDs: %hu %s transition to primary state: %s\n",
> +			 config_item_name(&lu_gp->lu_gp_group.cg_item),
> +			 l_tg_pt_gp->tg_pt_gp_id,
> +			 (explicit) ? "explicit" : "implicit",
> +			 core_alua_dump_state(new_state));
> +	}
>  
>  	atomic_dec(&lu_gp->lu_gp_ref_cnt);
>  	smp_mb__after_atomic_dec();
> -	return 0;
> +	return rc;
>  }
>  
>  /*
> @@ -1379,6 +1449,8 @@ struct t10_alua_tg_pt_gp *core_alua_allocate_tg_pt_gp(struct se_device *dev,
>  	mutex_init(&tg_pt_gp->tg_pt_gp_md_mutex);
>  	spin_lock_init(&tg_pt_gp->tg_pt_gp_lock);
>  	atomic_set(&tg_pt_gp->tg_pt_gp_ref_cnt, 0);
> +	INIT_DELAYED_WORK(&tg_pt_gp->tg_pt_gp_transition_work,
> +			  core_alua_do_transition_tg_pt_work);
>  	tg_pt_gp->tg_pt_gp_dev = dev;
>  	atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
>  		ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED);
> @@ -1507,6 +1579,8 @@ void core_alua_free_tg_pt_gp(
>  	dev->t10_alua.alua_tg_pt_gps_counter--;
>  	spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
>  
> +	flush_delayed_work(&tg_pt_gp->tg_pt_gp_transition_work);
> +
>  	/*
>  	 * Allow a struct t10_alua_tg_pt_gp_member * referenced by
>  	 * core_alua_get_tg_pt_gp_by_name() in
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index 6758e81..65390f6 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -302,6 +302,10 @@ struct t10_alua_tg_pt_gp {
>  	struct config_group tg_pt_gp_group;
>  	struct list_head tg_pt_gp_list;
>  	struct list_head tg_pt_gp_mem_list;
> +	struct se_port *tg_pt_gp_alua_port;
> +	struct se_node_acl *tg_pt_gp_alua_nacl;
> +	struct delayed_work tg_pt_gp_transition_work;
> +	struct completion *tg_pt_gp_transition_complete;
>  };
>  
>  struct t10_alua_tg_pt_gp_member {



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 5/8] target_core: simplify scsi_name_len calculation
  2013-12-17  8:18 ` [PATCH 5/8] target_core: simplify scsi_name_len calculation Hannes Reinecke
@ 2013-12-17 19:32   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 19:32 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> scsi_name_len in spc_emulate_evpd_83 is calculated twice, with
> the results of the first calculation discarded. So remove the first
> calculation, and check against the maximum allowed length, too.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_spc.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
> 

Applied to for-next.

--nab

> diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
> index 021c3f4..603c411 100644
> --- a/drivers/target/target_core_spc.c
> +++ b/drivers/target/target_core_spc.c
> @@ -365,16 +365,6 @@ check_lu_gp:
>  		 * section 7.5.1 Table 362
>  		 */
>  check_scsi_name:
> -		scsi_name_len = strlen(tpg->se_tpg_tfo->tpg_get_wwn(tpg));
> -		/* UTF-8 ",t,0x<16-bit TPGT>" + NULL Terminator */
> -		scsi_name_len += 10;
> -		/* Check for 4-byte padding */
> -		padding = ((-scsi_name_len) & 3);
> -		if (padding != 0)
> -			scsi_name_len += padding;
> -		/* Header size + Designation descriptor */
> -		scsi_name_len += 4;
> -
>  		buf[off] =
>  			(tpg->se_tpg_tfo->get_fabric_proto_ident(tpg) << 4);
>  		buf[off++] |= 0x3; /* CODE SET == UTF-8 */
> @@ -402,8 +392,11 @@ check_scsi_name:
>  		 * shall be no larger than 256 and shall be a multiple
>  		 * of four.
>  		 */
> +		padding = ((-scsi_name_len) & 3);
>  		if (padding)
>  			scsi_name_len += padding;
> +		if (scsi_name_len > 256)
> +			scsi_name_len = 256;
>  
>  		buf[off-1] = scsi_name_len;
>  		off += scsi_name_len;



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83
  2013-12-17  8:18 ` [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83 Hannes Reinecke
@ 2013-12-17 19:50   ` Nicholas A. Bellinger
  2013-12-17 20:01     ` Nicholas A. Bellinger
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 19:50 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> We should be including a descriptor referring to the target device
> to allow identification of different TCM instances.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_spc.c | 43 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 42 insertions(+), 1 deletion(-)
> 

One issue with this patch: the local buffer in spc_emulate_inquiry is
currently hardcoded to SE_INQUIRY_BUF=512, so two large SCSI name
designators could overflow it.

So for the largest case with EVPD=0x83, this would be:

4 bytes for header +
20 bytes for NAA IEEE Registered Extended Assigned designator +
56 bytes for T10 Vendor Identifier +
8 bytes for Relative target port +
8 bytes for Target port group +
8 bytes for Logical unit group +
256 bytes for SCSI name (target port) +
256 bytes for SCSI name (target device) == 616 bytes.

So for good measure, bumping up SE_INQUIRY_BUF to 1024.

--nab

> diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
> index 603c411..39054d9 100644
> --- a/drivers/target/target_core_spc.c
> +++ b/drivers/target/target_core_spc.c
> @@ -267,7 +267,7 @@ check_t10_vend_desc:
>  	port = lun->lun_sep;
>  	if (port) {
>  		struct t10_alua_lu_gp *lu_gp;
> -		u32 padding, scsi_name_len;
> +		u32 padding, scsi_name_len, scsi_target_len;
>  		u16 lu_gp_id = 0;
>  		u16 tg_pt_gp_id = 0;
>  		u16 tpgt;
> @@ -402,6 +402,47 @@ check_scsi_name:
>  		off += scsi_name_len;
>  		/* Header size + Designation descriptor */
>  		len += (scsi_name_len + 4);
> +
> +		/*
> +		 * Target device designator
> +		 */
> +		buf[off] =
> +			(tpg->se_tpg_tfo->get_fabric_proto_ident(tpg) << 4);
> +		buf[off++] |= 0x3; /* CODE SET == UTF-8 */
> +		buf[off] = 0x80; /* Set PIV=1 */
> +		/* Set ASSOCIATION == target device: 10b */
> +		buf[off] |= 0x20;
> +		/* DESIGNATOR TYPE == SCSI name string */
> +		buf[off++] |= 0x8;
> +		off += 2; /* Skip over Reserved and length */
> +		/*
> +		 * SCSI name string identifier containing $FABRIC_MOD
> +		 * dependent information.  For LIO-Target and iSCSI
> +		 * Target Port, this means "<iSCSI name>" in
> +		 * UTF-8 encoding.
> +		 */
> +		scsi_target_len = sprintf(&buf[off], "%s",
> +					  tpg->se_tpg_tfo->tpg_get_wwn(tpg));
> +		scsi_target_len += 1 /* Include NULL terminator */;
> +		/*
> +		 * The null-terminated, null-padded (see 4.4.2) SCSI
> +		 * NAME STRING field contains a UTF-8 format string.
> +		 * The number of bytes in the SCSI NAME STRING field
> +		 * (i.e., the value in the DESIGNATOR LENGTH field)
> +		 * shall be no larger than 256 and shall be a multiple
> +		 * of four.
> +		 */
> +		padding = ((-scsi_target_len) & 3);
> +		if (padding)
> +			scsi_target_len += padding;
> +		if (scsi_target_len > 256)
> +			scsi_target_len = 256;
> +
> +		buf[off-1] = scsi_target_len;
> +		off += scsi_target_len;
> +
> +		/* Header size + Designation descriptor */
> +		len += (scsi_target_len + 4);
>  	}
>  	buf[2] = ((len >> 8) & 0xff);
>  	buf[3] = (len & 0xff); /* Page Length for VPD 0x83 */

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83
  2013-12-17 19:50   ` Nicholas A. Bellinger
@ 2013-12-17 20:01     ` Nicholas A. Bellinger
  2013-12-18  9:20       ` Hannes Reinecke
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 20:01 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 11:50 -0800, Nicholas A. Bellinger wrote:
> On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> > We should be including a descriptor referring to the target device
> > to allow identification of different TCM instances.
> > 
> > Signed-off-by: Hannes Reinecke <hare@suse.de>
> > ---
> >  drivers/target/target_core_spc.c | 43 +++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 42 insertions(+), 1 deletion(-)
> > 
> 
> One issue with this patch.  The local buffer in spc_emulate_inquiry is
> currently hardcoded to SE_INQUIRY_BUF=512, so two large scsi name
> designators could overflow here..
> 
> So for the largest case with EVPD=0x83, this would be:
> 
> 4 bytes for header +
> 20 bytes for NAA IEEE Registered Extended Assigned designator +
> 56 bytes for T10 Vendor Identifier +
> 8 bytes for Relative target port +
> 8 bytes for Target port group +
> 8 bytes for Logical unit group +
> 256 bytes for SCSI name (target port) +
> 256 bytes for SCSI name (target device) == 616 bytes.
> 
> So for good measure, bumping up SE_INQUIRY_BUF to 1024.
> 

Mmmm, looking at this again, is reporting back two SCSI names in
EVPD=0x83 with different associations (one for target port, and one for
target device) really necessary..?

Doesn't the existing target port association report back the same
information..?

--nab


* Re: [PATCH 7/8] target_core_alua: Referrals infrastructure
  2013-12-17  8:18 ` [PATCH 7/8] target_core_alua: Referrals infrastructure Hannes Reinecke
@ 2013-12-17 20:06   ` Nicholas A. Bellinger
  2013-12-18  8:09     ` Hannes Reinecke
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 20:06 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> Add infrastructure for referrals.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_alua.c     | 153 ++++++++++++++++++++++++++++++++++
>  drivers/target/target_core_alua.h     |   4 +-
>  drivers/target/target_core_configfs.c |   9 +-
>  drivers/target/target_core_device.c   |   2 +
>  drivers/target/target_core_sbc.c      |   5 +-
>  drivers/target/target_core_spc.c      |  20 +++++
>  include/scsi/scsi.h                   |   1 +
>  include/target/target_core_base.h     |  18 ++++
>  8 files changed, 209 insertions(+), 3 deletions(-)
> 

Applied, with one comment below..

> diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
> index 01f0c71..dbfbf14 100644
> --- a/drivers/target/target_core_alua.c
> +++ b/drivers/target/target_core_alua.c
> @@ -58,6 +58,75 @@ static LIST_HEAD(lu_gps_list);
>  struct t10_alua_lu_gp *default_lu_gp;
>  
>  /*
> + * REPORT REFERRALS
> + *
> + * See sbc3r35 section 5.23
> + */
> +sense_reason_t
> +target_emulate_report_referrals(struct se_cmd *cmd)
> +{
> +	struct se_device *dev = cmd->se_dev;
> +	struct t10_alua_lba_map *map;
> +	struct t10_alua_lba_map_member *map_mem;
> +	unsigned char *buf;
> +	u32 rd_len = 0, off;
> +
> +	if (cmd->data_length < 4) {
> +		pr_warn("REPORT REFERRALS allocation length %u too"
> +			" small\n", cmd->data_length);
> +		return TCM_INVALID_CDB_FIELD;
> +	}
> +
> +	buf = transport_kmap_data_sg(cmd);
> +	if (!buf)
> +		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> +
> +	off = 4;
> +	spin_lock(&dev->t10_alua.lba_map_lock);
> +	if (list_empty(&dev->t10_alua.lba_map_list)) {
> +		spin_unlock(&dev->t10_alua.lba_map_lock);
> +		transport_kunmap_data_sg(cmd);
> +
> +		return TCM_UNSUPPORTED_SCSI_OPCODE;
> +	}
> +
> +	list_for_each_entry(map, &dev->t10_alua.lba_map_list,
> +			    lba_map_list) {
> +		int desc_num = off + 3;
> +		int pg_num;
> +
> +		off += 4;
> +		put_unaligned_be64(map->lba_map_first_lba, &buf[off]);
> +		off += 8;
> +		put_unaligned_be64(map->lba_map_last_lba, &buf[off]);
> +		off += 8;
> +		rd_len += 20;
> +		pg_num = 0;
> +		list_for_each_entry(map_mem, &map->lba_map_mem_list,
> +				    lba_map_mem_list) {
> +			buf[off++] = map_mem->lba_map_mem_alua_state & 0x0f;
> +			off++;
> +			buf[off++] = (map_mem->lba_map_mem_alua_pg_id >> 8) & 0xff;
> +			buf[off++] = (map_mem->lba_map_mem_alua_pg_id & 0xff);
> +			rd_len += 4;
> +			pg_num++;
> +		}
> +		buf[desc_num] = pg_num;
> +	}
> +	spin_unlock(&dev->t10_alua.lba_map_lock);
> +

The above loop needs a check based on cmd->data_length to not overflow
buf here..

Care to send an incremental patch for this..?

--nab

> +	/*
> +	 * Set the RETURN DATA LENGTH set in the header of the DataIN Payload
> +	 */
> +	put_unaligned_be16(rd_len, &buf[2]);
> +
> +	transport_kunmap_data_sg(cmd);
> +
> +	target_complete_cmd(cmd, GOOD);
> +	return 0;
> +}
> +
> +/*
>   * REPORT_TARGET_PORT_GROUPS
>   *
>   * See spc4r17 section 6.27
> @@ -391,6 +460,80 @@ static inline int core_alua_state_nonoptimized(
>  	return 0;
>  }
>  
> +static inline int core_alua_state_lba_dependent(
> +	struct se_cmd *cmd,
> +	struct t10_alua_tg_pt_gp *tg_pt_gp,
> +	u8 *alua_ascq)
> +{
> +	struct se_device *dev = cmd->se_dev;
> +	u32 segment_size, segment_mult, sectors;
> +	u64 lba;
> +
> +	/* Only need to check for cdb actually containing LBAs */
> +	if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB))
> +		return 0;
> +
> +	spin_lock(&dev->t10_alua.lba_map_lock);
> +	segment_size = dev->t10_alua.lba_map_segment_size;
> +	segment_mult = dev->t10_alua.lba_map_segment_multiplier;
> +	sectors = cmd->data_length / dev->dev_attrib.block_size;
> +
> +	lba = cmd->t_task_lba;
> +	while (lba < cmd->t_task_lba + sectors) {
> +		struct t10_alua_lba_map *cur_map = NULL, *map;
> +		struct t10_alua_lba_map_member *map_mem;
> +
> +		list_for_each_entry(map, &dev->t10_alua.lba_map_list,
> +				    lba_map_list) {
> +			u64 start_lba, last_lba;
> +			u64 first_lba = map->lba_map_first_lba;
> +
> +			if (segment_mult) {
> +				start_lba = lba % (segment_size * segment_mult);
> +				last_lba = first_lba + segment_size - 1;
> +				if (start_lba >= first_lba &&
> +				    start_lba <= last_lba) {
> +					lba += segment_size;
> +					cur_map = map;
> +					break;
> +				}
> +			} else {
> +				last_lba = map->lba_map_last_lba;
> +				if (lba >= first_lba && lba <= last_lba) {
> +					lba = last_lba + 1;
> +					cur_map = map;
> +					break;
> +				}
> +			}
> +		}
> +		if (!cur_map) {
> +			spin_unlock(&dev->t10_alua.lba_map_lock);
> +			*alua_ascq = ASCQ_04H_ALUA_TG_PT_UNAVAILABLE;
> +			return 1;
> +		}
> +		list_for_each_entry(map_mem, &cur_map->lba_map_mem_list,
> +				    lba_map_mem_list) {
> +			if (map_mem->lba_map_mem_alua_pg_id !=
> +			    tg_pt_gp->tg_pt_gp_id)
> +				continue;
> +			switch (map_mem->lba_map_mem_alua_state) {
> +			case ALUA_ACCESS_STATE_STANDBY:
> +				spin_unlock(&dev->t10_alua.lba_map_lock);
> +				*alua_ascq = ASCQ_04H_ALUA_TG_PT_STANDBY;
> +				return 1;
> +			case ALUA_ACCESS_STATE_UNAVAILABLE:
> +				spin_unlock(&dev->t10_alua.lba_map_lock);
> +				*alua_ascq = ASCQ_04H_ALUA_TG_PT_UNAVAILABLE;
> +				return 1;
> +			default:
> +				break;
> +			}
> +		}
> +	}
> +	spin_unlock(&dev->t10_alua.lba_map_lock);
> +	return 0;
> +}
> +
>  static inline int core_alua_state_standby(
>  	struct se_cmd *cmd,
>  	unsigned char *cdb,
> @@ -588,6 +731,9 @@ target_alua_state_check(struct se_cmd *cmd)
>  	case ALUA_ACCESS_STATE_TRANSITION:
>  		ret = core_alua_state_transition(cmd, cdb, &alua_ascq);
>  		break;
> +	case ALUA_ACCESS_STATE_LBA_DEPENDENT:
> +		ret = core_alua_state_lba_dependent(cmd, tg_pt_gp, &alua_ascq);
> +		break;
>  	/*
>  	 * OFFLINE is a secondary ALUA target port group access state, that is
>  	 * handled above with struct se_port->sep_tg_pt_secondary_offline=1
> @@ -650,6 +796,11 @@ core_alua_check_transition(int state, int valid, int *primary)
>  			goto not_supported;
>  		*primary = 1;
>  		break;
> +	case ALUA_ACCESS_STATE_LBA_DEPENDENT:
> +		if (!(valid & ALUA_LBD_SUP))
> +			goto not_supported;
> +		*primary = 1;
> +		break;
>  	case ALUA_ACCESS_STATE_OFFLINE:
>  		/*
>  		 * OFFLINE state is defined as a secondary target port
> @@ -685,6 +836,8 @@ static char *core_alua_dump_state(int state)
>  		return "Active/Optimized";
>  	case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
>  		return "Active/NonOptimized";
> +	case ALUA_ACCESS_STATE_LBA_DEPENDENT:
> +		return "LBA Dependent";
>  	case ALUA_ACCESS_STATE_STANDBY:
>  		return "Standby";
>  	case ALUA_ACCESS_STATE_UNAVAILABLE:
> diff --git a/drivers/target/target_core_alua.h b/drivers/target/target_core_alua.h
> index 1a152cd..47950cd 100644
> --- a/drivers/target/target_core_alua.h
> +++ b/drivers/target/target_core_alua.h
> @@ -13,12 +13,13 @@
>  /*
>   * ASYMMETRIC ACCESS STATE field
>   *
> - * from spc4r17 section 6.27 Table 245
> + * from spc4r36j section 6.37 Table 307
>   */
>  #define ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED	0x0
>  #define ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED	0x1
>  #define ALUA_ACCESS_STATE_STANDBY		0x2
>  #define ALUA_ACCESS_STATE_UNAVAILABLE		0x3
> +#define ALUA_ACCESS_STATE_LBA_DEPENDENT		0x4
>  #define ALUA_ACCESS_STATE_OFFLINE		0xe
>  #define ALUA_ACCESS_STATE_TRANSITION		0xf
>  
> @@ -88,6 +89,7 @@ extern struct kmem_cache *t10_alua_tg_pt_gp_mem_cache;
>  
>  extern sense_reason_t target_emulate_report_target_port_groups(struct se_cmd *);
>  extern sense_reason_t target_emulate_set_target_port_groups(struct se_cmd *);
> +extern sense_reason_t target_emulate_report_referrals(struct se_cmd *);
>  extern int core_alua_check_nonop_delay(struct se_cmd *);
>  extern int core_alua_do_port_transition(struct t10_alua_tg_pt_gp *,
>  				struct se_device *, struct se_port *,
> diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
> index 272755d..e74ef8c 100644
> --- a/drivers/target/target_core_configfs.c
> +++ b/drivers/target/target_core_configfs.c
> @@ -2054,6 +2054,13 @@ static ssize_t target_core_alua_tg_pt_gp_store_attr_alua_access_state(
>  			" transition while TPGS_IMPLICIT_ALUA is disabled\n");
>  		return -EINVAL;
>  	}
> +	if (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICIT_ALUA &&
> +	    new_state == ALUA_ACCESS_STATE_LBA_DEPENDENT) {
> +		/* LBA DEPENDENT is only allowed with implicit ALUA */
> +		pr_err("Unable to process implicit configfs ALUA transition"
> +		       " while explicit ALUA management is enabled\n");
> +		return -EINVAL;
> +	}
>  
>  	ret = core_alua_do_port_transition(tg_pt_gp, dev,
>  					NULL, NULL, new_state, 0);
> @@ -2188,7 +2195,7 @@ SE_DEV_ALUA_SUPPORT_STATE_SHOW(lba_dependent,
>  			       tg_pt_gp_alua_supported_states, ALUA_LBD_SUP);
>  SE_DEV_ALUA_SUPPORT_STATE_STORE(lba_dependent,
>  				tg_pt_gp_alua_supported_states, ALUA_LBD_SUP);
> -SE_DEV_ALUA_TG_PT_ATTR(alua_support_lba_dependent, S_IRUGO | S_IWUSR);
> +SE_DEV_ALUA_TG_PT_ATTR(alua_support_lba_dependent, S_IRUGO);
>  
>  SE_DEV_ALUA_SUPPORT_STATE_SHOW(unavailable,
>  			       tg_pt_gp_alua_supported_states, ALUA_U_SUP);
> diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
> index 207b340..3c08f99 100644
> --- a/drivers/target/target_core_device.c
> +++ b/drivers/target/target_core_device.c
> @@ -1439,6 +1439,8 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
>  	spin_lock_init(&dev->t10_pr.aptpl_reg_lock);
>  	INIT_LIST_HEAD(&dev->t10_alua.tg_pt_gps_list);
>  	spin_lock_init(&dev->t10_alua.tg_pt_gps_lock);
> +	INIT_LIST_HEAD(&dev->t10_alua.lba_map_list);
> +	spin_lock_init(&dev->t10_alua.lba_map_lock);
>  
>  	dev->t10_wwn.t10_dev = dev;
>  	dev->t10_alua.t10_dev = dev;
> diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
> index 52ae54e..6863dbe 100644
> --- a/drivers/target/target_core_sbc.c
> +++ b/drivers/target/target_core_sbc.c
> @@ -33,7 +33,7 @@
>  
>  #include "target_core_internal.h"
>  #include "target_core_ua.h"
> -
> +#include "target_core_alua.h"
>  
>  static sense_reason_t
>  sbc_emulate_readcapacity(struct se_cmd *cmd)
> @@ -731,6 +731,9 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
>  		case SAI_READ_CAPACITY_16:
>  			cmd->execute_cmd = sbc_emulate_readcapacity_16;
>  			break;
> +		case SAI_REPORT_REFERRALS:
> +			cmd->execute_cmd = target_emulate_report_referrals;
> +			break;
>  		default:
>  			pr_err("Unsupported SA: 0x%02x\n",
>  				cmd->t_task_cdb[1] & 0x1f);
> diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
> index 39054d9..f9889fd 100644
> --- a/drivers/target/target_core_spc.c
> +++ b/drivers/target/target_core_spc.c
> @@ -476,6 +476,11 @@ spc_emulate_evpd_86(struct se_cmd *cmd, unsigned char *buf)
>  	/* If WriteCache emulation is enabled, set V_SUP */
>  	if (spc_check_dev_wce(dev))
>  		buf[6] = 0x01;
> +	/* If an LBA map is present set R_SUP */
> +	spin_lock(&cmd->se_dev->t10_alua.lba_map_lock);
> +	if (!list_empty(&dev->t10_alua.lba_map_list))
> +		buf[8] = 0x10;
> +	spin_unlock(&cmd->se_dev->t10_alua.lba_map_lock);
>  	return 0;
>  }
>  
> @@ -634,6 +639,20 @@ spc_emulate_evpd_b2(struct se_cmd *cmd, unsigned char *buf)
>  	return 0;
>  }
>  
> +/* Referrals VPD page */
> +static sense_reason_t
> +spc_emulate_evpd_b3(struct se_cmd *cmd, unsigned char *buf)
> +{
> +	struct se_device *dev = cmd->se_dev;
> +
> +	buf[0] = dev->transport->get_device_type(dev);
> +	buf[3] = 0x0c;
> +	put_unaligned_be32(dev->t10_alua.lba_map_segment_size, &buf[8]);
> +	put_unaligned_be32(dev->t10_alua.lba_map_segment_multiplier, &buf[12]);
> +
> +	return 0;
> +}
> +
>  static sense_reason_t
>  spc_emulate_evpd_00(struct se_cmd *cmd, unsigned char *buf);
>  
> @@ -648,6 +667,7 @@ static struct {
>  	{ .page = 0xb0, .emulate = spc_emulate_evpd_b0 },
>  	{ .page = 0xb1, .emulate = spc_emulate_evpd_b1 },
>  	{ .page = 0xb2, .emulate = spc_emulate_evpd_b2 },
> +	{ .page = 0xb3, .emulate = spc_emulate_evpd_b3 },
>  };
>  
>  /* supported vital product data pages */
> diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
> index 66d42ed..0a4edfe 100644
> --- a/include/scsi/scsi.h
> +++ b/include/scsi/scsi.h
> @@ -155,6 +155,7 @@ enum scsi_timeouts {
>  /* values for service action in */
>  #define	SAI_READ_CAPACITY_16  0x10
>  #define SAI_GET_LBA_STATUS    0x12
> +#define SAI_REPORT_REFERRALS  0x13
>  /* values for VARIABLE_LENGTH_CMD service action codes
>   * see spc4r17 Section D.3.5, table D.7 and D.8 */
>  #define VLC_SA_RECEIVE_CREDENTIAL 0x1800
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index 65390f6..766421b 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -247,10 +247,28 @@ typedef enum {
>  
>  struct se_cmd;
>  
> +struct t10_alua_lba_map_member {
> +	struct list_head lba_map_mem_list;
> +	int lba_map_mem_alua_state;
> +	int lba_map_mem_alua_pg_id;
> +};
> +
> +struct t10_alua_lba_map {
> +	u64 lba_map_first_lba;
> +	u64 lba_map_last_lba;
> +	struct list_head lba_map_list;
> +	struct list_head lba_map_mem_list;
> +};
> +
>  struct t10_alua {
>  	/* ALUA Target Port Group ID */
>  	u16	alua_tg_pt_gps_counter;
>  	u32	alua_tg_pt_gps_count;
> +	/* Referrals support */
> +	spinlock_t lba_map_lock;
> +	u32     lba_map_segment_size;
> +	u32     lba_map_segment_multiplier;
> +	struct list_head lba_map_list;
>  	spinlock_t tg_pt_gps_lock;
>  	struct se_device *t10_dev;
>  	/* Used for default ALUA Target Port Group */




* Re: [PATCH 8/8] target_core_alua: Referrals configfs integration
  2013-12-17  8:18 ` [PATCH 8/8] target_core_alua: Referrals configfs integration Hannes Reinecke
@ 2013-12-17 20:49   ` Nicholas A. Bellinger
  2013-12-18  8:15     ` Hannes Reinecke
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-17 20:49 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> Referrals need an LBA map, which needs to be kept
> consistent across all target port groups. So
> instead of tying the map to the target port groups
> I've implemented a single attribute containing the
> entire map.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/target/target_core_alua.c      | 101 +++++++++++++++++++
>  drivers/target/target_core_alua.h      |   8 ++
>  drivers/target/target_core_configfs.c  | 171 +++++++++++++++++++++++++++++++++
>  drivers/target/target_core_device.c    |   1 +
>  drivers/target/target_core_transport.c |  28 +++++-
>  5 files changed, 308 insertions(+), 1 deletion(-)
> 

Applied, with one comment below..

<SNIP>

> diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
> index e74ef8c..1dbc1bc 100644
> --- a/drivers/target/target_core_configfs.c
> +++ b/drivers/target/target_core_configfs.c
> @@ -1741,6 +1741,176 @@ static struct target_core_configfs_attribute target_core_attr_dev_alua_lu_gp = {
>  	.store	= target_core_store_alua_lu_gp,
>  };
>  
> +static ssize_t target_core_show_dev_lba_map(void *p, char *page)
> +{
> +	struct se_device *dev = p;
> +	struct t10_alua_lba_map *map;
> +	struct t10_alua_lba_map_member *mem;
> +	char *b = page;
> +	int bl = 0;
> +	char state;
> +
> +	spin_lock(&dev->t10_alua.lba_map_lock);
> +	if (!list_empty(&dev->t10_alua.lba_map_list))
> +	    bl += sprintf(b + bl, "%u %u\n",
> +			  dev->t10_alua.lba_map_segment_size,
> +			  dev->t10_alua.lba_map_segment_multiplier);
> +	list_for_each_entry(map, &dev->t10_alua.lba_map_list, lba_map_list) {
> +		bl += sprintf(b + bl, "%llu %llu",
> +			      map->lba_map_first_lba, map->lba_map_last_lba);
> +		list_for_each_entry(mem, &map->lba_map_mem_list,
> +				    lba_map_mem_list) {
> +			switch (mem->lba_map_mem_alua_state) {
> +			case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
> +				state = 'O';
> +				break;
> +			case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
> +				state = 'A';
> +				break;
> +			case ALUA_ACCESS_STATE_STANDBY:
> +				state = 'S';
> +				break;
> +			case ALUA_ACCESS_STATE_UNAVAILABLE:
> +				state = 'U';
> +				break;
> +			default:
> +				state = '.';
> +				break;
> +			}
> +			bl += sprintf(b + bl, " %d:%c",
> +				      mem->lba_map_mem_alua_pg_id, state);
> +		}
> +		bl += sprintf(b + bl, "\n");
> +	}
> +	spin_unlock(&dev->t10_alua.lba_map_lock);

The above loop can possibly overflow the passed *page..

Care to send an incremental patch for this as well..?

Btw, I assume this is not the only method for extracting the LBA map
from a running system, right..?

--nab

> +	return bl;
> +}
> +
> +static ssize_t target_core_store_dev_lba_map(
> +	void *p,
> +	const char *page,
> +	size_t count)
> +{
> +	struct se_device *dev = p;
> +	struct t10_alua_lba_map *lba_map = NULL;
> +	struct list_head lba_list;
> +	char *map_entries, *ptr;
> +	char state;
> +	int pg_num = -1, pg;
> +	int ret = 0, num = 0, pg_id, alua_state;
> +	unsigned long start_lba = -1, end_lba = -1;
> +	unsigned long segment_size = -1, segment_mult = -1;
> +
> +	map_entries = kstrdup(page, GFP_KERNEL);
> +	if (!map_entries)
> +		return -ENOMEM;
> +
> +	INIT_LIST_HEAD(&lba_list);
> +	while ((ptr = strsep(&map_entries, "\n")) != NULL) {
> +		if (!*ptr)
> +			continue;
> +
> +		if (num == 0) {
> +			if (sscanf(ptr, "%lu %lu\n",
> +				   &segment_size, &segment_mult) != 2) {
> +				pr_err("Invalid line %d\n", num);
> +				ret = -EINVAL;
> +				break;
> +			}
> +			num++;
> +			continue;
> +		}
> +		if (sscanf(ptr, "%lu %lu", &start_lba, &end_lba) != 2) {
> +			pr_err("Invalid line %d\n", num);
> +			ret = -EINVAL;
> +			break;
> +		}
> +		ptr = strchr(ptr, ' ');
> +		if (!ptr) {
> +			pr_err("Invalid line %d, missing end lba\n", num);
> +			ret = -EINVAL;
> +			break;
> +		}
> +		ptr++;
> +		ptr = strchr(ptr, ' ');
> +		if (!ptr) {
> +			pr_err("Invalid line %d, missing state definitions\n",
> +			       num);
> +			ret = -EINVAL;
> +			break;
> +		}
> +		ptr++;
> +		lba_map = core_alua_allocate_lba_map(&lba_list,
> +						     start_lba, end_lba);
> +		if (IS_ERR(lba_map)) {
> +			ret = PTR_ERR(lba_map);
> +			break;
> +		}
> +		pg = 0;
> +		while (sscanf(ptr, "%d:%c", &pg_id, &state) == 2) {
> +			switch (state) {
> +			case 'O':
> +				alua_state = ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED;
> +				break;
> +			case 'A':
> +				alua_state = ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED;
> +				break;
> +			case 'S':
> +				alua_state = ALUA_ACCESS_STATE_STANDBY;
> +				break;
> +			case 'U':
> +				alua_state = ALUA_ACCESS_STATE_UNAVAILABLE;
> +				break;
> +			default:
> +				pr_err("Invalid ALUA state '%c'\n", state);
> +				ret = -EINVAL;
> +				goto out;
> +			}
> +
> +			ret = core_alua_allocate_lba_map_mem(lba_map,
> +							     pg_id, alua_state);
> +			if (ret) {
> +				pr_err("Invalid target descriptor %d:%c "
> +				       "at line %d\n",
> +				       pg_id, state, num);
> +				break;
> +			}
> +			pg++;
> +			ptr = strchr(ptr, ' ');
> +			if (ptr)
> +				ptr++;
> +			else
> +				break;
> +		}
> +		if (pg_num == -1)
> +		    pg_num = pg;
> +		else if (pg != pg_num) {
> +			pr_err("Only %d from %d port groups definitions "
> +			       "at line %d\n", pg, pg_num, num);
> +			ret = -EINVAL;
> +			break;
> +		}
> +		num++;
> +	}
> +out:
> +	if (ret) {
> +		core_alua_free_lba_map(&lba_list);
> +		count = ret;
> +	} else
> +		core_alua_set_lba_map(dev, &lba_list,
> +				      segment_size, segment_mult);
> +	kfree(map_entries);
> +	return count;
> +}
> +
> +static struct target_core_configfs_attribute target_core_attr_dev_lba_map = {
> +	.attr	= { .ca_owner = THIS_MODULE,
> +		    .ca_name = "lba_map",
> +		    .ca_mode = S_IRUGO | S_IWUSR },
> +	.show	= target_core_show_dev_lba_map,
> +	.store	= target_core_store_dev_lba_map,
> +};
> +
>  static struct configfs_attribute *lio_core_dev_attrs[] = {
>  	&target_core_attr_dev_info.attr,
>  	&target_core_attr_dev_control.attr,
> @@ -1748,6 +1918,7 @@ static struct configfs_attribute *lio_core_dev_attrs[] = {
>  	&target_core_attr_dev_udev_path.attr,
>  	&target_core_attr_dev_enable.attr,
>  	&target_core_attr_dev_alua_lu_gp.attr,
> +	&target_core_attr_dev_lba_map.attr,
>  	NULL,
>  };
>  
> diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
> index 3c08f99..376a4d3 100644
> --- a/drivers/target/target_core_device.c
> +++ b/drivers/target/target_core_device.c
> @@ -1585,6 +1585,7 @@ void target_free_device(struct se_device *dev)
>  	}
>  
>  	core_alua_free_lu_gp_mem(dev);
> +	core_alua_set_lba_map(dev, NULL, 0, 0);
>  	core_scsi3_free_all_registrations(dev);
>  	se_release_vpd_for_dev(dev);
>  
> diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
> index 91953da..18c828d 100644
> --- a/drivers/target/target_core_transport.c
> +++ b/drivers/target/target_core_transport.c
> @@ -62,6 +62,8 @@ struct kmem_cache *t10_alua_lu_gp_cache;
>  struct kmem_cache *t10_alua_lu_gp_mem_cache;
>  struct kmem_cache *t10_alua_tg_pt_gp_cache;
>  struct kmem_cache *t10_alua_tg_pt_gp_mem_cache;
> +struct kmem_cache *t10_alua_lba_map_cache;
> +struct kmem_cache *t10_alua_lba_map_mem_cache;
>  
>  static void transport_complete_task_attr(struct se_cmd *cmd);
>  static void transport_handle_queue_full(struct se_cmd *cmd,
> @@ -128,14 +130,36 @@ int init_se_kmem_caches(void)
>  				"mem_t failed\n");
>  		goto out_free_tg_pt_gp_cache;
>  	}
> +	t10_alua_lba_map_cache = kmem_cache_create(
> +			"t10_alua_lba_map_cache",
> +			sizeof(struct t10_alua_lba_map),
> +			__alignof__(struct t10_alua_lba_map), 0, NULL);
> +	if (!t10_alua_lba_map_cache) {
> +		pr_err("kmem_cache_create() for t10_alua_lba_map_"
> +				"cache failed\n");
> +		goto out_free_tg_pt_gp_mem_cache;
> +	}
> +	t10_alua_lba_map_mem_cache = kmem_cache_create(
> +			"t10_alua_lba_map_mem_cache",
> +			sizeof(struct t10_alua_lba_map_member),
> +			__alignof__(struct t10_alua_lba_map_member), 0, NULL);
> +	if (!t10_alua_lba_map_mem_cache) {
> +		pr_err("kmem_cache_create() for t10_alua_lba_map_mem_"
> +				"cache failed\n");
> +		goto out_free_lba_map_cache;
> +	}
>  
>  	target_completion_wq = alloc_workqueue("target_completion",
>  					       WQ_MEM_RECLAIM, 0);
>  	if (!target_completion_wq)
> -		goto out_free_tg_pt_gp_mem_cache;
> +		goto out_free_lba_map_mem_cache;
>  
>  	return 0;
>  
> +out_free_lba_map_mem_cache:
> +	kmem_cache_destroy(t10_alua_lba_map_mem_cache);
> +out_free_lba_map_cache:
> +	kmem_cache_destroy(t10_alua_lba_map_cache);
>  out_free_tg_pt_gp_mem_cache:
>  	kmem_cache_destroy(t10_alua_tg_pt_gp_mem_cache);
>  out_free_tg_pt_gp_cache:
> @@ -164,6 +188,8 @@ void release_se_kmem_caches(void)
>  	kmem_cache_destroy(t10_alua_lu_gp_mem_cache);
>  	kmem_cache_destroy(t10_alua_tg_pt_gp_cache);
>  	kmem_cache_destroy(t10_alua_tg_pt_gp_mem_cache);
> +	kmem_cache_destroy(t10_alua_lba_map_cache);
> +	kmem_cache_destroy(t10_alua_lba_map_mem_cache);
>  }
>  
>  /* This code ensures unique mib indexes are handed out. */


* Re: [PATCH 7/8] target_core_alua: Referrals infrastructure
  2013-12-17 20:06   ` Nicholas A. Bellinger
@ 2013-12-18  8:09     ` Hannes Reinecke
  0 siblings, 0 replies; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-18  8:09 UTC (permalink / raw)
  To: Nicholas A. Bellinger; +Cc: Nic Bellinger, target-devel, linux-scsi

On 12/17/2013 09:06 PM, Nicholas A. Bellinger wrote:
> On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
>> Add infrastructure for referrals.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>> ---
>>  drivers/target/target_core_alua.c     | 153 ++++++++++++++++++++++++++++++++++
>>  drivers/target/target_core_alua.h     |   4 +-
>>  drivers/target/target_core_configfs.c |   9 +-
>>  drivers/target/target_core_device.c   |   2 +
>>  drivers/target/target_core_sbc.c      |   5 +-
>>  drivers/target/target_core_spc.c      |  20 +++++
>>  include/scsi/scsi.h                   |   1 +
>>  include/target/target_core_base.h     |  18 ++++
>>  8 files changed, 209 insertions(+), 3 deletions(-)
>>
> 
> Applied, with one comment below..
> 
>> diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
>> index 01f0c71..dbfbf14 100644
>> --- a/drivers/target/target_core_alua.c
>> +++ b/drivers/target/target_core_alua.c
>> @@ -58,6 +58,75 @@ static LIST_HEAD(lu_gps_list);
>>  struct t10_alua_lu_gp *default_lu_gp;
>>  
>>  /*
>> + * REPORT REFERRALS
>> + *
>> + * See sbc3r35 section 5.23
>> + */
>> +sense_reason_t
>> +target_emulate_report_referrals(struct se_cmd *cmd)
>> +{
>> +	struct se_device *dev = cmd->se_dev;
>> +	struct t10_alua_lba_map *map;
>> +	struct t10_alua_lba_map_member *map_mem;
>> +	unsigned char *buf;
>> +	u32 rd_len = 0, off;
>> +
>> +	if (cmd->data_length < 4) {
>> +		pr_warn("REPORT REFERRALS allocation length %u too"
>> +			" small\n", cmd->data_length);
>> +		return TCM_INVALID_CDB_FIELD;
>> +	}
>> +
>> +	buf = transport_kmap_data_sg(cmd);
>> +	if (!buf)
>> +		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>> +
>> +	off = 4;
>> +	spin_lock(&dev->t10_alua.lba_map_lock);
>> +	if (list_empty(&dev->t10_alua.lba_map_list)) {
>> +		spin_unlock(&dev->t10_alua.lba_map_lock);
>> +		transport_kunmap_data_sg(cmd);
>> +
>> +		return TCM_UNSUPPORTED_SCSI_OPCODE;
>> +	}
>> +
>> +	list_for_each_entry(map, &dev->t10_alua.lba_map_list,
>> +			    lba_map_list) {
>> +		int desc_num = off + 3;
>> +		int pg_num;
>> +
>> +		off += 4;
>> +		put_unaligned_be64(map->lba_map_first_lba, &buf[off]);
>> +		off += 8;
>> +		put_unaligned_be64(map->lba_map_last_lba, &buf[off]);
>> +		off += 8;
>> +		rd_len += 20;
>> +		pg_num = 0;
>> +		list_for_each_entry(map_mem, &map->lba_map_mem_list,
>> +				    lba_map_mem_list) {
>> +			buf[off++] = map_mem->lba_map_mem_alua_state & 0x0f;
>> +			off++;
>> +			buf[off++] = (map_mem->lba_map_mem_alua_pg_id >> 8) & 0xff;
>> +			buf[off++] = (map_mem->lba_map_mem_alua_pg_id & 0xff);
>> +			rd_len += 4;
>> +			pg_num++;
>> +		}
>> +		buf[desc_num] = pg_num;
>> +	}
>> +	spin_unlock(&dev->t10_alua.lba_map_lock);
>> +
> 
> The above loop needs a check based on cmd->data_length to not overflow
> buf here..
> 
> Care to send an incremental patch for this..?
> 
On its way. Alongside a patch for the inquiry buffer size.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH 8/8] target_core_alua: Referrals configfs integration
  2013-12-17 20:49   ` Nicholas A. Bellinger
@ 2013-12-18  8:15     ` Hannes Reinecke
  2013-12-19  6:25       ` Nicholas A. Bellinger
  0 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-18  8:15 UTC (permalink / raw)
  To: Nicholas A. Bellinger; +Cc: Nic Bellinger, target-devel, linux-scsi

On 12/17/2013 09:49 PM, Nicholas A. Bellinger wrote:
> On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
>> Referrals need an LBA map, which needs to be kept
>> consistent across all target port groups. So
>> instead of tying the map to the target port groups
>> I've implemented a single attribute containing the
>> entire map.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>> ---
>>  drivers/target/target_core_alua.c      | 101 +++++++++++++++++++
>>  drivers/target/target_core_alua.h      |   8 ++
>>  drivers/target/target_core_configfs.c  | 171 +++++++++++++++++++++++++++++++++
>>  drivers/target/target_core_device.c    |   1 +
>>  drivers/target/target_core_transport.c |  28 +++++-
>>  5 files changed, 308 insertions(+), 1 deletion(-)
>>
> 
> Applied, with one comment below..
> 
> <SNIP>
> 
>> diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
>> index e74ef8c..1dbc1bc 100644
>> --- a/drivers/target/target_core_configfs.c
>> +++ b/drivers/target/target_core_configfs.c
>> @@ -1741,6 +1741,176 @@ static struct target_core_configfs_attribute target_core_attr_dev_alua_lu_gp = {
>>  	.store	= target_core_store_alua_lu_gp,
>>  };
>>  
>> +static ssize_t target_core_show_dev_lba_map(void *p, char *page)
>> +{
>> +	struct se_device *dev = p;
>> +	struct t10_alua_lba_map *map;
>> +	struct t10_alua_lba_map_member *mem;
>> +	char *b = page;
>> +	int bl = 0;
>> +	char state;
>> +
>> +	spin_lock(&dev->t10_alua.lba_map_lock);
>> +	if (!list_empty(&dev->t10_alua.lba_map_list))
>> +	    bl += sprintf(b + bl, "%u %u\n",
>> +			  dev->t10_alua.lba_map_segment_size,
>> +			  dev->t10_alua.lba_map_segment_multiplier);
>> +	list_for_each_entry(map, &dev->t10_alua.lba_map_list, lba_map_list) {
>> +		bl += sprintf(b + bl, "%llu %llu",
>> +			      map->lba_map_first_lba, map->lba_map_last_lba);
>> +		list_for_each_entry(mem, &map->lba_map_mem_list,
>> +				    lba_map_mem_list) {
>> +			switch (mem->lba_map_mem_alua_state) {
>> +			case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
>> +				state = 'O';
>> +				break;
>> +			case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
>> +				state = 'A';
>> +				break;
>> +			case ALUA_ACCESS_STATE_STANDBY:
>> +				state = 'S';
>> +				break;
>> +			case ALUA_ACCESS_STATE_UNAVAILABLE:
>> +				state = 'U';
>> +				break;
>> +			default:
>> +				state = '.';
>> +				break;
>> +			}
>> +			bl += sprintf(b + bl, " %d:%c",
>> +				      mem->lba_map_mem_alua_pg_id, state);
>> +		}
>> +		bl += sprintf(b + bl, "\n");
>> +	}
>> +	spin_unlock(&dev->t10_alua.lba_map_lock);
> 
> The above loop can possibly overflow the passed *page..
> 
No. This is the reverse operation to the constructor in
target_core_store_dev_lba_map(), which by definition can only
handle maps up to one page in size.

Hence the resulting (formatted) map as constructed by
target_core_show_dev_lba_map() will also fit in one page.

> Care to send an incremental patch for this as well..?
> 
> Btw, I assume this is not the only method for extracting the LBA map
> from a running system, right..?
> 
Of course you can run 'sg_referrals' here. But other than that it
is the only method.
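For reference, the text format produced by target_core_show_dev_lba_map() can be parsed back in userspace. The following is a minimal sketch (not kernel code; the helper name and the returned structure are illustrative), assuming the layout implied by the quoted sprintf() calls: a "segment_size multiplier" header line, then one "first_lba last_lba pg:state ..." line per segment, with states O/A/S/U (or '.'):

```python
# Illustrative userspace parser for the lba_map text format emitted by
# target_core_show_dev_lba_map(). Field layout is taken from the quoted
# sprintf() calls; everything else here is hypothetical.

STATE_NAMES = {
    'O': 'active/optimized',
    'A': 'active/non-optimized',
    'S': 'standby',
    'U': 'unavailable',
    '.': 'unknown',
}

def parse_lba_map(text):
    lines = [l for l in text.splitlines() if l.strip()]
    # First line: "<segment_size> <segment_multiplier>"
    seg_size, seg_mult = (int(x) for x in lines[0].split())
    segments = []
    for line in lines[1:]:
        fields = line.split()
        # "<first_lba> <last_lba>" followed by "<pg_id>:<state>" tokens
        first_lba, last_lba = int(fields[0]), int(fields[1])
        members = []
        for tok in fields[2:]:
            pg_id, state = tok.split(':')
            members.append((int(pg_id), STATE_NAMES[state]))
        segments.append((first_lba, last_lba, members))
    return seg_size, seg_mult, segments

size, mult, segs = parse_lba_map("64 4\n0 1023 1:O 2:A\n1024 2047 1:A 2:O\n")
```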

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* Re: [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83
  2013-12-17 20:01     ` Nicholas A. Bellinger
@ 2013-12-18  9:20       ` Hannes Reinecke
  2014-03-05 19:41         ` Andy Grover
  0 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-18  9:20 UTC (permalink / raw)
  To: Nicholas A. Bellinger; +Cc: Nic Bellinger, target-devel, linux-scsi

On 12/17/2013 09:01 PM, Nicholas A. Bellinger wrote:
> On Tue, 2013-12-17 at 11:50 -0800, Nicholas A. Bellinger wrote:
>> On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
>>> We should be including a descriptor referring to the target device
>>> to allow identification of different TCM instances.
>>>
>>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>>> ---
>>>  drivers/target/target_core_spc.c | 43 +++++++++++++++++++++++++++++++++++++++-
>>>  1 file changed, 42 insertions(+), 1 deletion(-)
>>>
>>
>> One issue with this patch.  The local buffer in spc_emulate_inquiry is
>> currently hardcoded to SE_INQUIRY_BUF=512, so two large scsi name
>> designators could overflow here..
>>
>> So for the largest case with EVPD=0x83, this would be:
>>
>> 4 bytes for header +
>> 20 bytes for NAA IEEE Registered Extended Assigned designator +
>> 56 bytes for T10 Vendor Identifier +
>> 8 bytes for Relative target port +
>> 8 bytes for Target port group +
>> 8 bytes for Logical unit group +
>> 256 bytes for SCSI name (target port) +
>> 256 bytes for SCSI name (target device) == 616 bytes.
>>
>> So for good measure, bumping up SE_INQUIRY_BUF to 1024.
>>
> 
> Mmmm, looking at this again, is reporting back two SCSI names in
> EVPD=0x83 with different associations (one for target port, and one for
> target device) really necessary..?
> 
> Doesn't the existing target port association report back the same
> information..?
> 
No.
'Target port' is the identification for the port handling the
request, which is contained within a target device.

The reason why we need this is that we want to identify the scope of
the Target port group number.

Target port group numbers are relative to the encompassing target
device, so when we have _several_ target devices they may
well present identical target port group numbers.

For explicit ALUA each target port group within a target device can
be thought of as a 'scheduling domain', i.e. if I send an STPG to one
of the devices in that domain there is a _high_ likelihood that
_every_ device within that scheduling domain will be affected.
So I can be slightly smarter here and just send one STPG and then
wait for the resulting states on all affected devices.

If I don't have this information I am required to send STPG to each
and every device, thereby flooding the target controller with STPGs
for the same target port group.

So yes, we should be furnishing both.
In addition it's the only sane way of identifying the array :-)
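The worst-case EVPD 0x83 sizing quoted earlier in this thread is simple arithmetic; a quick standalone tally of the per-designator lengths from the quoted message, checking them against the old and new SE_INQUIRY_BUF values:

```python
# Designator lengths for the largest EVPD=0x83 response, as listed
# in the quoted message above (all values in bytes).
vpd83_parts = {
    'page header': 4,
    'NAA IEEE Registered Extended': 20,
    'T10 Vendor Identifier': 56,
    'Relative target port': 8,
    'Target port group': 8,
    'Logical unit group': 8,
    'SCSI name (target port)': 256,
    'SCSI name (target device)': 256,
}
total = sum(vpd83_parts.values())
# 616 bytes: too large for the old SE_INQUIRY_BUF of 512,
# but comfortably within the new 1024-byte buffer.
```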

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* Re: [PATCH 8/8] target_core_alua: Referrals configfs integration
  2013-12-18  8:15     ` Hannes Reinecke
@ 2013-12-19  6:25       ` Nicholas A. Bellinger
  2013-12-19  7:04         ` Hannes Reinecke
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas A. Bellinger @ 2013-12-19  6:25 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nic Bellinger, target-devel, linux-scsi

On Wed, 2013-12-18 at 09:15 +0100, Hannes Reinecke wrote:
> On 12/17/2013 09:49 PM, Nicholas A. Bellinger wrote:
> > On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
> >> Referrals need an LBA map, which needs to be kept
> >> consistent across all target port groups. So
> >> instead of tying the map to the target port groups
> >> I've implemented a single attribute containing the
> >> entire map.
> >>
> >> Signed-off-by: Hannes Reinecke <hare@suse.de>
> >> ---
> >>  drivers/target/target_core_alua.c      | 101 +++++++++++++++++++
> >>  drivers/target/target_core_alua.h      |   8 ++
> >>  drivers/target/target_core_configfs.c  | 171 +++++++++++++++++++++++++++++++++
> >>  drivers/target/target_core_device.c    |   1 +
> >>  drivers/target/target_core_transport.c |  28 +++++-
> >>  5 files changed, 308 insertions(+), 1 deletion(-)
> >>
> > 
> > Applied, with one comment below..
> > 
> > <SNIP>
> > 
> >> diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
> >> index e74ef8c..1dbc1bc 100644
> >> --- a/drivers/target/target_core_configfs.c
> >> +++ b/drivers/target/target_core_configfs.c
> >> @@ -1741,6 +1741,176 @@ static struct target_core_configfs_attribute target_core_attr_dev_alua_lu_gp = {
> >>  	.store	= target_core_store_alua_lu_gp,
> >>  };
> >>  
> >> +static ssize_t target_core_show_dev_lba_map(void *p, char *page)
> >> <SNIP>
> >> +	spin_unlock(&dev->t10_alua.lba_map_lock);
> > 
> > The above loop can possibly overflow the passed *page..
> > 
> No. This is the reverse operation to the constructor in
> target_core_store_dev_lba_map().
> 
> Which (per definition) can only handle maps up to one page in size.
> 
> And hence the resulting (formatted) map as constructed by
> target_core_show_dev_lba_map() will also fit on one page.
> 

Sure, but AFAICT nothing prevents target_core_store_dev_lba_map() ->
core_alua_set_lba_map() from being called multiple times, and thus
potentially increasing target_core_show_dev_lba_map() output to larger
than PAGE_SIZE..

--nab



* Re: [PATCH 8/8] target_core_alua: Referrals configfs integration
  2013-12-19  6:25       ` Nicholas A. Bellinger
@ 2013-12-19  7:04         ` Hannes Reinecke
  0 siblings, 0 replies; 25+ messages in thread
From: Hannes Reinecke @ 2013-12-19  7:04 UTC (permalink / raw)
  To: Nicholas A. Bellinger; +Cc: Nic Bellinger, target-devel, linux-scsi

On 12/19/2013 07:25 AM, Nicholas A. Bellinger wrote:
> On Wed, 2013-12-18 at 09:15 +0100, Hannes Reinecke wrote:
>> On 12/17/2013 09:49 PM, Nicholas A. Bellinger wrote:
>>> On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
>>>> Referrals need an LBA map, which needs to be kept
>>>> consistent across all target port groups. So
>>>> instead of tying the map to the target port groups
>>>> I've implemented a single attribute containing the
>>>> entire map.
>>>>
>>>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>>>> ---
>>>>  drivers/target/target_core_alua.c      | 101 +++++++++++++++++++
>>>>  drivers/target/target_core_alua.h      |   8 ++
>>>>  drivers/target/target_core_configfs.c  | 171 +++++++++++++++++++++++++++++++++
>>>>  drivers/target/target_core_device.c    |   1 +
>>>>  drivers/target/target_core_transport.c |  28 +++++-
>>>>  5 files changed, 308 insertions(+), 1 deletion(-)
>>>>
>>>
>>> Applied, with one comment below..
>>>
>>> <SNIP>
>>>
>>>> diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
>>>> index e74ef8c..1dbc1bc 100644
>>>> --- a/drivers/target/target_core_configfs.c
>>>> +++ b/drivers/target/target_core_configfs.c
>>>> @@ -1741,6 +1741,176 @@ static struct target_core_configfs_attribute target_core_attr_dev_alua_lu_gp = {
>>>>  	.store	= target_core_store_alua_lu_gp,
>>>>  };
>>>>  
>>>> +static ssize_t target_core_show_dev_lba_map(void *p, char *page)
>>>> <SNIP>
>>>> +	spin_unlock(&dev->t10_alua.lba_map_lock);
>>>
>>> The above loop can possibly overflow the passed *page..
>>>
>> No. This is the reverse operation to the constructor in
>> target_core_store_dev_lba_map().
>>
>> Which (per definition) can only handle maps up to one page in size.
>>
>> And hence the resulting (formatted) map as constructed by
>> target_core_show_dev_lba_map() will also fit on one page.
>>
> 
> Sure, but AFAICT nothing prevents target_core_store_dev_lba_map() ->
> core_alua_set_lba_map() from being called multiple times, and thus
> potentially increasing target_core_show_dev_lba_map() output to larger
> than PAGE_SIZE..
> 
Multiple times?

Hmm. I had envisioned the 'store' function to store the entire map,
thereby invalidating / overwriting the old one.
Didn't I implement that logic?

Have to check ...
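The difference at stake (replace vs. append on each write) can be modeled in a few lines of userspace code. This is a sketch of the intended semantics only, not the actual configfs store handler; the class and its methods are hypothetical:

```python
PAGE_SIZE = 4096

class LbaMapAttr:
    """Toy model of the lba_map configfs attribute."""
    def __init__(self):
        self.map_text = ""

    def store(self, text):
        # Reject input larger than one page, mirroring the constraint
        # that a configfs store handler sees at most one page of input.
        if len(text) > PAGE_SIZE:
            raise ValueError("map larger than one page")
        # Replace the whole map instead of appending, so the show
        # output can never grow past what a single store accepted,
        # no matter how many times store is called.
        self.map_text = text

    def show(self):
        return self.map_text

attr = LbaMapAttr()
for _ in range(100):
    attr.store("64 4\n0 1023 1:O 2:A\n")
```

With replace semantics, a hundred stores leave the attribute no larger than the last write; with append semantics, the same loop would grow the show output far past PAGE_SIZE.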

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* Re: [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83
  2013-12-18  9:20       ` Hannes Reinecke
@ 2014-03-05 19:41         ` Andy Grover
  2014-03-05 19:45           ` Andy Grover
  0 siblings, 1 reply; 25+ messages in thread
From: Andy Grover @ 2014-03-05 19:41 UTC (permalink / raw)
  To: Hannes Reinecke, Nicholas A. Bellinger
  Cc: Nic Bellinger, target-devel, linux-scsi

On 12/18/2013 01:20 AM, Hannes Reinecke wrote:
> On 12/17/2013 09:01 PM, Nicholas A. Bellinger wrote:
>> On Tue, 2013-12-17 at 11:50 -0800, Nicholas A. Bellinger wrote:
>>> On Tue, 2013-12-17 at 09:18 +0100, Hannes Reinecke wrote:
>>>> We should be including a descriptor referring to the target device
>>>> to allow identification of different TCM instances.
>>>>
>>>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>>>> ---
>>>>   drivers/target/target_core_spc.c | 43 +++++++++++++++++++++++++++++++++++++++-
>>>>   1 file changed, 42 insertions(+), 1 deletion(-)
>>>>
>>>
>>> One issue with this patch.  The local buffer in spc_emulate_inquiry is
>>> currently hardcoded to SE_INQUIRY_BUF=512, so two large scsi name
>>> designators could overflow here..
>>>
>>> So for the largest case with EVPD=0x83, this would be:
>>>
>>> 4 bytes for header +
>>> 20 bytes for NAA IEEE Registered Extended Assigned designator +
>>> 56 bytes for T10 Vendor Identifier +
>>> 8 bytes for Relative target port +
>>> 8 bytes for Target port group +
>>> 8 bytes for Logical unit group +
>>> 256 bytes for SCSI name (target port) +
>>> 256 bytes for SCSI name (target device) == 616 bytes.
>>>
>>> So for good measure, bumping up SE_INQUIRY_BUF to 1024.
>>>
> <SNIP>

Hi, fbfe858 only bumps INQUIRY_BUF to 768 although the comment says
1024; is this expected and OK?

Regards -- Andy



* Re: [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83
  2014-03-05 19:41         ` Andy Grover
@ 2014-03-05 19:45           ` Andy Grover
  0 siblings, 0 replies; 25+ messages in thread
From: Andy Grover @ 2014-03-05 19:45 UTC (permalink / raw)
  To: Hannes Reinecke, Nicholas A. Bellinger
  Cc: Nic Bellinger, target-devel, linux-scsi

On 03/05/2014 11:41 AM, Andy Grover wrote:
> Hi, fbfe858 only bumps INQUIRY_BUF to 768 although the comment says
> 1024, is this expected and ok?

Doh, read the diff backwards, disregard. -- Andy



end of thread, other threads:[~2014-03-05 19:45 UTC | newest]

Thread overview: 25+ messages
2013-12-17  8:18 [PATCHv3 0/8] Referrals support Hannes Reinecke
2013-12-17  8:18 ` [PATCH 1/8] target_core_alua: validate ALUA state transition Hannes Reinecke
2013-12-17 19:32   ` Nicholas A. Bellinger
2013-12-17  8:18 ` [PATCH 2/8] target_core_alua: Allocate ALUA metadata on demand Hannes Reinecke
2013-12-17 19:32   ` Nicholas A. Bellinger
2013-12-17  8:18 ` [PATCH 3/8] target_core_alua: store old and pending ALUA state Hannes Reinecke
2013-12-17 19:32   ` Nicholas A. Bellinger
2013-12-17  8:18 ` [PATCH 4/8] target_core_alua: Use workqueue for ALUA transitioning Hannes Reinecke
2013-12-17 19:32   ` Nicholas A. Bellinger
2013-12-17  8:18 ` [PATCH 5/8] target_core: simplify scsi_name_len calculation Hannes Reinecke
2013-12-17 19:32   ` Nicholas A. Bellinger
2013-12-17  8:18 ` [PATCH 6/8] target_core_spc: Include target device descriptor in VPD page 83 Hannes Reinecke
2013-12-17 19:50   ` Nicholas A. Bellinger
2013-12-17 20:01     ` Nicholas A. Bellinger
2013-12-18  9:20       ` Hannes Reinecke
2014-03-05 19:41         ` Andy Grover
2014-03-05 19:45           ` Andy Grover
2013-12-17  8:18 ` [PATCH 7/8] target_core_alua: Referrals infrastructure Hannes Reinecke
2013-12-17 20:06   ` Nicholas A. Bellinger
2013-12-18  8:09     ` Hannes Reinecke
2013-12-17  8:18 ` [PATCH 8/8] target_core_alua: Referrals configfs integration Hannes Reinecke
2013-12-17 20:49   ` Nicholas A. Bellinger
2013-12-18  8:15     ` Hannes Reinecke
2013-12-19  6:25       ` Nicholas A. Bellinger
2013-12-19  7:04         ` Hannes Reinecke
