linux-scsi.vger.kernel.org archive mirror
* [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu
@ 2020-07-17 16:12 Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task() Bodo Stroesser
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

This patch series is made on top of Martin's for-next branch.

ChangeLog:

v2: in patch "scsi: target: tcmu: Implement tmr_notify callback"
    changed the style of the new comments from "// ..." to "/* ... */"
    and correctly used "/** " for function documentation.

---

The TCM/LIO core handles TMRs without involving the backends,
but the TMR completion message is sent to the initiator only
after the commands aborted by the TMR have been completed by
the backend. Especially with the tcmu backend, if userspace
executes long-running commands and the initiator therefore
sends ABORT_TASK on timeout, the ABORT itself can time out
while the core is still waiting for userspace/tcmu to complete
the command.

It would be very helpful for userspace to be notified about a
received TMR and about which commands were aborted by it.
Userspace can then decide whether to cancel command processing,
and it can send the command completion earlier than it could
without a TMR notification.
Notification about TMR events is also useful for userspace
tracing and for device emulation.

The first two patches of this series therefore add a new
optional backend callback for TMR notifications to the core.
The core calls it before it waits for the completion of the
aborted commands (parameters: se_dev, TMR type, and the list
of se_cmds aborted by this TMR).
Backends other than tcmu can of course also use this part of
the series to implement their own TMR notification if
necessary.

The remaining six patches implement the tmr_notify callback for
tcmu. The new configFS attribute tmr_notification allows
switching on TMR messages on the cmd ring. The default value of
the attribute keeps the old behavior without TMR info on the
ring, but with the following changes:
 - If tcmu receives an already aborted command, it immediately
   rejects it, so it will never appear in userspace.
 - If tcmu finds that, according to a TMR notification, a cmd on
   the qfull_queue was aborted, tcmu removes it from the
   qfull_queue and completes it immediately, so userspace will
   not 'see' those commands.

When the attribute tmr_notification is set to 1, tcmu
additionally prepares a list of the cmd_ids of those commands
that are aborted by the TMR and are still active on the cmd
ring (not timed out).
This list, together with the TMR type, is either immediately
written to the cmd ring (new TMR entry type) or, if ring space
is too small, queued on a separate tmr queue.
TMRs in the tmr queue do not time out. When ring space becomes
available, tcmu moves TMRs from the tmr queue to the ring with
higher priority than cmds from the qfull queue.

This mechanism makes sure that userspace receives TMR
notifications as early as possible. Userspace can use the list
of cmd_ids attached to the TMR notification to identify the
aborted commands among its received but not yet completed
commands. If userspace has meanwhile completed some of the
cmd_ids on the list, it can simply ignore those cmd_ids.
A possible new command that reuses the cmd_id of one of the
aborted commands will always appear on the ring after the TMR
notification.
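
As a rough illustration (not part of this series), a userspace
handler could consume the new ring entries as sketched below.
handle_aborted_cmd_id() is a hypothetical application helper;
the entry layout and the tcmu_hdr_get_op()/tcmu_hdr_get_len()
accessors come from the uapi header extended in patch 7:

  /* inside the application's ring processing loop */
  struct tcmu_cmd_entry_hdr *hdr = (void *)((char *)ring_base + ring_off);

  if (tcmu_hdr_get_op(hdr->len_op) == TCMU_OP_TMR) {
          struct tcmu_tmr_entry *tmr = (struct tcmu_tmr_entry *)hdr;
          __u32 i;

          for (i = 0; i < tmr->cmd_cnt; i++)
                  /* cmd_ids we already completed can simply be ignored */
                  handle_aborted_cmd_id(tmr->cmd_ids[i]);
  }
  /*
   * TCMU_OP_PAD and TCMU_OP_CMD are handled as before; in all
   * cases the loop advances by tcmu_hdr_get_len(hdr->len_op).
   */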

Bodo Stroesser (8):
  scsi: target: Modify core_tmr_abort_task()
  scsi: target: Add tmr_notify backend function
  scsi: target: tcmu: Use priv pointer in se_cmd
  scsi: target: tcmu: Do not queue aborted commands
  scsi: target: tcmu: Factor out new helper ring_insert_padding
  scsi: target: tcmu: Fix and simplify timeout handling
  scsi: target: tcmu: Implement tmr_notify callback
  scsi: target: tcmu: Make TMR notification optional

 drivers/target/target_core_tmr.c       |  36 +++-
 drivers/target/target_core_transport.c |   1 +
 drivers/target/target_core_user.c      | 374 +++++++++++++++++++++++++++------
 include/target/target_core_backend.h   |   2 +
 include/target/target_core_base.h      |   1 +
 include/uapi/linux/target_core_user.h  |  25 +++
 6 files changed, 367 insertions(+), 72 deletions(-)

-- 
2.12.3



* [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task()
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-26  0:03   ` Mike Christie
  2020-07-17 16:12 ` [PATCH v2 2/8] scsi: target: Add tmr_notify backend function Bodo Stroesser
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

This patch modifies core_tmr_abort_task() to use the same
looping and locking scheme as core_tmr_drain_state_list() does.

This frees up the state_list element in se_cmd for later use
by TMR notification handling.

Note: __target_check_io_state() is now called with parameter 0
instead of dev->dev_attrib.emulate_tas, because TAS is not
relevant here: we always receive the ABORT on the same session
as the aborted command.

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_tmr.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
index 89c84d472cd7..b65d7a0a5df1 100644
--- a/drivers/target/target_core_tmr.c
+++ b/drivers/target/target_core_tmr.c
@@ -116,14 +116,15 @@ void core_tmr_abort_task(
 	struct se_tmr_req *tmr,
 	struct se_session *se_sess)
 {
-	struct se_cmd *se_cmd;
+	struct se_cmd *se_cmd, *next;
 	unsigned long flags;
+	bool rc;
 	u64 ref_tag;
 
-	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
-	list_for_each_entry(se_cmd, &se_sess->sess_cmd_list, se_cmd_list) {
+	spin_lock_irqsave(&dev->execute_task_lock, flags);
+	list_for_each_entry_safe(se_cmd, next, &dev->state_list, state_list) {
 
-		if (dev != se_cmd->se_dev)
+		if (se_sess != se_cmd->se_sess)
 			continue;
 
 		/* skip task management functions, including tmr->task_cmd */
@@ -137,11 +138,16 @@ void core_tmr_abort_task(
 		printk("ABORT_TASK: Found referenced %s task_tag: %llu\n",
 			se_cmd->se_tfo->fabric_name, ref_tag);
 
-		if (!__target_check_io_state(se_cmd, se_sess,
-					     dev->dev_attrib.emulate_tas))
+		spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
+		rc = __target_check_io_state(se_cmd, se_sess, 0);
+		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+		if (!rc)
 			continue;
 
-		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+		list_del_init(&se_cmd->state_list);
+		se_cmd->state_active = false;
+
+		spin_unlock_irqrestore(&dev->execute_task_lock, flags);
 
 		/*
 		 * Ensure that this ABORT request is visible to the LU RESET
@@ -159,7 +165,7 @@ void core_tmr_abort_task(
 		atomic_long_inc(&dev->aborts_complete);
 		return;
 	}
-	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
 
 	printk("ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: %lld\n",
 			tmr->ref_task_tag);
-- 
2.12.3



* [PATCH v2 2/8] scsi: target: Add tmr_notify backend function
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task() Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 3/8] scsi: target: tcmu: Use priv pointer in se_cmd Bodo Stroesser
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

The target core is modified to call an optional backend
callback function when a TMR is received or when commands
are aborted implicitly after a PR command was received.
The backend function takes the se_dev, the type of the TMR,
and the list of aborted commands as parameters.
If no commands were aborted, an empty list is supplied.
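
For illustration only, another backend could hook into this
roughly as sketched below; the foo_ names are placeholders and
not part of this series, and the list is linked via
se_cmd->state_list and may be empty:

  static void foo_tmr_notify(struct se_device *se_dev,
                             enum tcm_tmreq_table tmf,
                             struct list_head *aborted_cmds)
  {
          struct se_cmd *se_cmd;

          list_for_each_entry(se_cmd, aborted_cmds, state_list)
                  pr_debug("TMF %d aborts cmd with tag %llu\n",
                           tmf, se_cmd->tag);
  }

  static struct target_backend_ops foo_ops = {
          /* ... */
          .tmr_notify     = foo_tmr_notify,
          /* ... */
  };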

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_tmr.c       | 16 +++++++++++++++-
 drivers/target/target_core_transport.c |  1 +
 include/target/target_core_backend.h   |  2 ++
 include/target/target_core_base.h      |  1 +
 4 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
index b65d7a0a5df1..39d93357db65 100644
--- a/drivers/target/target_core_tmr.c
+++ b/drivers/target/target_core_tmr.c
@@ -116,6 +116,7 @@ void core_tmr_abort_task(
 	struct se_tmr_req *tmr,
 	struct se_session *se_sess)
 {
+	LIST_HEAD(aborted_list);
 	struct se_cmd *se_cmd, *next;
 	unsigned long flags;
 	bool rc;
@@ -144,7 +145,7 @@ void core_tmr_abort_task(
 		if (!rc)
 			continue;
 
-		list_del_init(&se_cmd->state_list);
+		list_move_tail(&se_cmd->state_list, &aborted_list);
 		se_cmd->state_active = false;
 
 		spin_unlock_irqrestore(&dev->execute_task_lock, flags);
@@ -157,6 +158,11 @@ void core_tmr_abort_task(
 			WARN_ON_ONCE(transport_lookup_tmr_lun(tmr->task_cmd) <
 					0);
 
+		if (dev->transport->tmr_notify)
+			dev->transport->tmr_notify(dev, TMR_ABORT_TASK,
+						   &aborted_list);
+
+		list_del_init(&se_cmd->state_list);
 		target_put_cmd_and_wait(se_cmd);
 
 		printk("ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for"
@@ -167,6 +173,9 @@ void core_tmr_abort_task(
 	}
 	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
 
+	if (dev->transport->tmr_notify)
+		dev->transport->tmr_notify(dev, TMR_ABORT_TASK, &aborted_list);
+
 	printk("ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: %lld\n",
 			tmr->ref_task_tag);
 	tmr->response = TMR_TASK_DOES_NOT_EXIST;
@@ -318,6 +327,11 @@ static void core_tmr_drain_state_list(
 	}
 	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
 
+	if (dev->transport->tmr_notify)
+		dev->transport->tmr_notify(dev, preempt_and_abort_list ?
+					   TMR_LUN_RESET_PRO : TMR_LUN_RESET,
+					   &drain_task_list);
+
 	while (!list_empty(&drain_task_list)) {
 		cmd = list_entry(drain_task_list.next, struct se_cmd, state_list);
 		list_del_init(&cmd->state_list);
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index e6e1fa68de54..9fb0be0aa620 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -2946,6 +2946,7 @@ static const char *target_tmf_name(enum tcm_tmreq_table tmf)
 	case TMR_LUN_RESET:		return "LUN_RESET";
 	case TMR_TARGET_WARM_RESET:	return "TARGET_WARM_RESET";
 	case TMR_TARGET_COLD_RESET:	return "TARGET_COLD_RESET";
+	case TMR_LUN_RESET_PRO:		return "LUN_RESET_PRO";
 	case TMR_UNKNOWN:		break;
 	}
 	return "(?)";
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index f51452e3b984..6336780d83a7 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -40,6 +40,8 @@ struct target_backend_ops {
 	ssize_t (*show_configfs_dev_params)(struct se_device *, char *);
 
 	sense_reason_t (*parse_cdb)(struct se_cmd *cmd);
+	void (*tmr_notify)(struct se_device *se_dev, enum tcm_tmreq_table,
+			   struct list_head *aborted_cmds);
 	u32 (*get_device_type)(struct se_device *);
 	sector_t (*get_blocks)(struct se_device *);
 	sector_t (*get_alignment_offset_lbas)(struct se_device *);
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 18c3f277b770..549947d407cf 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -207,6 +207,7 @@ enum tcm_tmreq_table {
 	TMR_LUN_RESET		= 5,
 	TMR_TARGET_WARM_RESET	= 6,
 	TMR_TARGET_COLD_RESET	= 7,
+	TMR_LUN_RESET_PRO	= 0x80,
 	TMR_UNKNOWN		= 0xff,
 };
 
-- 
2.12.3



* [PATCH v2 3/8] scsi: target: tcmu: Use priv pointer in se_cmd
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task() Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 2/8] scsi: target: Add tmr_notify backend function Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 4/8] scsi: target: tcmu: Do not queue aborted commands Bodo Stroesser
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

We initialize the se_cmd's priv pointer to point to the
corresponding tcmu_cmd and clean it up again, both under
cmdr_lock.

The patch that implements the tmr_notify callback in tcmu
will use this priv pointer.

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_user.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 366878b0b2fd..b06b18d1b135 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -541,6 +541,8 @@ tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
 
 static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd)
 {
+	if (tcmu_cmd->se_cmd)
+		tcmu_cmd->se_cmd->priv = NULL;
 	kfree(tcmu_cmd->dbi);
 	kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
 }
@@ -1109,10 +1111,11 @@ tcmu_queue_cmd(struct se_cmd *se_cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
 	mutex_lock(&udev->cmdr_lock);
+	se_cmd->priv = tcmu_cmd;
 	ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
-	mutex_unlock(&udev->cmdr_lock);
 	if (ret < 0)
 		tcmu_free_cmd(tcmu_cmd);
+	mutex_unlock(&udev->cmdr_lock);
 	return scsi_ret;
 }
 
@@ -1179,7 +1182,6 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 		target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status);
 
 out:
-	cmd->se_cmd = NULL;
 	tcmu_cmd_free_data(cmd, cmd->dbi_cnt);
 	tcmu_free_cmd(cmd);
 }
@@ -1285,6 +1287,7 @@ static void tcmu_check_expired_ring_cmd(struct tcmu_cmd *cmd)
 	set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
 	list_del_init(&cmd->queue_entry);
 	se_cmd = cmd->se_cmd;
+	se_cmd->priv = NULL;
 	cmd->se_cmd = NULL;
 
 	pr_debug("Timing out inflight cmd %u on dev %s.\n",
-- 
2.12.3



* [PATCH v2 4/8] scsi: target: tcmu: Do not queue aborted commands
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
                   ` (2 preceding siblings ...)
  2020-07-17 16:12 ` [PATCH v2 3/8] scsi: target: tcmu: Use priv pointer in se_cmd Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 5/8] scsi: target: tcmu: Factor out new helper ring_insert_padding Bodo Stroesser
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

If tcmu receives an already aborted command, tcmu_queue_cmd()
should reject it.

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_user.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index b06b18d1b135..25c480fde9ee 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1103,8 +1103,8 @@ tcmu_queue_cmd(struct se_cmd *se_cmd)
 	struct se_device *se_dev = se_cmd->se_dev;
 	struct tcmu_dev *udev = TCMU_DEV(se_dev);
 	struct tcmu_cmd *tcmu_cmd;
-	sense_reason_t scsi_ret;
-	int ret;
+	sense_reason_t scsi_ret = TCM_CHECK_CONDITION_ABORT_CMD;
+	int ret = -1;
 
 	tcmu_cmd = tcmu_alloc_cmd(se_cmd);
 	if (!tcmu_cmd)
@@ -1112,7 +1112,8 @@ tcmu_queue_cmd(struct se_cmd *se_cmd)
 
 	mutex_lock(&udev->cmdr_lock);
 	se_cmd->priv = tcmu_cmd;
-	ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
+	if (!(se_cmd->transport_state & CMD_T_ABORTED))
+		ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
 	if (ret < 0)
 		tcmu_free_cmd(tcmu_cmd);
 	mutex_unlock(&udev->cmdr_lock);
-- 
2.12.3



* [PATCH v2 5/8] scsi: target: tcmu: Factor out new helper ring_insert_padding
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
                   ` (3 preceding siblings ...)
  2020-07-17 16:12 ` [PATCH v2 4/8] scsi: target: tcmu: Do not queue aborted commands Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 6/8] scsi: target: tcmu: Fix and simplify timeout handling Bodo Stroesser
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

The new helper ring_insert_padding is split off from, and then
called by, queue_cmd_ring. It inserts padding if necessary.
A later patch will use the new helper when writing TMR
notifications to the command ring.

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_user.c | 51 +++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 25c480fde9ee..7a27e838a7d3 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -926,6 +926,34 @@ static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
 	return 0;
 }
 
+static uint32_t ring_insert_padding(struct tcmu_dev *dev, size_t cmd_size)
+{
+	struct tcmu_cmd_entry_hdr *hdr;
+	struct tcmu_mailbox *mb = dev->mb_addr;
+	uint32_t cmd_head = mb->cmd_head % dev->cmdr_size; /* UAM */
+
+	/* Insert a PAD if end-of-ring space is too small */
+	if (head_to_end(cmd_head, dev->cmdr_size) < cmd_size) {
+		size_t pad_size = head_to_end(cmd_head, dev->cmdr_size);
+
+		hdr = (void *) mb + CMDR_OFF + cmd_head;
+		tcmu_hdr_set_op(&hdr->len_op, TCMU_OP_PAD);
+		tcmu_hdr_set_len(&hdr->len_op, pad_size);
+		hdr->cmd_id = 0; /* not used for PAD */
+		hdr->kflags = 0;
+		hdr->uflags = 0;
+		tcmu_flush_dcache_range(hdr, sizeof(*hdr));
+
+		UPDATE_HEAD(mb->cmd_head, pad_size, dev->cmdr_size);
+		tcmu_flush_dcache_range(mb, sizeof(*mb));
+
+		cmd_head = mb->cmd_head % dev->cmdr_size; /* UAM */
+		WARN_ON(cmd_head != 0);
+	}
+
+	return cmd_head;
+}
+
 /**
  * queue_cmd_ring - queue cmd to ring or internally
  * @tcmu_cmd: cmd to queue
@@ -941,7 +969,7 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
 	struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
 	struct se_cmd *se_cmd = tcmu_cmd->se_cmd;
 	size_t base_command_size, command_size;
-	struct tcmu_mailbox *mb;
+	struct tcmu_mailbox *mb = udev->mb_addr;
 	struct tcmu_cmd_entry *entry;
 	struct iovec *iov;
 	int iov_cnt, cmd_id;
@@ -980,8 +1008,6 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
 	if (!list_empty(&udev->qfull_queue))
 		goto queue;
 
-	mb = udev->mb_addr;
-	cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
 	if ((command_size > (udev->cmdr_size / 2)) ||
 	    data_length > udev->data_size) {
 		pr_warn("TCMU: Request of size %zu/%zu is too big for %u/%zu "
@@ -1001,24 +1027,7 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
 		goto queue;
 	}
 
-	/* Insert a PAD if end-of-ring space is too small */
-	if (head_to_end(cmd_head, udev->cmdr_size) < command_size) {
-		size_t pad_size = head_to_end(cmd_head, udev->cmdr_size);
-
-		entry = (void *) mb + CMDR_OFF + cmd_head;
-		tcmu_hdr_set_op(&entry->hdr.len_op, TCMU_OP_PAD);
-		tcmu_hdr_set_len(&entry->hdr.len_op, pad_size);
-		entry->hdr.cmd_id = 0; /* not used for PAD */
-		entry->hdr.kflags = 0;
-		entry->hdr.uflags = 0;
-		tcmu_flush_dcache_range(entry, sizeof(entry->hdr));
-
-		UPDATE_HEAD(mb->cmd_head, pad_size, udev->cmdr_size);
-		tcmu_flush_dcache_range(mb, sizeof(*mb));
-
-		cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
-		WARN_ON(cmd_head != 0);
-	}
+	cmd_head = ring_insert_padding(udev, command_size);
 
 	entry = (void *) mb + CMDR_OFF + cmd_head;
 	memset(entry, 0, command_size);
-- 
2.12.3



* [PATCH v2 6/8] scsi: target: tcmu: Fix and simplify timeout handling
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
                   ` (4 preceding siblings ...)
  2020-07-17 16:12 ` [PATCH v2 5/8] scsi: target: tcmu: Factor out new helper ring_insert_padding Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 7/8] scsi: target: tcmu: Implement tmr_notify callback Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 8/8] scsi: target: tcmu: Make TMR notification optional Bodo Stroesser
  7 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

During cmd timeout handling in check_timedout_devices(), a race
can cause tcmu_set_next_deadline() to not start a timer as
expected (see the condensed illustration after the list below):
 1) Either tcmu_check_expired_ring_cmd() checks the
    inflight_queue or tcmu_check_expired_queue_cmd() checks the
    qfull_queue while jiffies has the value X
 2) At the end of the check the queue contains one remaining
    command with deadline X (time_after(X, X) is false and
    thus the command is not handled as being timed out).
 3) After tcmu_check_expired_xxxxx_cmd() a timer interrupt
    happens and jiffies is incremented to X+1.
 4) Now tcmu_set_next_deadline() is called, but it skips
    the command, since time_after(X+1, X) is true. Therefore
    tcmu_set_next_deadline() finds no new deadline and stops
    the timer, which it shouldn't.
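
Condensed to the two relevant checks, the race looks like this
(illustration only, with cmd->deadline == X):

  /* steps 1/2: tcmu_check_expired_ring_cmd(), jiffies == X */
  if (!time_after(jiffies, cmd->deadline))      /* time_after(X, X) is false */
          return;                               /* -> cmd stays on the queue */

  /* step 3: timer tick, jiffies becomes X + 1 */

  /* step 4: old tcmu_set_next_deadline(), jiffies == X + 1 */
  if (!time_after(jiffies, tcmu_cmd->deadline)) /* time_after(X+1, X) is true */
          deadline = tcmu_cmd->deadline;        /* -> never reached, deadline stays 0 */
  /* -> del_timer() is called although a command is still pending */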

Since commands that time out are removed from the inflight_queue
or the qfull_queue, we don't need the time_after() check in
tcmu_set_next_deadline(), but can simply use the deadline of the
first cmd in the queue.

Additionally, I replaced the remaining time_after() calls in
tcmu_check_expired_xxxxx_cmd() with time_after_eq(), because it
makes no sense to arm the timer for the deadline but then treat
a command as expired only once jiffies is strictly greater than
that deadline.

Next I simplified the end of tcmu_handle_completions() and
changed the check for no more pending commands from
 "mb->cmd_tail == mb->cmd_head"
to
 "idr_is_empty(&udev->commands)"
because the old check doesn't work correctly if paddings or, in
the future, TMRs are on the ring.

Finally, tcmu_set_next_deadline() was moved within the source
file as preparation for the later implementation of the
tmr_notify callback.

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_user.c | 59 +++++++++++++++------------------------
 1 file changed, 23 insertions(+), 36 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 7a27e838a7d3..6adf4e7cc00b 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1129,6 +1129,18 @@ tcmu_queue_cmd(struct se_cmd *se_cmd)
 	return scsi_ret;
 }
 
+static void tcmu_set_next_deadline(struct list_head *queue,
+				   struct timer_list *timer)
+{
+	struct tcmu_cmd *cmd;
+
+	if (!list_empty(queue)) {
+		cmd = list_first_entry(queue, struct tcmu_cmd, queue_entry);
+		mod_timer(timer, cmd->deadline);
+	} else
+		del_timer(timer);
+}
+
 static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *entry)
 {
 	struct se_cmd *se_cmd = cmd->se_cmd;
@@ -1196,25 +1208,6 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 	tcmu_free_cmd(cmd);
 }
 
-static void tcmu_set_next_deadline(struct list_head *queue,
-				   struct timer_list *timer)
-{
-	struct tcmu_cmd *tcmu_cmd, *tmp_cmd;
-	unsigned long deadline = 0;
-
-	list_for_each_entry_safe(tcmu_cmd, tmp_cmd, queue, queue_entry) {
-		if (!time_after(jiffies, tcmu_cmd->deadline)) {
-			deadline = tcmu_cmd->deadline;
-			break;
-		}
-	}
-
-	if (deadline)
-		mod_timer(timer, deadline);
-	else
-		del_timer(timer);
-}
-
 static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 {
 	struct tcmu_mailbox *mb;
@@ -1267,22 +1260,16 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 		handled++;
 	}
 
-	if (mb->cmd_tail == mb->cmd_head) {
-		/* no more pending commands */
-		del_timer(&udev->cmd_timer);
-
-		if (list_empty(&udev->qfull_queue)) {
-			/*
-			 * no more pending or waiting commands so try to
-			 * reclaim blocks if needed.
-			 */
-			if (atomic_read(&global_db_count) >
-			    tcmu_global_max_blocks)
-				schedule_delayed_work(&tcmu_unmap_work, 0);
-		}
-	} else if (udev->cmd_time_out) {
-		tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);
+	if (atomic_read(&global_db_count) > tcmu_global_max_blocks &&
+	    idr_is_empty(&udev->commands) && list_empty(&udev->qfull_queue)) {
+		/*
+		 * Allocated blocks exceeded global block limit, currently no
+		 * more pending or waiting commands so try to reclaim blocks.
+		 */
+		schedule_delayed_work(&tcmu_unmap_work, 0);
 	}
+	if (udev->cmd_time_out)
+		tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);
 
 	return handled;
 }
@@ -1291,7 +1278,7 @@ static void tcmu_check_expired_ring_cmd(struct tcmu_cmd *cmd)
 {
 	struct se_cmd *se_cmd;
 
-	if (!time_after(jiffies, cmd->deadline))
+	if (!time_after_eq(jiffies, cmd->deadline))
 		return;
 
 	set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
@@ -1310,7 +1297,7 @@ static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd)
 {
 	struct se_cmd *se_cmd;
 
-	if (!time_after(jiffies, cmd->deadline))
+	if (!time_after_eq(jiffies, cmd->deadline))
 		return;
 
 	pr_debug("Timing out queued cmd %p on dev %s.\n",
-- 
2.12.3



* [PATCH v2 7/8] scsi: target: tcmu: Implement tmr_notify callback
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
                   ` (5 preceding siblings ...)
  2020-07-17 16:12 ` [PATCH v2 6/8] scsi: target: tcmu: Fix and simplify timeout handling Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-17 16:12 ` [PATCH v2 8/8] scsi: target: tcmu: Make TMR notification optional Bodo Stroesser
  7 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

This patch implements the tmr_notify callback for tcmu.
When the callback is called, tcmu checks the list of aborted
commands it received as parameter:
 - aborted commands on the qfull_queue are removed from the
   queue and target_complete_cmd() is called for them
 - from the cmd_ids of the aborted commands not yet completed
   on the cmd ring it builds a list of aborted cmd_ids
Finally a TMR notification containing the TMR type and the
cmd_id list is written to the cmd ring. If there is no space
on the ring, the TMR notification is queued on a TMR-specific
queue.

The TMR-specific queue 'tmr_queue' can be seen as an extension
of the cmd ring. At the end of each execution of
tcmu_handle_completions() we check whether tmr_queue contains
TMRs and try to move them onto the ring. If tmr_queue is not
empty after that, we don't call run_qfull_queue(), because
commands must not overtake TMRs.

Operating that way we guarantee that the cmd_ids in a TMR
notification received by userspace either match an active, not
yet completed command or are no longer valid because userspace
has completed those cmd_ids in the meantime.

New commands that were assigned the cmd_id of an aborted command
will always appear on the cmd ring _after_ the TMR.

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_user.c     | 225 ++++++++++++++++++++++++++++++++--
 include/uapi/linux/target_core_user.h |  25 ++++
 2 files changed, 241 insertions(+), 9 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 6adf4e7cc00b..92b7a2e84e64 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -137,6 +137,7 @@ struct tcmu_dev {
 
 	struct mutex cmdr_lock;
 	struct list_head qfull_queue;
+	struct list_head tmr_queue;
 
 	uint32_t dbi_max;
 	uint32_t dbi_thresh;
@@ -183,6 +184,15 @@ struct tcmu_cmd {
 #define TCMU_CMD_BIT_EXPIRED 0
 	unsigned long flags;
 };
+
+struct tcmu_tmr {
+	struct list_head queue_entry;
+
+	uint8_t tmr_type;
+	uint32_t tmr_cmd_cnt;
+	int16_t tmr_cmd_ids[0];
+};
+
 /*
  * To avoid dead lock the mutex lock order should always be:
  *
@@ -844,6 +854,9 @@ static bool is_ring_space_avail(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 		return false;
 	}
 
+	if (!data_needed)
+		return true;
+
 	/* try to check and get the data blocks as needed */
 	space = spc_bitmap_free(udev->data_bitmap, udev->dbi_thresh);
 	if ((space * DATA_BLOCK_SIZE) < data_needed) {
@@ -1106,6 +1119,60 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
 	return 1;
 }
 
+/**
+ * queue_tmr_ring - queue tmr info to ring or internally
+ * @dev: related tcmu_dev
+ * @tmr: tcmu_tmr containing tmr info to queue
+ *
+ * Returns:
+ *  0 success
+ *  1 internally queued to wait for ring memory to free.
+ */
+static int
+queue_tmr_ring(struct tcmu_dev *dev, struct tcmu_tmr *tmr)
+{
+	struct tcmu_tmr_entry *entry;
+	int cmd_size;
+	int id_list_sz;
+	struct tcmu_mailbox *mb = dev->mb_addr;
+	uint32_t cmd_head;
+
+	if (test_bit(TCMU_DEV_BIT_BROKEN, &dev->flags))
+		goto out_free;
+
+	id_list_sz = sizeof(tmr->tmr_cmd_ids[0]) * tmr->tmr_cmd_cnt;
+	cmd_size = round_up(sizeof(*entry) + id_list_sz, TCMU_OP_ALIGN_SIZE);
+
+	if (!list_empty(&dev->tmr_queue) ||
+	    !is_ring_space_avail(dev, NULL, cmd_size, 0)) {
+		list_add_tail(&tmr->queue_entry, &dev->tmr_queue);
+		pr_debug("adding tmr %p on dev %s to TMR ring space wait queue\n",
+			 tmr, dev->name);
+		return 1;
+	}
+
+	cmd_head = ring_insert_padding(dev, cmd_size);
+
+	entry = (void *)mb + CMDR_OFF + cmd_head;
+	memset(entry, 0, cmd_size);
+	tcmu_hdr_set_op(&entry->hdr.len_op, TCMU_OP_TMR);
+	tcmu_hdr_set_len(&entry->hdr.len_op, cmd_size);
+	entry->tmr_type = tmr->tmr_type;
+	entry->cmd_cnt = tmr->tmr_cmd_cnt;
+	memcpy(&entry->cmd_ids[0], &tmr->tmr_cmd_ids[0], id_list_sz);
+	tcmu_flush_dcache_range(entry, cmd_size);
+
+	UPDATE_HEAD(mb->cmd_head, cmd_size, dev->cmdr_size);
+	tcmu_flush_dcache_range(mb, sizeof(*mb));
+
+	uio_event_notify(&dev->uio_info);
+
+out_free:
+	kfree(tmr);
+
+	return 0;
+}
+
 static sense_reason_t
 tcmu_queue_cmd(struct se_cmd *se_cmd)
 {
@@ -1141,6 +1208,85 @@ static void tcmu_set_next_deadline(struct list_head *queue,
 		del_timer(timer);
 }
 
+static int
+tcmu_tmr_type(enum tcm_tmreq_table tmf)
+{
+	switch (tmf) {
+	case TMR_ABORT_TASK:		return TCMU_TMR_ABORT_TASK;
+	case TMR_ABORT_TASK_SET:	return TCMU_TMR_ABORT_TASK_SET;
+	case TMR_CLEAR_ACA:		return TCMU_TMR_CLEAR_ACA;
+	case TMR_CLEAR_TASK_SET:	return TCMU_TMR_CLEAR_TASK_SET;
+	case TMR_LUN_RESET:		return TCMU_TMR_LUN_RESET;
+	case TMR_TARGET_WARM_RESET:	return TCMU_TMR_TARGET_WARM_RESET;
+	case TMR_TARGET_COLD_RESET:	return TCMU_TMR_TARGET_COLD_RESET;
+	case TMR_LUN_RESET_PRO:		return TCMU_TMR_LUN_RESET_PRO;
+	default:			return TCMU_TMR_UNKNOWN;
+	}
+}
+
+static void
+tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf,
+		struct list_head *cmd_list)
+{
+	int i = 0, cmd_cnt = 0;
+	bool unqueued = false;
+	uint16_t *cmd_ids = NULL;
+	struct tcmu_cmd *cmd;
+	struct se_cmd *se_cmd;
+	struct tcmu_tmr *tmr;
+	struct tcmu_dev *dev = TCMU_DEV(se_dev);
+
+	mutex_lock(&dev->cmdr_lock);
+
+	/* First we check for aborted commands in qfull_queue */
+	list_for_each_entry(se_cmd, cmd_list, state_list) {
+		i++;
+		if (!se_cmd->priv)
+			continue;
+		cmd = se_cmd->priv;
+		/* Commands on qfull queue have no id yet */
+		if (cmd->cmd_id) {
+			cmd_cnt++;
+			continue;
+		}
+		pr_debug("Removing aborted command %p from queue on dev %s.\n",
+			 cmd, dev->name);
+
+		list_del_init(&cmd->queue_entry);
+		tcmu_free_cmd(cmd);
+		target_complete_cmd(se_cmd, SAM_STAT_TASK_ABORTED);
+		unqueued = true;
+	}
+	if (unqueued)
+		tcmu_set_next_deadline(&dev->qfull_queue, &dev->qfull_timer);
+
+	pr_debug("TMR event %d on dev %s, aborted cmds %d, afflicted cmd_ids %d\n",
+		 tcmu_tmr_type(tmf), dev->name, i, cmd_cnt);
+
+	tmr = kmalloc(sizeof(*tmr) + cmd_cnt * sizeof(*cmd_ids), GFP_KERNEL);
+	if (!tmr)
+		goto unlock;
+
+	tmr->tmr_type = tcmu_tmr_type(tmf);
+	tmr->tmr_cmd_cnt = cmd_cnt;
+
+	if (cmd_cnt != 0) {
+		cmd_cnt = 0;
+		list_for_each_entry(se_cmd, cmd_list, state_list) {
+			if (!se_cmd->priv)
+				continue;
+			cmd = se_cmd->priv;
+			if (cmd->cmd_id)
+				tmr->tmr_cmd_ids[cmd_cnt++] = cmd->cmd_id;
+		}
+	}
+
+	queue_tmr_ring(dev, tmr);
+
+unlock:
+	mutex_unlock(&dev->cmdr_lock);
+}
+
 static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *entry)
 {
 	struct se_cmd *se_cmd = cmd->se_cmd;
@@ -1208,11 +1354,43 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 	tcmu_free_cmd(cmd);
 }
 
+static int tcmu_run_tmr_queue(struct tcmu_dev *dev)
+{
+	struct tcmu_tmr *tmr, *tmp;
+	LIST_HEAD(tmrs);
+
+	if (list_empty(&dev->tmr_queue))
+		return 1;
+
+	pr_debug("running %s's tmr queue\n", dev->name);
+
+	list_splice_init(&dev->tmr_queue, &tmrs);
+
+	list_for_each_entry_safe(tmr, tmp, &tmrs, queue_entry) {
+		list_del_init(&tmr->queue_entry);
+
+		pr_debug("removing tmr %p on dev %s from queue\n",
+			 tmr, dev->name);
+
+		if (queue_tmr_ring(dev, tmr)) {
+			pr_debug("ran out of space during tmr queue run\n");
+			/*
+			 * tmr was requeued, so just put all tmrs back in
+			 * the queue
+			 */
+			list_splice_tail(&tmrs, &dev->tmr_queue);
+			return 0;
+		}
+	}
+
+	return 1;
+}
+
 static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 {
 	struct tcmu_mailbox *mb;
 	struct tcmu_cmd *cmd;
-	int handled = 0;
+	bool free_space = false;
 
 	if (test_bit(TCMU_DEV_BIT_BROKEN, &udev->flags)) {
 		pr_err("ring broken, not handling completions\n");
@@ -1235,7 +1413,10 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 		tcmu_flush_dcache_range(entry, ring_left < sizeof(*entry) ?
 					ring_left : sizeof(*entry));
 
-		if (tcmu_hdr_get_op(entry->hdr.len_op) == TCMU_OP_PAD) {
+		free_space = true;
+
+		if (tcmu_hdr_get_op(entry->hdr.len_op) == TCMU_OP_PAD ||
+		    tcmu_hdr_get_op(entry->hdr.len_op) == TCMU_OP_TMR) {
 			UPDATE_HEAD(udev->cmdr_last_cleaned,
 				    tcmu_hdr_get_len(entry->hdr.len_op),
 				    udev->cmdr_size);
@@ -1256,9 +1437,9 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 		UPDATE_HEAD(udev->cmdr_last_cleaned,
 			    tcmu_hdr_get_len(entry->hdr.len_op),
 			    udev->cmdr_size);
-
-		handled++;
 	}
+	if (free_space)
+		free_space = tcmu_run_tmr_queue(udev);
 
 	if (atomic_read(&global_db_count) > tcmu_global_max_blocks &&
 	    idr_is_empty(&udev->commands) && list_empty(&udev->qfull_queue)) {
@@ -1271,7 +1452,7 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 	if (udev->cmd_time_out)
 		tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);
 
-	return handled;
+	return free_space;
 }
 
 static void tcmu_check_expired_ring_cmd(struct tcmu_cmd *cmd)
@@ -1381,6 +1562,7 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
 	INIT_LIST_HEAD(&udev->node);
 	INIT_LIST_HEAD(&udev->timedout_entry);
 	INIT_LIST_HEAD(&udev->qfull_queue);
+	INIT_LIST_HEAD(&udev->tmr_queue);
 	INIT_LIST_HEAD(&udev->inflight_queue);
 	idr_init(&udev->commands);
 
@@ -1455,8 +1637,8 @@ static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
 	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
 
 	mutex_lock(&udev->cmdr_lock);
-	tcmu_handle_completions(udev);
-	run_qfull_queue(udev, false);
+	if (tcmu_handle_completions(udev))
+		run_qfull_queue(udev, false);
 	mutex_unlock(&udev->cmdr_lock);
 
 	return 0;
@@ -1609,6 +1791,16 @@ static void tcmu_blocks_release(struct radix_tree_root *blocks,
 	}
 }
 
+static void tcmu_remove_all_queued_tmr(struct tcmu_dev *dev)
+{
+	struct tcmu_tmr *tmr, *tmp;
+
+	list_for_each_entry_safe(tmr, tmp, &dev->tmr_queue, queue_entry) {
+		list_del_init(&tmr->queue_entry);
+		kfree(tmr);
+	}
+}
+
 static void tcmu_dev_kref_release(struct kref *kref)
 {
 	struct tcmu_dev *udev = container_of(kref, struct tcmu_dev, kref);
@@ -1631,6 +1823,8 @@ static void tcmu_dev_kref_release(struct kref *kref)
 		if (tcmu_check_and_free_pending_cmd(cmd) != 0)
 			all_expired = false;
 	}
+	/* There can be left over TMR cmds. Remove them. */
+	tcmu_remove_all_queued_tmr(udev);
 	if (!list_empty(&udev->qfull_queue))
 		all_expired = false;
 	idr_destroy(&udev->commands);
@@ -1885,7 +2079,9 @@ static int tcmu_configure_device(struct se_device *dev)
 	/* Initialise the mailbox of the ring buffer */
 	mb = udev->mb_addr;
 	mb->version = TCMU_MAILBOX_VERSION;
-	mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC | TCMU_MAILBOX_FLAG_CAP_READ_LEN;
+	mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC |
+		    TCMU_MAILBOX_FLAG_CAP_READ_LEN |
+		    TCMU_MAILBOX_FLAG_CAP_TMR;
 	mb->cmdr_off = CMDR_OFF;
 	mb->cmdr_size = udev->cmdr_size;
 
@@ -2055,6 +2251,15 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
 
 	del_timer(&udev->cmd_timer);
 
+	/*
+	 * ring is empty and qfull queue never contains aborted commands.
+	 * So TMRs in tmr queue do not contain relevant cmd_ids.
+	 * After a ring reset userspace should do a fresh start, so
+	 * even LUN RESET message is no longer relevant.
+	 * Therefore remove all TMRs from qfull queue
+	 */
+	tcmu_remove_all_queued_tmr(udev);
+
 	run_qfull_queue(udev, false);
 
 	mutex_unlock(&udev->cmdr_lock);
@@ -2607,6 +2812,7 @@ static struct target_backend_ops tcmu_ops = {
 	.destroy_device		= tcmu_destroy_device,
 	.free_device		= tcmu_free_device,
 	.parse_cdb		= tcmu_parse_cdb,
+	.tmr_notify		= tcmu_tmr_notify,
 	.set_configfs_dev_params = tcmu_set_configfs_dev_params,
 	.show_configfs_dev_params = tcmu_show_configfs_dev_params,
 	.get_device_type	= sbc_get_device_type,
@@ -2633,7 +2839,8 @@ static void find_free_blocks(void)
 		}
 
 		/* Try to complete the finished commands first */
-		tcmu_handle_completions(udev);
+		if (tcmu_handle_completions(udev))
+			run_qfull_queue(udev, false);
 
 		/* Skip the udevs in idle */
 		if (!udev->dbi_thresh) {
diff --git a/include/uapi/linux/target_core_user.h b/include/uapi/linux/target_core_user.h
index b7b57967d90f..95b1597f16ae 100644
--- a/include/uapi/linux/target_core_user.h
+++ b/include/uapi/linux/target_core_user.h
@@ -45,6 +45,7 @@
 #define ALIGN_SIZE 64 /* Should be enough for most CPUs */
 #define TCMU_MAILBOX_FLAG_CAP_OOOC (1 << 0) /* Out-of-order completions */
 #define TCMU_MAILBOX_FLAG_CAP_READ_LEN (1 << 1) /* Read data length */
+#define TCMU_MAILBOX_FLAG_CAP_TMR (1 << 2) /* TMR notifications */
 
 struct tcmu_mailbox {
 	__u16 version;
@@ -62,6 +63,7 @@ struct tcmu_mailbox {
 enum tcmu_opcode {
 	TCMU_OP_PAD = 0,
 	TCMU_OP_CMD,
+	TCMU_OP_TMR,
 };
 
 /*
@@ -128,6 +130,29 @@ struct tcmu_cmd_entry {
 
 } __packed;
 
+struct tcmu_tmr_entry {
+	struct tcmu_cmd_entry_hdr hdr;
+
+#define TCMU_TMR_UNKNOWN		0
+#define TCMU_TMR_ABORT_TASK		1
+#define TCMU_TMR_ABORT_TASK_SET		2
+#define TCMU_TMR_CLEAR_ACA		3
+#define TCMU_TMR_CLEAR_TASK_SET		4
+#define TCMU_TMR_LUN_RESET		5
+#define TCMU_TMR_TARGET_WARM_RESET	6
+#define TCMU_TMR_TARGET_COLD_RESET	7
+/* Pseudo reset due to received PR OUT */
+#define TCMU_TMR_LUN_RESET_PRO		128
+	__u8 tmr_type;
+
+	__u8 __pad1;
+	__u16 __pad2;
+	__u32 cmd_cnt;
+	__u64 __pad3;
+	__u64 __pad4;
+	__u16 cmd_ids[0];
+} __packed;
+
 #define TCMU_OP_ALIGN_SIZE sizeof(__u64)
 
 enum tcmu_genl_cmd {
-- 
2.12.3



* [PATCH v2 8/8] scsi: target: tcmu: Make TMR notification optional
  2020-07-17 16:12 [PATCH v2 0/8] scsi: target: tcmu: Add TMR notification for tcmu Bodo Stroesser
                   ` (6 preceding siblings ...)
  2020-07-17 16:12 ` [PATCH v2 7/8] scsi: target: tcmu: Implement tmr_notify callback Bodo Stroesser
@ 2020-07-17 16:12 ` Bodo Stroesser
  2020-07-26  0:06   ` Mike Christie
  7 siblings, 1 reply; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-17 16:12 UTC (permalink / raw)
  To: Martin K. Petersen, Mike Christie, linux-scsi, target-devel
  Cc: Bodo Stroesser

Add "tmr_notification" configFS attribute to tcmu devices.
If default value 0 of the attribute is used, tcmu only
removes aborted commands from qfull_queue.
If user changes tmr_notification to 1, additionally
TMR notifications will be written to the cmd ring.
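
A userspace application could enable the notifications e.g. with
the following sketch (the configfs path depends on the HBA index
and device name and is only an example):

  #include <fcntl.h>
  #include <unistd.h>

  static int enable_tmr_notification(const char *attrib_path)
  {
          /* e.g. ".../target/core/user_1/mydev/attrib/tmr_notification" */
          int fd = open(attrib_path, O_WRONLY);

          if (fd < 0)
                  return -1;
          if (write(fd, "1", 1) != 1) {
                  close(fd);
                  return -1;
          }
          return close(fd);
  }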

Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
---
 drivers/target/target_core_user.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 92b7a2e84e64..95e66b707373 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -118,6 +118,7 @@ struct tcmu_dev {
 #define TCMU_DEV_BIT_OPEN 0
 #define TCMU_DEV_BIT_BROKEN 1
 #define TCMU_DEV_BIT_BLOCKED 2
+#define TCMU_DEV_BIT_TMR_NOTIFY 3
 	unsigned long flags;
 
 	struct uio_info uio_info;
@@ -1260,6 +1261,9 @@ tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf,
 	if (unqueued)
 		tcmu_set_next_deadline(&dev->qfull_queue, &dev->qfull_timer);
 
+	if (!test_bit(TCMU_DEV_BIT_TMR_NOTIFY, &dev->flags))
+		goto unlock;
+
 	pr_debug("TMR event %d on dev %s, aborted cmds %d, afflicted cmd_ids %d\n",
 		 tcmu_tmr_type(tmf), dev->name, i, cmd_cnt);
 
@@ -2706,6 +2710,40 @@ static ssize_t tcmu_emulate_write_cache_store(struct config_item *item,
 }
 CONFIGFS_ATTR(tcmu_, emulate_write_cache);
 
+static ssize_t tcmu_tmr_notification_show(struct config_item *item,
+					     char *page)
+{
+	struct se_dev_attrib *da = container_of(to_config_group(item),
+					struct se_dev_attrib, da_group);
+	struct tcmu_dev *dev = TCMU_DEV(da->da_dev);
+
+	return snprintf(page, PAGE_SIZE, "%i\n",
+			test_bit(TCMU_DEV_BIT_TMR_NOTIFY, &dev->flags));
+}
+
+static ssize_t tcmu_tmr_notification_store(struct config_item *item,
+					   const char *page, size_t count)
+{
+	struct se_dev_attrib *da = container_of(to_config_group(item),
+					struct se_dev_attrib, da_group);
+	struct tcmu_dev *dev = TCMU_DEV(da->da_dev);
+	u8 val;
+	int ret;
+
+	ret = kstrtou8(page, 0, &val);
+	if (ret < 0)
+		return ret;
+	if (val > 1)
+		return -EINVAL;
+
+	if (val)
+		set_bit(TCMU_DEV_BIT_TMR_NOTIFY, &dev->flags);
+	else
+		clear_bit(TCMU_DEV_BIT_TMR_NOTIFY, &dev->flags);
+	return count;
+}
+CONFIGFS_ATTR(tcmu_, tmr_notification);
+
 static ssize_t tcmu_block_dev_show(struct config_item *item, char *page)
 {
 	struct se_device *se_dev = container_of(to_config_group(item),
@@ -2787,6 +2825,7 @@ static struct configfs_attribute *tcmu_attrib_attrs[] = {
 	&tcmu_attr_dev_config,
 	&tcmu_attr_dev_size,
 	&tcmu_attr_emulate_write_cache,
+	&tcmu_attr_tmr_notification,
 	&tcmu_attr_nl_reply_supported,
 	NULL,
 };
-- 
2.12.3



* Re: [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task()
  2020-07-17 16:12 ` [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task() Bodo Stroesser
@ 2020-07-26  0:03   ` Mike Christie
  2020-07-26 11:35     ` Bodo Stroesser
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Christie @ 2020-07-26  0:03 UTC (permalink / raw)
  To: Bodo Stroesser, Martin K. Petersen, linux-scsi, target-devel

On 7/17/20 11:12 AM, Bodo Stroesser wrote:
> This patch modifies core_tmr_abort_task() to use same looping
> and locking scheme as core_tmr_drain_state_list() does.
> 
> This frees the state_list element in se_cmd for later use
> by tmr notification handling.
> 
> Note: __target_check_io_state() now is called with param 0
> instead of dev->dev_attrib.emulate_tas, because tas is not
> relevant since we always get ABRT on same session like the
> aborted command.
> 
> Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
> ---
>  drivers/target/target_core_tmr.c | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
> index 89c84d472cd7..b65d7a0a5df1 100644
> --- a/drivers/target/target_core_tmr.c
> +++ b/drivers/target/target_core_tmr.c
> @@ -116,14 +116,15 @@ void core_tmr_abort_task(
>  	struct se_tmr_req *tmr,
>  	struct se_session *se_sess)
>  {
> -	struct se_cmd *se_cmd;
> +	struct se_cmd *se_cmd, *next;
>  	unsigned long flags;
> +	bool rc;
>  	u64 ref_tag;
>  
> -	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
> -	list_for_each_entry(se_cmd, &se_sess->sess_cmd_list, se_cmd_list) {
> +	spin_lock_irqsave(&dev->execute_task_lock, flags);
> +	list_for_each_entry_safe(se_cmd, next, &dev->state_list, state_list) {
>  
> -		if (dev != se_cmd->se_dev)
> +		if (se_sess != se_cmd->se_sess)
>  			continue;
>  
>  		/* skip task management functions, including tmr->task_cmd */
> @@ -137,11 +138,16 @@ void core_tmr_abort_task(
>  		printk("ABORT_TASK: Found referenced %s task_tag: %llu\n",
>  			se_cmd->se_tfo->fabric_name, ref_tag);
>  
> -		if (!__target_check_io_state(se_cmd, se_sess,
> -					     dev->dev_attrib.emulate_tas))
> +		spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);

I don't think you can use flags twice. This call seems to be overwriting the outer spin lock caller's value, and I'm seeing a lot of warnings.
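
That is, reusing the same variable for a nested irqsave loses the
original irq state (illustration only, not code from the patch):

  spin_lock_irqsave(&outer_lock, flags);      /* flags = original irq state */
  spin_lock_irqsave(&inner_lock, flags);      /* overwritten: irqs already off */
  spin_unlock_irqrestore(&inner_lock, flags);
  spin_unlock_irqrestore(&outer_lock, flags); /* restores "irqs off" state */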


> +		rc = __target_check_io_state(se_cmd, se_sess, 0);
> +		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);


Use spin_lock/spin_unlock here, since we know we have disabled irqs right above. Or make the drain users work the same way.

Just make it consistent so future devs don't have to wonder why we use different calls.

> +		if (!rc)
>  			continue;
>  
> -		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> +		list_del_init(&se_cmd->state_list);
> +		se_cmd->state_active = false;
> +
> +		spin_unlock_irqrestore(&dev->execute_task_lock, flags);
>  
>  		/*
>  		 * Ensure that this ABORT request is visible to the LU RESET
> @@ -159,7 +165,7 @@ void core_tmr_abort_task(
>  		atomic_long_inc(&dev->aborts_complete);
>  		return;
>  	}
> -	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> +	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
>  
>  	printk("ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: %lld\n",
>  			tmr->ref_task_tag);
> 



* Re: [PATCH v2 8/8] scsi: target: tcmu: Make TMR notification optional
  2020-07-17 16:12 ` [PATCH v2 8/8] scsi: target: tcmu: Make TMR notification optional Bodo Stroesser
@ 2020-07-26  0:06   ` Mike Christie
  2020-07-26 11:47     ` Bodo Stroesser
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Christie @ 2020-07-26  0:06 UTC (permalink / raw)
  To: Bodo Stroesser, Martin K. Petersen, linux-scsi, target-devel

On 7/17/20 11:12 AM, Bodo Stroesser wrote:
> Add "tmr_notification" configFS attribute to tcmu devices.
> If default value 0 of the attribute is used, tcmu only
> removes aborted commands from qfull_queue.
> If user changes tmr_notification to 1, additionally
> TMR notifications will be written to the cmd ring.
> 
> Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
> ---
>  drivers/target/target_core_user.c | 39 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
> 
> diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
> index 92b7a2e84e64..95e66b707373 100644
> --- a/drivers/target/target_core_user.c
> +++ b/drivers/target/target_core_user.c
> @@ -118,6 +118,7 @@ struct tcmu_dev {
>  #define TCMU_DEV_BIT_OPEN 0
>  #define TCMU_DEV_BIT_BROKEN 1
>  #define TCMU_DEV_BIT_BLOCKED 2
> +#define TCMU_DEV_BIT_TMR_NOTIFY 3
>  	unsigned long flags;
>  
>  	struct uio_info uio_info;
> @@ -1260,6 +1261,9 @@ tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf,
>  	if (unqueued)
>  		tcmu_set_next_deadline(&dev->qfull_queue, &dev->qfull_timer);
>  
> +	if (!test_bit(TCMU_DEV_BIT_TMR_NOTIFY, &dev->flags))
> +		goto unlock;
> +
>  	pr_debug("TMR event %d on dev %s, aborted cmds %d, afflicted cmd_ids %d\n",
>  		 tcmu_tmr_type(tmf), dev->name, i, cmd_cnt);
>  
> @@ -2706,6 +2710,40 @@ static ssize_t tcmu_emulate_write_cache_store(struct config_item *item,
>  }
>  CONFIGFS_ATTR(tcmu_, emulate_write_cache);
>  
> +static ssize_t tcmu_tmr_notification_show(struct config_item *item,
> +					       char *page)

Sorry about this. Just some nits.

The spacing above got messed up a little. I think you only need 2 spaces, but it looks like we got some extras.


> +{
> +	struct se_dev_attrib *da = container_of(to_config_group(item),
> +					struct se_dev_attrib, da_group);
> +	struct tcmu_dev *dev = TCMU_DEV(da->da_dev);


Could you use udev or tcmu_dev for the name? Sorry for being a broken record on this one. We already use dev or se_dev for the se_device struct, and it throws me off when scanning the code.

I think patch 5 and 7 need the same fix up.


* Re: [PATCH v2 1/8] scsi: target: Modify core_tmr_abort_task()
  2020-07-26  0:03   ` Mike Christie
@ 2020-07-26 11:35     ` Bodo Stroesser
  0 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-26 11:35 UTC (permalink / raw)
  To: Mike Christie, Martin K. Petersen, linux-scsi, target-devel

On 2020-07-26 02:03, Mike Christie wrote:
> On 7/17/20 11:12 AM, Bodo Stroesser wrote:
>> This patch modifies core_tmr_abort_task() to use same looping
>> and locking scheme as core_tmr_drain_state_list() does.
>>
>> This frees the state_list element in se_cmd for later use
>> by tmr notification handling.
>>
>> Note: __target_check_io_state() now is called with param 0
>> instead of dev->dev_attrib.emulate_tas, because tas is not
>> relevant since we always get ABRT on same session like the
>> aborted command.
>>
>> Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
>> ---
>>   drivers/target/target_core_tmr.c | 22 ++++++++++++++--------
>>   1 file changed, 14 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
>> index 89c84d472cd7..b65d7a0a5df1 100644
>> --- a/drivers/target/target_core_tmr.c
>> +++ b/drivers/target/target_core_tmr.c
>> @@ -116,14 +116,15 @@ void core_tmr_abort_task(
>>   	struct se_tmr_req *tmr,
>>   	struct se_session *se_sess)
>>   {
>> -	struct se_cmd *se_cmd;
>> +	struct se_cmd *se_cmd, *next;
>>   	unsigned long flags;
>> +	bool rc;
>>   	u64 ref_tag;
>>   
>> -	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
>> -	list_for_each_entry(se_cmd, &se_sess->sess_cmd_list, se_cmd_list) {
>> +	spin_lock_irqsave(&dev->execute_task_lock, flags);
>> +	list_for_each_entry_safe(se_cmd, next, &dev->state_list, state_list) {
>>   
>> -		if (dev != se_cmd->se_dev)
>> +		if (se_sess != se_cmd->se_sess)
>>   			continue;
>>   
>>   		/* skip task management functions, including tmr->task_cmd */
>> @@ -137,11 +138,16 @@ void core_tmr_abort_task(
>>   		printk("ABORT_TASK: Found referenced %s task_tag: %llu\n",
>>   			se_cmd->se_tfo->fabric_name, ref_tag);
>>   
>> -		if (!__target_check_io_state(se_cmd, se_sess,
>> -					     dev->dev_attrib.emulate_tas))
>> +		spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
> 
> I don't think you can use flags twice. This call seems to be overwriting the outer spin lock caller's value, and I'm seeing a lot of warnings.

Oops. Thank you!

Bad cut and paste. I wanted to make core_tmr_abort_task similar to core_tmr_drain_state_list.
But then I moved the calls for se_sess->sess_cmd_lock inside core_tmr_abort_task instead of copying the ones from core_tmr_drain_state_list.

I'm wondering why I don't see warnings when waiting with irqs blocked ...

I'll resend without _irqsave / _irqrestore.
 
> 
> 
>> +		rc = __target_check_io_state(se_cmd, se_sess, 0);
>> +		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> 
> 
> Use spin_lock/spin_unlock since we know above we have disabled irqs right above. Or make the drain users work like above.
> 
> Just make it consistent so future devs don't have to wonder why we use different calls.
> 
>> +		if (!rc)
>>   			continue;
>>   
>> -		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
>> +		list_del_init(&se_cmd->state_list);
>> +		se_cmd->state_active = false;
>> +
>> +		spin_unlock_irqrestore(&dev->execute_task_lock, flags);
>>   
>>   		/*
>>   		 * Ensure that this ABORT request is visible to the LU RESET
>> @@ -159,7 +165,7 @@ void core_tmr_abort_task(
>>   		atomic_long_inc(&dev->aborts_complete);
>>   		return;
>>   	}
>> -	spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
>> +	spin_unlock_irqrestore(&dev->execute_task_lock, flags);
>>   
>>   	printk("ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: %lld\n",
>>   			tmr->ref_task_tag);
>>
> 


* Re: [PATCH v2 8/8] scsi: target: tcmu: Make TMR notification optional
  2020-07-26  0:06   ` Mike Christie
@ 2020-07-26 11:47     ` Bodo Stroesser
  0 siblings, 0 replies; 13+ messages in thread
From: Bodo Stroesser @ 2020-07-26 11:47 UTC (permalink / raw)
  To: Mike Christie, Martin K. Petersen, linux-scsi, target-devel

On 2020-07-26 02:06, Mike Christie wrote:
> On 7/17/20 11:12 AM, Bodo Stroesser wrote:
>> Add "tmr_notification" configFS attribute to tcmu devices.
>> If default value 0 of the attribute is used, tcmu only
>> removes aborted commands from qfull_queue.
>> If user changes tmr_notification to 1, additionally
>> TMR notifications will be written to the cmd ring.
>>
>> Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
>> ---
>>   drivers/target/target_core_user.c | 39 +++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 39 insertions(+)
>>
>> diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
>> index 92b7a2e84e64..95e66b707373 100644
>> --- a/drivers/target/target_core_user.c
>> +++ b/drivers/target/target_core_user.c
>> @@ -118,6 +118,7 @@ struct tcmu_dev {
>>   #define TCMU_DEV_BIT_OPEN 0
>>   #define TCMU_DEV_BIT_BROKEN 1
>>   #define TCMU_DEV_BIT_BLOCKED 2
>> +#define TCMU_DEV_BIT_TMR_NOTIFY 3
>>   	unsigned long flags;
>>   
>>   	struct uio_info uio_info;
>> @@ -1260,6 +1261,9 @@ tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf,
>>   	if (unqueued)
>>   		tcmu_set_next_deadline(&dev->qfull_queue, &dev->qfull_timer);
>>   
>> +	if (!test_bit(TCMU_DEV_BIT_TMR_NOTIFY, &dev->flags))
>> +		goto unlock;
>> +
>>   	pr_debug("TMR event %d on dev %s, aborted cmds %d, afflicted cmd_ids %d\n",
>>   		 tcmu_tmr_type(tmf), dev->name, i, cmd_cnt);
>>   
>> @@ -2706,6 +2710,40 @@ static ssize_t tcmu_emulate_write_cache_store(struct config_item *item,
>>   }
>>   CONFIGFS_ATTR(tcmu_, emulate_write_cache);
>>   
>> +static ssize_t tcmu_tmr_notification_show(struct config_item *item,
>> +					       char *page)
> 
> Sorry about this. Just some nits.
> 
> The spacing above got messed up a little. I think you only need 2 spaces, but it looks like we got some extras.

I'll fix.

> 
> 
>> +{
>> +	struct se_dev_attrib *da = container_of(to_config_group(item),
>> +					struct se_dev_attrib, da_group);
>> +	struct tcmu_dev *dev = TCMU_DEV(da->da_dev);
> 
> 
> Could you use udev or tcmu_dev for the name. Sorry for being a broken record on this one. We use dev or se_dev for the se_device struct already and it throws me off when scanning the code.
> 
> I think patch 5 and 7 need the same fix up.
> 

I think I misunderstood your previous comment. I intentionally used dev and not udev because I thought that was what you wanted ...

No problem. I'll fix. 

