From: Mike Christie <michael.christie@oracle.com>
To: bostroesser@gmail.com, mst@redhat.com, stefanha@redhat.com,
	Chaitanya.Kulkarni@wdc.com, hch@lst.de, loberman@redhat.com,
	martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org
Cc: Mike Christie <michael.christie@oracle.com>,
	Himanshu Madhani <himanshu.madhani@oracle.com>
Subject: [PATCH 25/25] target: make completion affinity configurable
Date: Sat, 27 Feb 2021 11:00:06 -0600	[thread overview]
Message-ID: <20210227170006.5077-26-michael.christie@oracle.com> (raw)
In-Reply-To: <20210227170006.5077-1-michael.christie@oracle.com>

It may not always be best to complete the IO on the same CPU it was
submitted on. This allows userspace to configure the completion CPU.

This has been useful for vhost-scsi where we have a single thread for
submissions and completions. If we force the completion onto the
submission CPU we may conflict with what the user has set up in the
lower levels with settings like the block layer rq_affinity or the
driver's irq or softirq (the network's rps_cpus value) settings.

We may also want a setup where the vhost thread runs on CPU N
and does its submissions/completions there, and then have LIO do
its completion booking on CPU M, but we can't configure the lower
levels due to issues like using dm-multipath with lots of paths (the
path selector can throw commands all over the system because it's only
taking into account latency/throughput at its level).

The new setting is in
/sys/kernel/config/target/$fabric/$target/param/cmd_completion_affinity

Writing:
-1   -> gives the current default behavior of completing on the
        submission CPU.
-2   -> completes the cmd on the CPU the lower layers sent it to us from.
>= 0 -> completes on the CPU userspace has specified.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
---
 drivers/target/target_core_fabric_configfs.c | 58 ++++++++++++++++++++
 drivers/target/target_core_internal.h        |  1 +
 drivers/target/target_core_transport.c       | 11 +++-
 include/target/target_core_base.h            |  9 +++
 4 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c
index ee85602213f7..fc7edc04ee09 100644
--- a/drivers/target/target_core_fabric_configfs.c
+++ b/drivers/target/target_core_fabric_configfs.c
@@ -892,6 +892,7 @@ static void target_fabric_release_wwn(struct config_item *item)
 	struct target_fabric_configfs *tf = wwn->wwn_tf;
 
 	configfs_remove_default_groups(&wwn->fabric_stat_group);
+	configfs_remove_default_groups(&wwn->param_group);
 	tf->tf_ops->fabric_drop_wwn(wwn);
 }
 
@@ -918,6 +919,57 @@ TF_CIT_SETUP(wwn_fabric_stats, NULL, NULL, NULL);
 
 /* End of tfc_wwn_fabric_stats_cit */
 
+static ssize_t
+target_fabric_wwn_cmd_completion_affinity_show(struct config_item *item,
+					       char *page)
+{
+	struct se_wwn *wwn = container_of(to_config_group(item), struct se_wwn,
+					  param_group);
+	return sprintf(page, "%d\n",
+		       wwn->cmd_compl_affinity == WORK_CPU_UNBOUND ?
+		       SE_COMPL_AFFINITY_CURR_CPU : wwn->cmd_compl_affinity);
+}
+
+static ssize_t
+target_fabric_wwn_cmd_completion_affinity_store(struct config_item *item,
+						const char *page, size_t count)
+{
+	struct se_wwn *wwn = container_of(to_config_group(item), struct se_wwn,
+					  param_group);
+	int compl_val;
+
+	if (kstrtoint(page, 0, &compl_val))
+		return -EINVAL;
+
+	switch (compl_val) {
+	case SE_COMPL_AFFINITY_CPUID:
+		wwn->cmd_compl_affinity = compl_val;
+		break;
+	case SE_COMPL_AFFINITY_CURR_CPU:
+		wwn->cmd_compl_affinity = WORK_CPU_UNBOUND;
+		break;
+	default:
+		if (compl_val < 0 || compl_val >= nr_cpu_ids ||
+		    !cpu_online(compl_val)) {
+			pr_err("Command completion value must be between %d and %d or an online CPU.\n",
+			       SE_COMPL_AFFINITY_CPUID,
+			       SE_COMPL_AFFINITY_CURR_CPU);
+			return -EINVAL;
+		}
+		wwn->cmd_compl_affinity = compl_val;
+	}
+
+	return count;
+}
+CONFIGFS_ATTR(target_fabric_wwn_, cmd_completion_affinity);
+
+static struct configfs_attribute *target_fabric_wwn_param_attrs[] = {
+	&target_fabric_wwn_attr_cmd_completion_affinity,
+	NULL,
+};
+
+TF_CIT_SETUP(wwn_param, NULL, NULL, target_fabric_wwn_param_attrs);
+
 /* Start of tfc_wwn_cit */
 
 static struct config_group *target_fabric_make_wwn(
@@ -937,6 +989,7 @@ static struct config_group *target_fabric_make_wwn(
 	if (!wwn || IS_ERR(wwn))
 		return ERR_PTR(-EINVAL);
 
+	wwn->cmd_compl_affinity = SE_COMPL_AFFINITY_CPUID;
 	wwn->wwn_tf = tf;
 
 	config_group_init_type_name(&wwn->wwn_group, name, &tf->tf_tpg_cit);
@@ -945,6 +998,10 @@ static struct config_group *target_fabric_make_wwn(
 			&tf->tf_wwn_fabric_stats_cit);
 	configfs_add_default_group(&wwn->fabric_stat_group, &wwn->wwn_group);
 
+	config_group_init_type_name(&wwn->param_group, "param",
+			&tf->tf_wwn_param_cit);
+	configfs_add_default_group(&wwn->param_group, &wwn->wwn_group);
+
 	if (tf->tf_ops->add_wwn_groups)
 		tf->tf_ops->add_wwn_groups(wwn);
 	return &wwn->wwn_group;
@@ -974,6 +1031,7 @@ int target_fabric_setup_cits(struct target_fabric_configfs *tf)
 	target_fabric_setup_discovery_cit(tf);
 	target_fabric_setup_wwn_cit(tf);
 	target_fabric_setup_wwn_fabric_stats_cit(tf);
+	target_fabric_setup_wwn_param_cit(tf);
 	target_fabric_setup_tpg_cit(tf);
 	target_fabric_setup_tpg_base_cit(tf);
 	target_fabric_setup_tpg_port_cit(tf);
diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
index 56f841fd7f04..a343bcfa2180 100644
--- a/drivers/target/target_core_internal.h
+++ b/drivers/target/target_core_internal.h
@@ -34,6 +34,7 @@ struct target_fabric_configfs {
 	struct config_item_type tf_discovery_cit;
 	struct config_item_type	tf_wwn_cit;
 	struct config_item_type tf_wwn_fabric_stats_cit;
+	struct config_item_type tf_wwn_param_cit;
 	struct config_item_type tf_tpg_cit;
 	struct config_item_type tf_tpg_base_cit;
 	struct config_item_type tf_tpg_lun_cit;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 18cb00a1ee2f..6b8ccc4bbf2b 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -857,7 +857,8 @@ static bool target_cmd_interrupted(struct se_cmd *cmd)
 /* May be called from interrupt context so must not sleep. */
 void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
 {
-	int success;
+	struct se_wwn *wwn = cmd->se_sess->se_tpg->se_tpg_wwn;
+	int success, cpu;
 	unsigned long flags;
 
 	if (target_cmd_interrupted(cmd))
@@ -884,7 +885,13 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
 
 	INIT_WORK(&cmd->work, success ? target_complete_ok_work :
 		  target_complete_failure_work);
-	queue_work_on(cmd->cpuid, target_completion_wq, &cmd->work);
+
+	if (wwn->cmd_compl_affinity == SE_COMPL_AFFINITY_CPUID)
+		cpu = cmd->cpuid;
+	else
+		cpu = wwn->cmd_compl_affinity;
+
+	queue_work_on(cpu, target_completion_wq, &cmd->work);
 }
 EXPORT_SYMBOL(target_complete_cmd);
 
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index b8e0a3250bd0..2a73b6209a15 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -943,11 +943,20 @@ static inline struct se_portal_group *param_to_tpg(struct config_item *item)
 			tpg_param_group);
 }
 
+enum {
+	/* Use se_cmd's cpuid for completion */
+	SE_COMPL_AFFINITY_CPUID		= -1,
+	/* Complete on current CPU */
+	SE_COMPL_AFFINITY_CURR_CPU	= -2,
+};
+
 struct se_wwn {
 	struct target_fabric_configfs *wwn_tf;
 	void			*priv;
 	struct config_group	wwn_group;
 	struct config_group	fabric_stat_group;
+	struct config_group	param_group;
+	int			cmd_compl_affinity;
 };
 
 static inline void atomic_inc_mb(atomic_t *v)
-- 
2.25.1



Thread overview: 31+ messages
2021-02-27 16:59 [PATCH 00/25 V5] target: fix cmd plugging and submission Mike Christie
2021-02-27 16:59 ` [PATCH 01/25] target: move t_task_cdb initialization Mike Christie
2021-03-04  4:15   ` Martin K. Petersen
2021-02-27 16:59 ` [PATCH 02/25] target: drop kref_get_unless_zero in target_get_sess_cmd Mike Christie
2021-02-27 16:59 ` [PATCH 03/25] target: rename transport_init_se_cmd Mike Christie
2021-02-27 16:59 ` [PATCH 04/25] target: break up target_submit_cmd_map_sgls Mike Christie
2021-02-27 16:59 ` [PATCH 05/25] srpt: Convert to new submission API Mike Christie
2021-02-27 16:59 ` [PATCH 06/25] ibmvscsi_tgt: " Mike Christie
2021-02-27 16:59 ` [PATCH 07/25] qla2xxx: " Mike Christie
2021-02-27 16:59 ` [PATCH 08/25] tcm_loop: " Mike Christie
2021-02-27 16:59 ` [PATCH 09/25] sbp_target: " Mike Christie
2021-02-27 16:59 ` [PATCH 10/25] usb gadget: " Mike Christie
2021-02-27 16:59 ` [PATCH 11/25] vhost-scsi: " Mike Christie
2021-02-27 16:59 ` [PATCH 12/25] xen-scsiback: " Mike Christie
2021-02-27 16:59 ` [PATCH 13/25] tcm_fc: " Mike Christie
2021-02-27 16:59 ` [PATCH 14/25] target: remove target_submit_cmd_map_sgls Mike Christie
2021-02-27 16:59 ` [PATCH 15/25] target: add gfp_t arg to target_cmd_init_cdb Mike Christie
2021-02-27 16:59 ` [PATCH 16/25] target: add workqueue based cmd submission Mike Christie
2021-02-27 16:59 ` [PATCH 17/25] vhost scsi: use lio wq cmd submission helper Mike Christie
2021-02-27 16:59 ` [PATCH 18/25] tcm loop: use blk cmd allocator for se_cmds Mike Christie
2021-02-27 17:00 ` [PATCH 19/25] tcm loop: use lio wq cmd submission helper Mike Christie
2021-02-27 17:00 ` [PATCH 20/25] target: cleanup cmd flag bits Mike Christie
2021-02-27 17:00 ` [PATCH 21/25] target: fix backend plugging Mike Christie
2021-02-27 17:00 ` [PATCH 22/25] target iblock: add backend plug/unplug callouts Mike Christie
2021-02-27 17:00 ` [PATCH 23/25] target_core_user: " Mike Christie
2021-02-27 17:00 ` [PATCH 24/25] target: flush submission work during TMR processing Mike Christie
2021-02-27 17:00 ` Mike Christie [this message]
2021-03-01 10:01 ` [PATCH 00/25 V5] target: fix cmd plugging and submission Stefan Hajnoczi
  -- strict thread matches above, loose matches on Subject: below --
2021-02-17 20:27 Mike Christie
2021-02-17 20:28 ` [PATCH 25/25] target: make completion affinity configurable Mike Christie
2021-02-12  7:26 [PATCH 00/25 V4] target: fix cmd plugging and submission Mike Christie
2021-02-12  7:26 ` [PATCH 25/25] target: make completion affinity configurable Mike Christie
2021-02-12 19:35   ` Himanshu Madhani
