* [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
@ 2020-07-23  1:01 Stephen Boyd
  2020-07-23 17:42 ` Lina Iyer
  2020-07-24 17:42 ` Doug Anderson
  0 siblings, 2 replies; 14+ messages in thread
From: Stephen Boyd @ 2020-07-23  1:01 UTC (permalink / raw)
  To: Andy Gross, Bjorn Andersson
  Cc: linux-kernel, linux-arm-msm, Douglas Anderson, Maulik Shah, Lina Iyer

The busy loop in rpmh_rsc_send_data() is written with the assumption
that the udelay will be preempted by the tcs_tx_done() irq handler when
the TCS slots are all full. This doesn't hold true when the calling
thread is an irqthread and the tcs_tx_done() irq is also an irqthread.
That's because kernel irqthreads are SCHED_FIFO and thus need to
voluntarily give up priority by calling into the scheduler so that other
threads can run.

I see RCU stalls when I boot with irqthreads on the kernel commandline
because the modem remoteproc driver is trying to send an rpmh async
message from an irqthread that needs to give up the CPU for the rpmh
irqthread to run and clear out tcs slots.

 rcu: INFO: rcu_preempt self-detected stall on CPU
 rcu:     0-....: (1 GPs behind) idle=402/1/0x4000000000000002 softirq=2108/2109 fqs=4920
  (t=21016 jiffies g=2933 q=590)
 Task dump for CPU 0:
 irq/11-smp2p    R  running task        0   148      2 0x00000028
 Call trace:
  dump_backtrace+0x0/0x154
  show_stack+0x20/0x2c
  sched_show_task+0xfc/0x108
  dump_cpu_task+0x44/0x50
  rcu_dump_cpu_stacks+0xa4/0xf8
  rcu_sched_clock_irq+0x7dc/0xaa8
  update_process_times+0x30/0x54
  tick_sched_handle+0x50/0x64
  tick_sched_timer+0x4c/0x8c
  __hrtimer_run_queues+0x21c/0x36c
  hrtimer_interrupt+0xf0/0x22c
  arch_timer_handler_phys+0x40/0x50
  handle_percpu_devid_irq+0x114/0x25c
  __handle_domain_irq+0x84/0xc4
  gic_handle_irq+0xd0/0x178
  el1_irq+0xbc/0x180
  save_return_addr+0x18/0x28
  return_address+0x54/0x88
  preempt_count_sub+0x40/0x88
  _raw_spin_unlock_irqrestore+0x4c/0x6c
  ___ratelimit+0xd0/0x128
  rpmh_rsc_send_data+0x24c/0x378
  __rpmh_write+0x1b0/0x208
  rpmh_write_async+0x90/0xbc
  rpmhpd_send_corner+0x60/0x8c
  rpmhpd_aggregate_corner+0x8c/0x124
  rpmhpd_set_performance_state+0x8c/0xbc
  _genpd_set_performance_state+0xdc/0x1b8
  dev_pm_genpd_set_performance_state+0xb8/0xf8
  q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss]
  qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss]
  q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5]
  handle_nested_irq+0xd0/0x138
  qcom_smp2p_intr+0x188/0x200
  irq_thread_fn+0x2c/0x70
  irq_thread+0xfc/0x14c
  kthread+0x11c/0x12c
  ret_from_fork+0x10/0x18

This busy loop naturally lends itself to using a wait queue so that each
thread that tries to send a message will sleep waiting on the waitqueue
and only be woken up when a free slot is available. This should make
things more predictable too because the scheduler will be able to sleep
tasks that are waiting on a free tcs instead of the busy loop we
currently have today.
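
The core of the change, in simplified form (the full diff is below), is to
have senders sleep on a waitqueue while holding drv->lock and have the
tx-done irq handler wake them once a slot is recycled:

	/* sender path (rpmh_rsc_send_data), simplified */
	spin_lock_irqsave(&drv->lock, flags);
	wait_event_lock_irq(drv->tcs_wait,
			    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
			    drv->lock);
	/* ... program the claimed TCS and trigger it ... */

	/* tcs_tx_done() irq handler, simplified: after marking the TCS free */
	wake_up(&drv->tcs_wait);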

Cc: Douglas Anderson <dianders@chromium.org>
Cc: Maulik Shah <mkshah@codeaurora.org>
Cc: Lina Iyer <ilina@codeaurora.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
---
 drivers/soc/qcom/rpmh-internal.h |   2 +
 drivers/soc/qcom/rpmh-rsc.c      | 101 ++++++++++++-------------------
 2 files changed, 41 insertions(+), 62 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index ef60e790a750..9a325bac58fe 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -8,6 +8,7 @@
 #define __RPM_INTERNAL_H__
 
 #include <linux/bitmap.h>
+#include <linux/wait.h>
 #include <soc/qcom/tcs.h>
 
 #define TCS_TYPE_NR			4
@@ -118,6 +119,7 @@ struct rsc_drv {
 	struct tcs_group tcs[TCS_TYPE_NR];
 	DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
 	spinlock_t lock;
+	wait_queue_head_t tcs_wait;
 	struct rpmh_ctrlr client;
 };
 
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 076fd27f3081..6c758b052c95 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -19,6 +19,7 @@
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/wait.h>
 
 #include <soc/qcom/cmd-db.h>
 #include <soc/qcom/tcs.h>
@@ -444,6 +445,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
 		 */
 		if (!drv->tcs[ACTIVE_TCS].num_tcs)
 			enable_tcs_irq(drv, i, false);
+		wake_up(&drv->tcs_wait);
 		spin_unlock(&drv->lock);
 		if (req)
 			rpmh_tx_done(req, err);
@@ -562,44 +564,59 @@ static int find_free_tcs(struct tcs_group *tcs)
 	return -EBUSY;
 }
 
+static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
+			     const struct tcs_request *msg)
+{
+	int ret;
+
+	/*
+	 * The h/w does not like if we send a request to the same address,
+	 * when one is already in-flight or being processed.
+	 */
+	ret = check_for_req_inflight(drv, tcs, msg);
+	if (ret)
+		return ret;
+
+	return find_free_tcs(tcs);
+}
+
 /**
- * tcs_write() - Store messages into a TCS right now, or return -EBUSY.
+ * rpmh_rsc_send_data() - Write / trigger active-only message.
  * @drv: The controller.
  * @msg: The data to be sent.
  *
- * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it.
- *
- * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for
- * the same address is already transferring returns -EBUSY which means the
- * client should retry shortly.
+ * NOTES:
+ * - This is only used for "ACTIVE_ONLY" since the limitations of this
+ *   function don't make sense for sleep/wake cases.
+ * - To do the transfer, we will grab a whole TCS for ourselves--we don't
+ *   try to share. If there are none available we'll wait indefinitely
+ *   for a free one.
+ * - This function will not wait for the commands to be finished, only for
+ *   data to be programmed into the RPMh. See rpmh_tx_done() which will
+ *   be called when the transfer is fully complete.
+ * - This function must be called with interrupts enabled. If the hardware
+ *   is busy doing someone else's transfer we need that transfer to fully
+ *   finish so that we can have the hardware, and to fully finish it needs
+ *   the interrupt handler to run. If the interrupts is set to run on the
+ *   active CPU this can never happen if interrupts are disabled.
  *
- * Return: 0 on success, -EBUSY if client should retry, or an error.
- *         Client should have interrupts enabled for a bit before retrying.
+ * Return: 0 on success, -EINVAL on error.
  */
-static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
+int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
 {
 	struct tcs_group *tcs;
 	int tcs_id;
 	unsigned long flags;
-	int ret;
 
 	tcs = get_tcs_for_msg(drv, msg);
 	if (IS_ERR(tcs))
 		return PTR_ERR(tcs);
 
 	spin_lock_irqsave(&drv->lock, flags);
-	/*
-	 * The h/w does not like if we send a request to the same address,
-	 * when one is already in-flight or being processed.
-	 */
-	ret = check_for_req_inflight(drv, tcs, msg);
-	if (ret)
-		goto unlock;
 
-	ret = find_free_tcs(tcs);
-	if (ret < 0)
-		goto unlock;
-	tcs_id = ret;
+	wait_event_lock_irq(drv->tcs_wait,
+			    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
+			    drv->lock);
 
 	tcs->req[tcs_id - tcs->offset] = msg;
 	set_bit(tcs_id, drv->tcs_in_use);
@@ -627,47 +644,6 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
 	__tcs_set_trigger(drv, tcs_id, true);
 
 	return 0;
-unlock:
-	spin_unlock_irqrestore(&drv->lock, flags);
-	return ret;
-}
-
-/**
- * rpmh_rsc_send_data() - Write / trigger active-only message.
- * @drv: The controller.
- * @msg: The data to be sent.
- *
- * NOTES:
- * - This is only used for "ACTIVE_ONLY" since the limitations of this
- *   function don't make sense for sleep/wake cases.
- * - To do the transfer, we will grab a whole TCS for ourselves--we don't
- *   try to share. If there are none available we'll wait indefinitely
- *   for a free one.
- * - This function will not wait for the commands to be finished, only for
- *   data to be programmed into the RPMh. See rpmh_tx_done() which will
- *   be called when the transfer is fully complete.
- * - This function must be called with interrupts enabled. If the hardware
- *   is busy doing someone else's transfer we need that transfer to fully
- *   finish so that we can have the hardware, and to fully finish it needs
- *   the interrupt handler to run. If the interrupts is set to run on the
- *   active CPU this can never happen if interrupts are disabled.
- *
- * Return: 0 on success, -EINVAL on error.
- */
-int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
-{
-	int ret;
-
-	do {
-		ret = tcs_write(drv, msg);
-		if (ret == -EBUSY) {
-			pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
-					    msg->cmds[0].addr);
-			udelay(10);
-		}
-	} while (ret == -EBUSY);
-
-	return ret;
 }
 
 /**
@@ -975,6 +951,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
 		return ret;
 
 	spin_lock_init(&drv->lock);
+	init_waitqueue_head(&drv->tcs_wait);
 	bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
 
 	irq = platform_get_irq(pdev, drv->id);

base-commit: b3a9e3b9622ae10064826dccb4f7a52bd88c7407
-- 
Sent by a computer, using git, on the internet



* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-23  1:01 [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free Stephen Boyd
@ 2020-07-23 17:42 ` Lina Iyer
  2020-07-23 19:36   ` Stephen Boyd
  2020-07-24 17:42 ` Doug Anderson
  1 sibling, 1 reply; 14+ messages in thread
From: Lina Iyer @ 2020-07-23 17:42 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Andy Gross, Bjorn Andersson, linux-kernel, linux-arm-msm,
	Douglas Anderson, Maulik Shah

On Wed, Jul 22 2020 at 19:01 -0600, Stephen Boyd wrote:
>The busy loop in rpmh_rsc_send_data() is written with the assumption
>that the udelay will be preempted by the tcs_tx_done() irq handler when
>the TCS slots are all full. This doesn't hold true when the calling
>thread is an irqthread and the tcs_tx_done() irq is also an irqthread.
>That's because kernel irqthreads are SCHED_FIFO and thus need to
>voluntarily give up priority by calling into the scheduler so that other
>threads can run.
>
>I see RCU stalls when I boot with irqthreads on the kernel commandline
>because the modem remoteproc driver is trying to send an rpmh async
>message from an irqthread that needs to give up the CPU for the rpmh
>irqthread to run and clear out tcs slots.
>
Would this not be better if we used a threaded IRQ handler or offloaded
tx_done to another waitqueue instead of handling it in the IRQ handler?

-- Lina

> rcu: INFO: rcu_preempt self-detected stall on CPU
> rcu:     0-....: (1 GPs behind) idle=402/1/0x4000000000000002 softirq=2108/2109 fqs=4920
>  (t=21016 jiffies g=2933 q=590)
> Task dump for CPU 0:
> irq/11-smp2p    R  running task        0   148      2 0x00000028
> Call trace:
>  dump_backtrace+0x0/0x154
>  show_stack+0x20/0x2c
>  sched_show_task+0xfc/0x108
>  dump_cpu_task+0x44/0x50
>  rcu_dump_cpu_stacks+0xa4/0xf8
>  rcu_sched_clock_irq+0x7dc/0xaa8
>  update_process_times+0x30/0x54
>  tick_sched_handle+0x50/0x64
>  tick_sched_timer+0x4c/0x8c
>  __hrtimer_run_queues+0x21c/0x36c
>  hrtimer_interrupt+0xf0/0x22c
>  arch_timer_handler_phys+0x40/0x50
>  handle_percpu_devid_irq+0x114/0x25c
>  __handle_domain_irq+0x84/0xc4
>  gic_handle_irq+0xd0/0x178
>  el1_irq+0xbc/0x180
>  save_return_addr+0x18/0x28
>  return_address+0x54/0x88
>  preempt_count_sub+0x40/0x88
>  _raw_spin_unlock_irqrestore+0x4c/0x6c
>  ___ratelimit+0xd0/0x128
>  rpmh_rsc_send_data+0x24c/0x378
>  __rpmh_write+0x1b0/0x208
>  rpmh_write_async+0x90/0xbc
>  rpmhpd_send_corner+0x60/0x8c
>  rpmhpd_aggregate_corner+0x8c/0x124
>  rpmhpd_set_performance_state+0x8c/0xbc
>  _genpd_set_performance_state+0xdc/0x1b8
>  dev_pm_genpd_set_performance_state+0xb8/0xf8
>  q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss]
>  qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss]
>  q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5]
>  handle_nested_irq+0xd0/0x138
>  qcom_smp2p_intr+0x188/0x200
>  irq_thread_fn+0x2c/0x70
>  irq_thread+0xfc/0x14c
>  kthread+0x11c/0x12c
>  ret_from_fork+0x10/0x18
>
>This busy loop naturally lends itself to using a wait queue so that each
>thread that tries to send a message will sleep waiting on the waitqueue
>and only be woken up when a free slot is available. This should make
>things more predictable too because the scheduler will be able to sleep
>tasks that are waiting on a free tcs instead of the busy loop we
>currently have today.
>
>Cc: Douglas Anderson <dianders@chromium.org>
>Cc: Maulik Shah <mkshah@codeaurora.org>
>Cc: Lina Iyer <ilina@codeaurora.org>
>Signed-off-by: Stephen Boyd <swboyd@chromium.org>
>---
> drivers/soc/qcom/rpmh-internal.h |   2 +
> drivers/soc/qcom/rpmh-rsc.c      | 101 ++++++++++++-------------------
> 2 files changed, 41 insertions(+), 62 deletions(-)
>
>diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>index ef60e790a750..9a325bac58fe 100644
>--- a/drivers/soc/qcom/rpmh-internal.h
>+++ b/drivers/soc/qcom/rpmh-internal.h
>@@ -8,6 +8,7 @@
> #define __RPM_INTERNAL_H__
>
> #include <linux/bitmap.h>
>+#include <linux/wait.h>
> #include <soc/qcom/tcs.h>
>
> #define TCS_TYPE_NR			4
>@@ -118,6 +119,7 @@ struct rsc_drv {
> 	struct tcs_group tcs[TCS_TYPE_NR];
> 	DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
> 	spinlock_t lock;
>+	wait_queue_head_t tcs_wait;
> 	struct rpmh_ctrlr client;
> };
>
>diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>index 076fd27f3081..6c758b052c95 100644
>--- a/drivers/soc/qcom/rpmh-rsc.c
>+++ b/drivers/soc/qcom/rpmh-rsc.c
>@@ -19,6 +19,7 @@
> #include <linux/platform_device.h>
> #include <linux/slab.h>
> #include <linux/spinlock.h>
>+#include <linux/wait.h>
>
> #include <soc/qcom/cmd-db.h>
> #include <soc/qcom/tcs.h>
>@@ -444,6 +445,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
> 		 */
> 		if (!drv->tcs[ACTIVE_TCS].num_tcs)
> 			enable_tcs_irq(drv, i, false);
>+		wake_up(&drv->tcs_wait);
> 		spin_unlock(&drv->lock);
> 		if (req)
> 			rpmh_tx_done(req, err);
>@@ -562,44 +564,59 @@ static int find_free_tcs(struct tcs_group *tcs)
> 	return -EBUSY;
> }
>
>+static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
>+			     const struct tcs_request *msg)
>+{
>+	int ret;
>+
>+	/*
>+	 * The h/w does not like if we send a request to the same address,
>+	 * when one is already in-flight or being processed.
>+	 */
>+	ret = check_for_req_inflight(drv, tcs, msg);
>+	if (ret)
>+		return ret;
>+
>+	return find_free_tcs(tcs);
>+}
>+
> /**
>- * tcs_write() - Store messages into a TCS right now, or return -EBUSY.
>+ * rpmh_rsc_send_data() - Write / trigger active-only message.
>  * @drv: The controller.
>  * @msg: The data to be sent.
>  *
>- * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it.
>- *
>- * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for
>- * the same address is already transferring returns -EBUSY which means the
>- * client should retry shortly.
>+ * NOTES:
>+ * - This is only used for "ACTIVE_ONLY" since the limitations of this
>+ *   function don't make sense for sleep/wake cases.
>+ * - To do the transfer, we will grab a whole TCS for ourselves--we don't
>+ *   try to share. If there are none available we'll wait indefinitely
>+ *   for a free one.
>+ * - This function will not wait for the commands to be finished, only for
>+ *   data to be programmed into the RPMh. See rpmh_tx_done() which will
>+ *   be called when the transfer is fully complete.
>+ * - This function must be called with interrupts enabled. If the hardware
>+ *   is busy doing someone else's transfer we need that transfer to fully
>+ *   finish so that we can have the hardware, and to fully finish it needs
>+ *   the interrupt handler to run. If the interrupts is set to run on the
>+ *   active CPU this can never happen if interrupts are disabled.
>  *
>- * Return: 0 on success, -EBUSY if client should retry, or an error.
>- *         Client should have interrupts enabled for a bit before retrying.
>+ * Return: 0 on success, -EINVAL on error.
>  */
>-static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
>+int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
> {
> 	struct tcs_group *tcs;
> 	int tcs_id;
> 	unsigned long flags;
>-	int ret;
>
> 	tcs = get_tcs_for_msg(drv, msg);
> 	if (IS_ERR(tcs))
> 		return PTR_ERR(tcs);
>
> 	spin_lock_irqsave(&drv->lock, flags);
>-	/*
>-	 * The h/w does not like if we send a request to the same address,
>-	 * when one is already in-flight or being processed.
>-	 */
>-	ret = check_for_req_inflight(drv, tcs, msg);
>-	if (ret)
>-		goto unlock;
>
>-	ret = find_free_tcs(tcs);
>-	if (ret < 0)
>-		goto unlock;
>-	tcs_id = ret;
>+	wait_event_lock_irq(drv->tcs_wait,
>+			    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
>+			    drv->lock);
>
> 	tcs->req[tcs_id - tcs->offset] = msg;
> 	set_bit(tcs_id, drv->tcs_in_use);
>@@ -627,47 +644,6 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
> 	__tcs_set_trigger(drv, tcs_id, true);
>
> 	return 0;
>-unlock:
>-	spin_unlock_irqrestore(&drv->lock, flags);
>-	return ret;
>-}
>-
>-/**
>- * rpmh_rsc_send_data() - Write / trigger active-only message.
>- * @drv: The controller.
>- * @msg: The data to be sent.
>- *
>- * NOTES:
>- * - This is only used for "ACTIVE_ONLY" since the limitations of this
>- *   function don't make sense for sleep/wake cases.
>- * - To do the transfer, we will grab a whole TCS for ourselves--we don't
>- *   try to share. If there are none available we'll wait indefinitely
>- *   for a free one.
>- * - This function will not wait for the commands to be finished, only for
>- *   data to be programmed into the RPMh. See rpmh_tx_done() which will
>- *   be called when the transfer is fully complete.
>- * - This function must be called with interrupts enabled. If the hardware
>- *   is busy doing someone else's transfer we need that transfer to fully
>- *   finish so that we can have the hardware, and to fully finish it needs
>- *   the interrupt handler to run. If the interrupts is set to run on the
>- *   active CPU this can never happen if interrupts are disabled.
>- *
>- * Return: 0 on success, -EINVAL on error.
>- */
>-int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>-{
>-	int ret;
>-
>-	do {
>-		ret = tcs_write(drv, msg);
>-		if (ret == -EBUSY) {
>-			pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
>-					    msg->cmds[0].addr);
>-			udelay(10);
>-		}
>-	} while (ret == -EBUSY);
>-
>-	return ret;
> }
>
> /**
>@@ -975,6 +951,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
> 		return ret;
>
> 	spin_lock_init(&drv->lock);
>+	init_waitqueue_head(&drv->tcs_wait);
> 	bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
>
> 	irq = platform_get_irq(pdev, drv->id);
>
>base-commit: b3a9e3b9622ae10064826dccb4f7a52bd88c7407
>-- 
>Sent by a computer, using git, on the internet
>


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-23 17:42 ` Lina Iyer
@ 2020-07-23 19:36   ` Stephen Boyd
  0 siblings, 0 replies; 14+ messages in thread
From: Stephen Boyd @ 2020-07-23 19:36 UTC (permalink / raw)
  To: Lina Iyer
  Cc: Andy Gross, Bjorn Andersson, linux-kernel, linux-arm-msm,
	Douglas Anderson, Maulik Shah

Quoting Lina Iyer (2020-07-23 10:42:54)
> On Wed, Jul 22 2020 at 19:01 -0600, Stephen Boyd wrote:
> >The busy loop in rpmh_rsc_send_data() is written with the assumption
> >that the udelay will be preempted by the tcs_tx_done() irq handler when
> >the TCS slots are all full. This doesn't hold true when the calling
> >thread is an irqthread and the tcs_tx_done() irq is also an irqthread.
> >That's because kernel irqthreads are SCHED_FIFO and thus need to
> >voluntarily give up priority by calling into the scheduler so that other
> >threads can run.
> >
> >I see RCU stalls when I boot with irqthreads on the kernel commandline
> >because the modem remoteproc driver is trying to send an rpmh async
> >message from an irqthread that needs to give up the CPU for the rpmh
> >irqthread to run and clear out tcs slots.
> >
> Would this not be better if we used a threaded IRQ handler or offloaded
> tx_done to another waitqueue instead of handling it in the IRQ handler?
> 

Are you asking if jitter is reduced when the rpmh irq is made into a
threaded irq? I haven't done any benchmarking to see if it improves
things.


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-23  1:01 [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free Stephen Boyd
  2020-07-23 17:42 ` Lina Iyer
@ 2020-07-24 17:42 ` Doug Anderson
  2020-07-24 19:44   ` Stephen Boyd
  1 sibling, 1 reply; 14+ messages in thread
From: Doug Anderson @ 2020-07-24 17:42 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Hi,

On Wed, Jul 22, 2020 at 6:01 PM Stephen Boyd <swboyd@chromium.org> wrote:
>
> The busy loop in rpmh_rsc_send_data() is written with the assumption
> that the udelay will be preempted by the tcs_tx_done() irq handler when
> the TCS slots are all full. This doesn't hold true when the calling
> thread is an irqthread and the tcs_tx_done() irq is also an irqthread.
> That's because kernel irqthreads are SCHED_FIFO and thus need to
> voluntarily give up priority by calling into the scheduler so that other
> threads can run.
>
> I see RCU stalls when I boot with irqthreads on the kernel commandline
> because the modem remoteproc driver is trying to send an rpmh async
> message from an irqthread that needs to give up the CPU for the rpmh
> irqthread to run and clear out tcs slots.
>
>  rcu: INFO: rcu_preempt self-detected stall on CPU
>  rcu:     0-....: (1 GPs behind) idle=402/1/0x4000000000000002 softirq=2108/2109 fqs=4920
>   (t=21016 jiffies g=2933 q=590)
>  Task dump for CPU 0:
>  irq/11-smp2p    R  running task        0   148      2 0x00000028
>  Call trace:
>   dump_backtrace+0x0/0x154
>   show_stack+0x20/0x2c
>   sched_show_task+0xfc/0x108
>   dump_cpu_task+0x44/0x50
>   rcu_dump_cpu_stacks+0xa4/0xf8
>   rcu_sched_clock_irq+0x7dc/0xaa8
>   update_process_times+0x30/0x54
>   tick_sched_handle+0x50/0x64
>   tick_sched_timer+0x4c/0x8c
>   __hrtimer_run_queues+0x21c/0x36c
>   hrtimer_interrupt+0xf0/0x22c
>   arch_timer_handler_phys+0x40/0x50
>   handle_percpu_devid_irq+0x114/0x25c
>   __handle_domain_irq+0x84/0xc4
>   gic_handle_irq+0xd0/0x178
>   el1_irq+0xbc/0x180
>   save_return_addr+0x18/0x28
>   return_address+0x54/0x88
>   preempt_count_sub+0x40/0x88
>   _raw_spin_unlock_irqrestore+0x4c/0x6c
>   ___ratelimit+0xd0/0x128
>   rpmh_rsc_send_data+0x24c/0x378
>   __rpmh_write+0x1b0/0x208
>   rpmh_write_async+0x90/0xbc
>   rpmhpd_send_corner+0x60/0x8c
>   rpmhpd_aggregate_corner+0x8c/0x124
>   rpmhpd_set_performance_state+0x8c/0xbc
>   _genpd_set_performance_state+0xdc/0x1b8
>   dev_pm_genpd_set_performance_state+0xb8/0xf8
>   q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss]
>   qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss]
>   q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5]
>   handle_nested_irq+0xd0/0x138
>   qcom_smp2p_intr+0x188/0x200
>   irq_thread_fn+0x2c/0x70
>   irq_thread+0xfc/0x14c
>   kthread+0x11c/0x12c
>   ret_from_fork+0x10/0x18
>
> This busy loop naturally lends itself to using a wait queue so that each
> thread that tries to send a message will sleep waiting on the waitqueue
> and only be woken up when a free slot is available. This should make
> things more predictable too because the scheduler will be able to sleep
> tasks that are waiting on a free tcs instead of the busy loop we
> currently have today.
>
> Cc: Douglas Anderson <dianders@chromium.org>
> Cc: Maulik Shah <mkshah@codeaurora.org>
> Cc: Lina Iyer <ilina@codeaurora.org>
> Signed-off-by: Stephen Boyd <swboyd@chromium.org>
> ---
>  drivers/soc/qcom/rpmh-internal.h |   2 +
>  drivers/soc/qcom/rpmh-rsc.c      | 101 ++++++++++++-------------------
>  2 files changed, 41 insertions(+), 62 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> index ef60e790a750..9a325bac58fe 100644
> --- a/drivers/soc/qcom/rpmh-internal.h
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -8,6 +8,7 @@
>  #define __RPM_INTERNAL_H__
>
>  #include <linux/bitmap.h>
> +#include <linux/wait.h>
>  #include <soc/qcom/tcs.h>
>
>  #define TCS_TYPE_NR                    4
> @@ -118,6 +119,7 @@ struct rsc_drv {
>         struct tcs_group tcs[TCS_TYPE_NR];
>         DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
>         spinlock_t lock;
> +       wait_queue_head_t tcs_wait;

nit: this structure has a kernel-doc comment above it describing the
elements.  Could you add yours?


>         struct rpmh_ctrlr client;
>  };
>
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index 076fd27f3081..6c758b052c95 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -19,6 +19,7 @@
>  #include <linux/platform_device.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
> +#include <linux/wait.h>
>
>  #include <soc/qcom/cmd-db.h>
>  #include <soc/qcom/tcs.h>
> @@ -444,6 +445,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
>                  */
>                 if (!drv->tcs[ACTIVE_TCS].num_tcs)
>                         enable_tcs_irq(drv, i, false);
> +               wake_up(&drv->tcs_wait);
>                 spin_unlock(&drv->lock);

nit: I think it's slightly better to do the wake_up() after the
spin_unlock(), no?  The first thing the other task will do is to try
to grab the spinlock and we might as well give it a chance of
succeeding without looping.  I don't see any reason why we'd need to
be holding the lock while calling wake_up().
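
I.e. something like this (untested, just to illustrate the ordering):

		if (!drv->tcs[ACTIVE_TCS].num_tcs)
			enable_tcs_irq(drv, i, false);
		spin_unlock(&drv->lock);
		wake_up(&drv->tcs_wait);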


>                 if (req)
>                         rpmh_tx_done(req, err);
> @@ -562,44 +564,59 @@ static int find_free_tcs(struct tcs_group *tcs)
>         return -EBUSY;
>  }
>
> +static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
> +                            const struct tcs_request *msg)

nit: I know this is a short function and kernel convention doesn't
strictly require comments in front of all functions.  However, every
other function in this file has a comment and I had a really hard time
dealing with the rpmh-rsc code before the comments.  Could you add one
for your function, even if it's short?  One thing that would be nice
to note is that the only error it returns is -EBUSY.  See below.
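
Maybe something like this (just a sketch, exact wording up to you):

/**
 * claim_tcs_for_req() - Claim a free TCS for the given request, checking
 *			 that no request to the same address is in flight.
 * @drv: The controller.
 * @tcs: The tcs_group used to send the request.
 * @msg: The data to be sent.
 *
 * Must be called with drv->lock held.
 *
 * Return: The id of the claimed TCS, or -EBUSY (never any other error code)
 *	   if a request to the same address is in flight or no TCS is free.
 */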


> +{
> +       int ret;
> +
> +       /*
> +        * The h/w does not like if we send a request to the same address,
> +        * when one is already in-flight or being processed.
> +        */
> +       ret = check_for_req_inflight(drv, tcs, msg);
> +       if (ret)
> +               return ret;
> +
> +       return find_free_tcs(tcs);
> +}
> +
>  /**
> - * tcs_write() - Store messages into a TCS right now, or return -EBUSY.
> + * rpmh_rsc_send_data() - Write / trigger active-only message.
>   * @drv: The controller.
>   * @msg: The data to be sent.
>   *
> - * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it.
> - *
> - * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for
> - * the same address is already transferring returns -EBUSY which means the
> - * client should retry shortly.
> + * NOTES:
> + * - This is only used for "ACTIVE_ONLY" since the limitations of this
> + *   function don't make sense for sleep/wake cases.
> + * - To do the transfer, we will grab a whole TCS for ourselves--we don't
> + *   try to share. If there are none available we'll wait indefinitely
> + *   for a free one.
> + * - This function will not wait for the commands to be finished, only for
> + *   data to be programmed into the RPMh. See rpmh_tx_done() which will
> + *   be called when the transfer is fully complete.
> + * - This function must be called with interrupts enabled. If the hardware
> + *   is busy doing someone else's transfer we need that transfer to fully
> + *   finish so that we can have the hardware, and to fully finish it needs
> + *   the interrupt handler to run. If the interrupts is set to run on the
> + *   active CPU this can never happen if interrupts are disabled.
>   *
> - * Return: 0 on success, -EBUSY if client should retry, or an error.
> - *         Client should have interrupts enabled for a bit before retrying.
> + * Return: 0 on success, -EINVAL on error.
>   */
> -static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>  {
>         struct tcs_group *tcs;
>         int tcs_id;
>         unsigned long flags;
> -       int ret;
>
>         tcs = get_tcs_for_msg(drv, msg);
>         if (IS_ERR(tcs))
>                 return PTR_ERR(tcs);
>
>         spin_lock_irqsave(&drv->lock, flags);
> -       /*
> -        * The h/w does not like if we send a request to the same address,
> -        * when one is already in-flight or being processed.
> -        */
> -       ret = check_for_req_inflight(drv, tcs, msg);
> -       if (ret)
> -               goto unlock;
>
> -       ret = find_free_tcs(tcs);
> -       if (ret < 0)
> -               goto unlock;
> -       tcs_id = ret;
> +       wait_event_lock_irq(drv->tcs_wait,
> +                           (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,

Even though claim_tcs_for_req() only returns 0 or -EBUSY today (IOW it
never returns error codes other than -EBUSY), should we handle it?  If
we don't, claim_tcs_for_req() should be very clear that it shouldn't
return any errors other than -EBUSY.


Other than small nits this looks great to me, thanks!

-Doug


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 17:42 ` Doug Anderson
@ 2020-07-24 19:44   ` Stephen Boyd
  2020-07-24 19:49     ` Doug Anderson
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Boyd @ 2020-07-24 19:44 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Quoting Doug Anderson (2020-07-24 10:42:55)
> Hi,
> 
> On Wed, Jul 22, 2020 at 6:01 PM Stephen Boyd <swboyd@chromium.org> wrote:
> > diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> > index ef60e790a750..9a325bac58fe 100644
> > --- a/drivers/soc/qcom/rpmh-internal.h
> > +++ b/drivers/soc/qcom/rpmh-internal.h
> > @@ -118,6 +119,7 @@ struct rsc_drv {
> >         struct tcs_group tcs[TCS_TYPE_NR];
> >         DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
> >         spinlock_t lock;
> > +       wait_queue_head_t tcs_wait;
> 
> nit: this structure has a kernel-doc comment above it describing the
> elements.  Could you add yours?

Sure.

> 
> 
> >         struct rpmh_ctrlr client;
> >  };
> >
> > diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> > index 076fd27f3081..6c758b052c95 100644
> > --- a/drivers/soc/qcom/rpmh-rsc.c
> > +++ b/drivers/soc/qcom/rpmh-rsc.c
> > @@ -19,6 +19,7 @@
> >  #include <linux/platform_device.h>
> >  #include <linux/slab.h>
> >  #include <linux/spinlock.h>
> > +#include <linux/wait.h>
> >
> >  #include <soc/qcom/cmd-db.h>
> >  #include <soc/qcom/tcs.h>
> > @@ -444,6 +445,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
> >                  */
> >                 if (!drv->tcs[ACTIVE_TCS].num_tcs)
> >                         enable_tcs_irq(drv, i, false);
> > +               wake_up(&drv->tcs_wait);
> >                 spin_unlock(&drv->lock);
> 
> nit: I think it's slightly better to do the wake_up() after the
> spin_unlock(), no?  The first thing the other task will do is to try
> to grab the spinlock and we might as well give it a chance of
> succeeding without looping.  I don't see any reason why we'd need to
> be holding the lock while calling wake_up().

Right that's better.

> 
> 
> >                 if (req)
> >                         rpmh_tx_done(req, err);
> > @@ -562,44 +564,59 @@ static int find_free_tcs(struct tcs_group *tcs)
> >         return -EBUSY;
> >  }
> >
> > +static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
> > +                            const struct tcs_request *msg)
> 
> nit: I know this is a short function and kernel convention doesn't
> strictly require comments in front of all functions.  However, every
> other function in this file has a comment and I had a really hard time
> dealing with the rpmh-rsc code before the comments.  Could you add one
> for your function, even if it's short?  One thing that would be nice
> to note is that the only error it returns is -EBUSY.  See below.

Sure I'll write up some kernel-doc.

> 
> > -       if (ret)
> > -               goto unlock;
> >
> > -       ret = find_free_tcs(tcs);
> > -       if (ret < 0)
> > -               goto unlock;
> > -       tcs_id = ret;
> > +       wait_event_lock_irq(drv->tcs_wait,
> > +                           (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
> 
> Even though claim_tcs_for_req() only returns 0 or -EBUSY today (IOW it
> never returns error codes other than -EBUSY), should we handle it?  If
> we don't, claim_tcs_for_req() should be very clear that it shouldn't
> return any errors other than -EBUSY.

Do you mean you want to change it to be

	(tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY

instead of >= 0? It should return the tcs_id that was claimed, not just
0 or -EBUSY.


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 19:44   ` Stephen Boyd
@ 2020-07-24 19:49     ` Doug Anderson
  2020-07-24 20:01       ` Stephen Boyd
  0 siblings, 1 reply; 14+ messages in thread
From: Doug Anderson @ 2020-07-24 19:49 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Hi,

On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd <swboyd@chromium.org> wrote:
>
> > > -       if (ret)
> > > -               goto unlock;
> > >
> > > -       ret = find_free_tcs(tcs);
> > > -       if (ret < 0)
> > > -               goto unlock;
> > > -       tcs_id = ret;
> > > +       wait_event_lock_irq(drv->tcs_wait,
> > > +                           (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
> >
> > Even though claim_tcs_for_req() only returns 0 or -EBUSY today (IOW it
> > never returns error codes other than -EBUSY), should we handle it?  If
> > we don't, claim_tcs_for_req() should be very clear that it shouldn't
> > return any errors other than -EBUSY.
>
> Do you mean you want to change it to be
>
>         (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY
>
> instead of >= 0? It should return the tcs_id that was claimed, not just
> 0 or -EBUSY.

Ah, right.  Yes, you got it right.  Of course then we have to add a
"if (tcs_id < 0) goto unlock", too.  If you think it's not worth
adding this then we just need to make sure it's super obvious in
claim_tcs_for_req() that it's not allowed to return other errors.

-Doug


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 19:49     ` Doug Anderson
@ 2020-07-24 20:01       ` Stephen Boyd
  2020-07-24 20:08         ` Lina Iyer
  2020-07-24 20:11         ` Doug Anderson
  0 siblings, 2 replies; 14+ messages in thread
From: Stephen Boyd @ 2020-07-24 20:01 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Quoting Doug Anderson (2020-07-24 12:49:56)
> Hi,
> 
> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd <swboyd@chromium.org> wrote:
> >
> > > > -       if (ret)
> > > > -               goto unlock;
> > > >
> > > > -       ret = find_free_tcs(tcs);
> > > > -       if (ret < 0)
> > > > -               goto unlock;
> > > > -       tcs_id = ret;
> > > > +       wait_event_lock_irq(drv->tcs_wait,
> > > > +                           (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
> > >
> > > Even though claim_tcs_for_req() only returns 0 or -EBUSY today (IOW it
> > > never returns error codes other than -EBUSY), should we handle it?  If
> > > we don't, claim_tcs_for_req() should be very clear that it shouldn't
> > > return any errors other than -EBUSY.
> >
> > Do you mean you want to change it to be
> >
> >         (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY
> >
> > instead of >= 0? It should return the tcs_id that was claimed, not just
> > 0 or -EBUSY.
> 
> Ah, right.  Yes, you got it right.  Of course then we have to add a
> "if (tcs_id < 0) goto unlock", too.  If you think it's not worth
> adding this then we just need to make sure it's super obvious in
> claim_tcs_for_req() that it's not allowed to return other errors.
> 

Hmm right now the code will wait forever for the condition to become
true, so it will only ever continue past this point if tcs_id >= 0. We
could add a timeout here in another patch, but I didn't want to change
the behavior of what is there in this patch. I don't really care if >= 0
or != -EBUSY is used here so I can change it to -EBUSY to provide
clarity.

If we add a timeout here, it would be better to change this driver to
use a pull model instead of the push model it is using today so that the
timeout isn't necessary. That would entail making a proper kthread that
pulls requests off a queue of messages and then this async call would
append messages to the end of the queue and return immediately. That
would be necessary if we want the async calls to work from non-sleeping
contexts for example. I think Lina was alluding to this earlier in this
thread.
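
Roughly something like this (very rough sketch; all the names and the
list_head on the request are hypothetical):

	/* async senders: queue the message and return immediately */
	spin_lock_irqsave(&drv->queue_lock, flags);
	list_add_tail(&req->list, &drv->msg_queue);
	spin_unlock_irqrestore(&drv->queue_lock, flags);
	wake_up_process(drv->tx_kthread);

	/* tx kthread: pull messages and program TCSes as they free up */
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (list_empty(&drv->msg_queue)) {
			schedule();
			continue;
		}
		__set_current_state(TASK_RUNNING);
		/* pop the next request, claim a TCS, write and trigger it */
	}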


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:01       ` Stephen Boyd
@ 2020-07-24 20:08         ` Lina Iyer
  2020-07-24 20:11           ` Stephen Boyd
  2020-07-24 20:11         ` Doug Anderson
  1 sibling, 1 reply; 14+ messages in thread
From: Lina Iyer @ 2020-07-24 20:08 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Doug Anderson, Andy Gross, Bjorn Andersson, LKML, linux-arm-msm,
	Maulik Shah

On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote:
>Quoting Doug Anderson (2020-07-24 12:49:56)
>> Hi,
>>
>> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd <swboyd@chromium.org> wrote:
>I think Lina was alluding to this earlier in this
>thread.
I was thinking more of threaded irq handler than a kthread to post the
requests. We went away from post-thread approach a few years ago.



* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:08         ` Lina Iyer
@ 2020-07-24 20:11           ` Stephen Boyd
  2020-07-24 20:23             ` Lina Iyer
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Boyd @ 2020-07-24 20:11 UTC (permalink / raw)
  To: Lina Iyer
  Cc: Doug Anderson, Andy Gross, Bjorn Andersson, LKML, linux-arm-msm,
	Maulik Shah

Quoting Lina Iyer (2020-07-24 13:08:41)
> On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote:
> >Quoting Doug Anderson (2020-07-24 12:49:56)
> >> Hi,
> >>
> >> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd <swboyd@chromium.org> wrote:
> >I think Lina was alluding to this earlier in this
> >thread.
> I was thinking more of threaded irq handler than a kthread to post the
> requests. We went away from post-thread approach a few years ago.
> 

Ok, got it. Why was the kthread approach abandoned?


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:01       ` Stephen Boyd
  2020-07-24 20:08         ` Lina Iyer
@ 2020-07-24 20:11         ` Doug Anderson
  2020-07-24 20:27           ` Stephen Boyd
  1 sibling, 1 reply; 14+ messages in thread
From: Doug Anderson @ 2020-07-24 20:11 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Hi,

On Fri, Jul 24, 2020 at 1:01 PM Stephen Boyd <swboyd@chromium.org> wrote:
>
> Quoting Doug Anderson (2020-07-24 12:49:56)
> > Hi,
> >
> > On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd <swboyd@chromium.org> wrote:
> > >
> > > > > -       if (ret)
> > > > > -               goto unlock;
> > > > >
> > > > > -       ret = find_free_tcs(tcs);
> > > > > -       if (ret < 0)
> > > > > -               goto unlock;
> > > > > -       tcs_id = ret;
> > > > > +       wait_event_lock_irq(drv->tcs_wait,
> > > > > +                           (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
> > > >
> > > > Even though claim_tcs_for_req() only returns 0 or -EBUSY today (IOW it
> > > > never returns error codes other than -EBUSY), should we handle it?  If
> > > > we don't, claim_tcs_for_req() should be very clear that it shouldn't
> > > > return any errors other than -EBUSY.
> > >
> > > Do you mean you want to change it to be
> > >
> > >         (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY
> > >
> > > instead of >= 0? It should return the tcs_id that was claimed, not just
> > > 0 or -EBUSY.
> >
> > Ah, right.  Yes, you got it right.  Of course then we have to add a
> > "if (tcs_id < 0) goto unlock", too.  If you think it's not worth
> > adding this then we just need to make sure it's super obvious in
> > claim_tcs_for_req() that it's not allowed to return other errors.
> >
>
> Hmm right now the code will wait forever for the condition to become
> true, so it will only ever continue past this point if tcs_id >= 0. We
> could add a timeout here in another patch, but I didn't want to change
> the behavior of what is there in this patch. I don't really care if >= 0
> or != -EBUSY is used here so I can change it to -EBUSY to provide
> clarity.
>
> If we add a timeout here, it would be better to change this driver to
> use a pull model instead of the push model it is using today so that the
> timeout isn't necessary. That would entail making a proper kthread that
> pulls requests off a queue of messages and then this async call would
> append messages to the end of the queue and return immediately. That
> would be necessary if we want the async calls to work from non-sleeping
> contexts for example. I think Lina was alluding to this earlier in this
> thread.

I wasn't suggesting adding a timeout.  I was just saying that if
claim_tcs_for_req() were to ever return an error code other than
-EBUSY that we'd need a check for it because otherwise we'd interpret
the result as a tcs_id.

-Doug


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:11           ` Stephen Boyd
@ 2020-07-24 20:23             ` Lina Iyer
  0 siblings, 0 replies; 14+ messages in thread
From: Lina Iyer @ 2020-07-24 20:23 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Doug Anderson, Andy Gross, Bjorn Andersson, LKML, linux-arm-msm,
	Maulik Shah

On Fri, Jul 24 2020 at 14:11 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2020-07-24 13:08:41)
>> On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote:
>> >Quoting Doug Anderson (2020-07-24 12:49:56)
>> >> Hi,
>> >>
>> >> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd <swboyd@chromium.org> wrote:
>> >I think Lina was alluding to this earlier in this
>> >thread.
>> I was thinking more of threaded irq handler than a kthread to post the
>> requests. We went away from post-thread approach a few years ago.
>>
>
>Ok, got it. Why was the kthread approach abandoned?

Simplification mostly, I think. Also, somebody requested that when the
async call returns they would at least be guaranteed that the request has
been posted, not necessarily processed at the remote end.



* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:11         ` Doug Anderson
@ 2020-07-24 20:27           ` Stephen Boyd
  2020-07-24 20:31             ` Doug Anderson
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Boyd @ 2020-07-24 20:27 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Quoting Doug Anderson (2020-07-24 13:11:59)
> 
> I wasn't suggesting adding a timeout.  I was just saying that if
> claim_tcs_for_req() were to ever return an error code other than
> -EBUSY that we'd need a check for it because otherwise we'd interpret
> the result as a tcs_id.
> 

Ok that sounds like you don't want a check for -EBUSY so I'll leave this
as >= 0.


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:27           ` Stephen Boyd
@ 2020-07-24 20:31             ` Doug Anderson
  2020-07-24 21:08               ` Stephen Boyd
  0 siblings, 1 reply; 14+ messages in thread
From: Doug Anderson @ 2020-07-24 20:31 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Hi,

On Fri, Jul 24, 2020 at 1:27 PM Stephen Boyd <swboyd@chromium.org> wrote:
>
> Quoting Doug Anderson (2020-07-24 13:11:59)
> >
> > I wasn't suggesting adding a timeout.  I was just saying that if
> > claim_tcs_for_req() were to ever return an error code other than
> > -EBUSY that we'd need a check for it because otherwise we'd interpret
> > the result as a tcs_id.
> >
>
> Ok that sounds like you don't want a check for -EBUSY so I'll leave this
> as >= 0.

To clarify, I'd be OK with either of these (slight preference towards
#2, but not a strong one):

1. Your current code and a REALLY OBVIOUS comment in
claim_tcs_for_req() saying that we'd better not return any error codes
other than -EBUSY (because we'll just blindly retry on all of them).

- or -

2. Handling error codes other than -EBUSY, like this:

wait_event_lock_irq(drv->tcs_wait,
                    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY,
                    drv->lock);
if (tcs_id < 0)
  goto unlock;

-Doug


* Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
  2020-07-24 20:31             ` Doug Anderson
@ 2020-07-24 21:08               ` Stephen Boyd
  0 siblings, 0 replies; 14+ messages in thread
From: Stephen Boyd @ 2020-07-24 21:08 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer

Quoting Doug Anderson (2020-07-24 13:31:39)
> Hi,
> 
> On Fri, Jul 24, 2020 at 1:27 PM Stephen Boyd <swboyd@chromium.org> wrote:
> >
> > Quoting Doug Anderson (2020-07-24 13:11:59)
> > >
> > > I wasn't suggesting adding a timeout.  I was just saying that if
> > > claim_tcs_for_req() were to ever return an error code other than
> > > -EBUSY that we'd need a check for it because otherwise we'd interpret
> > > the result as a tcs_id.
> > >
> >
> > Ok that sounds like you don't want a check for -EBUSY so I'll leave this
> > as >= 0.
> 
> To clarify, I'd be OK with either of these (slight preference towards
> #2, but not a strong one):
> 
> 1. Your current code and a REALLY OBVIOUS comment in
> claim_tcs_for_req() saying that we'd better not return any error codes
> other than -EBUSY (because we'll just blindly retry on all of them).
> 
> - or -
> 
> 2. Handling error codes other than -EBUSY, like this:
> 
> wait_event_lock_irq(drv->tcs_wait,
>                     (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY,
>                     drv->lock);
> if (tcs_id < 0)
>   goto unlock;
> 

Ah I think I understand. You're thinking that claim_tcs_for_req() may
return an error value that isn't -EBUSY some day and then this
wait_event_lock_irq() will keep spinning forever when it isn't busy but
invalid or some such? I'd rather not do #2 because it is dead code until
claim_tcs_for_req() changes. I'll add a comment indicating that it must
return something that is claimed or the caller will keep trying. When
the code changes the call sites should be evaluated by the author to
make sure that it keeps working. I'm afraid a really big comment won't
do much to help with that in the future.
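
Something along these lines (exact wording TBD):

	/*
	 * Must only return a non-negative (claimed) TCS id or -EBUSY.
	 * Callers will keep waiting/retrying on any negative return.
	 */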

