linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/3] Remove self-implemented queue of Mediatek cmdq
@ 2019-01-16  5:04 CK Hu
  2019-01-16  5:04 ` [PATCH 1/3] mailbox: Add ability for clients to abort data in channel CK Hu
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: CK Hu @ 2019-01-16  5:04 UTC (permalink / raw)
  To: Jassi Brar, Matthias Brugger, Houlong Wei
  Cc: srv_heupstream, linux-mediatek, linux-kernel, linux-arm-kernel

The Mediatek mailbox controller implements its own data queue rather than
using the mailbox framework's queue. This series has the framework provide
an abort function, which the Mediatek mailbox controller implements, so
that the controller can use the framework's queue.

The reason the Mediatek mailbox controller has to implement its own queue:
One client of the Mediatek mailbox controller is the display driver. When
a cursor is moving, the display continuously updates the cursor-related
registers. The display hardware has a limitation that registers may only
be updated within the vblank period, which is a small interval.
Traditionally, the display hardware triggers an interrupt when vblank
starts, and the driver updates the registers in that interrupt handler.
But the interrupt handler runs the risk of being delayed for some reason,
so it may be postponed past the vblank interval. To reduce this risk, the
display driver uses the GCE hardware to write the registers. If the
cursor moves 3 times before vblank, the display driver sends 3 messages
sequentially to the mailbox controller. If the controller used the
framework's queue, it would receive only the first message while the
others stayed queued in the framework. The first message could be
executed exactly within the vblank interval, but the remaining messages
are handed to the controller only when it notifies the framework of
tx_done in the interrupt handler. That interrupt may be postponed for
some reason, which is exactly the risk we worry about. So the Mediatek
mailbox controller has to implement its own queue to make sure that all
messages execute within the vblank interval.

The reason the abort function lets the Mediatek mailbox controller use
the framework's queue:
The primary concept is to let the display driver send at most one message
to the mailbox controller. When it needs to send a second message before
the first message is done, it aborts the first message and then sends a
second message that merges in the contents of the first, as sketched
below. Other client drivers can still queue multiple messages in the
framework.
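
For illustration, a minimal sketch of the intended client-side flow under
the proposed API (struct disp_client and disp_merge_pkt() are
hypothetical, not part of this series):

#include <linux/mailbox_client.h>

/* Hypothetical client state; 'pending' tracks the one in-flight pkt. */
struct disp_client {
	struct mbox_chan *chan;
	struct cmdq_pkt *pending;
};

static void disp_send_update(struct disp_client *disp, struct cmdq_pkt *pkt)
{
	if (disp->pending) {
		/* Drop the sent-but-not-yet-executed message... */
		mbox_abort_channel(disp->chan);
		/* ...and fold its register writes into the new one. */
		disp_merge_pkt(pkt, disp->pending);
	}
	mbox_send_message(disp->chan, pkt);
	disp->pending = pkt;
}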

CK Hu (3):
  mailbox: Add ability for clients to abort data in channel
  mailbox: mediatek: Implement abort_data function.
  mailbox: mediatek: Remove busylist

 drivers/mailbox/mailbox.c          |  23 +++
 drivers/mailbox/mtk-cmdq-mailbox.c | 281 +++++++----------------------
 include/linux/mailbox_client.h     |   1 +
 include/linux/mailbox_controller.h |   4 +
 4 files changed, 90 insertions(+), 219 deletions(-)

-- 
2.18.0



* [PATCH 1/3] mailbox: Add ability for clients to abort data in channel
  2019-01-16  5:04 [PATCH 0/3] Remove self-implemented queue of Mediatek cmdq CK Hu
@ 2019-01-16  5:04 ` CK Hu
  2019-01-16 16:22   ` Jassi Brar
  2019-01-16  5:04 ` [PATCH 2/3] mailbox: mediatek: Implement abort_data function CK Hu
  2019-01-16  5:04 ` [PATCH 3/3] mailbox: mediatek: Remove busylist CK Hu
  2 siblings, 1 reply; 10+ messages in thread
From: CK Hu @ 2019-01-16  5:04 UTC (permalink / raw)
  To: Jassi Brar, Matthias Brugger, Houlong Wei
  Cc: CK Hu, srv_heupstream, linux-mediatek, linux-kernel, linux-arm-kernel

This patch supplies a new framework API, mbox_abort_channel(), and
a new controller interface, abort_data().

Some client applications need to clean out the data in a channel while
keeping the channel itself, so they can send data to the channel later.

Signed-off-by: CK Hu <ck.hu@mediatek.com>
---
 drivers/mailbox/mailbox.c          | 23 +++++++++++++++++++++++
 include/linux/mailbox_client.h     |  1 +
 include/linux/mailbox_controller.h |  4 ++++
 3 files changed, 28 insertions(+)

diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
index c6a7d4582dc6..281647162c76 100644
--- a/drivers/mailbox/mailbox.c
+++ b/drivers/mailbox/mailbox.c
@@ -428,6 +428,29 @@ struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl,
 }
 EXPORT_SYMBOL_GPL(mbox_request_channel_byname);
 
+/**
+ * mbox_abort_channel - The client aborts all data in a mailbox
+ *			channel by this call.
+ * @chan: The mailbox channel to be aborted.
+ */
+void mbox_abort_channel(struct mbox_chan *chan)
+{
+	unsigned long flags;
+
+	if (!chan || !chan->cl)
+		return;
+
+	if (chan->mbox->ops->abort_data)
+		chan->mbox->ops->abort_data(chan);
+
+	/* The queued TX requests are simply aborted, no callbacks are made */
+	spin_lock_irqsave(&chan->lock, flags);
+	chan->cl = NULL;
+	chan->active_req = NULL;
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+EXPORT_SYMBOL_GPL(mbox_abort_channel);
+
 /**
  * mbox_free_channel - The client relinquishes control of a mailbox
  *			channel by this call.
diff --git a/include/linux/mailbox_client.h b/include/linux/mailbox_client.h
index faa7da3c9c8b..209d1d458029 100644
--- a/include/linux/mailbox_client.h
+++ b/include/linux/mailbox_client.h
@@ -47,6 +47,7 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg);
 int mbox_flush(struct mbox_chan *chan, unsigned long timeout);
 void mbox_client_txdone(struct mbox_chan *chan, int r); /* atomic */
 bool mbox_client_peek_data(struct mbox_chan *chan); /* atomic */
+void mbox_abort_channel(struct mbox_chan *chan); /* may sleep */
 void mbox_free_channel(struct mbox_chan *chan); /* may sleep */
 
 #endif /* __MAILBOX_CLIENT_H */
diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h
index 4994a438444c..518aa4ca2fe4 100644
--- a/include/linux/mailbox_controller.h
+++ b/include/linux/mailbox_controller.h
@@ -27,6 +27,9 @@ struct mbox_chan;
  * @flush:	Called when a client requests transmissions to be blocking but
  *		the context doesn't allow sleeping. Typically the controller
  *		will implement a busy loop waiting for the data to flush out.
+ * @abort_data:	Called when a client wants to abort the data queued
+ *		in a chan but keep the chan itself. This call may block;
+ *		the controller may do work that needs to sleep.
  * @startup:	Called when a client requests the chan. The controller
  *		could ask clients for additional parameters of communication
  *		to be provided via client's chan_data. This call may
@@ -50,6 +53,7 @@ struct mbox_chan;
 struct mbox_chan_ops {
 	int (*send_data)(struct mbox_chan *chan, void *data);
 	int (*flush)(struct mbox_chan *chan, unsigned long timeout);
+	void (*abort_data)(struct mbox_chan *chan);
 	int (*startup)(struct mbox_chan *chan);
 	void (*shutdown)(struct mbox_chan *chan);
 	bool (*last_tx_done)(struct mbox_chan *chan);
-- 
2.18.1



* [PATCH 2/3] mailbox: mediatek: Implement abort_data function.
  2019-01-16  5:04 [PATCH 0/3] Remove self-implemented queue of Mediatek cmdq CK Hu
  2019-01-16  5:04 ` [PATCH 1/3] mailbox: Add ability for clients to abort data in channel CK Hu
@ 2019-01-16  5:04 ` CK Hu
  2019-01-16  5:04 ` [PATCH 3/3] mailbox: mediatek: Remove busylist CK Hu
  2 siblings, 0 replies; 10+ messages in thread
From: CK Hu @ 2019-01-16  5:04 UTC (permalink / raw)
  To: Jassi Brar, Matthias Brugger, Houlong Wei
  Cc: CK Hu, srv_heupstream, linux-mediatek, linux-kernel, linux-arm-kernel

A client driver that needs to reorganize its command buffer can use this
function to abort a command buffer that was sent but not yet executed.

Signed-off-by: CK Hu <ck.hu@mediatek.com>
---
 drivers/mailbox/mtk-cmdq-mailbox.c | 40 ++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
index 22811784dc7d..f2219f263ef6 100644
--- a/drivers/mailbox/mtk-cmdq-mailbox.c
+++ b/drivers/mailbox/mtk-cmdq-mailbox.c
@@ -417,6 +417,45 @@ static int cmdq_mbox_send_data(struct mbox_chan *chan, void *data)
 	return 0;
 }
 
+static void cmdq_mbox_abort_data(struct mbox_chan *chan)
+{
+	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
+	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
+	struct cmdq_task *task, *tmp;
+	unsigned long flags;
+	u32 enable;
+
+	spin_lock_irqsave(&thread->chan->lock, flags);
+	if (list_empty(&thread->task_busy_list))
+		goto out;
+
+	WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
+	if (!cmdq_thread_is_in_wfe(thread))
+		goto wait;
+
+	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
+				 list_entry) {
+		list_del(&task->list_entry);
+		kfree(task);
+	}
+
+	cmdq_thread_resume(thread);
+	cmdq_thread_disable(cmdq, thread);
+	clk_disable(cmdq->clock);
+
+out:
+	spin_unlock_irqrestore(&thread->chan->lock, flags);
+	return;
+
+wait:
+	cmdq_thread_resume(thread);
+	spin_unlock_irqrestore(&thread->chan->lock, flags);
+	if (readl_poll_timeout_atomic(thread->base + CMDQ_THR_ENABLE_TASK,
+				      enable, !enable, 1, 20))
+		dev_err(cmdq->mbox.dev, "Fail to wait GCE thread 0x%x done\n",
+			(u32)(thread->base - cmdq->base));
+}
+
 static int cmdq_mbox_startup(struct mbox_chan *chan)
 {
 	return 0;
@@ -427,6 +466,7 @@ static void cmdq_mbox_shutdown(struct mbox_chan *chan)
 }
 
 static const struct mbox_chan_ops cmdq_mbox_chan_ops = {
+	.abort_data = cmdq_mbox_abort_data,
 	.send_data = cmdq_mbox_send_data,
 	.startup = cmdq_mbox_startup,
 	.shutdown = cmdq_mbox_shutdown,
-- 
2.18.1



* [PATCH 3/3] mailbox: mediatek: Remove busylist
  2019-01-16  5:04 [PATCH 0/3] Remove self-implemented queue of Mediatek cmdq CK Hu
  2019-01-16  5:04 ` [PATCH 1/3] mailbox: Add ability for clients to abort data in channel CK Hu
  2019-01-16  5:04 ` [PATCH 2/3] mailbox: mediatek: Implement abort_data function CK Hu
@ 2019-01-16  5:04 ` CK Hu
       [not found]   ` <40d519083fe94640a22181388bcbbb09@MTKMBS31N1.mediatek.inc>
  2 siblings, 1 reply; 10+ messages in thread
From: CK Hu @ 2019-01-16  5:04 UTC (permalink / raw)
  To: Jassi Brar, Matthias Brugger, Houlong Wei
  Cc: CK Hu, srv_heupstream, linux-mediatek, linux-kernel, linux-arm-kernel

After implementing abort_data, the controller no longer needs to
implement its own queue. Remove the busylist because it is now useless.

Signed-off-by: CK Hu <ck.hu@mediatek.com>
---
 drivers/mailbox/mtk-cmdq-mailbox.c | 255 ++++-------------------------
 1 file changed, 29 insertions(+), 226 deletions(-)

diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
index f2219f263ef6..45c59f677ecb 100644
--- a/drivers/mailbox/mtk-cmdq-mailbox.c
+++ b/drivers/mailbox/mtk-cmdq-mailbox.c
@@ -16,9 +16,7 @@
 #include <linux/mailbox/mtk-cmdq-mailbox.h>
 #include <linux/of_device.h>
 
-#define CMDQ_OP_CODE_MASK		(0xff << CMDQ_OP_CODE_SHIFT)
 #define CMDQ_IRQ_MASK			0xffff
-#define CMDQ_NUM_CMD(t)			(t->cmd_buf_size / CMDQ_INST_SIZE)
 
 #define CMDQ_CURR_IRQ_STATUS		0x10
 #define CMDQ_THR_SLOT_CYCLES		0x30
@@ -47,22 +45,10 @@
 #define CMDQ_THR_IRQ_EN			(CMDQ_THR_IRQ_ERROR | CMDQ_THR_IRQ_DONE)
 #define CMDQ_THR_IS_WAITING		BIT(31)
 
-#define CMDQ_JUMP_BY_OFFSET		0x10000000
-#define CMDQ_JUMP_BY_PA			0x10000001
-
 struct cmdq_thread {
 	struct mbox_chan	*chan;
 	void __iomem		*base;
-	struct list_head	task_busy_list;
 	u32			priority;
-	bool			atomic_exec;
-};
-
-struct cmdq_task {
-	struct cmdq		*cmdq;
-	struct list_head	list_entry;
-	dma_addr_t		pa_base;
-	struct cmdq_thread	*thread;
 	struct cmdq_pkt		*pkt; /* the packet sent from mailbox client */
 };
 
@@ -130,171 +116,47 @@ static void cmdq_thread_disable(struct cmdq *cmdq, struct cmdq_thread *thread)
 	writel(CMDQ_THR_DISABLED, thread->base + CMDQ_THR_ENABLE_TASK);
 }
 
-/* notify GCE to re-fetch commands by setting GCE thread PC */
-static void cmdq_thread_invalidate_fetched_data(struct cmdq_thread *thread)
-{
-	writel(readl(thread->base + CMDQ_THR_CURR_ADDR),
-	       thread->base + CMDQ_THR_CURR_ADDR);
-}
-
-static void cmdq_task_insert_into_thread(struct cmdq_task *task)
-{
-	struct device *dev = task->cmdq->mbox.dev;
-	struct cmdq_thread *thread = task->thread;
-	struct cmdq_task *prev_task = list_last_entry(
-			&thread->task_busy_list, typeof(*task), list_entry);
-	u64 *prev_task_base = prev_task->pkt->va_base;
-
-	/* let previous task jump to this task */
-	dma_sync_single_for_cpu(dev, prev_task->pa_base,
-				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
-	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
-		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
-	dma_sync_single_for_device(dev, prev_task->pa_base,
-				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
-
-	cmdq_thread_invalidate_fetched_data(thread);
-}
-
-static bool cmdq_command_is_wfe(u64 cmd)
-{
-	u64 wfe_option = CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | CMDQ_WFE_WAIT_VALUE;
-	u64 wfe_op = (u64)(CMDQ_CODE_WFE << CMDQ_OP_CODE_SHIFT) << 32;
-	u64 wfe_mask = (u64)CMDQ_OP_CODE_MASK << 32 | 0xffffffff;
-
-	return ((cmd & wfe_mask) == (wfe_op | wfe_option));
-}
-
-/* we assume tasks in the same display GCE thread are waiting the same event. */
-static void cmdq_task_remove_wfe(struct cmdq_task *task)
-{
-	struct device *dev = task->cmdq->mbox.dev;
-	u64 *base = task->pkt->va_base;
-	int i;
-
-	dma_sync_single_for_cpu(dev, task->pa_base, task->pkt->cmd_buf_size,
-				DMA_TO_DEVICE);
-	for (i = 0; i < CMDQ_NUM_CMD(task->pkt); i++)
-		if (cmdq_command_is_wfe(base[i]))
-			base[i] = (u64)CMDQ_JUMP_BY_OFFSET << 32 |
-				  CMDQ_JUMP_PASS;
-	dma_sync_single_for_device(dev, task->pa_base, task->pkt->cmd_buf_size,
-				   DMA_TO_DEVICE);
-}
-
 static bool cmdq_thread_is_in_wfe(struct cmdq_thread *thread)
 {
 	return readl(thread->base + CMDQ_THR_WAIT_TOKEN) & CMDQ_THR_IS_WAITING;
 }
 
-static void cmdq_thread_wait_end(struct cmdq_thread *thread,
-				 unsigned long end_pa)
-{
-	struct device *dev = thread->chan->mbox->dev;
-	unsigned long curr_pa;
-
-	if (readl_poll_timeout_atomic(thread->base + CMDQ_THR_CURR_ADDR,
-			curr_pa, curr_pa == end_pa, 1, 20))
-		dev_err(dev, "GCE thread cannot run to end.\n");
-}
-
-static void cmdq_task_exec_done(struct cmdq_task *task, enum cmdq_cb_status sta)
-{
-	struct cmdq_task_cb *cb = &task->pkt->async_cb;
-	struct cmdq_cb_data data;
-
-	WARN_ON(cb->cb == (cmdq_async_flush_cb)NULL);
-	data.sta = sta;
-	data.data = cb->data;
-	cb->cb(data);
-
-	list_del(&task->list_entry);
-}
-
-static void cmdq_task_handle_error(struct cmdq_task *task)
-{
-	struct cmdq_thread *thread = task->thread;
-	struct cmdq_task *next_task;
-
-	dev_err(task->cmdq->mbox.dev, "task 0x%p error\n", task);
-	WARN_ON(cmdq_thread_suspend(task->cmdq, thread) < 0);
-	next_task = list_first_entry_or_null(&thread->task_busy_list,
-			struct cmdq_task, list_entry);
-	if (next_task)
-		writel(next_task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
-	cmdq_thread_resume(thread);
-}
-
 static void cmdq_thread_irq_handler(struct cmdq *cmdq,
 				    struct cmdq_thread *thread)
 {
-	struct cmdq_task *task, *tmp, *curr_task = NULL;
-	u32 curr_pa, irq_flag, task_end_pa;
-	bool err;
+	unsigned long flags;
+	u32 curr_pa, irq_flag, end_pa;
+	int ret = 0;
 
+	spin_lock_irqsave(&thread->chan->lock, flags);
 	irq_flag = readl(thread->base + CMDQ_THR_IRQ_STATUS);
 	writel(~irq_flag, thread->base + CMDQ_THR_IRQ_STATUS);
 
-	/*
-	 * When ISR call this function, another CPU core could run
-	 * "release task" right before we acquire the spin lock, and thus
-	 * reset / disable this GCE thread, so we need to check the enable
-	 * bit of this GCE thread.
-	 */
-	if (!(readl(thread->base + CMDQ_THR_ENABLE_TASK) & CMDQ_THR_ENABLED))
-		return;
-
-	if (irq_flag & CMDQ_THR_IRQ_ERROR)
-		err = true;
-	else if (irq_flag & CMDQ_THR_IRQ_DONE)
-		err = false;
-	else
-		return;
-
 	curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
+	end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
 
-	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
-				 list_entry) {
-		task_end_pa = task->pa_base + task->pkt->cmd_buf_size;
-		if (curr_pa >= task->pa_base && curr_pa < task_end_pa)
-			curr_task = task;
-
-		if (!curr_task || curr_pa == task_end_pa - CMDQ_INST_SIZE) {
-			cmdq_task_exec_done(task, CMDQ_CB_NORMAL);
-			kfree(task);
-		} else if (err) {
-			cmdq_task_exec_done(task, CMDQ_CB_ERROR);
-			cmdq_task_handle_error(curr_task);
-			kfree(task);
-		}
-
-		if (curr_task)
-			break;
-	}
+	if (curr_pa != end_pa ||  irq_flag & CMDQ_THR_IRQ_ERROR)
+		ret = -EFAULT;
 
-	if (list_empty(&thread->task_busy_list)) {
-		cmdq_thread_disable(cmdq, thread);
-		clk_disable(cmdq->clock);
-	}
+	thread->pkt = NULL;
+	cmdq_thread_disable(cmdq, thread);
+	clk_disable(cmdq->clock);
+	spin_unlock_irqrestore(&thread->chan->lock, flags);
+	mbox_chan_txdone(thread->chan, ret);
 }
 
 static irqreturn_t cmdq_irq_handler(int irq, void *dev)
 {
 	struct cmdq *cmdq = dev;
-	unsigned long irq_status, flags = 0L;
+	unsigned long irq_status;
 	int bit;
 
 	irq_status = readl(cmdq->base + CMDQ_CURR_IRQ_STATUS) & CMDQ_IRQ_MASK;
 	if (!(irq_status ^ CMDQ_IRQ_MASK))
 		return IRQ_NONE;
 
-	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK)) {
-		struct cmdq_thread *thread = &cmdq->thread[bit];
-
-		spin_lock_irqsave(&thread->chan->lock, flags);
-		cmdq_thread_irq_handler(cmdq, thread);
-		spin_unlock_irqrestore(&thread->chan->lock, flags);
-	}
+	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK))
+		cmdq_thread_irq_handler(cmdq, &cmdq->thread[bit]);
 
 	return IRQ_HANDLED;
 }
@@ -310,7 +172,7 @@ static int cmdq_suspend(struct device *dev)
 
 	for (i = 0; i < cmdq->thread_nr; i++) {
 		thread = &cmdq->thread[i];
-		if (!list_empty(&thread->task_busy_list)) {
+		if (thread->pkt) {
 			task_running = true;
 			break;
 		}
@@ -347,72 +209,21 @@ static int cmdq_mbox_send_data(struct mbox_chan *chan, void *data)
 	struct cmdq_pkt *pkt = (struct cmdq_pkt *)data;
 	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
 	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
-	struct cmdq_task *task;
-	unsigned long curr_pa, end_pa;
 
 	/* Client should not flush new tasks if suspended. */
 	WARN_ON(cmdq->suspended);
 
-	task = kzalloc(sizeof(*task), GFP_ATOMIC);
-	if (!task)
-		return -ENOMEM;
+	thread->pkt = pkt;
 
-	task->cmdq = cmdq;
-	INIT_LIST_HEAD(&task->list_entry);
-	task->pa_base = pkt->pa_base;
-	task->thread = thread;
-	task->pkt = pkt;
-
-	if (list_empty(&thread->task_busy_list)) {
-		WARN_ON(clk_enable(cmdq->clock) < 0);
-		WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
-
-		writel(task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
-		writel(task->pa_base + pkt->cmd_buf_size,
-		       thread->base + CMDQ_THR_END_ADDR);
-		writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
-		writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
-		writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
-	} else {
-		WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
-		curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
-		end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
-
-		/*
-		 * Atomic execution should remove the following wfe, i.e. only
-		 * wait event at first task, and prevent to pause when running.
-		 */
-		if (thread->atomic_exec) {
-			/* GCE is executing if command is not WFE */
-			if (!cmdq_thread_is_in_wfe(thread)) {
-				cmdq_thread_resume(thread);
-				cmdq_thread_wait_end(thread, end_pa);
-				WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
-				/* set to this task directly */
-				writel(task->pa_base,
-				       thread->base + CMDQ_THR_CURR_ADDR);
-			} else {
-				cmdq_task_insert_into_thread(task);
-				cmdq_task_remove_wfe(task);
-				smp_mb(); /* modify jump before enable thread */
-			}
-		} else {
-			/* check boundary */
-			if (curr_pa == end_pa - CMDQ_INST_SIZE ||
-			    curr_pa == end_pa) {
-				/* set to this task directly */
-				writel(task->pa_base,
-				       thread->base + CMDQ_THR_CURR_ADDR);
-			} else {
-				cmdq_task_insert_into_thread(task);
-				smp_mb(); /* modify jump before enable thread */
-			}
-		}
-		writel(task->pa_base + pkt->cmd_buf_size,
-		       thread->base + CMDQ_THR_END_ADDR);
-		cmdq_thread_resume(thread);
-	}
-	list_move_tail(&task->list_entry, &thread->task_busy_list);
+	WARN_ON(clk_enable(cmdq->clock) < 0);
+	WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
+
+	writel(thread->pkt->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
+	writel(thread->pkt->pa_base + pkt->cmd_buf_size,
+	       thread->base + CMDQ_THR_END_ADDR);
+	writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
+	writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
+	writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
 
 	return 0;
 }
@@ -421,23 +232,18 @@ static void cmdq_mbox_abort_data(struct mbox_chan *chan)
 {
 	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
 	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
-	struct cmdq_task *task, *tmp;
 	unsigned long flags;
 	u32 enable;
 
 	spin_lock_irqsave(&thread->chan->lock, flags);
-	if (list_empty(&thread->task_busy_list))
+	if (!thread->pkt)
 		goto out;
 
 	WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
 	if (!cmdq_thread_is_in_wfe(thread))
 		goto wait;
 
-	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
-				 list_entry) {
-		list_del(&task->list_entry);
-		kfree(task);
-	}
+	thread->pkt = NULL;
 
 	cmdq_thread_resume(thread);
 	cmdq_thread_disable(cmdq, thread);
@@ -483,7 +289,6 @@ static struct mbox_chan *cmdq_xlate(struct mbox_controller *mbox,
 
 	thread = (struct cmdq_thread *)mbox->chans[ind].con_priv;
 	thread->priority = sp->args[1];
-	thread->atomic_exec = (sp->args[2] != 0);
 	thread->chan = &mbox->chans[ind];
 
 	return &mbox->chans[ind];
@@ -539,8 +344,7 @@ static int cmdq_probe(struct platform_device *pdev)
 	cmdq->mbox.ops = &cmdq_mbox_chan_ops;
 	cmdq->mbox.of_xlate = cmdq_xlate;
 
-	/* make use of TXDONE_BY_ACK */
-	cmdq->mbox.txdone_irq = false;
+	cmdq->mbox.txdone_irq = true;
 	cmdq->mbox.txdone_poll = false;
 
 	cmdq->thread = devm_kcalloc(dev, cmdq->thread_nr,
@@ -551,7 +355,6 @@ static int cmdq_probe(struct platform_device *pdev)
 	for (i = 0; i < cmdq->thread_nr; i++) {
 		cmdq->thread[i].base = cmdq->base + CMDQ_THR_BASE +
 				CMDQ_THR_SIZE * i;
-		INIT_LIST_HEAD(&cmdq->thread[i].task_busy_list);
 		cmdq->mbox.chans[i].con_priv = (void *)&cmdq->thread[i];
 	}
 
-- 
2.18.1



* Re: [PATCH 1/3] mailbox: Add ability for clients to abort data in channel
  2019-01-16  5:04 ` [PATCH 1/3] mailbox: Add ability for clients to abort data in channel CK Hu
@ 2019-01-16 16:22   ` Jassi Brar
  2019-01-17  8:00     ` CK Hu
  0 siblings, 1 reply; 10+ messages in thread
From: Jassi Brar @ 2019-01-16 16:22 UTC (permalink / raw)
  To: CK Hu
  Cc: srv_heupstream, Linux Kernel Mailing List, Houlong Wei,
	linux-mediatek, Matthias Brugger, linux-arm-kernel

On Tue, Jan 15, 2019 at 11:07 PM CK Hu <ck.hu@mediatek.com> wrote:
>
> This patch supplies a new framework API, mbox_abort_channel(), and
> a new controller interface, abort_data().
>
> Some client applications need to clean out the data in a channel while
> keeping the channel itself, so they can send data to the channel later.
>
> Signed-off-by: CK Hu <ck.hu@mediatek.com>
> ---
>  drivers/mailbox/mailbox.c          | 23 +++++++++++++++++++++++
>  include/linux/mailbox_client.h     |  1 +
>  include/linux/mailbox_controller.h |  4 ++++
>  3 files changed, 28 insertions(+)
>
> diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
> index c6a7d4582dc6..281647162c76 100644
> --- a/drivers/mailbox/mailbox.c
> +++ b/drivers/mailbox/mailbox.c
> @@ -428,6 +428,29 @@ struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl,
>  }
>  EXPORT_SYMBOL_GPL(mbox_request_channel_byname);
>
> +/**
> + * mbox_abort_channel - The client aborts all data in a mailbox
> + *                     channel by this call.
> + * @chan: The mailbox channel to be aborted.
> + */
> +void mbox_abort_channel(struct mbox_chan *chan)
> +{
> +       unsigned long flags;
> +
> +       if (!chan || !chan->cl)
> +               return;
> +
> +       if (chan->mbox->ops->abort_data)
> +               chan->mbox->ops->abort_data(chan);
> +
> +       /* The queued TX requests are simply aborted, no callbacks are made */
> +       spin_lock_irqsave(&chan->lock, flags);
> +       chan->cl = NULL;
> +       chan->active_req = NULL;
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
>
Why not just release and then request the channel again?
mbox_abort_channel() is just a copy of mbox_free_channel(), and if the
abort can sleep, that is all the more reason to just do that.


* Re: [PATCH 1/3] mailbox: Add ability for clients to abort data in channel
  2019-01-16 16:22   ` Jassi Brar
@ 2019-01-17  8:00     ` CK Hu
  2019-01-17 15:22       ` Jassi Brar
  0 siblings, 1 reply; 10+ messages in thread
From: CK Hu @ 2019-01-17  8:00 UTC (permalink / raw)
  To: Jassi Brar
  Cc: srv_heupstream, Linux Kernel Mailing List, Houlong Wei,
	linux-mediatek, Matthias Brugger, linux-arm-kernel

Hi, Jassi:

On Wed, 2019-01-16 at 10:22 -0600, Jassi Brar wrote:
> On Tue, Jan 15, 2019 at 11:07 PM CK Hu <ck.hu@mediatek.com> wrote:
> >
> > This patch supplies a new framework API, mbox_abort_channel(), and
> > a new controller interface, abort_data().
> >
> > Some client applications need to clean out the data in a channel while
> > keeping the channel itself, so they can send data to the channel later.
> >
> > Signed-off-by: CK Hu <ck.hu@mediatek.com>
> > ---
> >  drivers/mailbox/mailbox.c          | 23 +++++++++++++++++++++++
> >  include/linux/mailbox_client.h     |  1 +
> >  include/linux/mailbox_controller.h |  4 ++++
> >  3 files changed, 28 insertions(+)
> >
> > diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
> > index c6a7d4582dc6..281647162c76 100644
> > --- a/drivers/mailbox/mailbox.c
> > +++ b/drivers/mailbox/mailbox.c
> > @@ -428,6 +428,29 @@ struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl,
> >  }
> >  EXPORT_SYMBOL_GPL(mbox_request_channel_byname);
> >
> > +/**
> > + * mbox_abort_channel - The client aborts all data in a mailbox
> > + *                     channel by this call.
> > + * @chan: The mailbox channel to be aborted.
> > + */
> > +void mbox_abort_channel(struct mbox_chan *chan)
> > +{
> > +       unsigned long flags;
> > +
> > +       if (!chan || !chan->cl)
> > +               return;
> > +
> > +       if (chan->mbox->ops->abort_data)
> > +               chan->mbox->ops->abort_data(chan);
> > +
> > +       /* The queued TX requests are simply aborted, no callbacks are made */
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       chan->cl = NULL;
> > +       chan->active_req = NULL;
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> >
> Why not just release and then request the channel again?
> mbox_abort_channel() is just a copy of mbox_free_channel(), and if the
> abort can sleep, that is all the more reason to just do that.

The cursor may change position 300 times in one second, so I am still
concerned about the processing time. Requesting and freeing the channel
looks too heavy to do 300 times per second. Conceptually, I just want to
clean up the channel rather than free and re-request it, so the two are
somewhat different. I said it may sleep only because mbox_abort_channel()
is a copy of mbox_free_channel(). I could change it to 'must not sleep',
because the Mediatek controller does not sleep in its abort callback.

Regards,
CK



* Re: [PATCH 1/3] mailbox: Add ability for clients to abort data in channel
  2019-01-17  8:00     ` CK Hu
@ 2019-01-17 15:22       ` Jassi Brar
  0 siblings, 0 replies; 10+ messages in thread
From: Jassi Brar @ 2019-01-17 15:22 UTC (permalink / raw)
  To: CK Hu
  Cc: srv_heupstream, Linux Kernel Mailing List, Houlong Wei,
	linux-mediatek, Matthias Brugger, linux-arm-kernel

On Thu, Jan 17, 2019 at 2:00 AM CK Hu <ck.hu@mediatek.com> wrote:
>
> Hi, Jassi:
>
> On Wed, 2019-01-16 at 10:22 -0600, Jassi Brar wrote:
> > On Tue, Jan 15, 2019 at 11:07 PM CK Hu <ck.hu@mediatek.com> wrote:
> > >
> > > This patch supplies a new framework API, mbox_abort_channel(), and
> > > a new controller interface, abort_data().
> > >
> > > Some client applications need to clean out the data in a channel while
> > > keeping the channel itself, so they can send data to the channel later.
> > >
> > > Signed-off-by: CK Hu <ck.hu@mediatek.com>
> > > ---
> > >  drivers/mailbox/mailbox.c          | 23 +++++++++++++++++++++++
> > >  include/linux/mailbox_client.h     |  1 +
> > >  include/linux/mailbox_controller.h |  4 ++++
> > >  3 files changed, 28 insertions(+)
> > >
> > > diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
> > > index c6a7d4582dc6..281647162c76 100644
> > > --- a/drivers/mailbox/mailbox.c
> > > +++ b/drivers/mailbox/mailbox.c
> > > @@ -428,6 +428,29 @@ struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl,
> > >  }
> > >  EXPORT_SYMBOL_GPL(mbox_request_channel_byname);
> > >
> > > +/**
> > > + * mbox_abort_channel - The client aborts all data in a mailbox
> > > + *                     channel by this call.
> > > + * @chan: The mailbox channel to be aborted.
> > > + */
> > > +void mbox_abort_channel(struct mbox_chan *chan)
> > > +{
> > > +       unsigned long flags;
> > > +
> > > +       if (!chan || !chan->cl)
> > > +               return;
> > > +
> > > +       if (chan->mbox->ops->abort_data)
> > > +               chan->mbox->ops->abort_data(chan);
> > > +
> > > +       /* The queued TX requests are simply aborted, no callbacks are made */
> > > +       spin_lock_irqsave(&chan->lock, flags);
> > > +       chan->cl = NULL;
> > > +       chan->active_req = NULL;
> > > +       spin_unlock_irqrestore(&chan->lock, flags);
> > > +}
> > >
> > Why not just release and then request the channel again?
> > mbox_abort_channel() is just a copy of mbox_free_channel(), and if the
> > abort can sleep, that is all the more reason to just do that.
>
> The cursor may change position 300 times in one second, so I am still
> concerned about the processing time. Requesting and freeing the channel
> looks too heavy to do 300 times per second.
You send data at 300Hz, not abort, which is supposed to be a rare event.
And your shutdown/startup are currently empty, while abort isn't exactly
lightweight.

> Conceptually, I just want to clean up the
> channel rather than free and re-request it, so the two are somewhat
> different. I said it may sleep only because mbox_abort_channel() is a
> copy of mbox_free_channel(). I could change it to 'must not sleep',
> because the Mediatek controller does not sleep in its abort callback.
>
No please. Just use the shutdown/startup.
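
A hedged sketch of that alternative, using only the existing client API
(the controller-side cleanup would move into its shutdown() callback;
'index' is whatever index the client originally requested):

#include <linux/mailbox_client.h>

static struct mbox_chan *restart_channel(struct mbox_client *cl,
					 struct mbox_chan *chan, int index)
{
	mbox_free_channel(chan);		/* invokes ops->shutdown() */
	return mbox_request_channel(cl, index);	/* invokes ops->startup() */
}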

Thanks.


* Re: [PATCH 3/3] mailbox: mediatek: Remove busylist
       [not found]   ` <40d519083fe94640a22181388bcbbb09@MTKMBS31N1.mediatek.inc>
@ 2019-02-12  2:18     ` Dennis-YC Hsieh
  2019-02-14 16:01       ` CK Hu
  0 siblings, 1 reply; 10+ messages in thread
From: Dennis-YC Hsieh @ 2019-02-12  2:18 UTC (permalink / raw)
  To: Jassi Brar, Matthias Brugger, CK Hu (胡俊光)
  Cc: srv_heupstream, linux-kernel,
	Houlong Wei (魏厚龙),
	linux-mediatek, Bibby Hsieh, linux-arm-kernel

Hi CK,

On Tue, 2019-01-29 at 17:20 +0800, Houlong Wei (魏厚龙) wrote:
> -----Original Message-----
> From: CK Hu [mailto:ck.hu@mediatek.com] 
> Sent: Wednesday, January 16, 2019 1:05 PM
> To: Jassi Brar <jassisinghbrar@gmail.com>; Matthias Brugger <matthias.bgg@gmail.com>; Houlong Wei (魏厚龙) <houlong.wei@mediatek.com>
> Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-mediatek@lists.infradead.org; srv_heupstream <srv_heupstream@mediatek.com>; CK Hu (胡俊光) <ck.hu@mediatek.com>
> Subject: [PATCH 3/3] mailbox: mediatek: Remove busylist
> 
> After implementing abort_data, the controller no longer needs to implement its own queue. Remove the busylist because it is now useless.

Removing the busy list from the controller leaves the client driver with
no way to queue pkts in a GCE hardware thread, which may hurt display and
multimedia performance, since each pkt waits out the IRQ delay of the
previous pkt, as the sketch below illustrates. I suggest keeping the
busy-list design.
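
A sketch of the resulting client-side pattern (mm_tx_done() and its
comments are illustrative only):

#include <linux/mailbox_client.h>

/*
 * Without a controller-side busy list, pkt N+1 is handed to the GCE
 * thread only from pkt N's completion path, so every pkt pays one IRQ
 * round trip of latency before it starts executing.
 */
static void mm_tx_done(struct mbox_client *cl, void *mssg, int r)
{
	/*
	 * Called from cmdq's IRQ handler via mbox_chan_txdone(). Only
	 * now does the framework submit the next queued pkt, whereas
	 * the busy list let GCE jump straight to it.
	 */
}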


Regards,
Dennis

> 
> Signed-off-by: CK Hu <ck.hu@mediatek.com>
> ---
>  drivers/mailbox/mtk-cmdq-mailbox.c | 255 ++++-------------------------
>  1 file changed, 29 insertions(+), 226 deletions(-)
> 
> diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
> index f2219f263ef6..45c59f677ecb 100644
> --- a/drivers/mailbox/mtk-cmdq-mailbox.c
> +++ b/drivers/mailbox/mtk-cmdq-mailbox.c
> @@ -16,9 +16,7 @@
>  #include <linux/mailbox/mtk-cmdq-mailbox.h>
>  #include <linux/of_device.h>
>  
> -#define CMDQ_OP_CODE_MASK		(0xff << CMDQ_OP_CODE_SHIFT)
>  #define CMDQ_IRQ_MASK			0xffff
> -#define CMDQ_NUM_CMD(t)			(t->cmd_buf_size / CMDQ_INST_SIZE)
>  
>  #define CMDQ_CURR_IRQ_STATUS		0x10
>  #define CMDQ_THR_SLOT_CYCLES		0x30
> @@ -47,22 +45,10 @@
>  #define CMDQ_THR_IRQ_EN			(CMDQ_THR_IRQ_ERROR | CMDQ_THR_IRQ_DONE)
>  #define CMDQ_THR_IS_WAITING		BIT(31)
>  
> -#define CMDQ_JUMP_BY_OFFSET		0x10000000
> -#define CMDQ_JUMP_BY_PA			0x10000001
> -
>  struct cmdq_thread {
>  	struct mbox_chan	*chan;
>  	void __iomem		*base;
> -	struct list_head	task_busy_list;
>  	u32			priority;
> -	bool			atomic_exec;
> -};
> -
> -struct cmdq_task {
> -	struct cmdq		*cmdq;
> -	struct list_head	list_entry;
> -	dma_addr_t		pa_base;
> -	struct cmdq_thread	*thread;
>  	struct cmdq_pkt		*pkt; /* the packet sent from mailbox client */
>  };
>  
> @@ -130,171 +116,47 @@ static void cmdq_thread_disable(struct cmdq *cmdq, struct cmdq_thread *thread)
>  	writel(CMDQ_THR_DISABLED, thread->base + CMDQ_THR_ENABLE_TASK);
>  }
>  
> -/* notify GCE to re-fetch commands by setting GCE thread PC */
> -static void cmdq_thread_invalidate_fetched_data(struct cmdq_thread *thread)
> -{
> -	writel(readl(thread->base + CMDQ_THR_CURR_ADDR),
> -	       thread->base + CMDQ_THR_CURR_ADDR);
> -}
> -
> -static void cmdq_task_insert_into_thread(struct cmdq_task *task)
> -{
> -	struct device *dev = task->cmdq->mbox.dev;
> -	struct cmdq_thread *thread = task->thread;
> -	struct cmdq_task *prev_task = list_last_entry(
> -			&thread->task_busy_list, typeof(*task), list_entry);
> -	u64 *prev_task_base = prev_task->pkt->va_base;
> -
> -	/* let previous task jump to this task */
> -	dma_sync_single_for_cpu(dev, prev_task->pa_base,
> -				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
> -	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
> -		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
> -	dma_sync_single_for_device(dev, prev_task->pa_base,
> -				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
> -
> -	cmdq_thread_invalidate_fetched_data(thread);
> -}
> -
> -static bool cmdq_command_is_wfe(u64 cmd)
> -{
> -	u64 wfe_option = CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | CMDQ_WFE_WAIT_VALUE;
> -	u64 wfe_op = (u64)(CMDQ_CODE_WFE << CMDQ_OP_CODE_SHIFT) << 32;
> -	u64 wfe_mask = (u64)CMDQ_OP_CODE_MASK << 32 | 0xffffffff;
> -
> -	return ((cmd & wfe_mask) == (wfe_op | wfe_option));
> -}
> -
> -/* we assume tasks in the same display GCE thread are waiting the same event. */
> -static void cmdq_task_remove_wfe(struct cmdq_task *task)
> -{
> -	struct device *dev = task->cmdq->mbox.dev;
> -	u64 *base = task->pkt->va_base;
> -	int i;
> -
> -	dma_sync_single_for_cpu(dev, task->pa_base, task->pkt->cmd_buf_size,
> -				DMA_TO_DEVICE);
> -	for (i = 0; i < CMDQ_NUM_CMD(task->pkt); i++)
> -		if (cmdq_command_is_wfe(base[i]))
> -			base[i] = (u64)CMDQ_JUMP_BY_OFFSET << 32 |
> -				  CMDQ_JUMP_PASS;
> -	dma_sync_single_for_device(dev, task->pa_base, task->pkt->cmd_buf_size,
> -				   DMA_TO_DEVICE);
> -}
> -
>  static bool cmdq_thread_is_in_wfe(struct cmdq_thread *thread)
>  {
>  	return readl(thread->base + CMDQ_THR_WAIT_TOKEN) & CMDQ_THR_IS_WAITING;
>  }
>  
> -static void cmdq_thread_wait_end(struct cmdq_thread *thread,
> -				 unsigned long end_pa)
> -{
> -	struct device *dev = thread->chan->mbox->dev;
> -	unsigned long curr_pa;
> -
> -	if (readl_poll_timeout_atomic(thread->base + CMDQ_THR_CURR_ADDR,
> -			curr_pa, curr_pa == end_pa, 1, 20))
> -		dev_err(dev, "GCE thread cannot run to end.\n");
> -}
> -
> -static void cmdq_task_exec_done(struct cmdq_task *task, enum cmdq_cb_status sta)
> -{
> -	struct cmdq_task_cb *cb = &task->pkt->async_cb;
> -	struct cmdq_cb_data data;
> -
> -	WARN_ON(cb->cb == (cmdq_async_flush_cb)NULL);
> -	data.sta = sta;
> -	data.data = cb->data;
> -	cb->cb(data);
> -
> -	list_del(&task->list_entry);
> -}
> -
> -static void cmdq_task_handle_error(struct cmdq_task *task)
> -{
> -	struct cmdq_thread *thread = task->thread;
> -	struct cmdq_task *next_task;
> -
> -	dev_err(task->cmdq->mbox.dev, "task 0x%p error\n", task);
> -	WARN_ON(cmdq_thread_suspend(task->cmdq, thread) < 0);
> -	next_task = list_first_entry_or_null(&thread->task_busy_list,
> -			struct cmdq_task, list_entry);
> -	if (next_task)
> -		writel(next_task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> -	cmdq_thread_resume(thread);
> -}
> -
>  static void cmdq_thread_irq_handler(struct cmdq *cmdq,
>  				    struct cmdq_thread *thread)
>  {
> -	struct cmdq_task *task, *tmp, *curr_task = NULL;
> -	u32 curr_pa, irq_flag, task_end_pa;
> -	bool err;
> +	unsigned long flags;
> +	u32 curr_pa, irq_flag, end_pa;
> +	int ret = 0;
>  
> +	spin_lock_irqsave(&thread->chan->lock, flags);
>  	irq_flag = readl(thread->base + CMDQ_THR_IRQ_STATUS);
>  	writel(~irq_flag, thread->base + CMDQ_THR_IRQ_STATUS);
>  
> -	/*
> -	 * When ISR call this function, another CPU core could run
> -	 * "release task" right before we acquire the spin lock, and thus
> -	 * reset / disable this GCE thread, so we need to check the enable
> -	 * bit of this GCE thread.
> -	 */
> -	if (!(readl(thread->base + CMDQ_THR_ENABLE_TASK) & CMDQ_THR_ENABLED))
> -		return;
> -
> -	if (irq_flag & CMDQ_THR_IRQ_ERROR)
> -		err = true;
> -	else if (irq_flag & CMDQ_THR_IRQ_DONE)
> -		err = false;
> -	else
> -		return;
> -
>  	curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
> +	end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
>  
> -	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
> -				 list_entry) {
> -		task_end_pa = task->pa_base + task->pkt->cmd_buf_size;
> -		if (curr_pa >= task->pa_base && curr_pa < task_end_pa)
> -			curr_task = task;
> -
> -		if (!curr_task || curr_pa == task_end_pa - CMDQ_INST_SIZE) {
> -			cmdq_task_exec_done(task, CMDQ_CB_NORMAL);
> -			kfree(task);
> -		} else if (err) {
> -			cmdq_task_exec_done(task, CMDQ_CB_ERROR);
> -			cmdq_task_handle_error(curr_task);
> -			kfree(task);
> -		}
> -
> -		if (curr_task)
> -			break;
> -	}
> +	if (curr_pa != end_pa ||  irq_flag & CMDQ_THR_IRQ_ERROR)
> +		ret = -EFAULT;
>  
> -	if (list_empty(&thread->task_busy_list)) {
> -		cmdq_thread_disable(cmdq, thread);
> -		clk_disable(cmdq->clock);
> -	}
> +	thread->pkt = NULL;
> +	cmdq_thread_disable(cmdq, thread);
> +	clk_disable(cmdq->clock);
> +	spin_unlock_irqrestore(&thread->chan->lock, flags);
> +	mbox_chan_txdone(thread->chan, ret);
>  }
>  
>  static irqreturn_t cmdq_irq_handler(int irq, void *dev)
>  {
>  	struct cmdq *cmdq = dev;
> -	unsigned long irq_status, flags = 0L;
> +	unsigned long irq_status;
>  	int bit;
>  
>  	irq_status = readl(cmdq->base + CMDQ_CURR_IRQ_STATUS) & CMDQ_IRQ_MASK;
>  	if (!(irq_status ^ CMDQ_IRQ_MASK))
>  		return IRQ_NONE;
>  
> -	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK)) {
> -		struct cmdq_thread *thread = &cmdq->thread[bit];
> -
> -		spin_lock_irqsave(&thread->chan->lock, flags);
> -		cmdq_thread_irq_handler(cmdq, thread);
> -		spin_unlock_irqrestore(&thread->chan->lock, flags);
> -	}
> +	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK))
> +		cmdq_thread_irq_handler(cmdq, &cmdq->thread[bit]);
>  
>  	return IRQ_HANDLED;
>  }
> @@ -310,7 +172,7 @@ static int cmdq_suspend(struct device *dev)
>  
>  	for (i = 0; i < cmdq->thread_nr; i++) {
>  		thread = &cmdq->thread[i];
> -		if (!list_empty(&thread->task_busy_list)) {
> +		if (thread->pkt) {
>  			task_running = true;
>  			break;
>  		}
> @@ -347,72 +209,21 @@ static int cmdq_mbox_send_data(struct mbox_chan *chan, void *data)
>  	struct cmdq_pkt *pkt = (struct cmdq_pkt *)data;
>  	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
>  	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
> -	struct cmdq_task *task;
> -	unsigned long curr_pa, end_pa;
>  
>  	/* Client should not flush new tasks if suspended. */
>  	WARN_ON(cmdq->suspended);
>  
> -	task = kzalloc(sizeof(*task), GFP_ATOMIC);
> -	if (!task)
> -		return -ENOMEM;
> +	thread->pkt = pkt;
>  
> -	task->cmdq = cmdq;
> -	INIT_LIST_HEAD(&task->list_entry);
> -	task->pa_base = pkt->pa_base;
> -	task->thread = thread;
> -	task->pkt = pkt;
> -
> -	if (list_empty(&thread->task_busy_list)) {
> -		WARN_ON(clk_enable(cmdq->clock) < 0);
> -		WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
> -
> -		writel(task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> -		writel(task->pa_base + pkt->cmd_buf_size,
> -		       thread->base + CMDQ_THR_END_ADDR);
> -		writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
> -		writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
> -		writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> -	} else {
> -		WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> -		curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
> -		end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
> -
> -		/*
> -		 * Atomic execution should remove the following wfe, i.e. only
> -		 * wait event at first task, and prevent to pause when running.
> -		 */
> -		if (thread->atomic_exec) {
> -			/* GCE is executing if command is not WFE */
> -			if (!cmdq_thread_is_in_wfe(thread)) {
> -				cmdq_thread_resume(thread);
> -				cmdq_thread_wait_end(thread, end_pa);
> -				WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> -				/* set to this task directly */
> -				writel(task->pa_base,
> -				       thread->base + CMDQ_THR_CURR_ADDR);
> -			} else {
> -				cmdq_task_insert_into_thread(task);
> -				cmdq_task_remove_wfe(task);
> -				smp_mb(); /* modify jump before enable thread */
> -			}
> -		} else {
> -			/* check boundary */
> -			if (curr_pa == end_pa - CMDQ_INST_SIZE ||
> -			    curr_pa == end_pa) {
> -				/* set to this task directly */
> -				writel(task->pa_base,
> -				       thread->base + CMDQ_THR_CURR_ADDR);
> -			} else {
> -				cmdq_task_insert_into_thread(task);
> -				smp_mb(); /* modify jump before enable thread */
> -			}
> -		}
> -		writel(task->pa_base + pkt->cmd_buf_size,
> -		       thread->base + CMDQ_THR_END_ADDR);
> -		cmdq_thread_resume(thread);
> -	}
> -	list_move_tail(&task->list_entry, &thread->task_busy_list);
> +	WARN_ON(clk_enable(cmdq->clock) < 0);
> +	WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
> +
> +	writel(thread->pkt->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> +	writel(thread->pkt->pa_base + pkt->cmd_buf_size,
> +	       thread->base + CMDQ_THR_END_ADDR);
> +	writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
> +	writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
> +	writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
>  
>  	return 0;
>  }
> @@ -421,23 +232,18 @@ static void cmdq_mbox_abort_data(struct mbox_chan *chan)
>  {
>  	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
>  	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
> -	struct cmdq_task *task, *tmp;
>  	unsigned long flags;
>  	u32 enable;
>  
>  	spin_lock_irqsave(&thread->chan->lock, flags);
> -	if (list_empty(&thread->task_busy_list))
> +	if (!thread->pkt)
>  		goto out;
>  
>  	WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
>  	if (!cmdq_thread_is_in_wfe(thread))
>  		goto wait;
>  
> -	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
> -				 list_entry) {
> -		list_del(&task->list_entry);
> -		kfree(task);
> -	}
> +	thread->pkt = NULL;
>  
>  	cmdq_thread_resume(thread);
>  	cmdq_thread_disable(cmdq, thread);
> @@ -483,7 +289,6 @@ static struct mbox_chan *cmdq_xlate(struct mbox_controller *mbox,
>  
>  	thread = (struct cmdq_thread *)mbox->chans[ind].con_priv;
>  	thread->priority = sp->args[1];
> -	thread->atomic_exec = (sp->args[2] != 0);
>  	thread->chan = &mbox->chans[ind];
>  
>  	return &mbox->chans[ind];
> @@ -539,8 +344,7 @@ static int cmdq_probe(struct platform_device *pdev)
>  	cmdq->mbox.ops = &cmdq_mbox_chan_ops;
>  	cmdq->mbox.of_xlate = cmdq_xlate;
>  
> -	/* make use of TXDONE_BY_ACK */
> -	cmdq->mbox.txdone_irq = false;
> +	cmdq->mbox.txdone_irq = true;
>  	cmdq->mbox.txdone_poll = false;
>  
>  	cmdq->thread = devm_kcalloc(dev, cmdq->thread_nr,
> @@ -551,7 +355,6 @@ static int cmdq_probe(struct platform_device *pdev)
>  	for (i = 0; i < cmdq->thread_nr; i++) {
>  		cmdq->thread[i].base = cmdq->base + CMDQ_THR_BASE +
>  				CMDQ_THR_SIZE * i;
> -		INIT_LIST_HEAD(&cmdq->thread[i].task_busy_list);
>  		cmdq->mbox.chans[i].con_priv = (void *)&cmdq->thread[i];
>  	}
>  
> --
> 2.18.1
> 




* Re: [PATCH 3/3] mailbox: mediatek: Remove busylist
  2019-02-12  2:18     ` Dennis-YC Hsieh
@ 2019-02-14 16:01       ` CK Hu
  2019-02-14 16:12         ` Dennis-YC Hsieh
  0 siblings, 1 reply; 10+ messages in thread
From: CK Hu @ 2019-02-14 16:01 UTC (permalink / raw)
  To: Dennis-YC Hsieh
  Cc: srv_heupstream, Jassi Brar, linux-kernel,
	Houlong Wei (魏厚龙),
	linux-mediatek, Bibby Hsieh, Matthias Brugger, linux-arm-kernel

Hi, Dennis:

On Tue, 2019-02-12 at 10:18 +0800, Dennis-YC Hsieh wrote:
> Hi CK,
> 
> On Tue, 2019-01-29 at 17:20 +0800, Houlong Wei (魏厚龙) wrote:
> > -----Original Message-----
> > From: CK Hu [mailto:ck.hu@mediatek.com] 
> > Sent: Wednesday, January 16, 2019 1:05 PM
> > To: Jassi Brar <jassisinghbrar@gmail.com>; Matthias Brugger <matthias.bgg@gmail.com>; Houlong Wei (魏厚龙) <houlong.wei@mediatek.com>
> > Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-mediatek@lists.infradead.org; srv_heupstream <srv_heupstream@mediatek.com>; CK Hu (胡俊光) <ck.hu@mediatek.com>
> > Subject: [PATCH 3/3] mailbox: mediatek: Remove busylist
> > 
> > After implementing abort_data, the controller no longer needs to implement its own queue. Remove the busylist because it is now useless.
> 
> Removing the busy list from the controller leaves the client driver with
> no way to queue pkts in a GCE hardware thread, which may hurt display and
> multimedia performance, since each pkt waits out the IRQ delay of the
> previous pkt. I suggest keeping the busy-list design.

If some client drivers need GCE to cascade pkts to avoid the IRQ delay, I
should keep the busylist. For the drm driver, I still want to apply the
first two patches of this series and remove the atomic feature, because
drm could keep just one pkt and reuse it, as sketched below, to avoid
frequently allocating/freeing pkts, and the total commands executed in
the vblank period would be well-controlled. Do you have any concern about
removing the atomic feature?
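
A hedged sketch of that reuse scheme (struct my_mtk_crtc, cmdq_pkt_reset()
and write_cursor_regs() are hypothetical):

#include <linux/mailbox_client.h>

struct my_mtk_crtc {				/* illustrative only */
	struct mbox_chan *chan;
	struct cmdq_pkt *pkt;
	bool pkt_in_flight;
};

static void mtk_crtc_flush_cursor(struct my_mtk_crtc *crtc)
{
	/* Abort the in-flight copy before reusing the single pkt. */
	if (crtc->pkt_in_flight)
		mbox_abort_channel(crtc->chan);

	cmdq_pkt_reset(crtc->pkt);		/* hypothetical helper */
	write_cursor_regs(crtc->pkt);		/* hypothetical helper */
	mbox_send_message(crtc->chan, crtc->pkt);
	crtc->pkt_in_flight = true;
}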

Regards,
CK

> 
> 
> Regards,
> Dennis
> 
> > 
> > Signed-off-by: CK Hu <ck.hu@mediatek.com>
> > ---
> >  drivers/mailbox/mtk-cmdq-mailbox.c | 255 ++++-------------------------
> >  1 file changed, 29 insertions(+), 226 deletions(-)
> > 
> > diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
> > index f2219f263ef6..45c59f677ecb 100644
> > --- a/drivers/mailbox/mtk-cmdq-mailbox.c
> > +++ b/drivers/mailbox/mtk-cmdq-mailbox.c
> > @@ -16,9 +16,7 @@
> >  #include <linux/mailbox/mtk-cmdq-mailbox.h>
> >  #include <linux/of_device.h>
> >  
> > -#define CMDQ_OP_CODE_MASK		(0xff << CMDQ_OP_CODE_SHIFT)
> >  #define CMDQ_IRQ_MASK			0xffff
> > -#define CMDQ_NUM_CMD(t)			(t->cmd_buf_size / CMDQ_INST_SIZE)
> >  
> >  #define CMDQ_CURR_IRQ_STATUS		0x10
> >  #define CMDQ_THR_SLOT_CYCLES		0x30
> > @@ -47,22 +45,10 @@
> >  #define CMDQ_THR_IRQ_EN			(CMDQ_THR_IRQ_ERROR | CMDQ_THR_IRQ_DONE)
> >  #define CMDQ_THR_IS_WAITING		BIT(31)
> >  
> > -#define CMDQ_JUMP_BY_OFFSET		0x10000000
> > -#define CMDQ_JUMP_BY_PA			0x10000001
> > -
> >  struct cmdq_thread {
> >  	struct mbox_chan	*chan;
> >  	void __iomem		*base;
> > -	struct list_head	task_busy_list;
> >  	u32			priority;
> > -	bool			atomic_exec;
> > -};
> > -
> > -struct cmdq_task {
> > -	struct cmdq		*cmdq;
> > -	struct list_head	list_entry;
> > -	dma_addr_t		pa_base;
> > -	struct cmdq_thread	*thread;
> >  	struct cmdq_pkt		*pkt; /* the packet sent from mailbox client */
> >  };
> >  
> > @@ -130,171 +116,47 @@ static void cmdq_thread_disable(struct cmdq *cmdq, struct cmdq_thread *thread)
> >  	writel(CMDQ_THR_DISABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> >  }
> >  
> > -/* notify GCE to re-fetch commands by setting GCE thread PC */
> > -static void cmdq_thread_invalidate_fetched_data(struct cmdq_thread *thread)
> > -{
> > -	writel(readl(thread->base + CMDQ_THR_CURR_ADDR),
> > -	       thread->base + CMDQ_THR_CURR_ADDR);
> > -}
> > -
> > -static void cmdq_task_insert_into_thread(struct cmdq_task *task)
> > -{
> > -	struct device *dev = task->cmdq->mbox.dev;
> > -	struct cmdq_thread *thread = task->thread;
> > -	struct cmdq_task *prev_task = list_last_entry(
> > -			&thread->task_busy_list, typeof(*task), list_entry);
> > -	u64 *prev_task_base = prev_task->pkt->va_base;
> > -
> > -	/* let previous task jump to this task */
> > -	dma_sync_single_for_cpu(dev, prev_task->pa_base,
> > -				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
> > -	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
> > -		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
> > -	dma_sync_single_for_device(dev, prev_task->pa_base,
> > -				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
> > -
> > -	cmdq_thread_invalidate_fetched_data(thread);
> > -}
> > -
> > -static bool cmdq_command_is_wfe(u64 cmd)
> > -{
> > -	u64 wfe_option = CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | CMDQ_WFE_WAIT_VALUE;
> > -	u64 wfe_op = (u64)(CMDQ_CODE_WFE << CMDQ_OP_CODE_SHIFT) << 32;
> > -	u64 wfe_mask = (u64)CMDQ_OP_CODE_MASK << 32 | 0xffffffff;
> > -
> > -	return ((cmd & wfe_mask) == (wfe_op | wfe_option));
> > -}
> > -
> > -/* we assume tasks in the same display GCE thread are waiting the same event. */
> > -static void cmdq_task_remove_wfe(struct cmdq_task *task)
> > -{
> > -	struct device *dev = task->cmdq->mbox.dev;
> > -	u64 *base = task->pkt->va_base;
> > -	int i;
> > -
> > -	dma_sync_single_for_cpu(dev, task->pa_base, task->pkt->cmd_buf_size,
> > -				DMA_TO_DEVICE);
> > -	for (i = 0; i < CMDQ_NUM_CMD(task->pkt); i++)
> > -		if (cmdq_command_is_wfe(base[i]))
> > -			base[i] = (u64)CMDQ_JUMP_BY_OFFSET << 32 |
> > -				  CMDQ_JUMP_PASS;
> > -	dma_sync_single_for_device(dev, task->pa_base, task->pkt->cmd_buf_size,
> > -				   DMA_TO_DEVICE);
> > -}
> > -
> >  static bool cmdq_thread_is_in_wfe(struct cmdq_thread *thread)
> >  {
> >  	return readl(thread->base + CMDQ_THR_WAIT_TOKEN) & CMDQ_THR_IS_WAITING;
> >  }
> >  
> > -static void cmdq_thread_wait_end(struct cmdq_thread *thread,
> > -				 unsigned long end_pa)
> > -{
> > -	struct device *dev = thread->chan->mbox->dev;
> > -	unsigned long curr_pa;
> > -
> > -	if (readl_poll_timeout_atomic(thread->base + CMDQ_THR_CURR_ADDR,
> > -			curr_pa, curr_pa == end_pa, 1, 20))
> > -		dev_err(dev, "GCE thread cannot run to end.\n");
> > -}
> > -
> > -static void cmdq_task_exec_done(struct cmdq_task *task, enum cmdq_cb_status sta)
> > -{
> > -	struct cmdq_task_cb *cb = &task->pkt->async_cb;
> > -	struct cmdq_cb_data data;
> > -
> > -	WARN_ON(cb->cb == (cmdq_async_flush_cb)NULL);
> > -	data.sta = sta;
> > -	data.data = cb->data;
> > -	cb->cb(data);
> > -
> > -	list_del(&task->list_entry);
> > -}
> > -
> > -static void cmdq_task_handle_error(struct cmdq_task *task)
> > -{
> > -	struct cmdq_thread *thread = task->thread;
> > -	struct cmdq_task *next_task;
> > -
> > -	dev_err(task->cmdq->mbox.dev, "task 0x%p error\n", task);
> > -	WARN_ON(cmdq_thread_suspend(task->cmdq, thread) < 0);
> > -	next_task = list_first_entry_or_null(&thread->task_busy_list,
> > -			struct cmdq_task, list_entry);
> > -	if (next_task)
> > -		writel(next_task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> > -	cmdq_thread_resume(thread);
> > -}
> > -
> >  static void cmdq_thread_irq_handler(struct cmdq *cmdq,
> >  				    struct cmdq_thread *thread)
> >  {
> > -	struct cmdq_task *task, *tmp, *curr_task = NULL;
> > -	u32 curr_pa, irq_flag, task_end_pa;
> > -	bool err;
> > +	unsigned long flags;
> > +	u32 curr_pa, irq_flag, end_pa;
> > +	int ret = 0;
> >  
> > +	spin_lock_irqsave(&thread->chan->lock, flags);
> >  	irq_flag = readl(thread->base + CMDQ_THR_IRQ_STATUS);
> >  	writel(~irq_flag, thread->base + CMDQ_THR_IRQ_STATUS);
> >  
> > -	/*
> > -	 * When ISR call this function, another CPU core could run
> > -	 * "release task" right before we acquire the spin lock, and thus
> > -	 * reset / disable this GCE thread, so we need to check the enable
> > -	 * bit of this GCE thread.
> > -	 */
> > -	if (!(readl(thread->base + CMDQ_THR_ENABLE_TASK) & CMDQ_THR_ENABLED))
> > -		return;
> > -
> > -	if (irq_flag & CMDQ_THR_IRQ_ERROR)
> > -		err = true;
> > -	else if (irq_flag & CMDQ_THR_IRQ_DONE)
> > -		err = false;
> > -	else
> > -		return;
> > -
> >  	curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
> > +	end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
> >  
> > -	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
> > -				 list_entry) {
> > -		task_end_pa = task->pa_base + task->pkt->cmd_buf_size;
> > -		if (curr_pa >= task->pa_base && curr_pa < task_end_pa)
> > -			curr_task = task;
> > -
> > -		if (!curr_task || curr_pa == task_end_pa - CMDQ_INST_SIZE) {
> > -			cmdq_task_exec_done(task, CMDQ_CB_NORMAL);
> > -			kfree(task);
> > -		} else if (err) {
> > -			cmdq_task_exec_done(task, CMDQ_CB_ERROR);
> > -			cmdq_task_handle_error(curr_task);
> > -			kfree(task);
> > -		}
> > -
> > -		if (curr_task)
> > -			break;
> > -	}
> > +	if (curr_pa != end_pa ||  irq_flag & CMDQ_THR_IRQ_ERROR)
> > +		ret = -EFAULT;
> >  
> > -	if (list_empty(&thread->task_busy_list)) {
> > -		cmdq_thread_disable(cmdq, thread);
> > -		clk_disable(cmdq->clock);
> > -	}
> > +	thread->pkt = NULL;
> > +	cmdq_thread_disable(cmdq, thread);
> > +	clk_disable(cmdq->clock);
> > +	spin_unlock_irqrestore(&thread->chan->lock, flags);
> > +	mbox_chan_txdone(thread->chan, ret);
> >  }
> >  
> >  static irqreturn_t cmdq_irq_handler(int irq, void *dev)
> >  {
> >  	struct cmdq *cmdq = dev;
> > -	unsigned long irq_status, flags = 0L;
> > +	unsigned long irq_status;
> >  	int bit;
> >  
> >  	irq_status = readl(cmdq->base + CMDQ_CURR_IRQ_STATUS) & CMDQ_IRQ_MASK;
> >  	if (!(irq_status ^ CMDQ_IRQ_MASK))
> >  		return IRQ_NONE;
> >  
> > -	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK)) {
> > -		struct cmdq_thread *thread = &cmdq->thread[bit];
> > -
> > -		spin_lock_irqsave(&thread->chan->lock, flags);
> > -		cmdq_thread_irq_handler(cmdq, thread);
> > -		spin_unlock_irqrestore(&thread->chan->lock, flags);
> > -	}
> > +	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK))
> > +		cmdq_thread_irq_handler(cmdq, &cmdq->thread[bit]);
> >  
> >  	return IRQ_HANDLED;
> >  }
> > @@ -310,7 +172,7 @@ static int cmdq_suspend(struct device *dev)
> >  
> >  	for (i = 0; i < cmdq->thread_nr; i++) {
> >  		thread = &cmdq->thread[i];
> > -		if (!list_empty(&thread->task_busy_list)) {
> > +		if (thread->pkt) {
> >  			task_running = true;
> >  			break;
> >  		}
> > @@ -347,72 +209,21 @@ static int cmdq_mbox_send_data(struct mbox_chan *chan, void *data)
> >  	struct cmdq_pkt *pkt = (struct cmdq_pkt *)data;
> >  	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
> >  	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
> > -	struct cmdq_task *task;
> > -	unsigned long curr_pa, end_pa;
> >  
> >  	/* Client should not flush new tasks if suspended. */
> >  	WARN_ON(cmdq->suspended);
> >  
> > -	task = kzalloc(sizeof(*task), GFP_ATOMIC);
> > -	if (!task)
> > -		return -ENOMEM;
> > +	thread->pkt = pkt;
> >  
> > -	task->cmdq = cmdq;
> > -	INIT_LIST_HEAD(&task->list_entry);
> > -	task->pa_base = pkt->pa_base;
> > -	task->thread = thread;
> > -	task->pkt = pkt;
> > -
> > -	if (list_empty(&thread->task_busy_list)) {
> > -		WARN_ON(clk_enable(cmdq->clock) < 0);
> > -		WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
> > -
> > -		writel(task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> > -		writel(task->pa_base + pkt->cmd_buf_size,
> > -		       thread->base + CMDQ_THR_END_ADDR);
> > -		writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
> > -		writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
> > -		writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> > -	} else {
> > -		WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> > -		curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
> > -		end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
> > -
> > -		/*
> > -		 * Atomic execution should remove the following wfe, i.e. only
> > -		 * wait event at first task, and prevent to pause when running.
> > -		 */
> > -		if (thread->atomic_exec) {
> > -			/* GCE is executing if command is not WFE */
> > -			if (!cmdq_thread_is_in_wfe(thread)) {
> > -				cmdq_thread_resume(thread);
> > -				cmdq_thread_wait_end(thread, end_pa);
> > -				WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> > -				/* set to this task directly */
> > -				writel(task->pa_base,
> > -				       thread->base + CMDQ_THR_CURR_ADDR);
> > -			} else {
> > -				cmdq_task_insert_into_thread(task);
> > -				cmdq_task_remove_wfe(task);
> > -				smp_mb(); /* modify jump before enable thread */
> > -			}
> > -		} else {
> > -			/* check boundary */
> > -			if (curr_pa == end_pa - CMDQ_INST_SIZE ||
> > -			    curr_pa == end_pa) {
> > -				/* set to this task directly */
> > -				writel(task->pa_base,
> > -				       thread->base + CMDQ_THR_CURR_ADDR);
> > -			} else {
> > -				cmdq_task_insert_into_thread(task);
> > -				smp_mb(); /* modify jump before enable thread */
> > -			}
> > -		}
> > -		writel(task->pa_base + pkt->cmd_buf_size,
> > -		       thread->base + CMDQ_THR_END_ADDR);
> > -		cmdq_thread_resume(thread);
> > -	}
> > -	list_move_tail(&task->list_entry, &thread->task_busy_list);
> > +	WARN_ON(clk_enable(cmdq->clock) < 0);
> > +	WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
> > +
> > +	writel(thread->pkt->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> > +	writel(thread->pkt->pa_base + pkt->cmd_buf_size,
> > +	       thread->base + CMDQ_THR_END_ADDR);
> > +	writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
> > +	writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
> > +	writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> >  
> >  	return 0;
> >  }
> > @@ -421,23 +232,18 @@ static void cmdq_mbox_abort_data(struct mbox_chan *chan)
> >  {
> >  	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
> >  	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
> > -	struct cmdq_task *task, *tmp;
> >  	unsigned long flags;
> >  	u32 enable;
> >  
> >  	spin_lock_irqsave(&thread->chan->lock, flags);
> > -	if (list_empty(&thread->task_busy_list))
> > +	if (!thread->pkt)
> >  		goto out;
> >  
> >  	WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> >  	if (!cmdq_thread_is_in_wfe(thread))
> >  		goto wait;
> >  
> > -	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
> > -				 list_entry) {
> > -		list_del(&task->list_entry);
> > -		kfree(task);
> > -	}
> > +	thread->pkt = NULL;
> >  
> >  	cmdq_thread_resume(thread);
> >  	cmdq_thread_disable(cmdq, thread);
> > @@ -483,7 +289,6 @@ static struct mbox_chan *cmdq_xlate(struct mbox_controller *mbox,
> >  
> >  	thread = (struct cmdq_thread *)mbox->chans[ind].con_priv;
> >  	thread->priority = sp->args[1];
> > -	thread->atomic_exec = (sp->args[2] != 0);
> >  	thread->chan = &mbox->chans[ind];
> >  
> >  	return &mbox->chans[ind];
> > @@ -539,8 +344,7 @@ static int cmdq_probe(struct platform_device *pdev)
> >  	cmdq->mbox.ops = &cmdq_mbox_chan_ops;
> >  	cmdq->mbox.of_xlate = cmdq_xlate;
> >  
> > -	/* make use of TXDONE_BY_ACK */
> > -	cmdq->mbox.txdone_irq = false;
> > +	cmdq->mbox.txdone_irq = true;
> >  	cmdq->mbox.txdone_poll = false;
> >  
> >  	cmdq->thread = devm_kcalloc(dev, cmdq->thread_nr,
> > @@ -551,7 +355,6 @@ static int cmdq_probe(struct platform_device *pdev)
> >  	for (i = 0; i < cmdq->thread_nr; i++) {
> >  		cmdq->thread[i].base = cmdq->base + CMDQ_THR_BASE +
> >  				CMDQ_THR_SIZE * i;
> > -		INIT_LIST_HEAD(&cmdq->thread[i].task_busy_list);
> >  		cmdq->mbox.chans[i].con_priv = (void *)&cmdq->thread[i];
> >  	}
> >  
> > --
> > 2.18.1
> > 
> 
> 



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 3/3] mailbox: mediatek: Remove busylist
  2019-02-14 16:01       ` CK Hu
@ 2019-02-14 16:12         ` Dennis-YC Hsieh
  0 siblings, 0 replies; 10+ messages in thread
From: Dennis-YC Hsieh @ 2019-02-14 16:12 UTC (permalink / raw)
  To: CK Hu
  Cc: srv_heupstream, Jassi Brar, linux-kernel,
	Houlong Wei (魏厚龙),
	linux-mediatek, Bibby Hsieh, Matthias Brugger, linux-arm-kernel

Hi CK,

On Fri, 2019-02-15 at 00:01 +0800, CK Hu wrote:
> Hi, Dennis:
> 
> On Tue, 2019-02-12 at 10:18 +0800, Dennis-YC Hsieh wrote:
> > Hi CK,
> > 
> > On Tue, 2019-01-29 at 17:20 +0800, Houlong Wei (魏厚龙) wrote:
> > > -----Original Message-----
> > > From: CK Hu [mailto:ck.hu@mediatek.com] 
> > > Sent: Wednesday, January 16, 2019 1:05 PM
> > > To: Jassi Brar <jassisinghbrar@gmail.com>; Matthias Brugger <matthias.bgg@gmail.com>; Houlong Wei (魏厚龙) <houlong.wei@mediatek.com>
> > > Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-mediatek@lists.infradead.org; srv_heupstream <srv_heupstream@mediatek.com>; CK Hu (胡俊光) <ck.hu@mediatek.com>
> > > Subject: [PATCH 3/3] mailbox: mediatek: Remove busylist
> > > 
> > > After implementing abort_data, the controller no longer needs to implement its own queue. Remove the busylist since it is now unused.
> > 
> > Removing the busy list from the controller leaves client drivers no way
> > to queue pkts in a GCE hardware thread, which may hurt display and
> > multimedia performance since each pkt must wait out an IRQ delay behind
> > the previous pkt. I suggest keeping the busy list design.
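
A minimal client-side sketch of the serialization described above,
assuming the controller leans on the framework queue with txdone_irq.
The channel and pkt names are illustrative, not taken from the patch:

    #include <linux/mailbox_client.h>
    #include <linux/mailbox/mtk-cmdq-mailbox.h>

    static int flush_cursor_updates(struct mbox_chan *disp_chan,
                                    struct cmdq_pkt *pkts[], int n)
    {
            int i, ret;

            for (i = 0; i < n; i++) {
                    /*
                     * Only the first pkt is programmed into the GCE
                     * thread immediately; each later pkt sits in the
                     * framework ring until mbox_chan_txdone() for its
                     * predecessor runs in the IRQ handler, so every
                     * pkt pays one IRQ round-trip before it starts.
                     */
                    ret = mbox_send_message(disp_chan, pkts[i]);
                    if (ret < 0)
                            return ret;
            }

            return 0;
    }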
> 
> If some client driver needs GCE to cascade pkts to avoid that IRQ
> delay, I should keep the busylist. For the drm driver, I still want to
> apply the first two patches of this series and remove the atomic
> feature: drm could keep just one pkt and reuse it, avoiding frequent
> pkt allocation/freeing, and the total commands executed in the vblank
> period would be well-controlled. Do you have any concern about
> removing the atomic feature?
> 
> Regards,
> CK
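
A sketch of the single-reusable-pkt scheme proposed above, assuming the
client-side abort added by patch 1/3 is exposed as mbox_abort_channel()
(the function name is an assumption here) and leaving the merge of
register writes abstract:

    #include <linux/mailbox_client.h>
    #include <linux/mailbox/mtk-cmdq-mailbox.h>

    static void send_cursor_pkt(struct mbox_chan *chan,
                                struct cmdq_pkt *pkt, bool in_flight)
    {
            /*
             * At most one pkt is ever handed to the mailbox. If the
             * previous one has not completed, abort it; its register
             * writes are assumed to be re-generated (merged) into pkt
             * before resending.
             */
            if (in_flight)
                    mbox_abort_channel(chan);

            /* ... rebuild pkt with the merged cursor updates ... */

            mbox_send_message(chan, pkt);
    }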
> 

I have no concern about removing the atomic feature.


Regards,
Dennis


> > 
> > 
> > Regards,
> > Dennis
> > 
> > > 
> > > Signed-off-by: CK Hu <ck.hu@mediatek.com>
> > > ---
> > >  drivers/mailbox/mtk-cmdq-mailbox.c | 255 ++++-------------------------
> > >  1 file changed, 29 insertions(+), 226 deletions(-)
> > > 
> > > diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
> > > index f2219f263ef6..45c59f677ecb 100644
> > > --- a/drivers/mailbox/mtk-cmdq-mailbox.c
> > > +++ b/drivers/mailbox/mtk-cmdq-mailbox.c
> > > @@ -16,9 +16,7 @@
> > >  #include <linux/mailbox/mtk-cmdq-mailbox.h>
> > >  #include <linux/of_device.h>
> > >  
> > > -#define CMDQ_OP_CODE_MASK		(0xff << CMDQ_OP_CODE_SHIFT)
> > >  #define CMDQ_IRQ_MASK			0xffff
> > > -#define CMDQ_NUM_CMD(t)			(t->cmd_buf_size / CMDQ_INST_SIZE)
> > >  
> > >  #define CMDQ_CURR_IRQ_STATUS		0x10
> > >  #define CMDQ_THR_SLOT_CYCLES		0x30
> > > @@ -47,22 +45,10 @@
> > >  #define CMDQ_THR_IRQ_EN			(CMDQ_THR_IRQ_ERROR | CMDQ_THR_IRQ_DONE)
> > >  #define CMDQ_THR_IS_WAITING		BIT(31)
> > >  
> > > -#define CMDQ_JUMP_BY_OFFSET		0x10000000
> > > -#define CMDQ_JUMP_BY_PA			0x10000001
> > > -
> > >  struct cmdq_thread {
> > >  	struct mbox_chan	*chan;
> > >  	void __iomem		*base;
> > > -	struct list_head	task_busy_list;
> > >  	u32			priority;
> > > -	bool			atomic_exec;
> > > -};
> > > -
> > > -struct cmdq_task {
> > > -	struct cmdq		*cmdq;
> > > -	struct list_head	list_entry;
> > > -	dma_addr_t		pa_base;
> > > -	struct cmdq_thread	*thread;
> > >  	struct cmdq_pkt		*pkt; /* the packet sent from mailbox client */
> > >  };
> > >  
> > > @@ -130,171 +116,47 @@ static void cmdq_thread_disable(struct cmdq *cmdq, struct cmdq_thread *thread)
> > >  	writel(CMDQ_THR_DISABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> > >  }
> > >  
> > > -/* notify GCE to re-fetch commands by setting GCE thread PC */
> > > -static void cmdq_thread_invalidate_fetched_data(struct cmdq_thread *thread)
> > > -{
> > > -	writel(readl(thread->base + CMDQ_THR_CURR_ADDR),
> > > -	       thread->base + CMDQ_THR_CURR_ADDR);
> > > -}
> > > -
> > > -static void cmdq_task_insert_into_thread(struct cmdq_task *task)
> > > -{
> > > -	struct device *dev = task->cmdq->mbox.dev;
> > > -	struct cmdq_thread *thread = task->thread;
> > > -	struct cmdq_task *prev_task = list_last_entry(
> > > -			&thread->task_busy_list, typeof(*task), list_entry);
> > > -	u64 *prev_task_base = prev_task->pkt->va_base;
> > > -
> > > -	/* let previous task jump to this task */
> > > -	dma_sync_single_for_cpu(dev, prev_task->pa_base,
> > > -				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
> > > -	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
> > > -		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
> > > -	dma_sync_single_for_device(dev, prev_task->pa_base,
> > > -				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
> > > -
> > > -	cmdq_thread_invalidate_fetched_data(thread);
> > > -}
> > > -
> > > -static bool cmdq_command_is_wfe(u64 cmd)
> > > -{
> > > -	u64 wfe_option = CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | CMDQ_WFE_WAIT_VALUE;
> > > -	u64 wfe_op = (u64)(CMDQ_CODE_WFE << CMDQ_OP_CODE_SHIFT) << 32;
> > > -	u64 wfe_mask = (u64)CMDQ_OP_CODE_MASK << 32 | 0xffffffff;
> > > -
> > > -	return ((cmd & wfe_mask) == (wfe_op | wfe_option));
> > > -}
> > > -
> > > -/* we assume tasks in the same display GCE thread are waiting the same event. */
> > > -static void cmdq_task_remove_wfe(struct cmdq_task *task)
> > > -{
> > > -	struct device *dev = task->cmdq->mbox.dev;
> > > -	u64 *base = task->pkt->va_base;
> > > -	int i;
> > > -
> > > -	dma_sync_single_for_cpu(dev, task->pa_base, task->pkt->cmd_buf_size,
> > > -				DMA_TO_DEVICE);
> > > -	for (i = 0; i < CMDQ_NUM_CMD(task->pkt); i++)
> > > -		if (cmdq_command_is_wfe(base[i]))
> > > -			base[i] = (u64)CMDQ_JUMP_BY_OFFSET << 32 |
> > > -				  CMDQ_JUMP_PASS;
> > > -	dma_sync_single_for_device(dev, task->pa_base, task->pkt->cmd_buf_size,
> > > -				   DMA_TO_DEVICE);
> > > -}
> > > -
> > >  static bool cmdq_thread_is_in_wfe(struct cmdq_thread *thread)
> > >  {
> > >  	return readl(thread->base + CMDQ_THR_WAIT_TOKEN) & CMDQ_THR_IS_WAITING;
> > >  }
> > >  
> > > -static void cmdq_thread_wait_end(struct cmdq_thread *thread,
> > > -				 unsigned long end_pa)
> > > -{
> > > -	struct device *dev = thread->chan->mbox->dev;
> > > -	unsigned long curr_pa;
> > > -
> > > -	if (readl_poll_timeout_atomic(thread->base + CMDQ_THR_CURR_ADDR,
> > > -			curr_pa, curr_pa == end_pa, 1, 20))
> > > -		dev_err(dev, "GCE thread cannot run to end.\n");
> > > -}
> > > -
> > > -static void cmdq_task_exec_done(struct cmdq_task *task, enum cmdq_cb_status sta)
> > > -{
> > > -	struct cmdq_task_cb *cb = &task->pkt->async_cb;
> > > -	struct cmdq_cb_data data;
> > > -
> > > -	WARN_ON(cb->cb == (cmdq_async_flush_cb)NULL);
> > > -	data.sta = sta;
> > > -	data.data = cb->data;
> > > -	cb->cb(data);
> > > -
> > > -	list_del(&task->list_entry);
> > > -}
> > > -
> > > -static void cmdq_task_handle_error(struct cmdq_task *task)
> > > -{
> > > -	struct cmdq_thread *thread = task->thread;
> > > -	struct cmdq_task *next_task;
> > > -
> > > -	dev_err(task->cmdq->mbox.dev, "task 0x%p error\n", task);
> > > -	WARN_ON(cmdq_thread_suspend(task->cmdq, thread) < 0);
> > > -	next_task = list_first_entry_or_null(&thread->task_busy_list,
> > > -			struct cmdq_task, list_entry);
> > > -	if (next_task)
> > > -		writel(next_task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> > > -	cmdq_thread_resume(thread);
> > > -}
> > > -
> > >  static void cmdq_thread_irq_handler(struct cmdq *cmdq,
> > >  				    struct cmdq_thread *thread)
> > >  {
> > > -	struct cmdq_task *task, *tmp, *curr_task = NULL;
> > > -	u32 curr_pa, irq_flag, task_end_pa;
> > > -	bool err;
> > > +	unsigned long flags;
> > > +	u32 curr_pa, irq_flag, end_pa;
> > > +	int ret = 0;
> > >  
> > > +	spin_lock_irqsave(&thread->chan->lock, flags);
> > >  	irq_flag = readl(thread->base + CMDQ_THR_IRQ_STATUS);
> > >  	writel(~irq_flag, thread->base + CMDQ_THR_IRQ_STATUS);
> > >  
> > > -	/*
> > > -	 * When ISR call this function, another CPU core could run
> > > -	 * "release task" right before we acquire the spin lock, and thus
> > > -	 * reset / disable this GCE thread, so we need to check the enable
> > > -	 * bit of this GCE thread.
> > > -	 */
> > > -	if (!(readl(thread->base + CMDQ_THR_ENABLE_TASK) & CMDQ_THR_ENABLED))
> > > -		return;
> > > -
> > > -	if (irq_flag & CMDQ_THR_IRQ_ERROR)
> > > -		err = true;
> > > -	else if (irq_flag & CMDQ_THR_IRQ_DONE)
> > > -		err = false;
> > > -	else
> > > -		return;
> > > -
> > >  	curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
> > > +	end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
> > >  
> > > -	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
> > > -				 list_entry) {
> > > -		task_end_pa = task->pa_base + task->pkt->cmd_buf_size;
> > > -		if (curr_pa >= task->pa_base && curr_pa < task_end_pa)
> > > -			curr_task = task;
> > > -
> > > -		if (!curr_task || curr_pa == task_end_pa - CMDQ_INST_SIZE) {
> > > -			cmdq_task_exec_done(task, CMDQ_CB_NORMAL);
> > > -			kfree(task);
> > > -		} else if (err) {
> > > -			cmdq_task_exec_done(task, CMDQ_CB_ERROR);
> > > -			cmdq_task_handle_error(curr_task);
> > > -			kfree(task);
> > > -		}
> > > -
> > > -		if (curr_task)
> > > -			break;
> > > -	}
> > > +	if (curr_pa != end_pa ||  irq_flag & CMDQ_THR_IRQ_ERROR)
> > > +		ret = -EFAULT;
> > >  
> > > -	if (list_empty(&thread->task_busy_list)) {
> > > -		cmdq_thread_disable(cmdq, thread);
> > > -		clk_disable(cmdq->clock);
> > > -	}
> > > +	thread->pkt = NULL;
> > > +	cmdq_thread_disable(cmdq, thread);
> > > +	clk_disable(cmdq->clock);
> > > +	spin_unlock_irqrestore(&thread->chan->lock, flags);
> > > +	mbox_chan_txdone(thread->chan, ret);
> > >  }
> > >  
> > >  static irqreturn_t cmdq_irq_handler(int irq, void *dev)
> > >  {
> > >  	struct cmdq *cmdq = dev;
> > > -	unsigned long irq_status, flags = 0L;
> > > +	unsigned long irq_status;
> > >  	int bit;
> > >  
> > >  	irq_status = readl(cmdq->base + CMDQ_CURR_IRQ_STATUS) & CMDQ_IRQ_MASK;
> > >  	if (!(irq_status ^ CMDQ_IRQ_MASK))
> > >  		return IRQ_NONE;
> > >  
> > > -	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK)) {
> > > -		struct cmdq_thread *thread = &cmdq->thread[bit];
> > > -
> > > -		spin_lock_irqsave(&thread->chan->lock, flags);
> > > -		cmdq_thread_irq_handler(cmdq, thread);
> > > -		spin_unlock_irqrestore(&thread->chan->lock, flags);
> > > -	}
> > > +	for_each_clear_bit(bit, &irq_status, fls(CMDQ_IRQ_MASK))
> > > +		cmdq_thread_irq_handler(cmdq, &cmdq->thread[bit]);
> > >  
> > >  	return IRQ_HANDLED;
> > >  }
> > > @@ -310,7 +172,7 @@ static int cmdq_suspend(struct device *dev)
> > >  
> > >  	for (i = 0; i < cmdq->thread_nr; i++) {
> > >  		thread = &cmdq->thread[i];
> > > -		if (!list_empty(&thread->task_busy_list)) {
> > > +		if (thread->pkt) {
> > >  			task_running = true;
> > >  			break;
> > >  		}
> > > @@ -347,72 +209,21 @@ static int cmdq_mbox_send_data(struct mbox_chan *chan, void *data)
> > >  	struct cmdq_pkt *pkt = (struct cmdq_pkt *)data;
> > >  	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
> > >  	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
> > > -	struct cmdq_task *task;
> > > -	unsigned long curr_pa, end_pa;
> > >  
> > >  	/* Client should not flush new tasks if suspended. */
> > >  	WARN_ON(cmdq->suspended);
> > >  
> > > -	task = kzalloc(sizeof(*task), GFP_ATOMIC);
> > > -	if (!task)
> > > -		return -ENOMEM;
> > > +	thread->pkt = pkt;
> > >  
> > > -	task->cmdq = cmdq;
> > > -	INIT_LIST_HEAD(&task->list_entry);
> > > -	task->pa_base = pkt->pa_base;
> > > -	task->thread = thread;
> > > -	task->pkt = pkt;
> > > -
> > > -	if (list_empty(&thread->task_busy_list)) {
> > > -		WARN_ON(clk_enable(cmdq->clock) < 0);
> > > -		WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
> > > -
> > > -		writel(task->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> > > -		writel(task->pa_base + pkt->cmd_buf_size,
> > > -		       thread->base + CMDQ_THR_END_ADDR);
> > > -		writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
> > > -		writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
> > > -		writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> > > -	} else {
> > > -		WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> > > -		curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
> > > -		end_pa = readl(thread->base + CMDQ_THR_END_ADDR);
> > > -
> > > -		/*
> > > -		 * Atomic execution should remove the following wfe, i.e. only
> > > -		 * wait event at first task, and prevent to pause when running.
> > > -		 */
> > > -		if (thread->atomic_exec) {
> > > -			/* GCE is executing if command is not WFE */
> > > -			if (!cmdq_thread_is_in_wfe(thread)) {
> > > -				cmdq_thread_resume(thread);
> > > -				cmdq_thread_wait_end(thread, end_pa);
> > > -				WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> > > -				/* set to this task directly */
> > > -				writel(task->pa_base,
> > > -				       thread->base + CMDQ_THR_CURR_ADDR);
> > > -			} else {
> > > -				cmdq_task_insert_into_thread(task);
> > > -				cmdq_task_remove_wfe(task);
> > > -				smp_mb(); /* modify jump before enable thread */
> > > -			}
> > > -		} else {
> > > -			/* check boundary */
> > > -			if (curr_pa == end_pa - CMDQ_INST_SIZE ||
> > > -			    curr_pa == end_pa) {
> > > -				/* set to this task directly */
> > > -				writel(task->pa_base,
> > > -				       thread->base + CMDQ_THR_CURR_ADDR);
> > > -			} else {
> > > -				cmdq_task_insert_into_thread(task);
> > > -				smp_mb(); /* modify jump before enable thread */
> > > -			}
> > > -		}
> > > -		writel(task->pa_base + pkt->cmd_buf_size,
> > > -		       thread->base + CMDQ_THR_END_ADDR);
> > > -		cmdq_thread_resume(thread);
> > > -	}
> > > -	list_move_tail(&task->list_entry, &thread->task_busy_list);
> > > +	WARN_ON(clk_enable(cmdq->clock) < 0);
> > > +	WARN_ON(cmdq_thread_reset(cmdq, thread) < 0);
> > > +
> > > +	writel(thread->pkt->pa_base, thread->base + CMDQ_THR_CURR_ADDR);
> > > +	writel(thread->pkt->pa_base + pkt->cmd_buf_size,
> > > +	       thread->base + CMDQ_THR_END_ADDR);
> > > +	writel(thread->priority, thread->base + CMDQ_THR_PRIORITY);
> > > +	writel(CMDQ_THR_IRQ_EN, thread->base + CMDQ_THR_IRQ_ENABLE);
> > > +	writel(CMDQ_THR_ENABLED, thread->base + CMDQ_THR_ENABLE_TASK);
> > >  
> > >  	return 0;
> > >  }
> > > @@ -421,23 +232,18 @@ static void cmdq_mbox_abort_data(struct mbox_chan *chan)
> > >  {
> > >  	struct cmdq_thread *thread = (struct cmdq_thread *)chan->con_priv;
> > >  	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
> > > -	struct cmdq_task *task, *tmp;
> > >  	unsigned long flags;
> > >  	u32 enable;
> > >  
> > >  	spin_lock_irqsave(&thread->chan->lock, flags);
> > > -	if (list_empty(&thread->task_busy_list))
> > > +	if (!thread->pkt)
> > >  		goto out;
> > >  
> > >  	WARN_ON(cmdq_thread_suspend(cmdq, thread) < 0);
> > >  	if (!cmdq_thread_is_in_wfe(thread))
> > >  		goto wait;
> > >  
> > > -	list_for_each_entry_safe(task, tmp, &thread->task_busy_list,
> > > -				 list_entry) {
> > > -		list_del(&task->list_entry);
> > > -		kfree(task);
> > > -	}
> > > +	thread->pkt = NULL;
> > >  
> > >  	cmdq_thread_resume(thread);
> > >  	cmdq_thread_disable(cmdq, thread);
> > > @@ -483,7 +289,6 @@ static struct mbox_chan *cmdq_xlate(struct mbox_controller *mbox,
> > >  
> > >  	thread = (struct cmdq_thread *)mbox->chans[ind].con_priv;
> > >  	thread->priority = sp->args[1];
> > > -	thread->atomic_exec = (sp->args[2] != 0);
> > >  	thread->chan = &mbox->chans[ind];
> > >  
> > >  	return &mbox->chans[ind];
> > > @@ -539,8 +344,7 @@ static int cmdq_probe(struct platform_device *pdev)
> > >  	cmdq->mbox.ops = &cmdq_mbox_chan_ops;
> > >  	cmdq->mbox.of_xlate = cmdq_xlate;
> > >  
> > > -	/* make use of TXDONE_BY_ACK */
> > > -	cmdq->mbox.txdone_irq = false;
> > > +	cmdq->mbox.txdone_irq = true;
> > >  	cmdq->mbox.txdone_poll = false;
> > >  
> > >  	cmdq->thread = devm_kcalloc(dev, cmdq->thread_nr,
> > > @@ -551,7 +355,6 @@ static int cmdq_probe(struct platform_device *pdev)
> > >  	for (i = 0; i < cmdq->thread_nr; i++) {
> > >  		cmdq->thread[i].base = cmdq->base + CMDQ_THR_BASE +
> > >  				CMDQ_THR_SIZE * i;
> > > -		INIT_LIST_HEAD(&cmdq->thread[i].task_busy_list);
> > >  		cmdq->mbox.chans[i].con_priv = (void *)&cmdq->thread[i];
> > >  	}
> > >  
> > > --
> > > 2.18.1
> > > 
> > 
> > 
> 
> 



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2019-02-14 16:12 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-16  5:04 [PATCH 0/3] Remove self-implemented queue of Mediatek cmdq CK Hu
2019-01-16  5:04 ` [PATCH 1/3] mailbox: Add ability for clients to abort data in channel CK Hu
2019-01-16 16:22   ` Jassi Brar
2019-01-17  8:00     ` CK Hu
2019-01-17 15:22       ` Jassi Brar
2019-01-16  5:04 ` [PATCH 2/3] mailbox: mediatek: Implement abort_data function CK Hu
2019-01-16  5:04 ` [PATCH 3/3] mailbox: mediatek: Remove busylist CK Hu
     [not found]   ` <40d519083fe94640a22181388bcbbb09@MTKMBS31N1.mediatek.inc>
2019-02-12  2:18     ` Dennis-YC Hsieh
2019-02-14 16:01       ` CK Hu
2019-02-14 16:12         ` Dennis-YC Hsieh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).