* [PATCH V2 00/22] mmc: Add Command Queue support
@ 2017-03-13 12:36 Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly Adrian Hunter
                   ` (23 more replies)
  0 siblings, 24 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Hi

Here are the hardware command queue patches without the software command
queue patches or sdhci patches.

Changes since V1:

	"Share mmc request array between partitions" is dependent
	on changes in "Introduce queue semantics", so added that
	and block fixes:

	Added "Fix is_waiting_last_req set incorrectly"
	Added "Fix cmd error reset failure path"
	Added "Use local var for mqrq_cur"
	Added "Introduce queue semantics"

Changes since RFC:

	Re-based on next.
	Added comment about command queue priority.
	Added some acks and reviews.


Adrian Hunter (21):
      mmc: block: Fix is_waiting_last_req set incorrectly
      mmc: block: Fix cmd error reset failure path
      mmc: block: Use local var for mqrq_cur
      mmc: block: Introduce queue semantics
      mmc: queue: Share mmc request array between partitions
      mmc: mmc: Add functions to enable / disable the Command Queue
      mmc: mmc_test: Disable Command Queue while mmc_test is used
      mmc: block: Disable Command Queue while RPMB is used
      mmc: block: Change mmc_apply_rel_rw() to get block address from the request
      mmc: block: Factor out data preparation
      mmc: core: Factor out debug prints from mmc_start_request()
      mmc: core: Factor out mrq preparation from mmc_start_request()
      mmc: core: Add mmc_retune_hold_now()
      mmc: core: Add members to mmc_request and mmc_data for CQE's
      mmc: host: Add CQE interface
      mmc: core: Turn off CQE before sending commands
      mmc: core: Add support for handling CQE requests
      mmc: mmc: Enable Command Queuing
      mmc: mmc: Enable CQE's
      mmc: block: Prepare CQE data
      mmc: block: Add CQE support

Venkat Gopalakrishnan (1):
      mmc: cqhci: support for command queue enabled host

 Documentation/mmc/mmc-dev-attrs.txt |    1 +
 drivers/mmc/core/block.c            |  527 ++++++++++++----
 drivers/mmc/core/block.h            |    7 +
 drivers/mmc/core/bus.c              |    7 +
 drivers/mmc/core/core.c             |  203 ++++++-
 drivers/mmc/core/host.c             |    6 +
 drivers/mmc/core/host.h             |    1 +
 drivers/mmc/core/mmc.c              |   39 +-
 drivers/mmc/core/mmc_ops.c          |   28 +
 drivers/mmc/core/mmc_ops.h          |    2 +
 drivers/mmc/core/mmc_test.c         |   14 +
 drivers/mmc/core/queue.c            |  607 ++++++++++++++----
 drivers/mmc/core/queue.h            |   55 +-
 drivers/mmc/host/Kconfig            |   13 +
 drivers/mmc/host/Makefile           |    1 +
 drivers/mmc/host/cqhci.c            | 1148 +++++++++++++++++++++++++++++++++++
 drivers/mmc/host/cqhci.h            |  240 ++++++++
 include/linux/mmc/card.h            |    8 +
 include/linux/mmc/core.h            |   19 +-
 include/linux/mmc/host.h            |   24 +
 include/trace/events/mmc.h          |   17 +-
 21 files changed, 2694 insertions(+), 273 deletions(-)
 create mode 100644 drivers/mmc/host/cqhci.c
 create mode 100644 drivers/mmc/host/cqhci.h


Regards
Adrian


* [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-14 16:22   ` Ulf Hansson
  2017-03-13 12:36 ` [PATCH V2 02/22] mmc: block: Fix cmd error reset failure path Adrian Hunter
                   ` (22 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Commit 15520111500c ("mmc: core: Further fix thread wake-up") allowed a
queue to release the host with is_waiting_last_req set to true. A queue
waiting to claim the host will not reset the flag, which can result in
that queue getting stuck in a loop.

Fixes: 15520111500c ("mmc: core: Further fix thread wake-up")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.10+
---
 drivers/mmc/core/block.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 1621fa08e206..e59107ca512a 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1817,6 +1817,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		mmc_blk_issue_flush(mq, req);
 	} else {
 		mmc_blk_issue_rw_rq(mq, req);
+		card->host->context_info.is_waiting_last_req = false;
 	}
 
 out:
-- 
1.9.1



* [PATCH V2 02/22] mmc: block: Fix cmd error reset failure path
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-14 16:22   ` Ulf Hansson
  2017-03-13 12:36 ` [PATCH V2 03/22] mmc: block: Use local var for mqrq_cur Adrian Hunter
                   ` (21 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Commit 4e1f780032c5 ("mmc: block: break out mmc_blk_rw_cmd_abort()")
assumed the request had not completed, but in one case it had. Fix that
by aborting only when the request is still pending.

Fixes: 4e1f780032c5 ("mmc: block: break out mmc_blk_rw_cmd_abort()")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
---
 drivers/mmc/core/block.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index e59107ca512a..05afefcfb611 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1701,7 +1701,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 		case MMC_BLK_CMD_ERR:
 			req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 			if (mmc_blk_reset(md, card->host, type)) {
-				mmc_blk_rw_cmd_abort(card, old_req);
+				if (req_pending)
+					mmc_blk_rw_cmd_abort(card, old_req);
 				mmc_blk_rw_try_restart(mq, new_req);
 				return;
 			}
-- 
1.9.1



* [PATCH V2 03/22] mmc: block: Use local var for mqrq_cur
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 02/22] mmc: block: Fix cmd error reset failure path Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:37   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 04/22] mmc: block: Introduce queue semantics Adrian Hunter
                   ` (20 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
assigning it to a local variable.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 05afefcfb611..465c933b45cf 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1600,7 +1600,8 @@ static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
  * @mq: the queue with the card and host to restart
  * @req: a new request that want to be started after the current one
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
+				   struct mmc_queue_req *mqrq)
 {
 	if (!req)
 		return;
@@ -1614,8 +1615,8 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req)
 		return;
 	}
 	/* Else proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, mq->card, 0, mq);
-	mmc_start_areq(mq->card->host, &mq->mqrq_cur->areq, NULL);
+	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	mmc_start_areq(mq->card->host, &mqrq->areq, NULL);
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
@@ -1625,6 +1626,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	struct mmc_blk_request *brq;
 	int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
+	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
 	struct mmc_queue_req *mq_rq;
 	struct request *old_req;
 	struct mmc_async_req *new_areq;
@@ -1648,8 +1650,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				return;
 			}
 
-			mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-			new_areq = &mq->mqrq_cur->areq;
+			mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+			new_areq = &mqrq_cur->areq;
 		} else
 			new_areq = NULL;
 
@@ -1703,11 +1705,11 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			if (mmc_blk_reset(md, card->host, type)) {
 				if (req_pending)
 					mmc_blk_rw_cmd_abort(card, old_req);
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			if (!req_pending) {
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			break;
@@ -1720,7 +1722,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			if (!mmc_blk_reset(md, card->host, type))
 				break;
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		case MMC_BLK_DATA_ERR: {
 			int err;
@@ -1730,7 +1732,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				break;
 			if (err == -ENODEV) {
 				mmc_blk_rw_cmd_abort(card, old_req);
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			/* Fall through */
@@ -1751,19 +1753,19 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			req_pending = blk_end_request(old_req, -EIO,
 						      brq->data.blksz);
 			if (!req_pending) {
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			break;
 		case MMC_BLK_NOMEDIUM:
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		default:
 			pr_err("%s: Unhandled return value (%d)",
 					old_req->rq_disk->disk_name, status);
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		}
 
-- 
1.9.1



* [PATCH V2 04/22] mmc: block: Introduce queue semantics
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (2 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 03/22] mmc: block: Use local var for mqrq_cur Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:40   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 05/22] mmc: queue: Share mmc request array between partitions Adrian Hunter
                   ` (19 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Change from viewing the requests in progress as 'current' and 'previous'
to viewing them as a queue. The current request is allocated to the first
free slot. The presence of incomplete requests is determined from the
count (mq->qcnt) of entries in the queue. Non-read-write requests (i.e.
discards and flushes) are not added to the queue at all and require no
special handling. Also, no special handling is needed for the
MMC_BLK_NEW_REQUEST case.

As well as allowing an arbitrarily sized queue, this makes the queue
thread function significantly simpler.
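
For illustration, the slot accounting is a small bitmap pattern. Below is
a minimal userspace sketch of the idea, an analogue of the new
mmc_queue_req_find() / mmc_queue_req_free() helpers rather than the driver
code itself (__builtin_ctzl() stands in for the kernel's ffz()):

	#include <stdio.h>

	#define QDEPTH 2			/* like mq->qdepth */

	static unsigned long qslots;		/* one bit per busy slot */
	static int qcnt;			/* busy-slot count */

	static int slot_find(void)
	{
		int i = __builtin_ctzl(~qslots);	/* first zero bit */

		if (i >= QDEPTH)
			return -1;			/* queue is full */
		qslots |= 1UL << i;
		qcnt++;
		return i;
	}

	static void slot_free(int i)
	{
		qslots &= ~(1UL << i);
		qcnt--;
	}

	int main(void)
	{
		int a = slot_find();
		int b = slot_find();
		int c = slot_find();

		printf("%d %d %d qcnt=%d\n", a, b, c, qcnt);	/* 0 1 -1 qcnt=2 */
		slot_free(a);
		printf("%d\n", slot_find());			/* slot 0 reused */
		return 0;
	}

In the real code a failed find is unexpected (the WARN_ON path) and the
request is requeued.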

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 66 ++++++++++++++++++++++++++----------------
 drivers/mmc/core/queue.c | 75 ++++++++++++++++++++++++++++++------------------
 drivers/mmc/core/queue.h | 10 +++++--
 3 files changed, 95 insertions(+), 56 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 465c933b45cf..18bb639e9695 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -129,6 +129,13 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      struct mmc_blk_data *md);
 static int get_card_status(struct mmc_card *card, u32 *status, int retries);
 
+static void mmc_blk_requeue(struct request_queue *q, struct request *req)
+{
+	spin_lock_irq(q->queue_lock);
+	blk_requeue_request(q, req);
+	spin_unlock_irq(q->queue_lock);
+}
+
 static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
 {
 	struct mmc_blk_data *md;
@@ -1588,11 +1595,14 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
 	return req_pending;
 }
 
-static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
+static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
+				 struct request *req,
+				 struct mmc_queue_req *mqrq)
 {
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, -EIO, blk_rq_cur_bytes(req)));
+	mmc_queue_req_free(mq, mqrq);
 }
 
 /**
@@ -1612,6 +1622,7 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
 	if (mmc_card_removed(mq->card)) {
 		req->rq_flags |= RQF_QUIET;
 		blk_end_request_all(req, -EIO);
+		mmc_queue_req_free(mq, mqrq);
 		return;
 	}
 	/* Else proceed and try to restart the current async request */
@@ -1626,14 +1637,23 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	struct mmc_blk_request *brq;
 	int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
-	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+	struct mmc_queue_req *mqrq_cur = NULL;
 	struct mmc_queue_req *mq_rq;
 	struct request *old_req;
 	struct mmc_async_req *new_areq;
 	struct mmc_async_req *old_areq;
 	bool req_pending = true;
 
-	if (!new_req && !mq->mqrq_prev->req)
+	if (new_req) {
+		mqrq_cur = mmc_queue_req_find(mq, new_req);
+		if (!mqrq_cur) {
+			WARN_ON(1);
+			mmc_blk_requeue(mq->queue, new_req);
+			new_req = NULL;
+		}
+	}
+
+	if (!mq->qcnt)
 		return;
 
 	do {
@@ -1646,7 +1666,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				!IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
 				pr_err("%s: Transfer size is not 4KB sector size aligned\n",
 					new_req->rq_disk->disk_name);
-				mmc_blk_rw_cmd_abort(card, new_req);
+				mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
 				return;
 			}
 
@@ -1662,8 +1682,6 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			 * and there is nothing more to do until it is
 			 * complete.
 			 */
-			if (status == MMC_BLK_NEW_REQUEST)
-				mq->new_request = true;
 			return;
 		}
 
@@ -1696,7 +1714,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				pr_err("%s BUG rq_tot %d d_xfer %d\n",
 				       __func__, blk_rq_bytes(old_req),
 				       brq->data.bytes_xfered);
-				mmc_blk_rw_cmd_abort(card, old_req);
+				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 				return;
 			}
 			break;
@@ -1704,11 +1722,14 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 			if (mmc_blk_reset(md, card->host, type)) {
 				if (req_pending)
-					mmc_blk_rw_cmd_abort(card, old_req);
+					mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+				else
+					mmc_queue_req_free(mq, mq_rq);
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			if (!req_pending) {
+				mmc_queue_req_free(mq, mq_rq);
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
@@ -1721,7 +1742,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 		case MMC_BLK_ABORT:
 			if (!mmc_blk_reset(md, card->host, type))
 				break;
-			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		case MMC_BLK_DATA_ERR: {
@@ -1731,7 +1752,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			if (!err)
 				break;
 			if (err == -ENODEV) {
-				mmc_blk_rw_cmd_abort(card, old_req);
+				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
@@ -1753,18 +1774,19 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			req_pending = blk_end_request(old_req, -EIO,
 						      brq->data.blksz);
 			if (!req_pending) {
+				mmc_queue_req_free(mq, mq_rq);
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			break;
 		case MMC_BLK_NOMEDIUM:
-			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		default:
 			pr_err("%s: Unhandled return value (%d)",
 					old_req->rq_disk->disk_name, status);
-			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		}
@@ -1781,6 +1803,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			mq_rq->brq.retune_retry_done = retune_retry_done;
 		}
 	} while (req_pending);
+
+	mmc_queue_req_free(mq, mq_rq);
 }
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
@@ -1788,9 +1812,8 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	int ret;
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
-	bool req_is_special = mmc_req_is_special(req);
 
-	if (req && !mq->mqrq_prev->req)
+	if (req && !mq->qcnt)
 		/* claim host only for the first request */
 		mmc_get_card(card);
 
@@ -1802,20 +1825,19 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		goto out;
 	}
 
-	mq->new_request = false;
 	if (req && req_op(req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
-		if (card->host->areq)
+		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		mmc_blk_issue_discard_rq(mq, req);
 	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
 		/* complete ongoing async transfer before issuing secure erase*/
-		if (card->host->areq)
+		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		mmc_blk_issue_secdiscard_rq(mq, req);
 	} else if (req && req_op(req) == REQ_OP_FLUSH) {
 		/* complete ongoing async transfer before issuing flush */
-		if (card->host->areq)
+		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		mmc_blk_issue_flush(mq, req);
 	} else {
@@ -1824,13 +1846,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	}
 
 out:
-	if ((!req && !mq->new_request) || req_is_special)
-		/*
-		 * Release host when there are no more requests
-		 * and after special request(discard, flush) is done.
-		 * In case sepecial request, there is no reentry to
-		 * the 'mmc_blk_issue_rq' with 'mqrq_prev->req'.
-		 */
+	if (!mq->qcnt)
 		mmc_put_card(card);
 }
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 493eb10ce580..4a2045527b62 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -40,6 +40,35 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
+struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
+					 struct request *req)
+{
+	struct mmc_queue_req *mqrq;
+	int i = ffz(mq->qslots);
+
+	if (i >= mq->qdepth)
+		return NULL;
+
+	mqrq = &mq->mqrq[i];
+	WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
+		test_bit(mqrq->task_id, &mq->qslots));
+	mqrq->req = req;
+	mq->qcnt += 1;
+	__set_bit(mqrq->task_id, &mq->qslots);
+
+	return mqrq;
+}
+
+void mmc_queue_req_free(struct mmc_queue *mq,
+			struct mmc_queue_req *mqrq)
+{
+	WARN_ON(!mqrq->req || mq->qcnt < 1 ||
+		!test_bit(mqrq->task_id, &mq->qslots));
+	mqrq->req = NULL;
+	mq->qcnt -= 1;
+	__clear_bit(mqrq->task_id, &mq->qslots);
+}
+
 static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
@@ -50,7 +79,7 @@ static int mmc_queue_thread(void *d)
 
 	down(&mq->thread_sem);
 	do {
-		struct request *req = NULL;
+		struct request *req;
 
 		spin_lock_irq(q->queue_lock);
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -63,38 +92,17 @@ static int mmc_queue_thread(void *d)
 			 * Dispatch queue is empty so set flags for
 			 * mmc_request_fn() to wake us up.
 			 */
-			if (mq->mqrq_prev->req)
+			if (mq->qcnt)
 				cntx->is_waiting_last_req = true;
 			else
 				mq->asleep = true;
 		}
-		mq->mqrq_cur->req = req;
 		spin_unlock_irq(q->queue_lock);
 
-		if (req || mq->mqrq_prev->req) {
-			bool req_is_special = mmc_req_is_special(req);
-
+		if (req || mq->qcnt) {
 			set_current_state(TASK_RUNNING);
 			mmc_blk_issue_rq(mq, req);
 			cond_resched();
-			if (mq->new_request) {
-				mq->new_request = false;
-				continue; /* fetch again */
-			}
-
-			/*
-			 * Current request becomes previous request
-			 * and vice versa.
-			 * In case of special requests, current request
-			 * has been finished. Do not assign it to previous
-			 * request.
-			 */
-			if (req_is_special)
-				mq->mqrq_cur->req = NULL;
-
-			mq->mqrq_prev->brq.mrq.data = NULL;
-			mq->mqrq_prev->req = NULL;
-			swap(mq->mqrq_prev, mq->mqrq_cur);
 		} else {
 			if (kthread_should_stop()) {
 				set_current_state(TASK_RUNNING);
@@ -177,6 +185,20 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
+static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
+{
+	struct mmc_queue_req *mqrq;
+	int i;
+
+	mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
+	if (mqrq) {
+		for (i = 0; i < qdepth; i++)
+			mqrq[i].task_id = i;
+	}
+
+	return mqrq;
+}
+
 #ifdef CONFIG_MMC_BLOCK_BOUNCE
 static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
 					unsigned int bouncesz)
@@ -279,12 +301,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 		return -ENOMEM;
 
 	mq->qdepth = 2;
-	mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
-			   GFP_KERNEL);
+	mq->mqrq = mmc_queue_alloc_mqrqs(mq->qdepth);
 	if (!mq->mqrq)
 		goto blk_cleanup;
-	mq->mqrq_cur = &mq->mqrq[0];
-	mq->mqrq_prev = &mq->mqrq[1];
 	mq->queue->queuedata = mq;
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index e298f100101b..967808df45b8 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -34,21 +34,21 @@ struct mmc_queue_req {
 	struct scatterlist	*bounce_sg;
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
+	int			task_id;
 };
 
 struct mmc_queue {
 	struct mmc_card		*card;
 	struct task_struct	*thread;
 	struct semaphore	thread_sem;
-	bool			new_request;
 	bool			suspended;
 	bool			asleep;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
 	struct mmc_queue_req	*mqrq;
-	struct mmc_queue_req	*mqrq_cur;
-	struct mmc_queue_req	*mqrq_prev;
 	int			qdepth;
+	int			qcnt;
+	unsigned long		qslots;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
@@ -64,4 +64,8 @@ extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
 
 extern int mmc_access_rpmb(struct mmc_queue *);
 
+extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
+						struct request *);
+extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
+
 #endif
-- 
1.9.1



* [PATCH V2 05/22] mmc: queue: Share mmc request array between partitions
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (3 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 04/22] mmc: block: Introduce queue semantics Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:41   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
                   ` (18 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

eMMC can have multiple internal partitions that are represented as separate
disks / queues. However, switching between partitions is only done when the
queue is empty. Consequently, the array of mmc requests that are queued can
be shared between partitions, saving memory.

Keep a pointer to the mmc request queue on the card, and use that instead
of allocating a new one for each partition.
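
A condensed userspace model of the ownership change (hypothetical helper
names; the real structures and error handling are in the patch below):

	#include <stdlib.h>

	struct mmc_queue_req { int task_id; };
	struct mmc_card  { struct mmc_queue_req *mqrq; int qdepth; };
	struct mmc_queue { struct mmc_queue_req *mqrq; int qdepth; };

	/* Allocated once per card, at probe time */
	static int shared_queue_alloc(struct mmc_card *card, int qdepth)
	{
		int i;

		card->mqrq = calloc(qdepth, sizeof(*card->mqrq));
		if (!card->mqrq)
			return -1;
		for (i = 0; i < qdepth; i++)
			card->mqrq[i].task_id = i;
		card->qdepth = qdepth;
		return 0;
	}

	/*
	 * Each partition's queue borrows the card's array instead of
	 * allocating its own. This is safe because partitions are only
	 * switched when the queue is empty.
	 */
	static void partition_queue_init(struct mmc_queue *mq,
					 struct mmc_card *card)
	{
		mq->mqrq = card->mqrq;
		mq->qdepth = card->qdepth;
	}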

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c |  11 ++-
 drivers/mmc/core/queue.c | 234 ++++++++++++++++++++++++++++-------------------
 drivers/mmc/core/queue.h |   2 +
 include/linux/mmc/card.h |   5 +
 4 files changed, 156 insertions(+), 96 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 18bb639e9695..b9890dcfb913 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2126,6 +2126,7 @@ static int mmc_blk_probe(struct mmc_card *card)
 {
 	struct mmc_blk_data *md, *part_md;
 	char cap_str[10];
+	int ret;
 
 	/*
 	 * Check that the card supports the command class(es) we need.
@@ -2135,9 +2136,15 @@ static int mmc_blk_probe(struct mmc_card *card)
 
 	mmc_fixup_device(card, mmc_blk_fixups);
 
+	ret = mmc_queue_alloc_shared_queue(card);
+	if (ret)
+		return ret;
+
 	md = mmc_blk_alloc(card);
-	if (IS_ERR(md))
+	if (IS_ERR(md)) {
+		mmc_queue_free_shared_queue(card);
 		return PTR_ERR(md);
+	}
 
 	string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
@@ -2175,6 +2182,7 @@ static int mmc_blk_probe(struct mmc_card *card)
  out:
 	mmc_blk_remove_parts(card, md);
 	mmc_blk_remove_req(md);
+	mmc_queue_free_shared_queue(card);
 	return 0;
 }
 
@@ -2192,6 +2200,7 @@ static void mmc_blk_remove(struct mmc_card *card)
 	pm_runtime_put_noidle(&card->dev);
 	mmc_blk_remove_req(md);
 	dev_set_drvdata(&card->dev, NULL);
+	mmc_queue_free_shared_queue(card);
 }
 
 static int _mmc_blk_suspend(struct mmc_card *card)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4a2045527b62..3423b7acf744 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -149,17 +149,13 @@ static void mmc_request_fn(struct request_queue *q)
 		wake_up_process(mq->thread);
 }
 
-static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
+static struct scatterlist *mmc_alloc_sg(int sg_len)
 {
 	struct scatterlist *sg;
 
 	sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL);
-	if (!sg)
-		*err = -ENOMEM;
-	else {
-		*err = 0;
+	if (sg)
 		sg_init_table(sg, sg_len);
-	}
 
 	return sg;
 }
@@ -185,6 +181,32 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
+static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
+{
+	kfree(mqrq->bounce_sg);
+	mqrq->bounce_sg = NULL;
+
+	kfree(mqrq->sg);
+	mqrq->sg = NULL;
+
+	kfree(mqrq->bounce_buf);
+	mqrq->bounce_buf = NULL;
+}
+
+static void mmc_queue_reqs_free_bufs(struct mmc_queue_req *mqrq, int qdepth)
+{
+	int i;
+
+	for (i = 0; i < qdepth; i++)
+		mmc_queue_req_free_bufs(&mqrq[i]);
+}
+
+static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth)
+{
+	mmc_queue_reqs_free_bufs(mqrq, qdepth);
+	kfree(mqrq);
+}
+
 static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
 {
 	struct mmc_queue_req *mqrq;
@@ -200,79 +222,137 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
 }
 
 #ifdef CONFIG_MMC_BLOCK_BOUNCE
-static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
-					unsigned int bouncesz)
+static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth,
+				       unsigned int bouncesz)
 {
 	int i;
 
-	for (i = 0; i < mq->qdepth; i++) {
-		mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
-		if (!mq->mqrq[i].bounce_buf)
-			goto out_err;
-	}
+	for (i = 0; i < qdepth; i++) {
+		mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+		if (!mqrq[i].bounce_buf)
+			return -ENOMEM;
 
-	return true;
+		mqrq[i].sg = mmc_alloc_sg(1);
+		if (!mqrq[i].sg)
+			return -ENOMEM;
 
-out_err:
-	while (--i >= 0) {
-		kfree(mq->mqrq[i].bounce_buf);
-		mq->mqrq[i].bounce_buf = NULL;
+		mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512);
+		if (!mqrq[i].bounce_sg)
+			return -ENOMEM;
 	}
-	pr_warn("%s: unable to allocate bounce buffers\n",
-		mmc_card_name(mq->card));
-	return false;
+
+	return 0;
 }
 
-static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
-				      unsigned int bouncesz)
+static bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, int qdepth,
+				   unsigned int bouncesz)
 {
-	int i, ret;
+	int ret;
 
-	for (i = 0; i < mq->qdepth; i++) {
-		mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
-		if (ret)
-			return ret;
+	ret = mmc_queue_alloc_bounce_bufs(mqrq, qdepth, bouncesz);
+	if (ret)
+		mmc_queue_reqs_free_bufs(mqrq, qdepth);
 
-		mq->mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
-		if (ret)
-			return ret;
-	}
+	return !ret;
+}
+
+static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
+{
+	unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;
+
+	if (host->max_segs != 1)
+		return 0;
+
+	if (bouncesz > host->max_req_size)
+		bouncesz = host->max_req_size;
+	if (bouncesz > host->max_seg_size)
+		bouncesz = host->max_seg_size;
+	if (bouncesz > host->max_blk_count * 512)
+		bouncesz = host->max_blk_count * 512;
+
+	if (bouncesz <= 512)
+		return 0;
+
+	return bouncesz;
+}
+#else
+static inline bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq,
+					  int qdepth, unsigned int bouncesz)
+{
+	return false;
+}
 
+static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
+{
 	return 0;
 }
 #endif
 
-static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
+static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth,
+			       int max_segs)
 {
-	int i, ret;
+	int i;
 
-	for (i = 0; i < mq->qdepth; i++) {
-		mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
-		if (ret)
-			return ret;
+	for (i = 0; i < qdepth; i++) {
+		mqrq[i].sg = mmc_alloc_sg(max_segs);
+		if (!mqrq[i].sg)
+			return -ENOMEM;
 	}
 
 	return 0;
 }
 
-static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
+void mmc_queue_free_shared_queue(struct mmc_card *card)
 {
-	kfree(mqrq->bounce_sg);
-	mqrq->bounce_sg = NULL;
+	if (card->mqrq) {
+		mmc_queue_free_mqrqs(card->mqrq, card->qdepth);
+		card->mqrq = NULL;
+	}
+}
 
-	kfree(mqrq->sg);
-	mqrq->sg = NULL;
+static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth)
+{
+	struct mmc_host *host = card->host;
+	struct mmc_queue_req *mqrq;
+	unsigned int bouncesz;
+	int ret = 0;
 
-	kfree(mqrq->bounce_buf);
-	mqrq->bounce_buf = NULL;
+	if (card->mqrq)
+		return -EINVAL;
+
+	mqrq = mmc_queue_alloc_mqrqs(qdepth);
+	if (!mqrq)
+		return -ENOMEM;
+
+	card->mqrq = mqrq;
+	card->qdepth = qdepth;
+
+	bouncesz = mmc_queue_calc_bouncesz(host);
+
+	if (bouncesz && !mmc_queue_alloc_bounce(mqrq, qdepth, bouncesz)) {
+		bouncesz = 0;
+		pr_warn("%s: unable to allocate bounce buffers\n",
+			mmc_card_name(card));
+	}
+
+	card->bouncesz = bouncesz;
+
+	if (!bouncesz) {
+		ret = mmc_queue_alloc_sgs(mqrq, qdepth, host->max_segs);
+		if (ret)
+			goto out_err;
+	}
+
+	return ret;
+
+out_err:
+	mmc_queue_free_shared_queue(card);
+	return ret;
 }
 
-static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
+int mmc_queue_alloc_shared_queue(struct mmc_card *card)
 {
-	int i;
-
-	for (i = 0; i < mq->qdepth; i++)
-		mmc_queue_req_free_bufs(&mq->mqrq[i]);
+	return __mmc_queue_alloc_shared_queue(card, 2);
 }
 
 /**
@@ -289,7 +369,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 {
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
-	bool bounce = false;
 	int ret = -ENOMEM;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
@@ -300,10 +379,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (!mq->queue)
 		return -ENOMEM;
 
-	mq->qdepth = 2;
-	mq->mqrq = mmc_queue_alloc_mqrqs(mq->qdepth);
-	if (!mq->mqrq)
-		goto blk_cleanup;
+	mq->mqrq = card->mqrq;
+	mq->qdepth = card->qdepth;
 	mq->queue->queuedata = mq;
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
@@ -312,44 +389,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
 
-#ifdef CONFIG_MMC_BLOCK_BOUNCE
-	if (host->max_segs == 1) {
-		unsigned int bouncesz;
-
-		bouncesz = MMC_QUEUE_BOUNCESZ;
-
-		if (bouncesz > host->max_req_size)
-			bouncesz = host->max_req_size;
-		if (bouncesz > host->max_seg_size)
-			bouncesz = host->max_seg_size;
-		if (bouncesz > (host->max_blk_count * 512))
-			bouncesz = host->max_blk_count * 512;
-
-		if (bouncesz > 512 &&
-		    mmc_queue_alloc_bounce_bufs(mq, bouncesz)) {
-			blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
-			blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
-			blk_queue_max_segments(mq->queue, bouncesz / 512);
-			blk_queue_max_segment_size(mq->queue, bouncesz);
-
-			ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
-			if (ret)
-				goto cleanup_queue;
-			bounce = true;
-		}
-	}
-#endif
-
-	if (!bounce) {
+	if (card->bouncesz) {
+		blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
+		blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
+		blk_queue_max_segments(mq->queue, card->bouncesz / 512);
+		blk_queue_max_segment_size(mq->queue, card->bouncesz);
+	} else {
 		blk_queue_bounce_limit(mq->queue, limit);
 		blk_queue_max_hw_sectors(mq->queue,
 			min(host->max_blk_count, host->max_req_size / 512));
 		blk_queue_max_segments(mq->queue, host->max_segs);
 		blk_queue_max_segment_size(mq->queue, host->max_seg_size);
-
-		ret = mmc_queue_alloc_sgs(mq, host->max_segs);
-		if (ret)
-			goto cleanup_queue;
 	}
 
 	sema_init(&mq->thread_sem, 1);
@@ -364,11 +414,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 
 	return 0;
 
- cleanup_queue:
-	mmc_queue_reqs_free_bufs(mq);
-	kfree(mq->mqrq);
+cleanup_queue:
 	mq->mqrq = NULL;
-blk_cleanup:
 	blk_cleanup_queue(mq->queue);
 	return ret;
 }
@@ -390,10 +437,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	blk_start_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
-	mmc_queue_reqs_free_bufs(mq);
-	kfree(mq->mqrq);
 	mq->mqrq = NULL;
-
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 967808df45b8..871796c3f406 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -51,6 +51,8 @@ struct mmc_queue {
 	unsigned long		qslots;
 };
 
+extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
+extern void mmc_queue_free_shared_queue(struct mmc_card *card);
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
 			  const char *);
 extern void mmc_cleanup_queue(struct mmc_queue *);
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 77e61e0a216a..119ef8f0155c 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -208,6 +208,7 @@ struct sdio_cis {
 struct mmc_host;
 struct sdio_func;
 struct sdio_func_tuple;
+struct mmc_queue_req;
 
 #define SDIO_MAX_FUNCS		7
 
@@ -300,6 +301,10 @@ struct mmc_card {
 	struct dentry		*debugfs_root;
 	struct mmc_part	part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
 	unsigned int    nr_parts;
+
+	struct mmc_queue_req	*mqrq;		/* Shared queue structure */
+	unsigned int		bouncesz;	/* Bounce buffer size */
+	int			qdepth;		/* Shared queue depth */
 };
 
 static inline bool mmc_large_sector(struct mmc_card *card)
-- 
1.9.1



* [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (4 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 05/22] mmc: queue: Share mmc request array between partitions Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:39   ` Linus Walleij
  2017-04-10 11:01   ` Ulf Hansson
  2017-03-13 12:36 ` [PATCH V2 07/22] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
                   ` (17 subsequent siblings)
  23 siblings, 2 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Add helper functions to enable or disable the Command Queue.
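
A typical caller claims the host around the mode switch. This usage sketch
(hypothetical function name) is patterned on how mmc_test uses the helpers
later in this series:

	/* Example only -- compare mmc_test_probe() in patch 07/22 */
	static int example_cmdq_off(struct mmc_card *card)
	{
		int ret = 0;

		mmc_claim_host(card->host);
		if (card->ext_csd.cmdq_en)
			ret = mmc_cmdq_disable(card);
		mmc_release_host(card->host);

		return ret;
	}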

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 Documentation/mmc/mmc-dev-attrs.txt |  1 +
 drivers/mmc/core/mmc.c              |  2 ++
 drivers/mmc/core/mmc_ops.c          | 28 ++++++++++++++++++++++++++++
 drivers/mmc/core/mmc_ops.h          |  2 ++
 include/linux/mmc/card.h            |  1 +
 5 files changed, 34 insertions(+)

diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
index 404a0e9e92b0..dcd1252877fb 100644
--- a/Documentation/mmc/mmc-dev-attrs.txt
+++ b/Documentation/mmc/mmc-dev-attrs.txt
@@ -30,6 +30,7 @@ All attributes are read-only.
 	rel_sectors		Reliable write sector count
 	ocr 			Operation Conditions Register
 	dsr			Driver Stage Register
+	cmdq_en			Command Queue enabled: 1 => enabled, 0 => not enabled
 
 Note on Erase Size and Preferred Erase Size:
 
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 7fd722868875..5727a0842a59 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -790,6 +790,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
 MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
 MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
 MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
+MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
 
 static ssize_t mmc_fwrev_show(struct device *dev,
 			      struct device_attribute *attr,
@@ -845,6 +846,7 @@ static ssize_t mmc_dsr_show(struct device *dev,
 	&dev_attr_rel_sectors.attr,
 	&dev_attr_ocr.attr,
 	&dev_attr_dsr.attr,
+	&dev_attr_cmdq_en.attr,
 	NULL,
 };
 ATTRIBUTE_GROUPS(mmc_std);
diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index fe80f26d6971..24c58d24c19a 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -838,3 +838,31 @@ int mmc_can_ext_csd(struct mmc_card *card)
 {
 	return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
 }
+
+static int mmc_cmdq_switch(struct mmc_card *card, bool enable)
+{
+	u8 val = enable ? EXT_CSD_CMDQ_MODE_ENABLED : 0;
+	int err;
+
+	if (!card->ext_csd.cmdq_support)
+		return -EOPNOTSUPP;
+
+	err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
+			 val, card->ext_csd.generic_cmd6_time);
+	if (!err)
+		card->ext_csd.cmdq_en = enable;
+
+	return err;
+}
+
+int mmc_cmdq_enable(struct mmc_card *card)
+{
+	return mmc_cmdq_switch(card, true);
+}
+EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
+
+int mmc_cmdq_disable(struct mmc_card *card)
+{
+	return mmc_cmdq_switch(card, false);
+}
+EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
diff --git a/drivers/mmc/core/mmc_ops.h b/drivers/mmc/core/mmc_ops.h
index 74beea8a9c7e..978bd2e60f8a 100644
--- a/drivers/mmc/core/mmc_ops.h
+++ b/drivers/mmc/core/mmc_ops.h
@@ -46,6 +46,8 @@ int mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 void mmc_start_bkops(struct mmc_card *card, bool from_exception);
 int mmc_can_reset(struct mmc_card *card);
 int mmc_flush_cache(struct mmc_card *card);
+int mmc_cmdq_enable(struct mmc_card *card);
+int mmc_cmdq_disable(struct mmc_card *card);
 
 #endif
 
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 119ef8f0155c..94637796b99c 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -89,6 +89,7 @@ struct mmc_ext_csd {
 	unsigned int		boot_ro_lock;		/* ro lock support */
 	bool			boot_ro_lockable;
 	bool			ffu_capable;	/* Firmware upgrade support */
+	bool			cmdq_en;	/* Command Queue enabled */
 	bool			cmdq_support;	/* Command Queue supported */
 	unsigned int		cmdq_depth;	/* Command Queue depth */
 #define MMC_FIRMWARE_LEN 8
-- 
1.9.1



* [PATCH V2 07/22] mmc: mmc_test: Disable Command Queue while mmc_test is used
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (5 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:43   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 08/22] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
                   ` (16 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Normal read and write commands may not be used while the command queue is
enabled. Disable the Command Queue when mmc_test is probed and re-enable it
when it is removed.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
---
 drivers/mmc/core/mmc.c      |  7 +++++++
 drivers/mmc/core/mmc_test.c | 14 ++++++++++++++
 include/linux/mmc/card.h    |  2 ++
 3 files changed, 23 insertions(+)

diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 5727a0842a59..d1f0c4b247ac 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1790,6 +1790,13 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	}
 
 	/*
+	 * In some cases (e.g. RPMB or mmc_test), the Command Queue must be
+	 * disabled for a time, so a flag is needed to indicate to re-enable the
+	 * Command Queue.
+	 */
+	card->reenable_cmdq = card->ext_csd.cmdq_en;
+
+	/*
 	 * The mandatory minimum values are defined for packed command.
 	 * read: 5, write: 3
 	 */
diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
index f99ac3123fd2..fd1b4b8510b9 100644
--- a/drivers/mmc/core/mmc_test.c
+++ b/drivers/mmc/core/mmc_test.c
@@ -26,6 +26,7 @@
 #include "card.h"
 #include "host.h"
 #include "bus.h"
+#include "mmc_ops.h"
 
 #define RESULT_OK		0
 #define RESULT_FAIL		1
@@ -3264,6 +3265,14 @@ static int mmc_test_probe(struct mmc_card *card)
 	if (ret)
 		return ret;
 
+	if (card->ext_csd.cmdq_en) {
+		mmc_claim_host(card->host);
+		ret = mmc_cmdq_disable(card);
+		mmc_release_host(card->host);
+		if (ret)
+			return ret;
+	}
+
 	dev_info(&card->dev, "Card claimed for testing.\n");
 
 	return 0;
@@ -3271,6 +3280,11 @@ static int mmc_test_probe(struct mmc_card *card)
 
 static void mmc_test_remove(struct mmc_card *card)
 {
+	if (card->reenable_cmdq) {
+		mmc_claim_host(card->host);
+		mmc_cmdq_enable(card);
+		mmc_release_host(card->host);
+	}
 	mmc_test_free_result(card);
 	mmc_test_free_dbgfs_file(card);
 }
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 94637796b99c..85b5f2bc8bb9 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -269,6 +269,8 @@ struct mmc_card {
 #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
 #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
 
+	bool			reenable_cmdq;	/* Re-enable Command Queue */
+
 	unsigned int		erase_size;	/* erase size in sectors */
  	unsigned int		erase_shift;	/* if erase unit is power 2 */
  	unsigned int		pref_erase;	/* in sectors */
-- 
1.9.1



* [PATCH V2 08/22] mmc: block: Disable Command Queue while RPMB is used
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (6 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 07/22] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:44   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 09/22] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
                   ` (15 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

RPMB does not allow Command Queue commands. Disable the Command Queue
when switching to the RPMB partition and re-enable it when switching
away.

Note that the driver only switches partitions when the queue is empty.
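
Condensed, the bracketing in mmc_blk_part_switch() becomes (illustration
only; the full unwind paths are in the patch below):

	/* Disables cmdq (and pauses re-tune) if entering RPMB */
	ret = mmc_blk_part_switch_pre(card, md->part_type);
	if (ret)
		return ret;

	ret = mmc_switch(card, ...);		/* EXT_CSD_PART_CONFIG */
	if (ret) {
		/* Failed: undo the pre step for the target partition */
		mmc_blk_part_switch_post(card, md->part_type);
		return ret;
	}

	/* Succeeded: run post for the partition we just left */
	ret = mmc_blk_part_switch_post(card, main_md->part_curr);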

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
---
 drivers/mmc/core/block.c | 46 ++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index b9890dcfb913..849b55654163 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -728,10 +728,41 @@ static int mmc_blk_compat_ioctl(struct block_device *bdev, fmode_t mode,
 #endif
 };
 
+static int mmc_blk_part_switch_pre(struct mmc_card *card,
+				   unsigned int part_type)
+{
+	int ret = 0;
+
+	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+		if (card->ext_csd.cmdq_en) {
+			ret = mmc_cmdq_disable(card);
+			if (ret)
+				return ret;
+		}
+		mmc_retune_pause(card->host);
+	}
+
+	return ret;
+}
+
+static int mmc_blk_part_switch_post(struct mmc_card *card,
+				    unsigned int part_type)
+{
+	int ret = 0;
+
+	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+		mmc_retune_unpause(card->host);
+		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+			ret = mmc_cmdq_enable(card);
+	}
+
+	return ret;
+}
+
 static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      struct mmc_blk_data *md)
 {
-	int ret;
+	int ret = 0;
 	struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
 
 	if (main_md->part_curr == md->part_type)
@@ -740,8 +771,9 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 	if (mmc_card_mmc(card)) {
 		u8 part_config = card->ext_csd.part_config;
 
-		if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
-			mmc_retune_pause(card->host);
+		ret = mmc_blk_part_switch_pre(card, md->part_type);
+		if (ret)
+			return ret;
 
 		part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
 		part_config |= md->part_type;
@@ -750,19 +782,17 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				 EXT_CSD_PART_CONFIG, part_config,
 				 card->ext_csd.part_time);
 		if (ret) {
-			if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
-				mmc_retune_unpause(card->host);
+			mmc_blk_part_switch_post(card, md->part_type);
 			return ret;
 		}
 
 		card->ext_csd.part_config = part_config;
 
-		if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
-			mmc_retune_unpause(card->host);
+		ret = mmc_blk_part_switch_post(card, main_md->part_curr);
 	}
 
 	main_md->part_curr = md->part_type;
-	return 0;
+	return ret;
 }
 
 static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
-- 
1.9.1



* [PATCH V2 09/22] mmc: block: Change mmc_apply_rel_rw() to get block address from the request
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (7 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 08/22] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-10 13:49   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 10/22] mmc: block: Factor out data preparation Adrian Hunter
                   ` (14 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

mmc_apply_rel_rw() will also be used by Software Command Queuing. In that
case the command argument is not the block address, so change
mmc_apply_rel_rw() to get the block address from the request instead.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 849b55654163..2750c42926d7 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1309,7 +1309,7 @@ static inline void mmc_apply_rel_rw(struct mmc_blk_request *brq,
 {
 	if (!(card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN)) {
 		/* Legacy mode imposes restrictions on transfers. */
-		if (!IS_ALIGNED(brq->cmd.arg, card->ext_csd.rel_sectors))
+		if (!IS_ALIGNED(blk_rq_pos(req), card->ext_csd.rel_sectors))
 			brq->data.blocks = 1;
 
 		if (brq->data.blocks > card->ext_csd.rel_sectors)
-- 
1.9.1



* [PATCH V2 10/22] mmc: block: Factor out data preparation
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (8 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 09/22] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-10 13:52   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 11/22] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
                   ` (13 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Factor out data preparation into a separate function, mmc_blk_data_prep(),
which can be re-used for command queuing.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 151 +++++++++++++++++++++++++----------------------
 1 file changed, 82 insertions(+), 69 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 2750c42926d7..15a6705a81fe 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1433,36 +1433,39 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card,
 	return MMC_BLK_SUCCESS;
 }
 
-static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-			       struct mmc_card *card,
-			       int disable_multi,
-			       struct mmc_queue *mq)
+static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+			      int disable_multi, bool *do_rel_wr,
+			      bool *do_data_tag)
 {
-	u32 readcmd, writecmd;
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = md->queue.card;
 	struct mmc_blk_request *brq = &mqrq->brq;
 	struct request *req = mqrq->req;
-	struct mmc_blk_data *md = mq->blkdata;
-	bool do_data_tag;
 
 	/*
 	 * Reliable writes are used to implement Forced Unit Access and
 	 * are supported only on MMCs.
 	 */
-	bool do_rel_wr = (req->cmd_flags & REQ_FUA) &&
-		(rq_data_dir(req) == WRITE) &&
-		(md->flags & MMC_BLK_REL_WR);
+	*do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+		     rq_data_dir(req) == WRITE &&
+		     (md->flags & MMC_BLK_REL_WR);
 
 	memset(brq, 0, sizeof(struct mmc_blk_request));
-	brq->mrq.cmd = &brq->cmd;
+
 	brq->mrq.data = &brq->data;
 
-	brq->cmd.arg = blk_rq_pos(req);
-	if (!mmc_card_blockaddr(card))
-		brq->cmd.arg <<= 9;
-	brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-	brq->data.blksz = 512;
 	brq->stop.opcode = MMC_STOP_TRANSMISSION;
 	brq->stop.arg = 0;
+
+	if (rq_data_dir(req) == READ) {
+		brq->data.flags = MMC_DATA_READ;
+		brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC;
+	} else {
+		brq->data.flags = MMC_DATA_WRITE;
+		brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+	}
+
+	brq->data.blksz = 512;
 	brq->data.blocks = blk_rq_sectors(req);
 
 	/*
@@ -1493,6 +1496,68 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 						brq->data.blocks);
 	}
 
+	if (*do_rel_wr)
+		mmc_apply_rel_rw(brq, card, req);
+
+	/*
+	 * Data tag is used only during writing meta data to speed
+	 * up write and any subsequent read of this meta data
+	 */
+	*do_data_tag = card->ext_csd.data_tag_unit_size &&
+		       (req->cmd_flags & REQ_META) &&
+		       (rq_data_dir(req) == WRITE) &&
+		       ((brq->data.blocks * brq->data.blksz) >=
+			card->ext_csd.data_tag_unit_size);
+
+	mmc_set_data_timeout(&brq->data, card);
+
+	brq->data.sg = mqrq->sg;
+	brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
+
+	/*
+	 * Adjust the sg list so it is the same size as the
+	 * request.
+	 */
+	if (brq->data.blocks != blk_rq_sectors(req)) {
+		int i, data_size = brq->data.blocks << 9;
+		struct scatterlist *sg;
+
+		for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
+			data_size -= sg->length;
+			if (data_size <= 0) {
+				sg->length += data_size;
+				i++;
+				break;
+			}
+		}
+		brq->data.sg_len = i;
+	}
+
+	mqrq->areq.mrq = &brq->mrq;
+
+	mmc_queue_bounce_pre(mqrq);
+}
+
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+			       struct mmc_card *card,
+			       int disable_multi,
+			       struct mmc_queue *mq)
+{
+	u32 readcmd, writecmd;
+	struct mmc_blk_request *brq = &mqrq->brq;
+	struct request *req = mqrq->req;
+	struct mmc_blk_data *md = mq->blkdata;
+	bool do_rel_wr, do_data_tag;
+
+	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
+
+	brq->mrq.cmd = &brq->cmd;
+
+	brq->cmd.arg = blk_rq_pos(req);
+	if (!mmc_card_blockaddr(card))
+		brq->cmd.arg <<= 9;
+	brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+
 	if (brq->data.blocks > 1 || do_rel_wr) {
 		/* SPI multiblock writes terminate using a special
 		 * token, not a STOP_TRANSMISSION request.
@@ -1507,32 +1572,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 		readcmd = MMC_READ_SINGLE_BLOCK;
 		writecmd = MMC_WRITE_BLOCK;
 	}
-	if (rq_data_dir(req) == READ) {
-		brq->cmd.opcode = readcmd;
-		brq->data.flags = MMC_DATA_READ;
-		if (brq->mrq.stop)
-			brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 |
-					MMC_CMD_AC;
-	} else {
-		brq->cmd.opcode = writecmd;
-		brq->data.flags = MMC_DATA_WRITE;
-		if (brq->mrq.stop)
-			brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B |
-					MMC_CMD_AC;
-	}
-
-	if (do_rel_wr)
-		mmc_apply_rel_rw(brq, card, req);
-
-	/*
-	 * Data tag is used only during writing meta data to speed
-	 * up write and any subsequent read of this meta data
-	 */
-	do_data_tag = (card->ext_csd.data_tag_unit_size) &&
-		(req->cmd_flags & REQ_META) &&
-		(rq_data_dir(req) == WRITE) &&
-		((brq->data.blocks * brq->data.blksz) >=
-		 card->ext_csd.data_tag_unit_size);
+	brq->cmd.opcode = rq_data_dir(req) == READ ? readcmd : writecmd;
 
 	/*
 	 * Pre-defined multi-block transfers are preferable to
@@ -1563,34 +1603,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 		brq->mrq.sbc = &brq->sbc;
 	}
 
-	mmc_set_data_timeout(&brq->data, card);
-
-	brq->data.sg = mqrq->sg;
-	brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
-
-	/*
-	 * Adjust the sg list so it is the same size as the
-	 * request.
-	 */
-	if (brq->data.blocks != blk_rq_sectors(req)) {
-		int i, data_size = brq->data.blocks << 9;
-		struct scatterlist *sg;
-
-		for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
-			data_size -= sg->length;
-			if (data_size <= 0) {
-				sg->length += data_size;
-				i++;
-				break;
-			}
-		}
-		brq->data.sg_len = i;
-	}
-
-	mqrq->areq.mrq = &brq->mrq;
 	mqrq->areq.err_check = mmc_blk_err_check;
-
-	mmc_queue_bounce_pre(mqrq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
-- 
1.9.1
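
As a worked example of the scatterlist adjustment above: if the transfer
was clamped from 8 sectors to 3 (data_size = 1536 bytes) over a two-entry
scatterlist of 1024 and 3072 bytes, the first iteration leaves 512 bytes
outstanding, and the second shrinks the 3072-byte entry to 512 bytes and
sets sg_len to 2, so the list then covers exactly 1536 bytes.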



* [PATCH V2 11/22] mmc: core: Factor out debug prints from mmc_start_request()
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (9 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 10/22] mmc: block: Factor out data preparation Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-10 13:53   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 12/22] mmc: core: Factor out mrq preparation " Adrian Hunter
                   ` (12 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

In preparation to reuse the code for CQE support.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 926e0fde07d7..6b063f0c2553 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -262,26 +262,19 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	host->ops->request(host, mrq);
 }
 
-static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
 {
-#ifdef CONFIG_MMC_DEBUG
-	unsigned int i, sz;
-	struct scatterlist *sg;
-#endif
-	mmc_retune_hold(host);
-
-	if (mmc_card_removed(host->card))
-		return -ENOMEDIUM;
-
 	if (mrq->sbc) {
 		pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n",
 			 mmc_hostname(host), mrq->sbc->opcode,
 			 mrq->sbc->arg, mrq->sbc->flags);
 	}
 
-	pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
-		 mmc_hostname(host), mrq->cmd->opcode,
-		 mrq->cmd->arg, mrq->cmd->flags);
+	if (mrq->cmd) {
+		pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
+			 mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->arg,
+			 mrq->cmd->flags);
+	}
 
 	if (mrq->data) {
 		pr_debug("%s:     blksz %d blocks %d flags %08x "
@@ -297,6 +290,20 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 			 mmc_hostname(host), mrq->stop->opcode,
 			 mrq->stop->arg, mrq->stop->flags);
 	}
+}
+
+static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+{
+#ifdef CONFIG_MMC_DEBUG
+	unsigned int i, sz;
+	struct scatterlist *sg;
+#endif
+	mmc_retune_hold(host);
+
+	if (mmc_card_removed(host->card))
+		return -ENOMEDIUM;
+
+	mmc_mrq_pr_debug(host, mrq);
 
 	WARN_ON(!host->claimed);
 
-- 
1.9.1



* [PATCH V2 12/22] mmc: core: Factor out mrq preparation from mmc_start_request()
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (10 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 11/22] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-10 13:54   ` Linus Walleij
  2017-03-13 12:36 ` [PATCH V2 13/22] mmc: core: Add mmc_retune_hold_now() Adrian Hunter
                   ` (11 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

In preparation to reuse the code for CQE support.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 6b063f0c2553..ffc263283f54 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -292,23 +292,18 @@ static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
 	}
 }
 
-static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
 {
 #ifdef CONFIG_MMC_DEBUG
 	unsigned int i, sz;
 	struct scatterlist *sg;
 #endif
-	mmc_retune_hold(host);
-
-	if (mmc_card_removed(host->card))
-		return -ENOMEDIUM;
-
-	mmc_mrq_pr_debug(host, mrq);
-
-	WARN_ON(!host->claimed);
 
-	mrq->cmd->error = 0;
-	mrq->cmd->mrq = mrq;
+	if (mrq->cmd) {
+		mrq->cmd->error = 0;
+		mrq->cmd->mrq = mrq;
+		mrq->cmd->data = mrq->data;
+	}
 	if (mrq->sbc) {
 		mrq->sbc->error = 0;
 		mrq->sbc->mrq = mrq;
@@ -325,8 +320,6 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 		if (sz != mrq->data->blocks * mrq->data->blksz)
 			return -EINVAL;
 #endif
-
-		mrq->cmd->data = mrq->data;
 		mrq->data->error = 0;
 		mrq->data->mrq = mrq;
 		if (mrq->stop) {
@@ -335,6 +328,27 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 			mrq->stop->mrq = mrq;
 		}
 	}
+
+	return 0;
+}
+
+static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int err;
+
+	mmc_retune_hold(host);
+
+	if (mmc_card_removed(host->card))
+		return -ENOMEDIUM;
+
+	mmc_mrq_pr_debug(host, mrq);
+
+	WARN_ON(!host->claimed);
+
+	err = mmc_mrq_prep(host, mrq);
+	if (err)
+		return err;
+
 	led_trigger_event(host->led, LED_FULL);
 	__mmc_start_request(host, mrq);
 
-- 
1.9.1



* [PATCH V2 13/22] mmc: core: Add mmc_retune_hold_now()
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (11 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 12/22] mmc: core: Factor out mrq preparation " Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 14/22] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

In preparation for CQE support.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/host.c | 6 ++++++
 drivers/mmc/core/host.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 3f8c85d5aa09..2c8696d835ba 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -110,6 +110,12 @@ void mmc_retune_hold(struct mmc_host *host)
 	host->hold_retune += 1;
 }
 
+void mmc_retune_hold_now(struct mmc_host *host)
+{
+	host->retune_now = 0;
+	host->hold_retune += 1;
+}
+
 void mmc_retune_release(struct mmc_host *host)
 {
 	if (host->hold_retune)
diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
index fb6a76a03833..77d6f60d1bf9 100644
--- a/drivers/mmc/core/host.h
+++ b/drivers/mmc/core/host.h
@@ -19,6 +19,7 @@
 void mmc_retune_enable(struct mmc_host *host);
 void mmc_retune_disable(struct mmc_host *host);
 void mmc_retune_hold(struct mmc_host *host);
+void mmc_retune_hold_now(struct mmc_host *host);
 void mmc_retune_release(struct mmc_host *host);
 int mmc_retune(struct mmc_host *host);
 void mmc_retune_pause(struct mmc_host *host);
-- 
1.9.1



* [PATCH V2 14/22] mmc: core: Add members to mmc_request and mmc_data for CQE's
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (12 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 13/22] mmc: core: Add mmc_retune_hold_now() Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 15/22] mmc: host: Add CQE interface Adrian Hunter
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Most of the information needed to issue requests to a CQE is already in
struct mmc_request and struct mmc_data. Add data block address, some flags,
and the task id (tag).
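
As an illustration only (not part of this patch), an issuer fills the new
members like so; req and high_prio are assumed to come from the caller:

	mrq->tag = req->tag;			/* CQE task id */
	mrq->data->blk_addr = blk_rq_pos(req);	/* first block of transfer */
	if (high_prio)
		mrq->data->flags |= MMC_DATA_PRIO;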

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 include/linux/mmc/core.h   | 13 +++++++++++--
 include/trace/events/mmc.h | 17 ++++++++++++-----
 2 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index a0c63ea28796..bf1788a224e6 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -122,11 +122,18 @@ struct mmc_data {
 	unsigned int		timeout_clks;	/* data timeout (in clocks) */
 	unsigned int		blksz;		/* data block size */
 	unsigned int		blocks;		/* number of blocks */
+	unsigned int		blk_addr;	/* block address */
 	int			error;		/* data error */
 	unsigned int		flags;
 
-#define MMC_DATA_WRITE	(1 << 8)
-#define MMC_DATA_READ	(1 << 9)
+#define MMC_DATA_WRITE		BIT(8)
+#define MMC_DATA_READ		BIT(9)
+/* Extra flags used by CQE */
+#define MMC_DATA_QBR		BIT(10)		/* CQE queue barrier */
+#define MMC_DATA_PRIO		BIT(11)		/* CQE high priority */
+#define MMC_DATA_REL_WR		BIT(12)		/* Reliable write */
+#define MMC_DATA_DAT_TAG	BIT(13)		/* Tag request */
+#define MMC_DATA_FORCED_PRG	BIT(14)		/* Forced programming */
 
 	unsigned int		bytes_xfered;
 
@@ -153,6 +160,8 @@ struct mmc_request {
 
 	/* Allow other commands during this ongoing data transfer or busy wait */
 	bool			cap_cmd_during_tfr;
+
+	int			tag;
 };
 
 struct mmc_card;
diff --git a/include/trace/events/mmc.h b/include/trace/events/mmc.h
index a72f9b94c80b..baf96d15a184 100644
--- a/include/trace/events/mmc.h
+++ b/include/trace/events/mmc.h
@@ -29,8 +29,10 @@
 		__field(unsigned int,		sbc_flags)
 		__field(unsigned int,		sbc_retries)
 		__field(unsigned int,		blocks)
+		__field(unsigned int,		blk_addr)
 		__field(unsigned int,		blksz)
 		__field(unsigned int,		data_flags)
+		__field(int,			tag)
 		__field(unsigned int,		can_retune)
 		__field(unsigned int,		doing_retune)
 		__field(unsigned int,		retune_now)
@@ -56,7 +58,9 @@
 		__entry->sbc_retries = mrq->sbc ? mrq->sbc->retries : 0;
 		__entry->blksz = mrq->data ? mrq->data->blksz : 0;
 		__entry->blocks = mrq->data ? mrq->data->blocks : 0;
+		__entry->blk_addr = mrq->data ? mrq->data->blk_addr : 0;
 		__entry->data_flags = mrq->data ? mrq->data->flags : 0;
+		__entry->tag = mrq->tag;
 		__entry->can_retune = host->can_retune;
 		__entry->doing_retune = host->doing_retune;
 		__entry->retune_now = host->retune_now;
@@ -71,8 +75,8 @@
 		  "cmd_opcode=%u cmd_arg=0x%x cmd_flags=0x%x cmd_retries=%u "
 		  "stop_opcode=%u stop_arg=0x%x stop_flags=0x%x stop_retries=%u "
 		  "sbc_opcode=%u sbc_arg=0x%x sbc_flags=0x%x sbc_retires=%u "
-		  "blocks=%u block_size=%u data_flags=0x%x "
-		  "can_retune=%u doing_retune=%u retune_now=%u "
+		  "blocks=%u block_size=%u blk_addr=%u data_flags=0x%x "
+		  "tag=%d can_retune=%u doing_retune=%u retune_now=%u "
 		  "need_retune=%d hold_retune=%d retune_period=%u",
 		  __get_str(name), __entry->mrq,
 		  __entry->cmd_opcode, __entry->cmd_arg,
@@ -81,7 +85,8 @@
 		  __entry->stop_flags, __entry->stop_retries,
 		  __entry->sbc_opcode, __entry->sbc_arg,
 		  __entry->sbc_flags, __entry->sbc_retries,
-		  __entry->blocks, __entry->blksz, __entry->data_flags,
+		  __entry->blocks, __entry->blk_addr,
+		  __entry->blksz, __entry->data_flags, __entry->tag,
 		  __entry->can_retune, __entry->doing_retune,
 		  __entry->retune_now, __entry->need_retune,
 		  __entry->hold_retune, __entry->retune_period)
@@ -108,6 +113,7 @@
 		__field(unsigned int,		sbc_retries)
 		__field(unsigned int,		bytes_xfered)
 		__field(int,			data_err)
+		__field(int,			tag)
 		__field(unsigned int,		can_retune)
 		__field(unsigned int,		doing_retune)
 		__field(unsigned int,		retune_now)
@@ -139,6 +145,7 @@
 		__entry->sbc_retries = mrq->sbc ? mrq->sbc->retries : 0;
 		__entry->bytes_xfered = mrq->data ? mrq->data->bytes_xfered : 0;
 		__entry->data_err = mrq->data ? mrq->data->error : 0;
+		__entry->tag = mrq->tag;
 		__entry->can_retune = host->can_retune;
 		__entry->doing_retune = host->doing_retune;
 		__entry->retune_now = host->retune_now;
@@ -154,7 +161,7 @@
 		  "cmd_retries=%u stop_opcode=%u stop_err=%d "
 		  "stop_resp=0x%x 0x%x 0x%x 0x%x stop_retries=%u "
 		  "sbc_opcode=%u sbc_err=%d sbc_resp=0x%x 0x%x 0x%x 0x%x "
-		  "sbc_retries=%u bytes_xfered=%u data_err=%d "
+		  "sbc_retries=%u bytes_xfered=%u data_err=%d tag=%d "
 		  "can_retune=%u doing_retune=%u retune_now=%u need_retune=%d "
 		  "hold_retune=%d retune_period=%u",
 		  __get_str(name), __entry->mrq,
@@ -170,7 +177,7 @@
 		  __entry->sbc_resp[0], __entry->sbc_resp[1],
 		  __entry->sbc_resp[2], __entry->sbc_resp[3],
 		  __entry->sbc_retries,
-		  __entry->bytes_xfered, __entry->data_err,
+		  __entry->bytes_xfered, __entry->data_err, __entry->tag,
 		  __entry->can_retune, __entry->doing_retune,
 		  __entry->retune_now, __entry->need_retune,
 		  __entry->hold_retune, __entry->retune_period)
-- 
1.9.1



* [PATCH V2 15/22] mmc: host: Add CQE interface
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (13 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 14/22] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 16/22] mmc: core: Turn off CQE before sending commands Adrian Hunter
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Add CQE host operations, capabilities, and host members.
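
For orientation, a host driver would wire the new interface up roughly as
follows; this is only a sketch, and the my_* names are placeholders:

	static const struct mmc_cqe_ops my_cqe_ops = {
		.cqe_enable	= my_cqe_enable,
		.cqe_disable	= my_cqe_disable,
		.cqe_request	= my_cqe_request,
		.cqe_off	= my_cqe_off,
	};

	static void my_setup_cqe(struct mmc_host *mmc)
	{
		mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;
		mmc->cqe_ops = &my_cqe_ops;
		mmc->cqe_qdepth = 32;	/* task slots the engine provides */
	}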

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 include/linux/mmc/host.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 83f1c4a9f03b..7624017e8bcd 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -160,6 +160,19 @@ struct mmc_host_ops {
 				  unsigned int direction, int blk_size);
 };
 
+struct mmc_cqe_ops {
+	int	(*cqe_enable)(struct mmc_host *host, struct mmc_card *card);
+	void	(*cqe_disable)(struct mmc_host *host);
+	int	(*cqe_request)(struct mmc_host *host, struct mmc_request *mrq);
+	void	(*cqe_post_req)(struct mmc_host *host, struct mmc_request *mrq);
+	void	(*cqe_off)(struct mmc_host *host);
+	int	(*cqe_wait_for_idle)(struct mmc_host *host);
+	bool	(*cqe_timeout)(struct mmc_host *host, struct mmc_request *mrq,
+			       bool *recovery_needed);
+	void	(*cqe_recovery_start)(struct mmc_host *host);
+	void	(*cqe_recovery_finish)(struct mmc_host *host, bool forget_reqs);
+};
+
 struct mmc_async_req {
 	/* active mmc request */
 	struct mmc_request	*mrq;
@@ -303,6 +316,8 @@ struct mmc_host {
 #define MMC_CAP2_HS400_ES	(1 << 20)	/* Host supports enhanced strobe */
 #define MMC_CAP2_NO_SD		(1 << 21)	/* Do not send SD commands during initialization */
 #define MMC_CAP2_NO_MMC		(1 << 22)	/* Do not send (e)MMC commands during initialization */
+#define MMC_CAP2_CQE		(1 << 23)	/* Has eMMC command queue engine */
+#define MMC_CAP2_CQE_DCMD	(1 << 24)	/* CQE can issue a direct command */
 
 	mmc_pm_flag_t		pm_caps;	/* supported pm features */
 
@@ -388,6 +403,15 @@ struct mmc_host {
 	int			dsr_req;	/* DSR value is valid */
 	u32			dsr;	/* optional driver stage (DSR) value */
 
+	/* Command Queue Engine (CQE) support */
+	const struct mmc_cqe_ops *cqe_ops;
+	void			*cqe_private;
+	void			(*cqe_recovery_notifier)(struct mmc_host *,
+							 struct mmc_request *);
+	int			cqe_qdepth;
+	bool			cqe_enabled;
+	bool			cqe_on;
+
 	unsigned long		private[0] ____cacheline_aligned;
 };
 
-- 
1.9.1



* [PATCH V2 16/22] mmc: core: Turn off CQE before sending commands
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (14 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 15/22] mmc: host: Add CQE interface Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 17/22] mmc: core: Add support for handling CQE requests Adrian Hunter
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Turn off the CQE before sending commands, and ensure it is off in any
reset, power management, or re-tuning paths.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index ffc263283f54..480dfd5d9c13 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -259,6 +259,9 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 
 	trace_mmc_request_start(host, mrq);
 
+	if (host->cqe_on)
+		host->cqe_ops->cqe_off(host);
+
 	host->ops->request(host, mrq);
 }
 
@@ -1184,6 +1187,9 @@ int mmc_execute_tuning(struct mmc_card *card)
 	if (!host->ops->execute_tuning)
 		return 0;
 
+	if (host->cqe_on)
+		host->cqe_ops->cqe_off(host);
+
 	if (mmc_card_mmc(card))
 		opcode = MMC_SEND_TUNING_BLOCK_HS200;
 	else
@@ -1223,6 +1229,9 @@ void mmc_set_bus_width(struct mmc_host *host, unsigned int width)
  */
 void mmc_set_initial_state(struct mmc_host *host)
 {
+	if (host->cqe_on)
+		host->cqe_ops->cqe_off(host);
+
 	mmc_retune_disable(host);
 
 	if (mmc_host_is_spi(host))
-- 
1.9.1



* [PATCH V2 17/22] mmc: core: Add support for handling CQE requests
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (15 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 16/22] mmc: core: Turn off CQE before sending commands Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 18/22] mmc: mmc: Enable Command Queuing Adrian Hunter
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Add core support for handling CQE requests, including starting, completing
and recovering.
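
The intended call flow, sketched with assumed caller names and error
handling elided: the issuer starts a request, the CQE driver completes it,
and the issuer post-processes it:

	/* Issue side, e.g. the block driver */
	mrq->done = my_done;			/* assumed completion hook */
	err = mmc_cqe_start_req(host, mrq);	/* 0 means queued to the CQE */

	/* CQE host driver, when the hardware signals completion */
	mmc_cqe_request_done(host, mrq);	/* invokes mrq->done(mrq) */

	/* Issue side, after completion */
	mmc_cqe_post_req(host, mrq);		/* optional post-processing */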

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c  | 147 +++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/mmc/core.h |   6 ++
 2 files changed, 148 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 480dfd5d9c13..07bd48fb70c6 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -265,7 +265,8 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	host->ops->request(host, mrq);
 }
 
-static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
+static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq,
+			     bool cqe)
 {
 	if (mrq->sbc) {
 		pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n",
@@ -274,9 +275,12 @@ static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
 	}
 
 	if (mrq->cmd) {
-		pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
-			 mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->arg,
-			 mrq->cmd->flags);
+		pr_debug("%s: starting %sCMD%u arg %08x flags %08x\n",
+			 mmc_hostname(host), cqe ? "CQE direct " : "",
+			 mrq->cmd->opcode, mrq->cmd->arg, mrq->cmd->flags);
+	} else if (cqe) {
+		pr_debug("%s: starting CQE transfer for tag %d blkaddr %u\n",
+			 mmc_hostname(host), mrq->tag, mrq->data->blk_addr);
 	}
 
 	if (mrq->data) {
@@ -344,7 +348,7 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	if (mmc_card_removed(host->card))
 		return -ENOMEDIUM;
 
-	mmc_mrq_pr_debug(host, mrq);
+	mmc_mrq_pr_debug(host, mrq, false);
 
 	WARN_ON(!host->claimed);
 
@@ -358,6 +362,139 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	return 0;
 }
 
+int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int err;
+
+	/* Caller must hold retuning while CQE is in use */
+	err = mmc_retune(host);
+	if (err)
+		goto out_err;
+
+	mrq->host = host;
+
+	mmc_mrq_pr_debug(host, mrq, true);
+
+	err = mmc_mrq_prep(host, mrq);
+	if (err)
+		goto out_err;
+
+	err = host->cqe_ops->cqe_request(host, mrq);
+	if (err)
+		goto out_err;
+
+	trace_mmc_request_start(host, mrq);
+
+	return 0;
+
+out_err:
+	if (mrq->cmd) {
+		pr_debug("%s: failed to start CQE direct CMD%u, error %d\n",
+			 mmc_hostname(host), mrq->cmd->opcode, err);
+	} else {
+		pr_debug("%s: failed to start CQE transfer for tag %d, error %d\n",
+			 mmc_hostname(host), mrq->tag, err);
+	}
+	return err;
+}
+EXPORT_SYMBOL(mmc_cqe_start_req);
+
+void __mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq)
+{
+	mmc_should_fail_request(host, mrq);
+
+	/* Flag re-tuning needed on CRC errors */
+	if ((mrq->cmd && mrq->cmd->error == -EILSEQ) ||
+	    (mrq->data && mrq->data->error == -EILSEQ))
+		mmc_retune_needed(host);
+
+	trace_mmc_request_done(host, mrq);
+
+	if (mrq->cmd) {
+		pr_debug("%s: CQE req done (direct CMD%u): %d\n",
+			 mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->error);
+	} else {
+		pr_debug("%s: CQE transfer done tag %d\n",
+			 mmc_hostname(host), mrq->tag);
+	}
+
+	if (mrq->data) {
+		pr_debug("%s:     %d bytes transferred: %d\n",
+			 mmc_hostname(host),
+			 mrq->data->bytes_xfered, mrq->data->error);
+	}
+}
+EXPORT_SYMBOL(__mmc_cqe_request_done);
+
+/**
+ *	mmc_cqe_request_done - CQE has finished processing an MMC request
+ *	@host: MMC host which completed request
+ *	@mrq: MMC request which completed
+ *
+ *	CQE drivers should call this function when they have completed
+ *	their processing of a request.
+ */
+void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq)
+{
+	__mmc_cqe_request_done(host, mrq);
+
+	mrq->done(mrq);
+}
+EXPORT_SYMBOL(mmc_cqe_request_done);
+
+/**
+ *	mmc_cqe_post_req - CQE post process of a completed MMC request
+ *	@host: MMC host
+ *	@mrq: MMC request to be processed
+ */
+void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	if (host->cqe_ops->cqe_post_req)
+		host->cqe_ops->cqe_post_req(host, mrq);
+}
+EXPORT_SYMBOL(mmc_cqe_post_req);
+
+/* Arbitrary 1 second timeout */
+#define MMC_CQE_RECOVERY_TIMEOUT	1000
+
+int mmc_cqe_recovery(struct mmc_host *host, bool forget_reqs)
+{
+	struct mmc_command cmd;
+	int err;
+
+	mmc_retune_hold_now(host);
+
+	/*
+	 * Recovery is expected to be needed seldom, if at all, but it reduces
+	 * performance, so make sure it is not completely silent.
+	 */
+	pr_warn("%s: running CQE recovery\n", mmc_hostname(host));
+
+	host->cqe_ops->cqe_recovery_start(host);
+
+	memset(&cmd, 0, sizeof(cmd));
+	cmd.opcode       = MMC_STOP_TRANSMISSION;
+	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC;
+	cmd.flags       &= ~MMC_RSP_CRC; /* Ignore CRC */
+	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+	mmc_wait_for_cmd(host, &cmd, 0);
+
+	memset(&cmd, 0, sizeof(cmd));
+	cmd.opcode       = MMC_CMDQ_TASK_MGMT;
+	cmd.arg          = 1; /* Discard entire queue */
+	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC;
+	cmd.flags       &= ~MMC_RSP_CRC; /* Ignore CRC */
+	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+	err = mmc_wait_for_cmd(host, &cmd, 0);
+
+	host->cqe_ops->cqe_recovery_finish(host, forget_reqs);
+
+	mmc_retune_release(host);
+
+	return err;
+}
+EXPORT_SYMBOL(mmc_cqe_recovery);
+
 /**
  *	mmc_start_bkops - start BKOPS for supported cards
  *	@card: MMC card to start BKOPS
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index bf1788a224e6..1aae492f9515 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -174,6 +174,12 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd,
 		int retries);
 
+int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq);
+void __mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq);
+void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq);
+void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq);
+int mmc_cqe_recovery(struct mmc_host *host, bool forget_reqs);
+
 int mmc_hw_reset(struct mmc_host *host);
 void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card);
 
-- 
1.9.1



* [PATCH V2 18/22] mmc: mmc: Enable Command Queuing
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (16 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 17/22] mmc: core: Add support for handling CQE requests Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 19/22] mmc: mmc: Enable CQE's Adrian Hunter
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Enable the Command Queue if the host controller supports a command queue
engine. It is not compatible with Packed Commands, so do not enable the
two at the same time.
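
For reference, the core of mmc_cmdq_enable() (added earlier in this series)
is assumed to be a CMD6 switch of the CMDQ mode byte, roughly:

	/* Sketch only; the EXT_CSD field name and timeout are assumptions */
	err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
			 1, card->ext_csd.generic_cmd6_time);
	if (!err)
		card->ext_csd.cmdq_en = true;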

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/mmc.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index d1f0c4b247ac..70337068d85f 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1789,6 +1789,20 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 		}
 	}
 
+	/* Enable Command Queue if supported */
+	card->ext_csd.cmdq_en = false;
+	if (card->ext_csd.cmdq_support && host->caps2 & MMC_CAP2_CQE) {
+		err = mmc_cmdq_enable(card);
+		if (err && err != -EBADMSG)
+			goto free_card;
+		if (err) {
+			pr_warn("%s: Enabling CMDQ failed\n",
+				mmc_hostname(card->host));
+			card->ext_csd.cmdq_support = false;
+			card->ext_csd.cmdq_depth = 0;
+			err = 0;
+		}
+	}
 	/*
 	 * In some cases (e.g. RPMB or mmc_test), the Command Queue must be
 	 * disabled for a time, so a flag is needed to indicate to re-enable the
@@ -1802,7 +1816,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	 */
 	if (card->ext_csd.max_packed_writes >= 3 &&
 	    card->ext_csd.max_packed_reads >= 5 &&
-	    host->caps2 & MMC_CAP2_PACKED_CMD) {
+	    host->caps2 & MMC_CAP2_PACKED_CMD &&
+	    !card->ext_csd.cmdq_en) {
 		err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 				EXT_CSD_EXP_EVENTS_CTRL,
 				EXT_CSD_PACKED_EVENT_EN,
-- 
1.9.1



* [PATCH V2 19/22] mmc: mmc: Enable CQE's
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (17 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 18/22] mmc: mmc: Enable Command Queuing Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 20/22] mmc: block: Prepare CQE data Adrian Hunter
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Enable or disable the CQE when a card is added or removed, respectively.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/bus.c |  7 +++++++
 drivers/mmc/core/mmc.c | 13 +++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index 301246513a37..a4b49e25fe96 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -369,10 +369,17 @@ int mmc_add_card(struct mmc_card *card)
  */
 void mmc_remove_card(struct mmc_card *card)
 {
+	struct mmc_host *host = card->host;
+
 #ifdef CONFIG_DEBUG_FS
 	mmc_remove_card_debugfs(card);
 #endif
 
+	if (host->cqe_enabled) {
+		host->cqe_ops->cqe_disable(host);
+		host->cqe_enabled = false;
+	}
+
 	if (mmc_card_present(card)) {
 		if (mmc_host_is_spi(card->host)) {
 			pr_info("%s: SPI card removed\n",
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 70337068d85f..0b161024c214 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1810,6 +1810,19 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	 */
 	card->reenable_cmdq = card->ext_csd.cmdq_en;
 
+	if (card->ext_csd.cmdq_en && (host->caps2 & MMC_CAP2_CQE) &&
+	    !host->cqe_enabled) {
+		err = host->cqe_ops->cqe_enable(host, card);
+		if (err) {
+			pr_err("%s: Failed to enable CQE, error %d\n",
+				mmc_hostname(host), err);
+		} else {
+			host->cqe_enabled = true;
+			pr_info("%s: Command Queue Engine enabled\n",
+				mmc_hostname(host));
+		}
+	}
+
 	/*
 	 * The mandatory minimum values are defined for packed command.
 	 * read: 5, write: 3
-- 
1.9.1



* [PATCH V2 20/22] mmc: block: Prepare CQE data
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (18 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 19/22] mmc: mmc: Enable CQE's Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 21/22] mmc: block: Add CQE support Adrian Hunter
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Enhance mmc_blk_data_prep() to support CQE requests.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 45 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 15a6705a81fe..105b87aa8ffc 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -36,6 +36,7 @@
 #include <linux/compat.h>
 #include <linux/pm_runtime.h>
 #include <linux/idr.h>
+#include <linux/ioprio.h>
 
 #include <linux/mmc/ioctl.h>
 #include <linux/mmc/card.h>
@@ -1434,25 +1435,27 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card,
 }
 
 static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
-			      int disable_multi, bool *do_rel_wr,
-			      bool *do_data_tag)
+			      int disable_multi, bool *do_rel_wr_p,
+			      bool *do_data_tag_p)
 {
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	struct mmc_blk_request *brq = &mqrq->brq;
 	struct request *req = mqrq->req;
+	bool do_rel_wr, do_data_tag;
 
 	/*
 	 * Reliable writes are used to implement Forced Unit Access and
 	 * are supported only on MMCs.
 	 */
-	*do_rel_wr = (req->cmd_flags & REQ_FUA) &&
-		     rq_data_dir(req) == WRITE &&
-		     (md->flags & MMC_BLK_REL_WR);
+	do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+		    rq_data_dir(req) == WRITE &&
+		    (md->flags & MMC_BLK_REL_WR);
 
 	memset(brq, 0, sizeof(struct mmc_blk_request));
 
 	brq->mrq.data = &brq->data;
+	brq->mrq.tag = req->tag;
 
 	brq->stop.opcode = MMC_STOP_TRANSMISSION;
 	brq->stop.arg = 0;
@@ -1467,6 +1470,15 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 
 	brq->data.blksz = 512;
 	brq->data.blocks = blk_rq_sectors(req);
+	brq->data.blk_addr = blk_rq_pos(req);
+
+	/*
+	 * The command queue supports 2 priorities: "high" (1) and "simple" (0).
+	 * The eMMC will give "high" priority tasks priority over "simple"
+	 * priority tasks. Here we give priority to IOPRIO_CLASS_RT.
+	 */
+	if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
+		brq->data.flags |= MMC_DATA_PRIO;
 
 	/*
 	 * The block layer doesn't support all sector count
@@ -1496,18 +1508,23 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 						brq->data.blocks);
 	}
 
-	if (*do_rel_wr)
+	if (do_rel_wr) {
 		mmc_apply_rel_rw(brq, card, req);
+		brq->data.flags |= MMC_DATA_REL_WR;
+	}
 
 	/*
 	 * Data tag is used only during writing meta data to speed
 	 * up write and any subsequent read of this meta data
 	 */
-	*do_data_tag = card->ext_csd.data_tag_unit_size &&
-		       (req->cmd_flags & REQ_META) &&
-		       (rq_data_dir(req) == WRITE) &&
-		       ((brq->data.blocks * brq->data.blksz) >=
-			card->ext_csd.data_tag_unit_size);
+	do_data_tag = card->ext_csd.data_tag_unit_size &&
+		      (req->cmd_flags & REQ_META) &&
+		      (rq_data_dir(req) == WRITE) &&
+		      ((brq->data.blocks * brq->data.blksz) >=
+		       card->ext_csd.data_tag_unit_size);
+
+	if (do_data_tag)
+		brq->data.flags |= MMC_DATA_DAT_TAG;
 
 	mmc_set_data_timeout(&brq->data, card);
 
@@ -1536,6 +1553,12 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 	mqrq->areq.mrq = &brq->mrq;
 
 	mmc_queue_bounce_pre(mqrq);
+
+	if (do_rel_wr_p)
+		*do_rel_wr_p = do_rel_wr;
+
+	if (do_data_tag_p)
+		*do_data_tag_p = do_data_tag;
 }
 
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-- 
1.9.1
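
The IOPRIO_CLASS_RT mapping above can be exercised from user space with
ionice -c 1, or programmatically; a minimal sketch, assuming the Linux
ioprio_set() syscall:

	#include <sys/syscall.h>
	#include <unistd.h>

	#define IOPRIO_WHO_PROCESS	1
	#define IOPRIO_CLASS_RT		1
	#define IOPRIO_CLASS_SHIFT	13

	int main(void)
	{
		/* Realtime class, level 0, for the current process */
		int prio = IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT;

		return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio);
	}

Subsequent I/O issued by the process is then queued with MMC_DATA_PRIO set.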



* [PATCH V2 21/22] mmc: block: Add CQE support
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (19 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 20/22] mmc: block: Prepare CQE data Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-03-13 12:36 ` [PATCH V2 22/22] mmc: cqhci: support for command queue enabled host Adrian Hunter
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

Add CQE support to the block driver, including:
	- optionally using DCMD for flush requests
	- manually issuing discard requests
	- issuing read / write requests to the CQE
	- supporting block-layer timeouts
	- handling recovery
	- supporting re-tuning
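
In brief, the dispatch added below classifies each request before issue:
discard and secure erase are issued synchronously, flush becomes a DCMD
when the host advertises MMC_CAP2_CQE_DCMD (otherwise synchronous), and
reads and writes are issued to the CQE asynchronously.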

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 202 ++++++++++++++++++++++++++++++-
 drivers/mmc/core/block.h |   7 ++
 drivers/mmc/core/queue.c | 300 ++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/mmc/core/queue.h |  43 ++++++-
 4 files changed, 545 insertions(+), 7 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 105b87aa8ffc..946862c65551 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -109,6 +109,7 @@ struct mmc_blk_data {
 #define MMC_BLK_WRITE		BIT(1)
 #define MMC_BLK_DISCARD		BIT(2)
 #define MMC_BLK_SECDISCARD	BIT(3)
+#define MMC_BLK_CQE_RECOVERY	BIT(4)
 
 	/*
 	 * Only set in main mmc_blk_data associated
@@ -1561,6 +1562,205 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 		*do_data_tag_p = do_data_tag;
 }
 
+#define MMC_CQE_RETRIES 2
+
+void mmc_blk_cqe_complete_rq(struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+	struct mmc_request *mrq = &mqrq->brq.mrq;
+	struct request_queue *q = req->q;
+	struct mmc_queue *mq = q->queuedata;
+	struct mmc_host *host = mq->card->host;
+	unsigned long flags;
+	bool put_card;
+	int err;
+
+	mmc_cqe_post_req(host, mrq);
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	mq->cqe_in_flight[mmc_cqe_issue_type(host, req)] -= 1;
+
+	put_card = mmc_cqe_tot_in_flight(mq) == 0;
+
+	mmc_queue_clr_special(req);
+
+	if (mrq->cmd && mrq->cmd->error)
+		err = mrq->cmd->error;
+	else if (mrq->data && mrq->data->error)
+		err = mrq->data->error;
+	else
+		err = 0;
+
+	if (err) {
+		/*
+		 * !req->retries means we have not seen this request before, so
+		 * we add 1 to the number of retries and compare to 1 to decide
+		 * whether or not to retry.
+		 */
+		if (!req->retries)
+			req->retries = MMC_CQE_RETRIES + 1;
+		if (--req->retries >= 1)
+			blk_requeue_request(q, req);
+		else
+			__blk_end_request_all(req, -EIO);
+	} else if (mrq->data) {
+		if (__blk_end_request(req, 0, mrq->data->bytes_xfered))
+			blk_requeue_request(q, req);
+	} else {
+		__blk_end_request_all(req, 0);
+	}
+
+	mmc_cqe_kick_queue(mq);
+
+	spin_unlock_irqrestore(q->queue_lock, flags);
+
+	if (put_card)
+		mmc_put_card(mq->card);
+}
+
+void mmc_blk_cqe_recovery(struct mmc_queue *mq)
+{
+	struct mmc_card *card = mq->card;
+	struct mmc_host *host = card->host;
+	int err, i;
+
+	mmc_get_card(card);
+
+	pr_debug("%s: CQE recovery start\n", mmc_hostname(host));
+
+	/*
+	 * Block layer timeouts race with completions, so the normal completion
+	 * path cannot be used; tell the CQE to forget the requests.
+	 */
+	err = mmc_cqe_recovery(host, true);
+
+	/* Then complete all requests directly */
+	for (i = 0; i < mq->qdepth; i++) {
+		struct mmc_queue_req *mqrq = &mq->mqrq[i];
+
+		if (mqrq->req) {
+			__mmc_cqe_request_done(host, &mqrq->brq.mrq);
+			mmc_blk_cqe_complete_rq(mqrq->req);
+		}
+	}
+
+	if (err)
+		mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY);
+	else
+		mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
+
+	pr_debug("%s: CQE recovery done\n", mmc_hostname(host));
+
+	mmc_put_card(card);
+}
+
+static void mmc_blk_cqe_req_done(struct mmc_request *mrq)
+{
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+
+	blk_complete_request(mqrq->req);
+}
+
+static int mmc_blk_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	mrq->done = mmc_blk_cqe_req_done;
+	return mmc_cqe_start_req(host, mrq);
+}
+
+static struct mmc_request *mmc_blk_cqe_prep_dcmd(struct mmc_queue_req *mqrq)
+{
+	struct mmc_blk_request *brq = &mqrq->brq;
+
+	memset(brq, 0, sizeof(*brq));
+
+	brq->mrq.cmd = &brq->cmd;
+	brq->mrq.tag = mqrq->req->tag;
+
+	return &brq->mrq;
+}
+
+static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+	struct mmc_request *mrq = mmc_blk_cqe_prep_dcmd(mqrq);
+
+	mrq->cmd->opcode = MMC_SWITCH;
+	mrq->cmd->arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
+			(EXT_CSD_FLUSH_CACHE << 16) |
+			(1 << 8) |
+			EXT_CSD_CMD_SET_NORMAL;
+	mrq->cmd->flags = MMC_CMD_AC | MMC_RSP_R1B;
+
+	return mmc_blk_cqe_start_req(mq->card->host, mrq);
+}
+
+static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+
+	mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
+
+	return mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+}
+
+enum mmc_issued mmc_blk_cqe_issue_rq(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = md->queue.card;
+	struct mmc_host *host = card->host;
+	int ret;
+
+	ret = mmc_blk_part_switch(card, md);
+	if (ret)
+		return MMC_REQ_FAILED_TO_START;
+
+	switch (mmc_cqe_issue_type(host, req)) {
+	case MMC_ISSUE_SYNC:
+		ret = host->cqe_ops->cqe_wait_for_idle(host);
+		if (ret)
+			return MMC_REQ_BUSY;
+		switch (req_op(req)) {
+		case REQ_OP_DISCARD:
+			mmc_blk_issue_discard_rq(mq, req);
+			break;
+		case REQ_OP_SECURE_ERASE:
+			mmc_blk_issue_secdiscard_rq(mq, req);
+			break;
+		case REQ_OP_FLUSH:
+			mmc_blk_issue_flush(mq, req);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			return MMC_REQ_FAILED_TO_START;
+		}
+		return MMC_REQ_FINISHED;
+	case MMC_ISSUE_DCMD:
+	case MMC_ISSUE_ASYNC:
+		mmc_queue_set_special(mq, req);
+		switch (req_op(req)) {
+		case REQ_OP_FLUSH:
+			ret = mmc_blk_cqe_issue_flush(mq, req);
+			break;
+		case REQ_OP_READ:
+		case REQ_OP_WRITE:
+			ret = mmc_blk_cqe_issue_rw_rq(mq, req);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			ret = -EINVAL;
+		}
+		if (!ret)
+			return MMC_REQ_STARTED;
+		mmc_queue_clr_special(req);
+		return ret == -EBUSY ? MMC_REQ_BUSY : MMC_REQ_FAILED_TO_START;
+	default:
+		WARN_ON_ONCE(1);
+		return MMC_REQ_FAILED_TO_START;
+	}
+}
+
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 			       struct mmc_card *card,
 			       int disable_multi,
@@ -1960,7 +2160,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	INIT_LIST_HEAD(&md->part);
 	md->usage = 1;
 
-	ret = mmc_init_queue(&md->queue, card, &md->lock, subname);
+	ret = mmc_init_queue(&md->queue, card, &md->lock, subname, area_type);
 	if (ret)
 		goto err_putdisk;
 
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 860ca7c8df86..d7b3d7008b00 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -6,4 +6,11 @@
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
 
+enum mmc_issued;
+
+enum mmc_issued mmc_blk_cqe_issue_rq(struct mmc_queue *mq,
+				     struct request *req);
+void mmc_blk_cqe_complete_rq(struct request *rq);
+void mmc_blk_cqe_recovery(struct mmc_queue *mq);
+
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 3423b7acf744..58bd67bc5876 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -69,6 +69,273 @@ void mmc_queue_req_free(struct mmc_queue *mq,
 	__clear_bit(mqrq->task_id, &mq->qslots);
 }
 
+static void mmc_cqe_request_fn(struct request_queue *q)
+{
+	struct mmc_queue *mq = q->queuedata;
+	struct request *req;
+
+	if (!mq) {
+		while ((req = blk_fetch_request(q)) != NULL) {
+			req->rq_flags |= RQF_QUIET;
+			__blk_end_request_all(req, -EIO);
+		}
+		return;
+	}
+
+	if (mq->asleep && !mq->cqe_busy)
+		wake_up_process(mq->thread);
+}
+
+static inline bool mmc_cqe_dcmd_busy(struct mmc_queue *mq)
+{
+	/* Allow only 1 DCMD at a time */
+	return mq->cqe_in_flight[MMC_ISSUE_DCMD];
+}
+
+static inline bool mmc_cqe_queue_full(struct mmc_queue *mq)
+{
+	return mmc_cqe_qcnt(mq) >= mq->qdepth;
+}
+
+void mmc_cqe_kick_queue(struct mmc_queue *mq)
+{
+	if ((mq->cqe_busy & MMC_CQE_DCMD_BUSY) && !mmc_cqe_dcmd_busy(mq))
+		mq->cqe_busy &= ~MMC_CQE_DCMD_BUSY;
+
+	if ((mq->cqe_busy & MMC_CQE_QUEUE_FULL) && !mmc_cqe_queue_full(mq))
+		mq->cqe_busy &= ~MMC_CQE_QUEUE_FULL;
+
+	if (mq->asleep && !mq->cqe_busy)
+		__blk_run_queue(mq->queue);
+}
+
+static inline bool mmc_cqe_can_dcmd(struct mmc_host *host)
+{
+	return host->caps2 & MMC_CAP2_CQE_DCMD;
+}
+
+enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
+				       struct request *req)
+{
+	switch (req_op(req)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
+		return MMC_ISSUE_SYNC;
+	case REQ_OP_FLUSH:
+		return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC;
+	default:
+		return MMC_ISSUE_ASYNC;
+	}
+}
+
+void mmc_queue_set_special(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq = &mq->mqrq[req->tag];
+
+	mqrq->req = req;
+	req->special = mqrq;
+}
+
+void mmc_queue_clr_special(struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+
+	if (!mqrq)
+		return;
+
+	mqrq->req = NULL;
+	req->special = NULL;
+}
+
+static void __mmc_cqe_recovery_notifier(struct mmc_queue *mq)
+{
+	if (!mq->cqe_recovery_needed) {
+		mq->cqe_recovery_needed = true;
+		wake_up_process(mq->thread);
+	}
+}
+
+static void mmc_cqe_recovery_notifier(struct mmc_host *host,
+				      struct mmc_request *mrq)
+{
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+	struct request *req = mqrq->req;
+	struct request_queue *q = req->q;
+	struct mmc_queue *mq = q->queuedata;
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	__mmc_cqe_recovery_notifier(mq);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+static int mmc_cqe_thread(void *d)
+{
+	struct mmc_queue *mq = d;
+	struct request_queue *q = mq->queue;
+	struct mmc_card *card = mq->card;
+	struct mmc_host *host = card->host;
+	unsigned long flags;
+	int get_put = 0;
+
+	current->flags |= PF_MEMALLOC;
+
+	down(&mq->thread_sem);
+	spin_lock_irqsave(q->queue_lock, flags);
+	while (1) {
+		struct request *req = NULL;
+		enum mmc_issue_type issue_type;
+		bool retune_ok = false;
+
+		if (mq->cqe_recovery_needed) {
+			spin_unlock_irqrestore(q->queue_lock, flags);
+			mmc_blk_cqe_recovery(mq);
+			spin_lock_irqsave(q->queue_lock, flags);
+			mq->cqe_recovery_needed = false;
+		}
+
+		set_current_state(TASK_INTERRUPTIBLE);
+
+		if (!kthread_should_stop())
+			req = blk_peek_request(q);
+
+		if (req) {
+			issue_type = mmc_cqe_issue_type(host, req);
+			switch (issue_type) {
+			case MMC_ISSUE_DCMD:
+				if (mmc_cqe_dcmd_busy(mq)) {
+					mq->cqe_busy |= MMC_CQE_DCMD_BUSY;
+					req = NULL;
+					break;
+				}
+				/* Fall through */
+			case MMC_ISSUE_ASYNC:
+				if (blk_queue_start_tag(q, req)) {
+					mq->cqe_busy |= MMC_CQE_QUEUE_FULL;
+					req = NULL;
+				}
+				break;
+			default:
+				/*
+				 * Timeouts are handled by mmc core, so set a
+				 * large value to avoid races.
+				 */
+				req->timeout = 600 * HZ;
+				req->special = NULL;
+				blk_start_request(req);
+				break;
+			}
+			if (req) {
+				mq->cqe_in_flight[issue_type] += 1;
+				if (mmc_cqe_tot_in_flight(mq) == 1)
+					get_put += 1;
+				if (mmc_cqe_qcnt(mq) == 1)
+					retune_ok = true;
+			}
+		}
+
+		mq->asleep = !req;
+
+		spin_unlock_irqrestore(q->queue_lock, flags);
+
+		if (req) {
+			enum mmc_issued issued;
+
+			set_current_state(TASK_RUNNING);
+
+			if (get_put) {
+				get_put = 0;
+				mmc_get_card(card);
+			}
+
+			if (host->need_retune && retune_ok &&
+			    !host->hold_retune)
+				host->retune_now = true;
+			else
+				host->retune_now = false;
+
+			issued = mmc_blk_cqe_issue_rq(mq, req);
+
+			cond_resched();
+
+			spin_lock_irqsave(q->queue_lock, flags);
+
+			switch (issued) {
+			case MMC_REQ_STARTED:
+				break;
+			case MMC_REQ_BUSY:
+				blk_requeue_request(q, req);
+				goto finished;
+			case MMC_REQ_FAILED_TO_START:
+				__blk_end_request_all(req, -EIO);
+				/* Fall through */
+			case MMC_REQ_FINISHED:
+finished:
+				mq->cqe_in_flight[issue_type] -= 1;
+				if (mmc_cqe_tot_in_flight(mq) == 0)
+					get_put = -1;
+			}
+		} else {
+			if (get_put < 0) {
+				get_put = 0;
+				mmc_put_card(card);
+			}
+			/*
+			 * Do not stop with requests in flight in case recovery
+			 * is needed.
+			 */
+			if (kthread_should_stop() &&
+			    !mmc_cqe_tot_in_flight(mq)) {
+				set_current_state(TASK_RUNNING);
+				break;
+			}
+			up(&mq->thread_sem);
+			schedule();
+			down(&mq->thread_sem);
+			spin_lock_irqsave(q->queue_lock, flags);
+		}
+	} /* loop */
+	up(&mq->thread_sem);
+
+	return 0;
+}
+
+static enum blk_eh_timer_return __mmc_cqe_timed_out(struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+	struct mmc_request *mrq = &mqrq->brq.mrq;
+	struct mmc_queue *mq = req->q->queuedata;
+	struct mmc_host *host = mq->card->host;
+	enum mmc_issue_type issue_type = mmc_cqe_issue_type(host, req);
+	bool recovery_needed = false;
+
+	switch (issue_type) {
+	case MMC_ISSUE_ASYNC:
+	case MMC_ISSUE_DCMD:
+		if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
+			if (recovery_needed)
+				__mmc_cqe_recovery_notifier(mq);
+			return BLK_EH_RESET_TIMER;
+		}
+		/* No timeout */
+		return BLK_EH_HANDLED;
+	default:
+		/* Timeout is handled by mmc core */
+		return BLK_EH_RESET_TIMER;
+	}
+}
+
+static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
+{
+	struct mmc_queue *mq = req->q->queuedata;
+
+	if (!req->special || mq->cqe_recovery_needed)
+		return BLK_EH_RESET_TIMER;
+
+	return __mmc_cqe_timed_out(req);
+}
+
 static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
@@ -365,20 +632,43 @@ int mmc_queue_alloc_shared_queue(struct mmc_card *card)
  * Initialise a MMC card request queue.
  */
 int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
-		   spinlock_t *lock, const char *subname)
+		   spinlock_t *lock, const char *subname, int area_type)
 {
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
 	int ret = -ENOMEM;
+	bool use_cqe = host->cqe_enabled && area_type != MMC_BLK_DATA_AREA_RPMB;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	mq->card = card;
-	mq->queue = blk_init_queue(mmc_request_fn, lock);
+
+	mq->queue = blk_init_queue(use_cqe ?
+				   mmc_cqe_request_fn : mmc_request_fn, lock);
 	if (!mq->queue)
 		return -ENOMEM;
 
+	if (use_cqe) {
+		int q_depth = card->ext_csd.cmdq_depth;
+
+		if (q_depth > host->cqe_qdepth)
+			q_depth = host->cqe_qdepth;
+		if (q_depth > card->qdepth)
+			q_depth = card->qdepth;
+
+		ret = blk_queue_init_tags(mq->queue, q_depth, NULL,
+					  BLK_TAG_ALLOC_FIFO);
+		if (ret)
+			goto cleanup_queue;
+
+		blk_queue_softirq_done(mq->queue, mmc_blk_cqe_complete_rq);
+		blk_queue_rq_timed_out(mq->queue, mmc_cqe_timed_out);
+		blk_queue_rq_timeout(mq->queue, 60 * HZ);
+
+		host->cqe_recovery_notifier = mmc_cqe_recovery_notifier;
+	}
+
 	mq->mqrq = card->mqrq;
 	mq->qdepth = card->qdepth;
 	mq->queue->queuedata = mq;
@@ -404,9 +694,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 
 	sema_init(&mq->thread_sem, 1);
 
-	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
-		host->index, subname ? subname : "");
-
+	mq->thread = kthread_run(use_cqe ? mmc_cqe_thread : mmc_queue_thread,
+				 mq, "mmcqd/%d%s", host->index,
+				 subname ? subname : "");
 	if (IS_ERR(mq->thread)) {
 		ret = PTR_ERR(mq->thread);
 		goto cleanup_queue;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 871796c3f406..8bfb3cbea572 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -14,6 +14,13 @@ static inline bool mmc_req_is_special(struct request *req)
 		 req_op(req) == REQ_OP_SECURE_ERASE);
 }
 
+enum mmc_issued {
+	MMC_REQ_STARTED,
+	MMC_REQ_BUSY,
+	MMC_REQ_FAILED_TO_START,
+	MMC_REQ_FINISHED,
+};
+
 struct task_struct;
 struct mmc_blk_data;
 
@@ -37,6 +44,13 @@ struct mmc_queue_req {
 	int			task_id;
 };
 
+enum mmc_issue_type {
+	MMC_ISSUE_SYNC,
+	MMC_ISSUE_DCMD,
+	MMC_ISSUE_ASYNC,
+	MMC_ISSUE_MAX,
+};
+
 struct mmc_queue {
 	struct mmc_card		*card;
 	struct task_struct	*thread;
@@ -49,12 +63,18 @@ struct mmc_queue {
 	int			qdepth;
 	int			qcnt;
 	unsigned long		qslots;
+	/* Following are defined for a Command Queue Engine */
+	int			cqe_in_flight[MMC_ISSUE_MAX];
+	unsigned int		cqe_busy;
+	bool			cqe_recovery_needed;
+#define MMC_CQE_DCMD_BUSY	BIT(0)
+#define MMC_CQE_QUEUE_FULL	BIT(1)
 };
 
 extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
 extern void mmc_queue_free_shared_queue(struct mmc_card *card);
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
-			  const char *);
+			  const char *, int);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);
@@ -70,4 +90,25 @@ extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
 						struct request *);
 extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
 
+void mmc_queue_set_special(struct mmc_queue *mq, struct request *req);
+void mmc_queue_clr_special(struct request *req);
+
+void mmc_cqe_kick_queue(struct mmc_queue *mq);
+
+enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
+				       struct request *req);
+
+static inline int mmc_cqe_tot_in_flight(struct mmc_queue *mq)
+{
+	return mq->cqe_in_flight[MMC_ISSUE_SYNC] +
+	       mq->cqe_in_flight[MMC_ISSUE_DCMD] +
+	       mq->cqe_in_flight[MMC_ISSUE_ASYNC];
+}
+
+static inline int mmc_cqe_qcnt(struct mmc_queue *mq)
+{
+	return mq->cqe_in_flight[MMC_ISSUE_DCMD] +
+	       mq->cqe_in_flight[MMC_ISSUE_ASYNC];
+}
+
 #endif
-- 
1.9.1



* [PATCH V2 22/22] mmc: cqhci: support for command queue enabled host
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (20 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 21/22] mmc: block: Add CQE support Adrian Hunter
@ 2017-03-13 12:36 ` Adrian Hunter
  2017-04-08 17:37 ` [PATCH V2 00/22] mmc: Add Command Queue support Linus Walleij
  2017-04-10 13:53 ` Ulf Hansson
  23 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-03-13 12:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

From: Venkat Gopalakrishnan <venkatg@codeaurora.org>

This patch adds CMDQ support for command-queue compatible
hosts.

Command queueing was added in the eMMC 5.1 specification. It enables
the controller to process up to 32 requests at a time.
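
With the Kconfig entry below, the driver is built by enabling, for example:

	CONFIG_MMC_CQHCI=y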

Adrian Hunter contributed renaming to cqhci, recovery, suspend
and resume, cqhci_off, cqhci_wait_for_idle, and external timeout
handling.

Signed-off-by: Asutosh Das <asutoshd@codeaurora.org>
Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>
Signed-off-by: Ritesh Harjani <riteshh@codeaurora.org>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/Kconfig  |   13 +
 drivers/mmc/host/Makefile |    1 +
 drivers/mmc/host/cqhci.c  | 1148 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/host/cqhci.h  |  240 ++++++++++
 4 files changed, 1402 insertions(+)
 create mode 100644 drivers/mmc/host/cqhci.c
 create mode 100644 drivers/mmc/host/cqhci.h

diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index f08691a58d7e..fceba81b8f37 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -794,6 +794,19 @@ config MMC_SUNXI
 	  This selects support for the SD/MMC Host Controller on
 	  Allwinner sunxi SoCs.
 
+config MMC_CQHCI
+	tristate "Command Queue Host Controller Interface support"
+	depends on HAS_DMA
+	help
+	  This selects the Command Queue Host Controller Interface (CQHCI)
+	  support present in host controllers from Qualcomm Technologies,
+	  Inc., amongst others.
+	  This interface supports eMMC devices that support command queueing.
+
+	  If you have a controller with this interface, say Y or M here.
+
+	  If unsure, say N.
+
 config MMC_TOSHIBA_PCI
 	tristate "Toshiba Type A SD/MMC Card Interface Driver"
 	depends on PCI
diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile
index 6d548c4ee2fa..b78964d3cbea 100644
--- a/drivers/mmc/host/Makefile
+++ b/drivers/mmc/host/Makefile
@@ -79,6 +79,7 @@ obj-$(CONFIG_MMC_SDHCI_MSM)		+= sdhci-msm.o
 obj-$(CONFIG_MMC_SDHCI_ST)		+= sdhci-st.o
 obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32)	+= sdhci-pic32.o
 obj-$(CONFIG_MMC_SDHCI_BRCMSTB)		+= sdhci-brcmstb.o
+obj-$(CONFIG_MMC_CQHCI)			+= cqhci.o
 
 ifeq ($(CONFIG_CB710_DEBUG),y)
 	CFLAGS-cb710-mmc	+= -DDEBUG
diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
new file mode 100644
index 000000000000..bdf2626bdc16
--- /dev/null
+++ b/drivers/mmc/host/cqhci.c
@@ -0,0 +1,1148 @@
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/scatterlist.h>
+#include <linux/platform_device.h>
+#include <linux/ktime.h>
+
+#include <linux/mmc/mmc.h>
+#include <linux/mmc/host.h>
+#include <linux/mmc/card.h>
+
+#include "cqhci.h"
+
+#define DCMD_SLOT 31
+#define NUM_SLOTS 32
+
+struct cqhci_slot {
+	struct mmc_request *mrq;
+	unsigned int flags;
+#define CQHCI_EXTERNAL_TIMEOUT	BIT(0)
+#define CQHCI_COMPLETED		BIT(1)
+#define CQHCI_HOST_CRC		BIT(2)
+#define CQHCI_HOST_TIMEOUT	BIT(3)
+#define CQHCI_HOST_OTHER	BIT(4)
+};
+
+static inline u8 *get_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	return cq_host->desc_base + (tag * cq_host->slot_sz);
+}
+
+static inline u8 *get_link_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	u8 *desc = get_desc(cq_host, tag);
+
+	return desc + cq_host->task_desc_len;
+}
+
+static inline dma_addr_t get_trans_desc_dma(struct cqhci_host *cq_host, u8 tag)
+{
+	return cq_host->trans_desc_dma_base +
+		(cq_host->mmc->max_segs * tag *
+		 cq_host->trans_desc_len);
+}
+
+static inline u8 *get_trans_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	return cq_host->trans_desc_base +
+		(cq_host->trans_desc_len * cq_host->mmc->max_segs * tag);
+}
+
+static void setup_trans_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	u8 *link_temp;
+	dma_addr_t trans_temp;
+
+	link_temp = get_link_desc(cq_host, tag);
+	trans_temp = get_trans_desc_dma(cq_host, tag);
+
+	memset(link_temp, 0, cq_host->link_desc_len);
+	if (cq_host->link_desc_len > 8)
+		*(link_temp + 8) = 0;
+
+	if (tag == DCMD_SLOT) {
+		*link_temp = CQHCI_VALID(0) | CQHCI_ACT(0) | CQHCI_END(1);
+		return;
+	}
+
+	*link_temp = CQHCI_VALID(1) | CQHCI_ACT(0x6) | CQHCI_END(0);
+
+	if (cq_host->dma64) {
+		__le64 *data_addr = (__le64 __force *)(link_temp + 4);
+
+		data_addr[0] = cpu_to_le64(trans_temp);
+	} else {
+		__le32 *data_addr = (__le32 __force *)(link_temp + 4);
+
+		data_addr[0] = cpu_to_le32(trans_temp);
+	}
+}
+
+static void cqhci_set_irqs(struct cqhci_host *cq_host, u32 set)
+{
+	u32 ier;
+
+	ier = cqhci_readl(cq_host, CQHCI_ISTE);
+	ier |= set;
+	cqhci_writel(cq_host, ier, CQHCI_ISTE);
+	cqhci_writel(cq_host, ier, CQHCI_ISGE);
+}
+
+#define DRV_NAME "cqhci"
+
+#define CQHCI_DUMP(f, x...) \
+	pr_err("%s: " DRV_NAME ": " f, mmc_hostname(mmc), ## x)
+
+static void cqhci_dumpregs(struct cqhci_host *cq_host)
+{
+	struct mmc_host *mmc = cq_host->mmc;
+
+	CQHCI_DUMP("============ CQHCI REGISTER DUMP ===========\n");
+
+	CQHCI_DUMP("Caps:      0x%08x | Version:  0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_CAP),
+		   cqhci_readl(cq_host, CQHCI_VER));
+	CQHCI_DUMP("Config:    0x%08x | Control:  0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_CFG),
+		   cqhci_readl(cq_host, CQHCI_CTL));
+	CQHCI_DUMP("Int stat:  0x%08x | Int enab: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_IS),
+		   cqhci_readl(cq_host, CQHCI_ISTE));
+	CQHCI_DUMP("Int sig:   0x%08x | Int Coal: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_ISGE),
+		   cqhci_readl(cq_host, CQHCI_IC));
+	CQHCI_DUMP("TDL base:  0x%08x | TDL up32: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_TDLBA),
+		   cqhci_readl(cq_host, CQHCI_TDLBAU));
+	CQHCI_DUMP("Doorbell:  0x%08x | TCN:      0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_TDBR),
+		   cqhci_readl(cq_host, CQHCI_TCN));
+	CQHCI_DUMP("Dev queue: 0x%08x | Dev Pend: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_DQS),
+		   cqhci_readl(cq_host, CQHCI_DPT));
+	CQHCI_DUMP("Task clr:  0x%08x | SSC1:     0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_TCLR),
+		   cqhci_readl(cq_host, CQHCI_SSC1));
+	CQHCI_DUMP("SSC2:      0x%08x | DCMD rsp: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_SSC2),
+		   cqhci_readl(cq_host, CQHCI_CRDCT));
+	CQHCI_DUMP("RED mask:  0x%08x | TERRI:    0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_RMEM),
+		   cqhci_readl(cq_host, CQHCI_TERRI));
+	CQHCI_DUMP("Resp idx:  0x%08x | Resp arg: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_CRI),
+		   cqhci_readl(cq_host, CQHCI_CRA));
+
+	if (cq_host->ops->dumpregs)
+		cq_host->ops->dumpregs(mmc);
+	else
+		CQHCI_DUMP(": ===========================================\n");
+}
+
+/*
+ * The allocated descriptor table for task, link & transfer descriptors
+ * looks like:
+ * |----------|
+ * |task desc |  |->|----------|
+ * |----------|  |  |trans desc|
+ * |link desc-|->|  |----------|
+ * |----------|          .
+ *      .                .
+ *  no. of slots      max-segs
+ *      .           |----------|
+ * |----------|
+ * The idea here is to create the [task+trans] table and mark & point the
+ * link desc to the transfer desc table on a per slot basis.
+ */
+static int cqhci_host_alloc_tdl(struct cqhci_host *cq_host)
+{
+	int i;
+
+	/* task descriptor can be 64/128 bit irrespective of arch */
+	if (cq_host->caps & CQHCI_TASK_DESC_SZ_128) {
+		cqhci_writel(cq_host, cqhci_readl(cq_host, CQHCI_CFG) |
+			       CQHCI_TASK_DESC_SZ, CQHCI_CFG);
+		cq_host->task_desc_len = 16;
+	} else {
+		cq_host->task_desc_len = 8;
+	}
+
+	/*
+	 * The transfer descriptor may be 96 bits long instead of 128 bits,
+	 * in which case ADMA expects the next valid descriptor at the
+	 * 96th bit rather than the 128th.
+	 */
+	if (cq_host->dma64) {
+		if (cq_host->quirks & CQHCI_QUIRK_SHORT_TXFR_DESC_SZ)
+			cq_host->trans_desc_len = 12;
+		else
+			cq_host->trans_desc_len = 16;
+		cq_host->link_desc_len = 16;
+	} else {
+		cq_host->trans_desc_len = 8;
+		cq_host->link_desc_len = 8;
+	}
+
+	/* total size of a slot: 1 task & 1 transfer (link) */
+	cq_host->slot_sz = cq_host->task_desc_len + cq_host->link_desc_len;
+
+	cq_host->desc_size = cq_host->slot_sz * cq_host->num_slots;
+
+	cq_host->data_size = cq_host->trans_desc_len * cq_host->mmc->max_segs *
+		(cq_host->num_slots - 1);
+
+	pr_debug("%s: cqhci: desc_size: %zu data_sz: %zu slot-sz: %d\n",
+		 mmc_hostname(cq_host->mmc), cq_host->desc_size, cq_host->data_size,
+		 cq_host->slot_sz);
+
+	/*
+	 * Allocate one DMA-mapped chunk of memory for the task and link
+	 * descriptors, and another for the transfer descriptors, then
+	 * point each slot's link descriptor at that slot's transfer
+	 * descriptor list.
+	 */
+	cq_host->desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc),
+						 cq_host->desc_size,
+						 &cq_host->desc_dma_base,
+						 GFP_KERNEL);
+	cq_host->trans_desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc),
+					      cq_host->data_size,
+					      &cq_host->trans_desc_dma_base,
+					      GFP_KERNEL);
+	if (!cq_host->desc_base || !cq_host->trans_desc_base)
+		return -ENOMEM;
+
+	pr_debug("%s: cqhci: desc-base: 0x%p trans-base: 0x%p\n desc_dma 0x%llx trans_dma: 0x%llx\n",
+		 mmc_hostname(cq_host->mmc), cq_host->desc_base, cq_host->trans_desc_base,
+		(unsigned long long)cq_host->desc_dma_base,
+		(unsigned long long)cq_host->trans_desc_dma_base);
+
+	for (i = 0; i < cq_host->num_slots; i++)
+		setup_trans_desc(cq_host, i);
+
+	return 0;
+}
+
+static void __cqhci_enable(struct cqhci_host *cq_host)
+{
+	struct mmc_host *mmc = cq_host->mmc;
+	u32 cqcfg;
+
+	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+
+	/* Configuration must not be changed while enabled */
+	if (cqcfg & CQHCI_ENABLE) {
+		cqcfg &= ~CQHCI_ENABLE;
+		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+	}
+
+	cqcfg &= ~(CQHCI_DCMD | CQHCI_TASK_DESC_SZ);
+
+	if (mmc->caps2 & MMC_CAP2_CQE_DCMD)
+		cqcfg |= CQHCI_DCMD;
+
+	if (cq_host->caps & CQHCI_TASK_DESC_SZ_128)
+		cqcfg |= CQHCI_TASK_DESC_SZ;
+
+	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+
+	cqhci_writel(cq_host, lower_32_bits(cq_host->desc_dma_base),
+		     CQHCI_TDLBA);
+	cqhci_writel(cq_host, upper_32_bits(cq_host->desc_dma_base),
+		     CQHCI_TDLBAU);
+
+	cqhci_writel(cq_host, cq_host->rca, CQHCI_SSC2);
+
+	cqhci_set_irqs(cq_host, 0);
+
+	cqcfg |= CQHCI_ENABLE;
+
+	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+
+	mmc->cqe_on = true;
+
+	if (cq_host->ops->enable)
+		cq_host->ops->enable(mmc);
+
+	/* Ensure all writes are done before interrupts are enabled */
+	wmb();
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_MASK);
+
+	cq_host->activated = true;
+}
+
+static void __cqhci_disable(struct cqhci_host *cq_host)
+{
+	u32 cqcfg;
+
+	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+	cqcfg &= ~CQHCI_ENABLE;
+	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+
+	cq_host->mmc->cqe_on = false;
+
+	cq_host->activated = false;
+}
+
+int cqhci_suspend(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (cq_host->enabled)
+		__cqhci_disable(cq_host);
+
+	return 0;
+}
+EXPORT_SYMBOL(cqhci_suspend);
+
+int cqhci_resume(struct mmc_host *mmc)
+{
+	/* Re-enable is done upon first request */
+	return 0;
+}
+EXPORT_SYMBOL(cqhci_resume);
+
+static int cqhci_enable(struct mmc_host *mmc, struct mmc_card *card)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	int err;
+
+	if (cq_host->enabled)
+		return 0;
+
+	cq_host->rca = card->rca;
+
+	err = cqhci_host_alloc_tdl(cq_host);
+	if (err)
+		return err;
+
+	__cqhci_enable(cq_host);
+
+	cq_host->enabled = true;
+
+#ifdef DEBUG
+	cqhci_dumpregs(cq_host);
+#endif
+	return 0;
+}
+
+/* CQHCI is idle and should halt immediately, so set a small timeout (us) */
+#define CQHCI_OFF_TIMEOUT 100
+
+static void cqhci_off(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	ktime_t timeout;
+	bool timed_out;
+	u32 reg;
+
+	if (!cq_host->enabled || !mmc->cqe_on || cq_host->recovery_halt)
+		return;
+
+	if (cq_host->ops->disable)
+		cq_host->ops->disable(mmc, false);
+
+	cqhci_writel(cq_host, CQHCI_HALT, CQHCI_CTL);
+
+	timeout = ktime_add_us(ktime_get(), CQHCI_OFF_TIMEOUT);
+	while (1) {
+		timed_out = ktime_compare(ktime_get(), timeout) > 0;
+		reg = cqhci_readl(cq_host, CQHCI_CTL);
+		if ((reg & CQHCI_HALT) || timed_out)
+			break;
+	}
+
+	if (timed_out)
+		pr_err("%s: cqhci: CQE stuck on\n", mmc_hostname(mmc));
+	else
+		pr_debug("%s: cqhci: CQE off\n", mmc_hostname(mmc));
+
+	mmc->cqe_on = false;
+}
+
+static void cqhci_disable(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (!cq_host->enabled)
+		return;
+
+	cqhci_off(mmc);
+
+	__cqhci_disable(cq_host);
+
+	dmam_free_coherent(mmc_dev(mmc), cq_host->data_size,
+			   cq_host->trans_desc_base,
+			   cq_host->trans_desc_dma_base);
+
+	dmam_free_coherent(mmc_dev(mmc), cq_host->desc_size,
+			   cq_host->desc_base,
+			   cq_host->desc_dma_base);
+
+	cq_host->trans_desc_base = NULL;
+	cq_host->desc_base = NULL;
+
+	cq_host->enabled = false;
+}
+
+static void cqhci_prep_task_desc(struct mmc_request *mrq,
+					u64 *data, bool intr)
+{
+	u32 req_flags = mrq->data->flags;
+
+	*data = CQHCI_VALID(1) |
+		CQHCI_END(1) |
+		CQHCI_INT(intr) |
+		CQHCI_ACT(0x5) |
+		CQHCI_FORCED_PROG(!!(req_flags & MMC_DATA_FORCED_PRG)) |
+		CQHCI_DATA_TAG(!!(req_flags & MMC_DATA_DAT_TAG)) |
+		CQHCI_DATA_DIR(!!(req_flags & MMC_DATA_READ)) |
+		CQHCI_PRIORITY(!!(req_flags & MMC_DATA_PRIO)) |
+		CQHCI_QBAR(!!(req_flags & MMC_DATA_QBR)) |
+		CQHCI_REL_WRITE(!!(req_flags & MMC_DATA_REL_WR)) |
+		CQHCI_BLK_COUNT(mrq->data->blocks) |
+		CQHCI_BLK_ADDR((u64)mrq->data->blk_addr);
+
+	pr_debug("%s: cqhci: tag %d task descriptor 0x%016llx\n",
+		 mmc_hostname(mrq->host), mrq->tag, (unsigned long long)*data);
+}
+
+static int cqhci_dma_map(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int sg_count;
+	struct mmc_data *data = mrq->data;
+
+	if (!data)
+		return -EINVAL;
+
+	sg_count = dma_map_sg(mmc_dev(host), data->sg,
+			      data->sg_len,
+			      (data->flags & MMC_DATA_WRITE) ?
+			      DMA_TO_DEVICE : DMA_FROM_DEVICE);
+	if (!sg_count) {
+		pr_err("%s: sg-len: %d\n", __func__, data->sg_len);
+		return -ENOMEM;
+	}
+
+	return sg_count;
+}
+
+static void cqhci_set_tran_desc(u8 *desc,
+				 dma_addr_t addr, int len, bool end)
+{
+	__le64 *dataddr = (__le64 __force *)(desc + 4);
+	__le32 *attr = (__le32 __force *)desc;
+
+	*attr = (CQHCI_VALID(1) |
+		 CQHCI_END(end ? 1 : 0) |
+		 CQHCI_INT(0) |
+		 CQHCI_ACT(0x4) |
+		 CQHCI_DAT_LENGTH(len));
+
+	dataddr[0] = cpu_to_le64(addr);
+}
+
+static int cqhci_prep_tran_desc(struct mmc_request *mrq,
+			       struct cqhci_host *cq_host, int tag)
+{
+	struct mmc_data *data = mrq->data;
+	int i, sg_count, len;
+	bool end = false;
+	dma_addr_t addr;
+	u8 *desc;
+	struct scatterlist *sg;
+
+	sg_count = cqhci_dma_map(mrq->host, mrq);
+	if (sg_count < 0) {
+		pr_err("%s: %s: unable to map sg lists, %d\n",
+				mmc_hostname(mrq->host), __func__, sg_count);
+		return sg_count;
+	}
+
+	desc = get_trans_desc(cq_host, tag);
+
+	for_each_sg(data->sg, sg, sg_count, i) {
+		addr = sg_dma_address(sg);
+		len = sg_dma_len(sg);
+
+		if ((i+1) == sg_count)
+			end = true;
+		cqhci_set_tran_desc(desc, addr, len, end);
+		desc += cq_host->trans_desc_len;
+	}
+
+	return 0;
+}
+
+static void cqhci_prep_dcmd_desc(struct mmc_host *mmc,
+				   struct mmc_request *mrq)
+{
+	__le64 *task_desc = NULL;
+	u64 data = 0;
+	u8 resp_type;
+	u8 *desc;
+	__le64 *dataddr;
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	u8 timing;
+
+	if (!(mrq->cmd->flags & MMC_RSP_PRESENT)) {
+		resp_type = 0x0;
+		timing = 0x1;
+	} else {
+		if (mrq->cmd->flags & MMC_RSP_R1B) {
+			resp_type = 0x3;
+			timing = 0x0;
+		} else {
+			resp_type = 0x2;
+			timing = 0x1;
+		}
+	}
+
+	task_desc = (__le64 __force *)get_desc(cq_host, cq_host->dcmd_slot);
+	memset(task_desc, 0, cq_host->task_desc_len);
+	data |= (CQHCI_VALID(1) |
+		 CQHCI_END(1) |
+		 CQHCI_INT(1) |
+		 CQHCI_QBAR(1) |
+		 CQHCI_ACT(0x5) |
+		 CQHCI_CMD_INDEX(mrq->cmd->opcode) |
+		 CQHCI_CMD_TIMING(timing) | CQHCI_RESP_TYPE(resp_type));
+	*task_desc |= cpu_to_le64(data);
+	desc = (u8 *)task_desc;
+	pr_debug("%s: cqhci: dcmd: cmd: %d timing: %d resp: %d\n",
+		 mmc_hostname(mmc), mrq->cmd->opcode, timing, resp_type);
+	dataddr = (__le64 __force *)(desc + 4);
+	dataddr[0] = cpu_to_le64((u64)mrq->cmd->arg);
+
+}
+
+static void cqhci_post_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	struct mmc_data *data = mrq->data;
+
+	if (data) {
+		dma_unmap_sg(mmc_dev(host), data->sg, data->sg_len,
+			     (data->flags & MMC_DATA_READ) ?
+			     DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	}
+}
+
+static inline int cqhci_tag(struct mmc_request *mrq)
+{
+	return mrq->cmd ? DCMD_SLOT : mrq->tag;
+}
+
+static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+	int err = 0;
+	u64 data = 0;
+	u64 *task_desc = NULL;
+	int tag = cqhci_tag(mrq);
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	unsigned long flags;
+
+	if (!cq_host->enabled) {
+		pr_err("%s: cqhci: not enabled\n", mmc_hostname(mmc));
+		return -EINVAL;
+	}
+
+	/* First request after resume has to re-enable */
+	if (!cq_host->activated)
+		__cqhci_enable(cq_host);
+
+	if (!mmc->cqe_on) {
+		cqhci_writel(cq_host, 0, CQHCI_CTL);
+		mmc->cqe_on = true;
+		pr_debug("%s: cqhci: CQE on\n", mmc_hostname(mmc));
+		if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) {
+			pr_err("%s: cqhci: CQE failed to exit halt state\n",
+			       mmc_hostname(mmc));
+		}
+		if (cq_host->ops->enable)
+			cq_host->ops->enable(mmc);
+	}
+
+	if (mrq->data) {
+		task_desc = (__le64 __force *)get_desc(cq_host, tag);
+		cqhci_prep_task_desc(mrq, &data, 1);
+		*task_desc = cpu_to_le64(data);
+		err = cqhci_prep_tran_desc(mrq, cq_host, tag);
+		if (err) {
+			pr_err("%s: cqhci: failed to setup tx desc: %d\n",
+			       mmc_hostname(mmc), err);
+			return err;
+		}
+	} else {
+		cqhci_prep_dcmd_desc(mmc, mrq);
+	}
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+
+	if (cq_host->recovery_halt) {
+		err = -EBUSY;
+		goto out_unlock;
+	}
+
+	cq_host->slot[tag].mrq = mrq;
+	cq_host->slot[tag].flags = 0;
+
+	cq_host->qcnt += 1;
+
+	cqhci_writel(cq_host, 1 << tag, CQHCI_TDBR);
+	if (!(cqhci_readl(cq_host, CQHCI_TDBR) & (1 << tag)))
+		pr_debug("%s: cqhci: doorbell not set for tag %d\n",
+			 mmc_hostname(mmc), tag);
+out_unlock:
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	if (err)
+		cqhci_post_req(mmc, mrq);
+
+	return err;
+}
+
+static void cqhci_recovery_needed(struct mmc_host *mmc, struct mmc_request *mrq,
+				  bool notify)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (!cq_host->recovery_halt) {
+		cq_host->recovery_halt = true;
+		pr_debug("%s: cqhci: recovery needed\n", mmc_hostname(mmc));
+		wake_up(&cq_host->wait_queue);
+		if (notify && mmc->cqe_recovery_notifier)
+			mmc->cqe_recovery_notifier(mmc, mrq);
+	}
+}
+
+static unsigned int cqhci_error_flags(int error1, int error2)
+{
+	int error = error1 ? error1 : error2;
+
+	switch (error) {
+	case -EILSEQ:
+		return CQHCI_HOST_CRC;
+	case -ETIMEDOUT:
+		return CQHCI_HOST_TIMEOUT;
+	default:
+		return CQHCI_HOST_OTHER;
+	}
+}
+
+static void cqhci_error_irq(struct mmc_host *mmc, u32 status, int cmd_error,
+			    int data_error)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	struct cqhci_slot *slot;
+	u32 terri;
+	int tag;
+
+	spin_lock(&cq_host->lock);
+
+	terri = cqhci_readl(cq_host, CQHCI_TERRI);
+
+	pr_debug("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
+		 mmc_hostname(mmc), status, cmd_error, data_error, terri);
+
+	/* Forget about errors when recovery has already been triggered */
+	if (cq_host->recovery_halt)
+		goto out_unlock;
+
+	if (!cq_host->qcnt) {
+		WARN_ONCE(1, "%s: cqhci: error when idle. IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
+			  mmc_hostname(mmc), status, cmd_error, data_error,
+			  terri);
+		goto out_unlock;
+	}
+
+	if (CQHCI_TERRI_C_VALID(terri)) {
+		tag = CQHCI_TERRI_C_TASK(terri);
+		slot = &cq_host->slot[tag];
+		if (slot->mrq) {
+			slot->flags = cqhci_error_flags(cmd_error, data_error);
+			cqhci_recovery_needed(mmc, slot->mrq, true);
+		}
+	}
+
+	if (CQHCI_TERRI_D_VALID(terri)) {
+		tag = CQHCI_TERRI_D_TASK(terri);
+		slot = &cq_host->slot[tag];
+		if (slot->mrq) {
+			slot->flags = cqhci_error_flags(data_error, cmd_error);
+			cqhci_recovery_needed(mmc, slot->mrq, true);
+		}
+	}
+
+	if (!cq_host->recovery_halt) {
+		/*
+		 * The only way to guarantee forward progress is to mark at
+		 * least one task in error, so if none is indicated, pick one.
+		 */
+		for (tag = 0; tag < NUM_SLOTS; tag++) {
+			slot = &cq_host->slot[tag];
+			if (!slot->mrq)
+				continue;
+			slot->flags = cqhci_error_flags(data_error, cmd_error);
+			cqhci_recovery_needed(mmc, slot->mrq, true);
+			break;
+		}
+	}
+
+out_unlock:
+	spin_unlock(&cq_host->lock);
+}
+
+static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	struct cqhci_slot *slot = &cq_host->slot[tag];
+	struct mmc_request *mrq = slot->mrq;
+	struct mmc_data *data;
+
+	if (!mrq) {
+		WARN_ONCE(1, "%s: cqhci: spurious TCN for tag %d\n",
+			  mmc_hostname(mmc), tag);
+		return;
+	}
+
+	/* No completions allowed during recovery */
+	if (cq_host->recovery_halt) {
+		slot->flags |= CQHCI_COMPLETED;
+		return;
+	}
+
+	slot->mrq = NULL;
+
+	cq_host->qcnt -= 1;
+
+	data = mrq->data;
+	if (data) {
+		if (data->error)
+			data->bytes_xfered = 0;
+		else
+			data->bytes_xfered = data->blksz * data->blocks;
+	}
+
+	mmc_cqe_request_done(mmc, mrq);
+}
+
+irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
+		      int data_error)
+{
+	u32 status;
+	unsigned long tag = 0, comp_status;
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	status = cqhci_readl(cq_host, CQHCI_IS);
+	cqhci_writel(cq_host, status, CQHCI_IS);
+
+	pr_debug("%s: cqhci: IRQ status: 0x%08x\n", mmc_hostname(mmc), status);
+
+	if ((status & CQHCI_IS_RED) || cmd_error || data_error)
+		cqhci_error_irq(mmc, status, cmd_error, data_error);
+
+	if (status & CQHCI_IS_TCC) {
+		/* read TCN and complete the request */
+		comp_status = cqhci_readl(cq_host, CQHCI_TCN);
+		cqhci_writel(cq_host, comp_status, CQHCI_TCN);
+		pr_debug("%s: cqhci: TCN: 0x%08lx\n",
+			 mmc_hostname(mmc), comp_status);
+
+		spin_lock(&cq_host->lock);
+
+		for_each_set_bit(tag, &comp_status, cq_host->num_slots) {
+			/* complete the corresponding mrq */
+			pr_debug("%s: cqhci: completing tag %lu\n",
+				 mmc_hostname(mmc), tag);
+			cqhci_finish_mrq(mmc, tag);
+		}
+
+		if (cq_host->waiting_for_idle && !cq_host->qcnt) {
+			cq_host->waiting_for_idle = false;
+			wake_up(&cq_host->wait_queue);
+		}
+
+		spin_unlock(&cq_host->lock);
+	}
+
+	if (status & CQHCI_IS_TCL)
+		wake_up(&cq_host->wait_queue);
+
+	if (status & CQHCI_IS_HAC)
+		wake_up(&cq_host->wait_queue);
+
+	return IRQ_HANDLED;
+}
+EXPORT_SYMBOL(cqhci_irq);
+
+static bool cqhci_is_idle(struct cqhci_host *cq_host, int *ret)
+{
+	unsigned long flags;
+	bool is_idle;
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+	is_idle = !cq_host->qcnt || cq_host->recovery_halt;
+	*ret = cq_host->recovery_halt ? -EBUSY : 0;
+	cq_host->waiting_for_idle = !is_idle;
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	return is_idle;
+}
+
+static int cqhci_wait_for_idle(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	int ret;
+
+	wait_event(cq_host->wait_queue, cqhci_is_idle(cq_host, &ret));
+
+	return ret;
+}
+
+static bool cqhci_timeout(struct mmc_host *mmc, struct mmc_request *mrq,
+			  bool *recovery_needed)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	int tag = cqhci_tag(mrq);
+	struct cqhci_slot *slot = &cq_host->slot[tag];
+	unsigned long flags;
+	bool timed_out;
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+	timed_out = slot->mrq == mrq;
+	if (timed_out) {
+		slot->flags |= CQHCI_EXTERNAL_TIMEOUT;
+		cqhci_recovery_needed(mmc, mrq, false);
+		*recovery_needed = cq_host->recovery_halt;
+	}
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	if (timed_out) {
+		pr_err("%s: cqhci: timeout for tag %d\n",
+		       mmc_hostname(mmc), tag);
+		cqhci_dumpregs(cq_host);
+	}
+
+	return timed_out;
+}
+
+static bool cqhci_tasks_cleared(struct cqhci_host *cq_host)
+{
+	return !(cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_CLEAR_ALL_TASKS);
+}
+
+static bool cqhci_clear_all_tasks(struct mmc_host *mmc, unsigned int timeout)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	bool ret;
+	u32 ctl;
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_TCL);
+
+	ctl = cqhci_readl(cq_host, CQHCI_CTL);
+	ctl |= CQHCI_CLEAR_ALL_TASKS;
+	cqhci_writel(cq_host, ctl, CQHCI_CTL);
+
+	wait_event_timeout(cq_host->wait_queue, cqhci_tasks_cleared(cq_host),
+			   msecs_to_jiffies(timeout) + 1);
+
+	cqhci_set_irqs(cq_host, 0);
+
+	ret = cqhci_tasks_cleared(cq_host);
+
+	if (!ret)
+		pr_debug("%s: cqhci: Failed to clear tasks\n",
+			 mmc_hostname(mmc));
+
+	return ret;
+}
+
+static bool cqhci_halted(struct cqhci_host *cq_host)
+{
+	return cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT;
+}
+
+static bool cqhci_halt(struct mmc_host *mmc, unsigned int timeout)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	bool ret;
+	u32 ctl;
+
+	if (cqhci_halted(cq_host))
+		return true;
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_HAC);
+
+	ctl = cqhci_readl(cq_host, CQHCI_CTL);
+	ctl |= CQHCI_HALT;
+	cqhci_writel(cq_host, ctl, CQHCI_CTL);
+
+	wait_event_timeout(cq_host->wait_queue, cqhci_halted(cq_host),
+			   msecs_to_jiffies(timeout) + 1);
+
+	cqhci_set_irqs(cq_host, 0);
+
+	ret = cqhci_halted(cq_host);
+
+	if (!ret)
+		pr_debug("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
+
+	return ret;
+}
+
+/*
+ * After halting we expect to be able to use the command line. We interpret the
+ * failure to halt to mean the data lines might still be in use (and the upper
+ * layers will need to send a STOP command), so we set the timeout based on a
+ * generous command timeout.
+ */
+#define CQHCI_START_HALT_TIMEOUT	5
+
+static void cqhci_recovery_start(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	pr_debug("%s: cqhci: %s\n", mmc_hostname(mmc), __func__);
+
+	WARN_ON(!cq_host->recovery_halt);
+
+	cqhci_halt(mmc, CQHCI_START_HALT_TIMEOUT);
+
+	if (cq_host->ops->disable)
+		cq_host->ops->disable(mmc, true);
+
+	mmc->cqe_on = false;
+}
+
+static int cqhci_error_from_flags(unsigned int flags)
+{
+	if (!flags)
+		return 0;
+
+	/* CRC errors might indicate re-tuning so prefer to report that */
+	if (flags & CQHCI_HOST_CRC)
+		return -EILSEQ;
+
+	if (flags & (CQHCI_EXTERNAL_TIMEOUT | CQHCI_HOST_TIMEOUT))
+		return -ETIMEDOUT;
+
+	return -EIO;
+}
+
+static void cqhci_recover_mrq(struct cqhci_host *cq_host, unsigned int tag,
+			      bool forget_reqs)
+{
+	struct cqhci_slot *slot = &cq_host->slot[tag];
+	struct mmc_request *mrq = slot->mrq;
+	struct mmc_data *data;
+
+	if (!mrq)
+		return;
+
+	slot->mrq = NULL;
+
+	cq_host->qcnt -= 1;
+
+	data = mrq->data;
+	if (data) {
+		data->bytes_xfered = 0;
+		data->error = cqhci_error_from_flags(slot->flags);
+	} else {
+		mrq->cmd->error = cqhci_error_from_flags(slot->flags);
+	}
+
+	if (!forget_reqs)
+		mmc_cqe_request_done(cq_host->mmc, mrq);
+}
+
+static void cqhci_recover_mrqs(struct cqhci_host *cq_host, bool forget_reqs)
+{
+	int i;
+
+	for (i = 0; i < cq_host->num_slots; i++)
+		cqhci_recover_mrq(cq_host, i, forget_reqs);
+}
+
+/*
+ * By now the command and data lines should be unused so there is no reason for
+ * CQHCI to take a long time to halt, but if it doesn't halt there could be
+ * problems clearing tasks, so be generous.
+ */
+#define CQHCI_FINISH_HALT_TIMEOUT	20
+
+/* CQHCI could be expected to clear its internal state pretty quickly */
+#define CQHCI_CLEAR_TIMEOUT		20
+
+static void cqhci_recovery_finish(struct mmc_host *mmc, bool forget_reqs)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	unsigned long flags;
+	u32 cqcfg;
+	bool ok;
+
+	pr_debug("%s: cqhci: %s\n", mmc_hostname(mmc), __func__);
+
+	WARN_ON(!cq_host->recovery_halt);
+
+	ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
+
+	if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
+		ok = false;
+
+	/*
+	 * The specification contradicts itself: it says that tasks cannot be
+	 * cleared if CQHCI does not halt, yet that if CQHCI does not halt it
+	 * should be disabled/re-enabled, while also saying not to disable it
+	 * before clearing tasks. Have a go anyway.
+	 */
+	if (!ok) {
+		pr_debug("%s: cqhci: disable / re-enable\n", mmc_hostname(mmc));
+		cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+		cqcfg &= ~CQHCI_ENABLE;
+		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+		cqcfg |= CQHCI_ENABLE;
+		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+		/* Be sure that there are no tasks */
+		ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
+		if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
+			ok = false;
+		WARN_ON(!ok);
+	}
+
+	cqhci_recover_mrqs(cq_host, forget_reqs);
+
+	WARN_ON(cq_host->qcnt);
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+	cq_host->qcnt = 0;
+	cq_host->recovery_halt = false;
+	mmc->cqe_on = false;
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	/* Ensure all writes are done before interrupts are re-enabled */
+	wmb();
+
+	cqhci_writel(cq_host, CQHCI_IS_HAC | CQHCI_IS_TCL, CQHCI_IS);
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_MASK);
+
+	pr_debug("%s: cqhci: recovery done\n", mmc_hostname(mmc));
+}
+
+static const struct mmc_cqe_ops cqhci_cqe_ops = {
+	.cqe_enable = cqhci_enable,
+	.cqe_disable = cqhci_disable,
+	.cqe_request = cqhci_request,
+	.cqe_post_req = cqhci_post_req,
+	.cqe_off = cqhci_off,
+	.cqe_wait_for_idle = cqhci_wait_for_idle,
+	.cqe_timeout = cqhci_timeout,
+	.cqe_recovery_start = cqhci_recovery_start,
+	.cqe_recovery_finish = cqhci_recovery_finish,
+};
+
+struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev)
+{
+	struct cqhci_host *cq_host;
+	struct resource *cqhci_memres = NULL;
+
+	/* check and setup CMDQ interface */
+	cqhci_memres = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						   "cqhci_mem");
+	if (!cqhci_memres) {
+		dev_dbg(&pdev->dev, "CMDQ not supported\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	cq_host = devm_kzalloc(&pdev->dev, sizeof(*cq_host), GFP_KERNEL);
+	if (!cq_host)
+		return ERR_PTR(-ENOMEM);
+	cq_host->mmio = devm_ioremap(&pdev->dev,
+				     cqhci_memres->start,
+				     resource_size(cqhci_memres));
+	if (!cq_host->mmio) {
+		dev_err(&pdev->dev, "failed to remap cqhci regs\n");
+		return ERR_PTR(-EBUSY);
+	}
+	dev_dbg(&pdev->dev, "CMDQ ioremap: done\n");
+
+	return cq_host;
+}
+EXPORT_SYMBOL(cqhci_pltfm_init);
+
+static unsigned int cqhci_ver_major(struct cqhci_host *cq_host)
+{
+	return CQHCI_VER_MAJOR(cqhci_readl(cq_host, CQHCI_VER));
+}
+
+static unsigned int cqhci_ver_minor(struct cqhci_host *cq_host)
+{
+	u32 ver = cqhci_readl(cq_host, CQHCI_VER);
+
+	return CQHCI_VER_MINOR1(ver) * 10 + CQHCI_VER_MINOR2(ver);
+}
+
+int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc,
+	      bool dma64)
+{
+	int err;
+
+	cq_host->dma64 = dma64;
+	cq_host->mmc = mmc;
+	cq_host->mmc->cqe_private = cq_host;
+
+	cq_host->num_slots = NUM_SLOTS;
+	cq_host->dcmd_slot = DCMD_SLOT;
+
+	mmc->cqe_ops = &cqhci_cqe_ops;
+
+	mmc->cqe_qdepth = NUM_SLOTS;
+	if (mmc->caps2 & MMC_CAP2_CQE_DCMD)
+		mmc->cqe_qdepth -= 1;
+
+	cq_host->slot = devm_kcalloc(mmc_dev(mmc), cq_host->num_slots,
+				     sizeof(*cq_host->slot), GFP_KERNEL);
+	if (!cq_host->slot) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+
+	spin_lock_init(&cq_host->lock);
+
+	init_completion(&cq_host->halt_comp);
+	init_waitqueue_head(&cq_host->wait_queue);
+
+	pr_info("%s: CQHCI version %u.%02u\n",
+		mmc_hostname(mmc), cqhci_ver_major(cq_host),
+		cqhci_ver_minor(cq_host));
+
+	return 0;
+
+out_err:
+	pr_err("%s: CQHCI version %u.%02u failed to initialize, error %d\n",
+	       mmc_hostname(mmc), cqhci_ver_major(cq_host),
+	       cqhci_ver_minor(cq_host), err);
+	return err;
+}
+EXPORT_SYMBOL(cqhci_init);
+
+MODULE_AUTHOR("Venkat Gopalakrishnan <venkatg@codeaurora.org>");
+MODULE_DESCRIPTION("Command Queue Host Controller Interface driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
new file mode 100644
index 000000000000..2d39d361b322
--- /dev/null
+++ b/drivers/mmc/host/cqhci.h
@@ -0,0 +1,240 @@
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#ifndef LINUX_MMC_CQHCI_H
+#define LINUX_MMC_CQHCI_H
+
+#include <linux/compiler.h>
+#include <linux/bitops.h>
+#include <linux/spinlock_types.h>
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/wait.h>
+#include <linux/irqreturn.h>
+#include <asm/io.h>
+
+/* registers */
+/* version */
+#define CQHCI_VER			0x00
+#define CQHCI_VER_MAJOR(x)		(((x) & GENMASK(11, 8)) >> 8)
+#define CQHCI_VER_MINOR1(x)		(((x) & GENMASK(7, 4)) >> 4)
+#define CQHCI_VER_MINOR2(x)		((x) & GENMASK(3, 0))
+
+/* capabilities */
+#define CQHCI_CAP			0x04
+/* configuration */
+#define CQHCI_CFG			0x08
+#define CQHCI_DCMD			0x00001000
+#define CQHCI_TASK_DESC_SZ		0x00000100
+#define CQHCI_ENABLE			0x00000001
+
+/* control */
+#define CQHCI_CTL			0x0C
+#define CQHCI_CLEAR_ALL_TASKS		0x00000100
+#define CQHCI_HALT			0x00000001
+
+/* interrupt status */
+#define CQHCI_IS			0x10
+#define CQHCI_IS_HAC			BIT(0)
+#define CQHCI_IS_TCC			BIT(1)
+#define CQHCI_IS_RED			BIT(2)
+#define CQHCI_IS_TCL			BIT(3)
+
+#define CQHCI_IS_MASK (CQHCI_IS_TCC | CQHCI_IS_RED)
+
+/* interrupt status enable */
+#define CQHCI_ISTE			0x14
+
+/* interrupt signal enable */
+#define CQHCI_ISGE			0x18
+
+/* interrupt coalescing */
+#define CQHCI_IC			0x1C
+#define CQHCI_IC_ENABLE			BIT(31)
+#define CQHCI_IC_RESET			BIT(16)
+#define CQHCI_IC_ICCTHWEN		BIT(15)
+#define CQHCI_IC_ICCTH(x)		((x & 0x1F) << 8)
+#define CQHCI_IC_ICTOVALWEN		BIT(7)
+#define CQHCI_IC_ICTOVAL(x)		(x & 0x7F)
+
+/* task list base address */
+#define CQHCI_TDLBA			0x20
+
+/* task list base address upper */
+#define CQHCI_TDLBAU			0x24
+
+/* door-bell */
+#define CQHCI_TDBR			0x28
+
+/* task completion notification */
+#define CQHCI_TCN			0x2C
+
+/* device queue status */
+#define CQHCI_DQS			0x30
+
+/* device pending tasks */
+#define CQHCI_DPT			0x34
+
+/* task clear */
+#define CQHCI_TCLR			0x38
+
+/* send status config 1 */
+#define CQHCI_SSC1			0x40
+
+/* send status config 2 */
+#define CQHCI_SSC2			0x44
+
+/* response for dcmd */
+#define CQHCI_CRDCT			0x48
+
+/* response mode error mask */
+#define CQHCI_RMEM			0x50
+
+/* task error info */
+#define CQHCI_TERRI			0x54
+
+#define CQHCI_TERRI_C_INDEX(x)		((x) & GENMASK(5, 0))
+#define CQHCI_TERRI_C_TASK(x)		(((x) & GENMASK(12, 8)) >> 8)
+#define CQHCI_TERRI_C_VALID(x)		((x) & BIT(15))
+#define CQHCI_TERRI_D_INDEX(x)		(((x) & GENMASK(21, 16)) >> 16)
+#define CQHCI_TERRI_D_TASK(x)		(((x) & GENMASK(28, 24)) >> 24)
+#define CQHCI_TERRI_D_VALID(x)		((x) & BIT(31))
+
+/* command response index */
+#define CQHCI_CRI			0x58
+
+/* command response argument */
+#define CQHCI_CRA			0x5C
+
+#define CQHCI_INT_ALL			0xF
+#define CQHCI_IC_DEFAULT_ICCTH		31
+#define CQHCI_IC_DEFAULT_ICTOVAL	1
+
+/* attribute fields */
+#define CQHCI_VALID(x)			((x & 1) << 0)
+#define CQHCI_END(x)			((x & 1) << 1)
+#define CQHCI_INT(x)			((x & 1) << 2)
+#define CQHCI_ACT(x)			((x & 0x7) << 3)
+
+/* data command task descriptor fields */
+#define CQHCI_FORCED_PROG(x)		((x & 1) << 6)
+#define CQHCI_CONTEXT(x)		((x & 0xF) << 7)
+#define CQHCI_DATA_TAG(x)		((x & 1) << 11)
+#define CQHCI_DATA_DIR(x)		((x & 1) << 12)
+#define CQHCI_PRIORITY(x)		((x & 1) << 13)
+#define CQHCI_QBAR(x)			((x & 1) << 14)
+#define CQHCI_REL_WRITE(x)		((x & 1) << 15)
+#define CQHCI_BLK_COUNT(x)		((x & 0xFFFF) << 16)
+#define CQHCI_BLK_ADDR(x)		((x & 0xFFFFFFFF) << 32)
+
+/* direct command task descriptor fields */
+#define CQHCI_CMD_INDEX(x)		((x & 0x3F) << 16)
+#define CQHCI_CMD_TIMING(x)		((x & 1) << 22)
+#define CQHCI_RESP_TYPE(x)		((x & 0x3) << 23)
+
+/* transfer descriptor fields */
+#define CQHCI_DAT_LENGTH(x)		((x & 0xFFFF) << 16)
+#define CQHCI_DAT_ADDR_LO(x)		((x & 0xFFFFFFFF) << 32)
+#define CQHCI_DAT_ADDR_HI(x)		((x & 0xFFFFFFFF) << 0)
+
+struct cqhci_host_ops;
+struct mmc_host;
+struct cqhci_slot;
+
+struct cqhci_host {
+	const struct cqhci_host_ops *ops;
+	void __iomem *mmio;
+	struct mmc_host *mmc;
+
+	spinlock_t lock;
+
+	/* relative card address of device */
+	unsigned int rca;
+
+	/* 64 bit DMA */
+	bool dma64;
+	int num_slots;
+	int qcnt;
+
+	u32 dcmd_slot;
+	u32 caps;
+#define CQHCI_TASK_DESC_SZ_128		0x1
+
+	u32 quirks;
+#define CQHCI_QUIRK_SHORT_TXFR_DESC_SZ	0x1
+
+	bool enabled;
+	bool halted;
+	bool init_done;
+	bool activated;
+	bool waiting_for_idle;
+	bool recovery_halt;
+
+	size_t desc_size;
+	size_t data_size;
+
+	u8 *desc_base;
+
+	/* total size of one slot: task descriptor + link descriptor */
+	u8 slot_sz;
+
+	/* 64/128 bit depends on CQHCI_CFG */
+	u8 task_desc_len;
+
+	/* 64 bit on 32-bit arch, 128 bit on 64-bit */
+	u8 link_desc_len;
+
+	u8 *trans_desc_base;
+	/* the length of one transfer descriptor */
+	u8 trans_desc_len;
+
+	dma_addr_t desc_dma_base;
+	dma_addr_t trans_desc_dma_base;
+
+	struct completion halt_comp;
+	wait_queue_head_t wait_queue;
+	struct cqhci_slot *slot;
+};
+
+struct cqhci_host_ops {
+	void (*dumpregs)(struct mmc_host *mmc);
+	void (*write_l)(struct cqhci_host *host, u32 val, int reg);
+	u32 (*read_l)(struct cqhci_host *host, int reg);
+	void (*enable)(struct mmc_host *mmc);
+	void (*disable)(struct mmc_host *mmc, bool recovery);
+};
+
+static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
+{
+	if (unlikely(host->ops->write_l))
+		host->ops->write_l(host, val, reg);
+	else
+		writel_relaxed(val, host->mmio + reg);
+}
+
+static inline u32 cqhci_readl(struct cqhci_host *host, int reg)
+{
+	if (unlikely(host->ops->read_l))
+		return host->ops->read_l(host, reg);
+	else
+		return readl_relaxed(host->mmio + reg);
+}
+
+struct platform_device;
+
+irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
+		      int data_error);
+int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc, bool dma64);
+struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev);
+int cqhci_suspend(struct mmc_host *mmc);
+int cqhci_resume(struct mmc_host *mmc);
+
+#endif
-- 
1.9.1
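
For host driver authors reading along, a hypothetical example of the
cqhci_host_ops hooks from cqhci.h; everything prefixed my_ or MY_ is made
up for illustration, and only the ops signatures come from the patch:

static u32 my_cqhci_read_l(struct cqhci_host *host, int reg)
{
	/* e.g. apply a controller-specific register offset */
	return readl_relaxed(host->mmio + MY_CQE_REG_OFFSET + reg);
}

static void my_cqhci_write_l(struct cqhci_host *host, u32 val, int reg)
{
	writel_relaxed(val, host->mmio + MY_CQE_REG_OFFSET + reg);
}

static void my_cqhci_enable(struct mmc_host *mmc)
{
	/* switch the controller's data path over to the CQE */
}

static void my_cqhci_disable(struct mmc_host *mmc, bool recovery)
{
	/* undo my_cqhci_enable(); also reset the data path if recovery */
}

static const struct cqhci_host_ops my_cqhci_ops = {
	.read_l  = my_cqhci_read_l,
	.write_l = my_cqhci_write_l,
	.enable  = my_cqhci_enable,
	.disable = my_cqhci_disable,
};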


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly
  2017-03-13 12:36 ` [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly Adrian Hunter
@ 2017-03-14 16:22   ` Ulf Hansson
  0 siblings, 0 replies; 50+ messages in thread
From: Ulf Hansson @ 2017-03-14 16:22 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Commit 15520111500c ("mmc: core: Further fix thread wake-up") allowed a
> queue to release the host with is_waiting_last_req set to true. A queue
> waiting to claim the host will not reset it, which can result in the
> queue getting stuck in a loop.
>
> Fixes: 15520111500c ("mmc: core: Further fix thread wake-up")
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Cc: stable@vger.kernel.org # v4.10+

Thanks, applied for fixes!

Kind regards
Uffe

> ---
>  drivers/mmc/core/block.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index 1621fa08e206..e59107ca512a 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -1817,6 +1817,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
>                 mmc_blk_issue_flush(mq, req);
>         } else {
>                 mmc_blk_issue_rw_rq(mq, req);
> +               card->host->context_info.is_waiting_last_req = false;
>         }
>
>  out:
> --
> 1.9.1
>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 02/22] mmc: block: Fix cmd error reset failure path
  2017-03-13 12:36 ` [PATCH V2 02/22] mmc: block: Fix cmd error reset failure path Adrian Hunter
@ 2017-03-14 16:22   ` Ulf Hansson
  0 siblings, 0 replies; 50+ messages in thread
From: Ulf Hansson @ 2017-03-14 16:22 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Commit 4e1f780032c5 ("mmc: block: break out mmc_blk_rw_cmd_abort()")
> assumed the request had not completed, but in one case it had. Fix that.
>
> Fixes: 4e1f780032c5 ("mmc: block: break out mmc_blk_rw_cmd_abort()")
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Thanks, applied for fixes!

Kind regards
Uffe

> ---
>  drivers/mmc/core/block.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index e59107ca512a..05afefcfb611 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -1701,7 +1701,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
>                 case MMC_BLK_CMD_ERR:
>                         req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
>                         if (mmc_blk_reset(md, card->host, type)) {
> -                               mmc_blk_rw_cmd_abort(card, old_req);
> +                               if (req_pending)
> +                                       mmc_blk_rw_cmd_abort(card, old_req);
>                                 mmc_blk_rw_try_restart(mq, new_req);
>                                 return;
>                         }
> --
> 1.9.1
>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (21 preceding siblings ...)
  2017-03-13 12:36 ` [PATCH V2 22/22] mmc: cqhci: support for command queue enabled host Adrian Hunter
@ 2017-04-08 17:37 ` Linus Walleij
  2017-04-10 13:53 ` Ulf Hansson
  23 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:37 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Here are the hardware command queue patches without the software command
> queue patches or sdhci patches.

I think we have to merge at least a bunch of these patches; there is
evidently a bunch of stuff in the beginning of the series, at least,
that I have nothing against.

They will royally screw up my patch stack for MQ but that is not a technical
argument, just work.

I will try to go in and provide Reviewed-by tags for all stuff I consider
merge material.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 03/22] mmc: block: Use local var for mqrq_cur
  2017-03-13 12:36 ` [PATCH V2 03/22] mmc: block: Use local var for mqrq_cur Adrian Hunter
@ 2017-04-08 17:37   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:37 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
> assigning it to a local variable.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-03-13 12:36 ` [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
@ 2017-04-08 17:39   ` Linus Walleij
  2017-04-10 11:01   ` Ulf Hansson
  1 sibling, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:39 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Add helper functions to enable or disable the Command Queue.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 04/22] mmc: block: Introduce queue semantics
  2017-03-13 12:36 ` [PATCH V2 04/22] mmc: block: Introduce queue semantics Adrian Hunter
@ 2017-04-08 17:40   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:40 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Change from viewing the requests in progress as 'current' and 'previous',
> to viewing them as a queue. The current request is allocated to the first
> free slot. The presence of incomplete requests is determined from the
> count (mq->qcnt) of entries in the queue. Non-read-write requests (i.e.
> discards and flushes) are not added to the queue at all and require no
> special handling. Also no special handling is needed for the
> MMC_BLK_NEW_REQUEST case.
>
> As well as allowing an arbitrarily sized queue, the queue thread function
> is significantly simpler.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
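
To make the "first free slot" idea concrete, a hypothetical sketch against
the qslots/qdepth/qcnt fields visible in queue.h earlier in this thread;
the series implements the real logic in mmc_queue_req_find():

static int mmc_queue_claim_slot(struct mmc_queue *mq)
{
	int slot = find_first_zero_bit(&mq->qslots, mq->qdepth);

	if (slot >= mq->qdepth)
		return -EBUSY;			/* queue is full */

	__set_bit(slot, &mq->qslots);
	mq->qcnt += 1;				/* requests in progress */

	return slot;
}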

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 05/22] mmc: queue: Share mmc request array between partitions
  2017-03-13 12:36 ` [PATCH V2 05/22] mmc: queue: Share mmc request array between partitions Adrian Hunter
@ 2017-04-08 17:41   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:41 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> eMMC can have multiple internal partitions that are represented as separate
> disks / queues. However switching between partitions is only done when the
> queue is empty. Consequently the array of mmc requests that are queued can
> be shared between partitions saving memory.
>
> Keep a pointer to the mmc request queue on the card, and use that instead
> of allocating a new one for each partition.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

This is an especially nice patch, and even just merging this patch set
to this point is already a significant improvement to the core code IMO.
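
A hypothetical sketch of the sharing described in the commit message; the
field names are illustrative, not necessarily those used in the patch:

/* Illustrative only: the request array hangs off the card, so each
 * partition's queue borrows it instead of allocating its own. */
struct my_card_example {
	struct mmc_queue_req	*mqrq;		/* shared request array */
	int			qdepth;		/* its size */
};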

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 07/22] mmc: mmc_test: Disable Command Queue while mmc_test is used
  2017-03-13 12:36 ` [PATCH V2 07/22] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
@ 2017-04-08 17:43   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:43 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Normal read and write commands may not be used while the command queue is
> enabled. Disable the Command Queue when mmc_test is probed and re-enable it
> when it is removed.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Though if I get it right, there is no test for the command queue
functionality then, really. The tests just test common stuff. So
we would maybe want to add tests for CMDQ.
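
A rough sketch of what such a test entry could look like, assuming
mmc_test's existing mmc_test_case structure; the test body is entirely
hypothetical and would need the CQE issuing support from this series:

/* Hypothetical CMDQ test case for mmc_test */
static int mmc_test_cmdq_rw(struct mmc_test_card *test)
{
	/* queue several reads/writes via the CQE and verify the data */
	return 0;
}

static const struct mmc_test_case mmc_test_cmdq_case = {
	.name = "Command queue read/write",
	.run = mmc_test_cmdq_rw,
};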

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 08/22] mmc: block: Disable Command Queue while RPMB is used
  2017-03-13 12:36 ` [PATCH V2 08/22] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
@ 2017-04-08 17:44   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-08 17:44 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> RPMB does not allow Command Queue commands. Disable and re-enable the
> Command Queue when switching.
>
> Note that the driver only switches partitions when the queue is empty.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-03-13 12:36 ` [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
  2017-04-08 17:39   ` Linus Walleij
@ 2017-04-10 11:01   ` Ulf Hansson
  2017-04-10 11:11     ` Adrian Hunter
  1 sibling, 1 reply; 50+ messages in thread
From: Ulf Hansson @ 2017-04-10 11:01 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Add helper functions to enable or disable the Command Queue.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  Documentation/mmc/mmc-dev-attrs.txt |  1 +
>  drivers/mmc/core/mmc.c              |  2 ++
>  drivers/mmc/core/mmc_ops.c          | 28 ++++++++++++++++++++++++++++
>  drivers/mmc/core/mmc_ops.h          |  2 ++
>  include/linux/mmc/card.h            |  1 +
>  5 files changed, 34 insertions(+)
>
> diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
> index 404a0e9e92b0..dcd1252877fb 100644
> --- a/Documentation/mmc/mmc-dev-attrs.txt
> +++ b/Documentation/mmc/mmc-dev-attrs.txt
> @@ -30,6 +30,7 @@ All attributes are read-only.
>         rel_sectors             Reliable write sector count
>         ocr                     Operation Conditions Register
>         dsr                     Driver Stage Register
> +       cmdq_en                 Command Queue enabled: 1 => enabled, 0 => not enabled
>
>  Note on Erase Size and Preferred Erase Size:
>
> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
> index 7fd722868875..5727a0842a59 100644
> --- a/drivers/mmc/core/mmc.c
> +++ b/drivers/mmc/core/mmc.c
> @@ -790,6 +790,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
>  MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
>  MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
>  MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
> +MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);

Why do we need to be able to change this from userspace?

[...]

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-04-10 11:01   ` Ulf Hansson
@ 2017-04-10 11:11     ` Adrian Hunter
  2017-04-10 13:02       ` Ulf Hansson
  0 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-04-10 11:11 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 10/04/17 14:01, Ulf Hansson wrote:
> On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
>> Add helper functions to enable or disable the Command Queue.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> ---
>>  Documentation/mmc/mmc-dev-attrs.txt |  1 +
>>  drivers/mmc/core/mmc.c              |  2 ++
>>  drivers/mmc/core/mmc_ops.c          | 28 ++++++++++++++++++++++++++++
>>  drivers/mmc/core/mmc_ops.h          |  2 ++
>>  include/linux/mmc/card.h            |  1 +
>>  5 files changed, 34 insertions(+)
>>
>> diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
>> index 404a0e9e92b0..dcd1252877fb 100644
>> --- a/Documentation/mmc/mmc-dev-attrs.txt
>> +++ b/Documentation/mmc/mmc-dev-attrs.txt
>> @@ -30,6 +30,7 @@ All attributes are read-only.
>>         rel_sectors             Reliable write sector count
>>         ocr                     Operation Conditions Register
>>         dsr                     Driver Stage Register
>> +       cmdq_en                 Command Queue enabled: 1 => enabled, 0 => not enabled
>>
>>  Note on Erase Size and Preferred Erase Size:
>>
>> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
>> index 7fd722868875..5727a0842a59 100644
>> --- a/drivers/mmc/core/mmc.c
>> +++ b/drivers/mmc/core/mmc.c
>> @@ -790,6 +790,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
>>  MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
>>  MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
>>  MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
>> +MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
> 
> Why do we need to be able to change this from userspace?

MMC_DEV_ATTR makes it read-only, so it is just a way for userspace to see if
command queue is enabled.
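
For reference, MMC_DEV_ATTR expands to a show-only sysfs attribute along
these lines (paraphrased from drivers/mmc/core/mmc.c; the S_IRUGO mode and
the NULL store method are what make it read-only):

static ssize_t mmc_cmdq_en_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct mmc_card *card = mmc_dev_to_card(dev);

	return sprintf(buf, "%d\n", card->ext_csd.cmdq_en);
}
static DEVICE_ATTR(cmdq_en, S_IRUGO, mmc_cmdq_en_show, NULL);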


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-04-10 11:11     ` Adrian Hunter
@ 2017-04-10 13:02       ` Ulf Hansson
  0 siblings, 0 replies; 50+ messages in thread
From: Ulf Hansson @ 2017-04-10 13:02 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 10 April 2017 at 13:11, Adrian Hunter <adrian.hunter@intel.com> wrote:
> On 10/04/17 14:01, Ulf Hansson wrote:
>> On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>> Add helper functions to enable or disable the Command Queue.
>>>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>> ---
>>>  Documentation/mmc/mmc-dev-attrs.txt |  1 +
>>>  drivers/mmc/core/mmc.c              |  2 ++
>>>  drivers/mmc/core/mmc_ops.c          | 28 ++++++++++++++++++++++++++++
>>>  drivers/mmc/core/mmc_ops.h          |  2 ++
>>>  include/linux/mmc/card.h            |  1 +
>>>  5 files changed, 34 insertions(+)
>>>
>>> diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
>>> index 404a0e9e92b0..dcd1252877fb 100644
>>> --- a/Documentation/mmc/mmc-dev-attrs.txt
>>> +++ b/Documentation/mmc/mmc-dev-attrs.txt
>>> @@ -30,6 +30,7 @@ All attributes are read-only.
>>>         rel_sectors             Reliable write sector count
>>>         ocr                     Operation Conditions Register
>>>         dsr                     Driver Stage Register
>>> +       cmdq_en                 Command Queue enabled: 1 => enabled, 0 => not enabled
>>>
>>>  Note on Erase Size and Preferred Erase Size:
>>>
>>> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
>>> index 7fd722868875..5727a0842a59 100644
>>> --- a/drivers/mmc/core/mmc.c
>>> +++ b/drivers/mmc/core/mmc.c
>>> @@ -790,6 +790,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
>>>  MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
>>>  MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
>>>  MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
>>> +MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
>>
>> Why do we need to be able to change this from userspace?
>
> MMC_DEV_ATTR makes it read-only, so it is just a way for userspace to see
> whether the command queue is enabled.

Of course! Thanks for the clarification - and yes, that makes sense!

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 09/22] mmc: block: Change mmc_apply_rel_rw() to get block address from the request
  2017-03-13 12:36 ` [PATCH V2 09/22] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
@ 2017-04-10 13:49   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-10 13:49 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> mmc_apply_rel_rw() will be used by Software Command Queuing also. In that
> case the command argument is not the block address so change
> mmc_apply_rel_rw() to get block address from the request.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Plus it is a good readability change on its own merits.
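
For readers without the patch in front of them, the gist is roughly the
following (a sketch of the resulting helper in drivers/mmc/core/block.c,
reconstructed from the patch description rather than copied from the diff):

static inline void mmc_apply_rel_rw(struct mmc_blk_request *brq,
				    struct mmc_card *card,
				    struct request *req)
{
	if (!(card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN)) {
		/* Legacy mode imposes restrictions on transfers. */

		/*
		 * This used to test brq->cmd.arg, which only equals the
		 * block address in the non-command-queue case;
		 * blk_rq_pos(req) is valid for both.
		 */
		if (!IS_ALIGNED(blk_rq_pos(req), card->ext_csd.rel_sectors))
			brq->data.blocks = 1;

		if (brq->data.blocks > card->ext_csd.rel_sectors)
			brq->data.blocks = card->ext_csd.rel_sectors;
		else if (brq->data.blocks < card->ext_csd.rel_sectors)
			brq->data.blocks = 1;
	}
}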

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 10/22] mmc: block: Factor out data preparation
  2017-03-13 12:36 ` [PATCH V2 10/22] mmc: block: Factor out data preparation Adrian Hunter
@ 2017-04-10 13:52   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-10 13:52 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Factor out data preparation into a separate function mmc_blk_data_prep()
> which can be re-used for command queuing.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

This change also stands nicely on its own for readability.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
                   ` (22 preceding siblings ...)
  2017-04-08 17:37 ` [PATCH V2 00/22] mmc: Add Command Queue support Linus Walleij
@ 2017-04-10 13:53 ` Ulf Hansson
  2017-04-22  7:45   ` Adrian Hunter
  23 siblings, 1 reply; 50+ messages in thread
From: Ulf Hansson @ 2017-04-10 13:53 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Hi
>
> Here are the hardware command queue patches without the software command
> queue patches or sdhci patches.
>
> Changes since V1:
>
>         "Share mmc request array between partitions" is dependent
>         on changes in "Introduce queue semantics", so added that
>         and block fixes:
>
>         Added "Fix is_waiting_last_req set incorrectly"
>         Added "Fix cmd error reset failure path"
>         Added "Use local var for mqrq_cur"
>         Added "Introduce queue semantics"
>
> Changes since RFC:
>
>         Re-based on next.
>         Added comment about command queue priority.
>         Added some acks and reviews.
>
>
> Adrian Hunter (21):
>       mmc: block: Fix is_waiting_last_req set incorrectly
>       mmc: block: Fix cmd error reset failure path
>       mmc: block: Use local var for mqrq_cur
>       mmc: block: Introduce queue semantics
>       mmc: queue: Share mmc request array between partitions
>       mmc: mmc: Add functions to enable / disable the Command Queue
>       mmc: mmc_test: Disable Command Queue while mmc_test is used
>       mmc: block: Disable Command Queue while RPMB is used
>       mmc: block: Change mmc_apply_rel_rw() to get block address from the request
>       mmc: block: Factor out data preparation
>       mmc: core: Factor out debug prints from mmc_start_request()
>       mmc: core: Factor out mrq preparation from mmc_start_request()
>       mmc: core: Add mmc_retune_hold_now()
>       mmc: core: Add members to mmc_request and mmc_data for CQE's
>       mmc: host: Add CQE interface
>       mmc: core: Turn off CQE before sending commands
>       mmc: core: Add support for handling CQE requests
>       mmc: mmc: Enable Command Queuing
>       mmc: mmc: Enable CQE's
>       mmc: block: Prepare CQE data
>       mmc: block: Add CQE support
>
> Venkat Gopalakrishnan (1):
>       mmc: cqhci: support for command queue enabled host
>
>  Documentation/mmc/mmc-dev-attrs.txt |    1 +
>  drivers/mmc/core/block.c            |  527 ++++++++++++----
>  drivers/mmc/core/block.h            |    7 +
>  drivers/mmc/core/bus.c              |    7 +
>  drivers/mmc/core/core.c             |  203 ++++++-
>  drivers/mmc/core/host.c             |    6 +
>  drivers/mmc/core/host.h             |    1 +
>  drivers/mmc/core/mmc.c              |   39 +-
>  drivers/mmc/core/mmc_ops.c          |   28 +
>  drivers/mmc/core/mmc_ops.h          |    2 +
>  drivers/mmc/core/mmc_test.c         |   14 +
>  drivers/mmc/core/queue.c            |  607 ++++++++++++++----
>  drivers/mmc/core/queue.h            |   55 +-
>  drivers/mmc/host/Kconfig            |   13 +
>  drivers/mmc/host/Makefile           |    1 +
>  drivers/mmc/host/cqhci.c            | 1148 +++++++++++++++++++++++++++++++++++
>  drivers/mmc/host/cqhci.h            |  240 ++++++++
>  include/linux/mmc/card.h            |    8 +
>  include/linux/mmc/core.h            |   19 +-
>  include/linux/mmc/host.h            |   24 +
>  include/trace/events/mmc.h          |   17 +-
>  21 files changed, 2694 insertions(+), 273 deletions(-)
>  create mode 100644 drivers/mmc/host/cqhci.c
>  create mode 100644 drivers/mmc/host/cqhci.h
>
>
> Regards
> Adrian

Sorry for the delay!

I have looked through parts of the new version of the series and have
just reached patch 12. Before I continue, and to keep things moving
forward, I have picked up patches 3 -> 12 (including 12). Patches 1
and 2 went in as fixes earlier.

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 11/22] mmc: core: Factor out debug prints from mmc_start_request()
  2017-03-13 12:36 ` [PATCH V2 11/22] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
@ 2017-04-10 13:53   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-10 13:53 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> In preparation to reuse the code for CQE support.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 12/22] mmc: core: Factor out mrq preparation from mmc_start_request()
  2017-03-13 12:36 ` [PATCH V2 12/22] mmc: core: Factor out mrq preparation " Adrian Hunter
@ 2017-04-10 13:54   ` Linus Walleij
  0 siblings, 0 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-10 13:54 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Mon, Mar 13, 2017 at 1:36 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> In preparation to reuse the code for CQE support.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-04-10 13:53 ` Ulf Hansson
@ 2017-04-22  7:45   ` Adrian Hunter
  2017-04-24  8:12     ` Linus Walleij
  0 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-04-22  7:45 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij, Shawn Lin

On 04/10/2017 04:53 PM, Ulf Hansson wrote:
> On 13 March 2017 at 13:36, Adrian Hunter <adrian.hunter@intel.com> wrote:
>> Hi
>>
>> Here are the hardware command queue patches without the software command
>> queue patches or sdhci patches.
>>
>> Changes since V1:
>>
>>         "Share mmc request array between partitions" is dependent
>>         on changes in "Introduce queue semantics", so added that
>>         and block fixes:
>>
>>         Added "Fix is_waiting_last_req set incorrectly"
>>         Added "Fix cmd error reset failure path"
>>         Added "Use local var for mqrq_cur"
>>         Added "Introduce queue semantics"
>>
>> Changes since RFC:
>>
>>         Re-based on next.
>>         Added comment about command queue priority.
>>         Added some acks and reviews.
>>
>>
>> Adrian Hunter (21):
>>       mmc: block: Fix is_waiting_last_req set incorrectly
>>       mmc: block: Fix cmd error reset failure path
>>       mmc: block: Use local var for mqrq_cur
>>       mmc: block: Introduce queue semantics
>>       mmc: queue: Share mmc request array between partitions
>>       mmc: mmc: Add functions to enable / disable the Command Queue
>>       mmc: mmc_test: Disable Command Queue while mmc_test is used
>>       mmc: block: Disable Command Queue while RPMB is used
>>       mmc: block: Change mmc_apply_rel_rw() to get block address from the request
>>       mmc: block: Factor out data preparation
>>       mmc: core: Factor out debug prints from mmc_start_request()
>>       mmc: core: Factor out mrq preparation from mmc_start_request()
>>       mmc: core: Add mmc_retune_hold_now()
>>       mmc: core: Add members to mmc_request and mmc_data for CQE's
>>       mmc: host: Add CQE interface
>>       mmc: core: Turn off CQE before sending commands
>>       mmc: core: Add support for handling CQE requests
>>       mmc: mmc: Enable Command Queuing
>>       mmc: mmc: Enable CQE's
>>       mmc: block: Prepare CQE data
>>       mmc: block: Add CQE support
>>
>> Venkat Gopalakrishnan (1):
>>       mmc: cqhci: support for command queue enabled host
>>
>>  Documentation/mmc/mmc-dev-attrs.txt |    1 +
>>  drivers/mmc/core/block.c            |  527 ++++++++++++----
>>  drivers/mmc/core/block.h            |    7 +
>>  drivers/mmc/core/bus.c              |    7 +
>>  drivers/mmc/core/core.c             |  203 ++++++-
>>  drivers/mmc/core/host.c             |    6 +
>>  drivers/mmc/core/host.h             |    1 +
>>  drivers/mmc/core/mmc.c              |   39 +-
>>  drivers/mmc/core/mmc_ops.c          |   28 +
>>  drivers/mmc/core/mmc_ops.h          |    2 +
>>  drivers/mmc/core/mmc_test.c         |   14 +
>>  drivers/mmc/core/queue.c            |  607 ++++++++++++++----
>>  drivers/mmc/core/queue.h            |   55 +-
>>  drivers/mmc/host/Kconfig            |   13 +
>>  drivers/mmc/host/Makefile           |    1 +
>>  drivers/mmc/host/cqhci.c            | 1148 +++++++++++++++++++++++++++++++++++
>>  drivers/mmc/host/cqhci.h            |  240 ++++++++
>>  include/linux/mmc/card.h            |    8 +
>>  include/linux/mmc/core.h            |   19 +-
>>  include/linux/mmc/host.h            |   24 +
>>  include/trace/events/mmc.h          |   17 +-
>>  21 files changed, 2694 insertions(+), 273 deletions(-)
>>  create mode 100644 drivers/mmc/host/cqhci.c
>>  create mode 100644 drivers/mmc/host/cqhci.h
>>
>>
>> Regards
>> Adrian
> 
> Sorry for the delay!
> 
> I have looked through parts of the new version of the series and have
> just reached patch 12. Before I continue, and to keep things moving
> forward, I have picked up patches 3 -> 12 (including 12). Patches 1
> and 2 went in as fixes earlier.

Ulf and Linus have been doing a great job of keeping this moving, but it
would be nice to see some others taking more interest.  The first command
queue patches were posted in February 2014, over 3 years ago!


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-04-22  7:45   ` Adrian Hunter
@ 2017-04-24  8:12     ` Linus Walleij
  2017-04-24  9:14       ` Bough Chen
  2017-04-25 13:28       ` Paolo Valente
  0 siblings, 2 replies; 50+ messages in thread
From: Linus Walleij @ 2017-04-24  8:12 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Ulf and Linus have been doing a great job of keeping this moving, but it
> would be nice to see some others taking more interest.  The first command
> queue patches were posted in February 2014, over 3 years ago!

I agree.

I think both Ulf and I would also be doing more work if we had easily
accessible hardware with upstream host controller support for native
command queueing. (Hm, I was just reading in libata about NCQ ...
déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)

Do we have some hardware with host-backed command queueing
out there that is easily obtained and has upstream support for the
basic system?

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-04-24  8:12     ` Linus Walleij
@ 2017-04-24  9:14       ` Bough Chen
  2017-06-15 11:38         ` Adrian Hunter
  2017-04-25 13:28       ` Paolo Valente
  1 sibling, 1 reply; 50+ messages in thread
From: Bough Chen @ 2017-04-24  9:14 UTC (permalink / raw)
  To: Linus Walleij, Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

> -----Original Message-----
> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
> owner@vger.kernel.org] On Behalf Of Linus Walleij
> Sent: Monday, April 24, 2017 4:13 PM
> To: Adrian Hunter <adrian.hunter@intel.com>
> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> 
> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter <adrian.hunter@intel.com>
> wrote:
> 
> > Ulf and Linus have been doing a great job of keeping this moving, but
> > it would be nice to see some others taking more interest.  The first
> > command queue patches were posted in February 2014, over 3 years ago!
> 
> I agree.
> 
> I think both Ulf and I would also be doing more work if we had easily accessible
> hardware with upstream host controller support for native command queueing.
> (Hm, I was just reading in libata about NCQ ...
> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
> 
> Do we have some hardware with host-backed command queueing out there
> that is easily obtained and has upstream support for the basic system?
> 

The coming i.MX8 supports hardware CMDQ; I will give it a try when I get one in May or June.

BR,
Haibo

> Yours,
> Linus Walleij
> --
> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the
> body of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-04-24  8:12     ` Linus Walleij
  2017-04-24  9:14       ` Bough Chen
@ 2017-04-25 13:28       ` Paolo Valente
  1 sibling, 0 replies; 50+ messages in thread
From: Paolo Valente @ 2017-04-25 13:28 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Adrian Hunter, Ulf Hansson, linux-mmc, Alex Lemberg,
	Mateusz Nowak, Yuliy Izrailov, Jaehoon Chung, Dong Aisheng,
	Das Asutosh, Zhangfei Gao, Dorfman Konstantin, David Griego,
	Sahitya Tummala, Harjani Ritesh, Venu Byravarasu, Shawn Lin


> On 24 Apr 2017, at 10:12, Linus Walleij <linus.walleij@linaro.org> wrote:
> 
> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> 
>> Ulf and Linus have been doing a great job of keeping this moving, but it
>> would be nice to see some others taking more interest.  The first command
>> queue patches were posted in February 2014, over 3 years ago!
> 
> I agree.
> 
> I think both Ulf and I would also be doing more work if we had easily
> accessible hardware with upstream host controller support for native
> command queueing. (Hm, I was just reading in libata about NCQ ...
> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
> 

That would allow me, too, to run some tests with BFQ and MMC ...

Thanks,
Paolo

> Do we have some hardware with host-backed command queueing
> out there that is easily obtained and has upstream support for the
> basic system?
> 
> Yours,
> Linus Walleij
> --
> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-04-24  9:14       ` Bough Chen
@ 2017-06-15 11:38         ` Adrian Hunter
  2017-06-15 11:49           ` Bough Chen
  0 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-06-15 11:38 UTC (permalink / raw)
  To: Bough Chen, Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On 24/04/17 12:14, Bough Chen wrote:
>> -----Original Message-----
>> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
>> owner@vger.kernel.org] On Behalf Of Linus Walleij
>> Sent: Monday, April 24, 2017 4:13 PM
>> To: Adrian Hunter <adrian.hunter@intel.com>
>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
>> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
>> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
>> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
>> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
>> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
>> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
>>
>> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter <adrian.hunter@intel.com>
>> wrote:
>>
>>> Ulf and Linus have been doing a great job of keeping this moving, but
>>> it would be nice to see some others taking more interest.  The first
>>> command queue patches were posted in February 2014, over 3 years ago!
>>
>> I agree.
>>
>> I think both Ulf and I would also be doing more work if we had easily accessible
>> hardware with upstream host controller support for native command queueing.
>> (Hm, I was just reading in libata about NCQ ...
>> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
>>
>> Do we have some hardware with host-backed command queueing out there
>> that is easily obtained and has upstream support for the basic system?
>>
> 
> The coming i.MX8 supports hardware CMDQ; I will give it a try when I get one in May or June.

I have sent updated patches.  Will you have a chance to test hardware CMDQ?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-06-15 11:38         ` Adrian Hunter
@ 2017-06-15 11:49           ` Bough Chen
  2017-06-20  8:01             ` Bough Chen
  0 siblings, 1 reply; 50+ messages in thread
From: Bough Chen @ 2017-06-15 11:49 UTC (permalink / raw)
  To: Adrian Hunter, Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

> -----Original Message-----
> From: Adrian Hunter [mailto:adrian.hunter@intel.com]
> Sent: Thursday, June 15, 2017 7:38 PM
> To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
> <linus.walleij@linaro.org>
> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> 
> On 24/04/17 12:14, Bough Chen wrote:
> >> -----Original Message-----
> >> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
> >> owner@vger.kernel.org] On Behalf Of Linus Walleij
> >> Sent: Monday, April 24, 2017 4:13 PM
> >> To: Adrian Hunter <adrian.hunter@intel.com>
> >> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> >> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
> >> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> >> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> >> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> >> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> >> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> >> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> >> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> >> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> >> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> >>
> >> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter
> >> <adrian.hunter@intel.com>
> >> wrote:
> >>
> >>> Ulf and Linus have been doing a great job of keeping this moving,
> >>> but it would be nice to see some others taking more interest.  The
> >>> first command queue patches were posted in February 2014, over 3 years
> >>> ago!
> >>
> >> I agree.
> >>
> >> I think both Ulf and I would also be doing more work if we had easily
> >> accessible hardware with upstream host controller support for native
> >> command queueing.
> >> (Hm, I was just reading in libata about NCQ ...
> >> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
> >>
> >> Do we have some hardware with host-backed command queueing out
> >> there
> >> that is easily obtained and has upstream support for the basic system?
> >>
> >
> > The coming i.MX8 supports hardware CMDQ; I will give it a try when I get
> > one in May or June.
> 
> I have sent updated patches.  Will you have a chance to test hardware CMDQ?

Yes, I will try to apply these patches to our local 4.9 branch and will give you the test result.

Best Regards,
Haibo Chen

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-06-15 11:49           ` Bough Chen
@ 2017-06-20  8:01             ` Bough Chen
  2017-06-20  9:04               ` Adrian Hunter
  0 siblings, 1 reply; 50+ messages in thread
From: Bough Chen @ 2017-06-20  8:01 UTC (permalink / raw)
  To: Bough Chen, Adrian Hunter, Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

> -----Original Message-----
> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
> owner@vger.kernel.org] On Behalf Of Bough Chen
> Sent: Thursday, June 15, 2017 7:50 PM
> To: Adrian Hunter <adrian.hunter@intel.com>; Linus Walleij
> <linus.walleij@linaro.org>
> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> Subject: RE: [PATCH V2 00/22] mmc: Add Command Queue support
> 
> > -----Original Message-----
> > From: Adrian Hunter [mailto:adrian.hunter@intel.com]
> > Sent: Thursday, June 15, 2017 7:38 PM
> > To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
> > <linus.walleij@linaro.org>
> > Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> > mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
> Mateusz
> > Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> > <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> > Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> > <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> > Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> > <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> > Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> > <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> > Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> >
> > On 24/04/17 12:14, Bough Chen wrote:
> > >> -----Original Message-----
> > >> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
> > >> owner@vger.kernel.org] On Behalf Of Linus Walleij
> > >> Sent: Monday, April 24, 2017 4:13 PM
> > >> To: Adrian Hunter <adrian.hunter@intel.com>
> > >> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> > >> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
> > >> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> > >> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung
> > >> <jh80.chung@samsung.com>; Dong Aisheng <dongas86@gmail.com>; Das
> > >> Asutosh <asutoshd@codeaurora.org>; Zhangfei Gao
> > >> <zhangfei.gao@gmail.com>; Dorfman Konstantin
> > >> <kdorfman@codeaurora.org>; David Griego <david.griego@linaro.org>;
> > >> Sahitya Tummala <stummala@codeaurora.org>; Harjani Ritesh
> > >> <riteshh@codeaurora.org>; Venu Byravarasu <vbyravarasu@nvidia.com>;
> > >> Shawn Lin <shawn.lin@rock-chips.com>
> > >> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> > >>
> > >> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter
> > >> <adrian.hunter@intel.com>
> > >> wrote:
> > >>
> > >>> Ulf and Linus have been doing a great job of keeping this moving,
> > >>> but it would be nice to see some others taking more interest.  The
> > >>> first command queue patches were posted in February 2014, over 3
> > >>> years ago!
> > >>
> > >> I agree.
> > >>
> > >> I think both Ulf and I would also be doing more work if we had
> > >> easily accessible hardware with upstream host controller support
> > >> for native command queueing.
> > >> (Hm, I was just reading in libata about NCQ ...
> > >> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
> > >>
> > >> Do we have some hardware with host-backed command queueing out
> > >> there
> > >> that is easily obtained and has upstream support for the basic system?
> > >>
> > >
> > > The coming i.MX8 supports hardware CMDQ; I will give it a try when I
> > > get one in May or June.
> >
> > I have sent updated patches.  Will you have a chance to test hardware CMDQ?
> 
> Yes, I will try to apply these patches to our local 4.9 branch and will
> give you the test result.
> 

Hi Adrian,

i.MX8 is still not upstream and only works on our local 4.9 branch. To test
your branch, I need to cherry-pick some mmc patches and block layer patches;
I'm doing this now, but it will take some time.


> Best Regards,
> Haibo Chen

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-06-20  8:01             ` Bough Chen
@ 2017-06-20  9:04               ` Adrian Hunter
  2017-07-04 10:21                 ` Bough Chen
  0 siblings, 1 reply; 50+ messages in thread
From: Adrian Hunter @ 2017-06-20  9:04 UTC (permalink / raw)
  To: Bough Chen, Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On 20/06/17 11:01, Bough Chen wrote:
>> -----Original Message-----
>> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
>> owner@vger.kernel.org] On Behalf Of Bough Chen
>> Sent: Thursday, June 15, 2017 7:50 PM
>> To: Adrian Hunter <adrian.hunter@intel.com>; Linus Walleij
>> <linus.walleij@linaro.org>
>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
>> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
>> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
>> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
>> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
>> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
>> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
>> Subject: RE: [PATCH V2 00/22] mmc: Add Command Queue support
>>
>>> -----Original Message-----
>>> From: Adrian Hunter [mailto:adrian.hunter@intel.com]
>>> Sent: Thursday, June 15, 2017 7:38 PM
>>> To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
>>> <linus.walleij@linaro.org>
>>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
>> Mateusz
>>> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
>>> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
>>> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
>>> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
>>> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
>>> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
>>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
>>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
>>>
>>> On 24/04/17 12:14, Bough Chen wrote:
>>>>> -----Original Message-----
>>>>> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
>>>>> owner@vger.kernel.org] On Behalf Of Linus Walleij
>>>>> Sent: Monday, April 24, 2017 4:13 PM
>>>>> To: Adrian Hunter <adrian.hunter@intel.com>
>>>>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>>>>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
>>>>> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>>>>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung
>>>>> <jh80.chung@samsung.com>; Dong Aisheng <dongas86@gmail.com>; Das
>>>>> Asutosh <asutoshd@codeaurora.org>; Zhangfei Gao
>>>>> <zhangfei.gao@gmail.com>; Dorfman Konstantin
>>>>> <kdorfman@codeaurora.org>; David Griego <david.griego@linaro.org>;
>>>>> Sahitya Tummala <stummala@codeaurora.org>; Harjani Ritesh
>>>>> <riteshh@codeaurora.org>; Venu Byravarasu <vbyravarasu@nvidia.com>;
>>>>> Shawn Lin <shawn.lin@rock-chips.com>
>>>>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
>>>>>
>>>>> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter
>>>>> <adrian.hunter@intel.com>
>>>>> wrote:
>>>>>
>>>>>> Ulf and Linus have been doing a great job of keeping this moving,
>>>>>> but it would be nice to see some others taking more interest.  The
>>>>>> first command queue patches were posted in February 2014, over 3
>>>>>> years ago!
>>>>>
>>>>> I agree.
>>>>>
>>>>> I think both Ulf and I would also be doing more work if we had
>>>>> easily accessible hardware with upstream host controller support
>>>>> for native command queueing.
>>>>> (Hm, I was just reading in libata about NCQ ...
>>>>> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
>>>>>
>>>>> Do we have some hardware with host-backed command queueing out
>>>>> there
>>>>> that is easily obtained and has upstream support for the basic system?
>>>>>
>>>>
>>>> The coming i.MX8 supports hardware CMDQ; I will give it a try when I
>>>> get one in May or June.
>>>
>>> I have sent updated patches.  Will you have a chance to test hardware CMDQ?
>>
>> Yes, I will try to apply these patches to our local 4.9 branch and will
>> give you the test result.
>>
> 
> Hi Adrian,
> 
> i.MX8 is still not upstream and only works on our local 4.9 branch. To test
> your branch, I need to cherry-pick some mmc patches and block layer patches;
> I'm doing this now, but it will take some time.

Thanks for the update.  I realize backporting anything related to mmc block
is now a big task.

> 
> 
>> Best Regards,
>> Haibo Chen


^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-06-20  9:04               ` Adrian Hunter
@ 2017-07-04 10:21                 ` Bough Chen
  2017-07-06 10:48                   ` Adrian Hunter
  0 siblings, 1 reply; 50+ messages in thread
From: Bough Chen @ 2017-07-04 10:21 UTC (permalink / raw)
  To: Adrian Hunter, Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

> -----Original Message-----
> From: Adrian Hunter [mailto:adrian.hunter@intel.com]
> Sent: Tuesday, June 20, 2017 5:05 PM
> To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
> <linus.walleij@linaro.org>
> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> 
> On 20/06/17 11:01, Bough Chen wrote:
> >> -----Original Message-----
> >> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
> >> owner@vger.kernel.org] On Behalf Of Bough Chen
> >> Sent: Thursday, June 15, 2017 7:50 PM
> >> To: Adrian Hunter <adrian.hunter@intel.com>; Linus Walleij
> >> <linus.walleij@linaro.org>
> >> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> >> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
> >> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> >> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
> >> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
> >> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
> >> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
> >> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
> >> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
> >> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> >> Subject: RE: [PATCH V2 00/22] mmc: Add Command Queue support
> >>
> >>> -----Original Message-----
> >>> From: Adrian Hunter [mailto:adrian.hunter@intel.com]
> >>> Sent: Thursday, June 15, 2017 7:38 PM
> >>> To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
> >>> <linus.walleij@linaro.org>
> >>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> >>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
> >> Mateusz
> >>> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> >>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung
> >>> <jh80.chung@samsung.com>; Dong Aisheng <dongas86@gmail.com>; Das
> >>> Asutosh <asutoshd@codeaurora.org>; Zhangfei Gao
> >>> <zhangfei.gao@gmail.com>; Dorfman Konstantin
> >>> <kdorfman@codeaurora.org>; David Griego <david.griego@linaro.org>;
> >>> Sahitya Tummala <stummala@codeaurora.org>; Harjani Ritesh
> >>> <riteshh@codeaurora.org>; Venu Byravarasu <vbyravarasu@nvidia.com>;
> >>> Shawn Lin <shawn.lin@rock-chips.com>
> >>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> >>>
> >>> On 24/04/17 12:14, Bough Chen wrote:
> >>>>> -----Original Message-----
> >>>>> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
> >>>>> owner@vger.kernel.org] On Behalf Of Linus Walleij
> >>>>> Sent: Monday, April 24, 2017 4:13 PM
> >>>>> To: Adrian Hunter <adrian.hunter@intel.com>
> >>>>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
> >>>>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
> >>>>> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
> >>>>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung
> >>>>> <jh80.chung@samsung.com>; Dong Aisheng <dongas86@gmail.com>;
> Das
> >>>>> Asutosh <asutoshd@codeaurora.org>; Zhangfei Gao
> >>>>> <zhangfei.gao@gmail.com>; Dorfman Konstantin
> >>>>> <kdorfman@codeaurora.org>; David Griego <david.griego@linaro.org>;
> >>>>> Sahitya Tummala <stummala@codeaurora.org>; Harjani Ritesh
> >>>>> <riteshh@codeaurora.org>; Venu Byravarasu
> >>>>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
> >>>>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
> >>>>>
> >>>>> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter
> >>>>> <adrian.hunter@intel.com>
> >>>>> wrote:
> >>>>>
> >>>>>> Ulf and Linus have been doing a great job of keeping this moving,
> >>>>>> but it would be nice to see some others taking more interest.
> >>>>>> The first command queue patches were posted in February 2014,
> >>>>>> over 3 years ago!
> >>>>>
> >>>>> I agree.
> >>>>>
> >>>>> I think both Ulf and I would also be doing more work if we had
> >>>>> easily accessible hardware with upstream host controller support
> >>>>> for native command queueing.
> >>>>> (Hm, I was just reading in libata about NCQ ...
> >>>>> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
> >>>>>
> >>>>> Do we have some hardware with host-backed command queueing out
> >>>>> there
> >>>>> that is easily obtained and has upstream support for the basic system?
> >>>>>
> >>>>
> >>>> The coming i.MX8 supports hardware CMDQ; I will give it a try when I
> >>>> get one in May or June.
> >>>
> >>> I have sent updated patches.  Will you have a chance to test hardware
> >>> CMDQ?
> >>
> >> Yes, I will try to apply these patches to our local 4.9 branch and
> >> will give you the test result.
> >>
> >
> > Hi Adrian,
> >
> > i.MX8 is still not upstream and only works on our local 4.9 branch. To
> > test your branch, I need to cherry-pick some mmc patches and block layer
> > patches; I'm doing this now, but it will take some time.
> 
> Thanks for the update.  I realize backporting anything related to mmc block is
> now a big task.
> 

Hi Adrian,

I finished backporting and gave it a try on our i.MX8 platform; it seems easy to hit the timeout error. I will try to dig into it.
Here I attach the detailed log:

root@imx8qxplpddr4arm2:~# dd if=/dev/mmcblk0 of=/dev/null bs=1M count=200
[   41.846693] mmc0: starting CQE transfer for tag 1 blkaddr 0
[   41.852310] mmc0:     blksz 512 blocks 512 flags 00000200 tsac 150 ms nsac 0
[   41.859396] mmc0: cqhci: tag 1 task descriptor 0x016200102f
[   68.665171] mmc0: cqhci: recovery needed
[   68.669108] mmc0: cqhci: timeout for tag 0
[   68.673205] mmc0: cqhci: ============ CQHCI REGISTER DUMP ===========
[   68.679644] mmc0: cqhci: Caps:      0x0000310a | Version:  0x00000510
[   68.686089] mmc0: cqhci: Config:    0x00000001 | Control:  0x00000000
[   68.692527] mmc0: cqhci: Int stat:  0x00000000 | Int enab: 0x00000006
[   68.698964] mmc0: cqhci: Int sig:   0x00000006 | Int Coal: 0x00000000
[   68.705402] mmc0: cqhci: TDL base:  0xd807a000 | TDL up32: 0x00000000
[   68.711839] mmc0: cqhci: Doorbell:  0x00000003 | TCN:      0x00000000
[   68.718277] mmc0: cqhci: Dev queue: 0x00000001 | Dev Pend: 0x00000001
[   68.724714] mmc0: cqhci: Task clr:  0x00000000 | SSC1:     0x00011000
[   68.731152] mmc0: cqhci: SSC2:      0x00000001 | DCMD rsp: 0x00000000
[   68.737589] mmc0: cqhci: RED mask:  0xfdf9a080 | TERRI:    0x00000000
[   68.744026] mmc0: cqhci: Resp idx:  0x0000002e | Resp arg: 0x00000900
[   68.750463] mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
[   68.756903] mmc0: sdhci: Sys addr:  0xd5ccc000 | Version:  0x00000002
[   68.763348] mmc0: sdhci: Blk size:  0x00000200 | Blk cnt:  0x00000001
[   68.769786] mmc0: sdhci: Argument:  0x40010200 | Trn mode: 0x00000030
[   68.776231] mmc0: sdhci: Present:   0x010d8a8f | Host ctl: 0x00000030
[   68.782668] mmc0: sdhci: Power:     0x00000002 | Blk gap:  0x00000080
[   68.789106] mmc0: sdhci: Wake-up:   0x00000008 | Clock:    0x0000000f
[   68.795543] mmc0: sdhci: Timeout:   0x0000000e | Int stat: 0x00000000
[   68.801981] mmc0: sdhci: Int enab:  0x107f4000 | Sig enab: 0x107f4000
[   68.808418] mmc0: sdhci: AC12 err:  0x00000000 | Slot int: 0x00000502
[   68.814856] mmc0: sdhci: Caps:      0x07eb0000 | Caps_1:   0x8000b407
[   68.821293] mmc0: sdhci: Cmd:       0x00002c1a | Max curr: 0x00ffffff
[   68.827730] mmc0: sdhci: Resp[0]:   0x00000900 | Resp[1]:  0xffffffff
[   68.834168] mmc0: sdhci: Resp[2]:   0x328f5903 | Resp[3]:  0x00d02700
[   68.840605] mmc0: sdhci: Host ctl2: 0x00000000
[   68.845046] mmc0: sdhci: ADMA Err:  0x00000003 | ADMA Ptr: 0xd8098004
[   68.851481] mmc0: sdhci: ============================================
[   68.857936] mmc0: CQE recovery start
[   68.861554] mmc0: running CQE recovery
[   68.865326] mmc0: cqhci: cqhci_recovery_start
[   68.885159] mmc0: cqhci: Failed to halt
[   68.889004] mmc0: sdhci: CQE off, IRQ mask 0xff1003, IRQ status 0x4000
[   68.895574] mmc0: starting CMD12 arg 00000000 flags 00000019
[   78.905144] mmc0: Timeout waiting for hardware interrupt.
[   78.910558] mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
[   78.917003] mmc0: sdhci: Sys addr:  0x00000000 | Version:  0x00000002
[   78.923440] mmc0: sdhci: Blk size:  0x00000200 | Blk cnt:  0x00000001
[   78.929877] mmc0: sdhci: Argument:  0x00000000 | Trn mode: 0x00000030
[   78.936315] mmc0: sdhci: Present:   0x01fd8009 | Host ctl: 0x00000031
[   78.942752] mmc0: sdhci: Power:     0x00000002 | Blk gap:  0x00000080
[   78.949189] mmc0: sdhci: Wake-up:   0x00000008 | Clock:    0x0000000f
[   78.955627] mmc0: sdhci: Timeout:   0x0000000f | Int stat: 0x00004000
[   78.962064] mmc0: sdhci: Int enab:  0x007f1003 | Sig enab: 0x007f1003
[   78.968501] mmc0: sdhci: AC12 err:  0x00000000 | Slot int: 0x00000502
[   78.974939] mmc0: sdhci: Caps:      0x07eb0000 | Caps_1:   0x8000b407
[   78.981376] mmc0: sdhci: Cmd:       0x00000cd3 | Max curr: 0x00ffffff
[   78.987814] mmc0: sdhci: Resp[0]:   0x00000900 | Resp[1]:  0xffffffff
[   78.994251] mmc0: sdhci: Resp[2]:   0x328f5903 | Resp[3]:  0x00d02700
[   79.000688] mmc0: sdhci: Host ctl2: 0x00000000
[   79.005129] mmc0: sdhci: ADMA Err:  0x00000000 | ADMA Ptr: 0x00000000
[   79.011564] mmc0: sdhci: ============================================
[   79.018522] mmc0: req done (CMD12): -110: 00000000 00000000 00000000 00000000
[   79.025713] mmc0: starting CMD48 arg 00000001 flags 00000019
[   79.031424] mmc0: sdhci: IRQ status 0x00004001
[   79.035868] mmc0: sdhci: IRQ status 0x00004000
[   79.040312] mmc0: sdhci: IRQ status 0x00004000
[   79.044751] mmc0: sdhci: IRQ status 0x00004000
[   79.049190] mmc0: sdhci: IRQ status 0x00004000
[   79.053630] mmc0: sdhci: IRQ status 0x00004000
[   79.058069] mmc0: sdhci: IRQ status 0x00004000
[   79.062508] mmc0: sdhci: IRQ status 0x00004000
[   79.066948] mmc0: sdhci: IRQ status 0x00004000
[   79.071387] mmc0: sdhci: IRQ status 0x00004000
[   79.075827] mmc0: sdhci: IRQ status 0x00004000
[   79.080266] mmc0: sdhci: IRQ status 0x00004000
[   79.084705] mmc0: sdhci: IRQ status 0x00004000
[   79.089144] mmc0: sdhci: IRQ status 0x00004000
[   79.093583] mmc0: sdhci: IRQ status 0x00004000
[   79.098023] mmc0: sdhci: IRQ status 0x00004000
[   79.102463] mmc0: Unexpected interrupt 0x00004000.
[   79.107248] mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
[   79.113687] mmc0: sdhci: Sys addr:  0x00000000 | Version:  0x00000002
[   79.120125] mmc0: sdhci: Blk size:  0x00000200 | Blk cnt:  0x00000001
[   79.126562] mmc0: sdhci: Argument:  0x00000000 | Trn mode: 0x00000030
[   79.133000] mmc0: sdhci: Present:   0x01fd8009 | Host ctl: 0x00000031
[   79.139436] mmc0: sdhci: Power:     0x00000002 | Blk gap:  0x00000080
[   79.145874] mmc0: sdhci: Wake-up:   0x00000008 | Clock:    0x0000000f
[   79.152312] mmc0: sdhci: Timeout:   0x0000000f | Int stat: 0x00004000
[   79.158749] mmc0: sdhci: Int enab:  0x007f1003 | Sig enab: 0x007f1003
[   79.165186] mmc0: sdhci: AC12 err:  0x00000000 | Slot int: 0x00000502
[   79.171624] mmc0: sdhci: Caps:      0x07eb0000 | Caps_1:   0x8000b407
[   79.178061] mmc0: sdhci: Cmd:       0x00002d12 | Max curr: 0x00ffffff
[   79.184499] mmc0: sdhci: Resp[0]:   0x00400800 | Resp[1]:  0xffffffff
[   79.190936] mmc0: sdhci: Resp[2]:   0x328f5903 | Resp[3]:  0x00d02700
[   79.197373] mmc0: sdhci: Host ctl2: 0x00000000
[   79.201814] mmc0: sdhci: ADMA Err:  0x00000000 | ADMA Ptr: 0x00000000
[   79.208249] mmc0: sdhci: ============================================
[   79.214737] mmc0: req done (CMD48): 0: 00400800 00000000 00000000 00000000
[   79.221722] mmc0: cqhci: cqhci_recovery_finish
[   79.226198] mmc0: CQE transfer done tag 0
[   79.230223] mmc0:     0 bytes transferred: -110
[   79.234796] mmc0: CQE transfer done tag 1
[   79.238828] mmc0:     0 bytes transferred: 0
[   79.243364] mmc0: cqhci: recovery done
[   79.247146] mmc0: CQE recovery done
[   79.250699] mmc0: starting CQE transfer for tag 0 blkaddr 0
[   79.256299] mmc0:     blksz 512 blocks 512 flags 00000200 tsac 150 ms nsac 0
[   79.263368] mmc0: cqhci: CQE on
[   79.266540] mmc0: sdhci: CQE on, IRQ mask 0x2ff4000, IRQ status 0x4000
[   79.273074] mmc0: sdhci: IRQ status 0x00004000
[   79.277517] mmc0: cqhci: IRQ status: 0x00000006
[   79.282043] mmc0: cqhci: error IRQ status: 0x00000006 cmd error 0 data error 0 TERRI: 0x00000000
[   79.290835] mmc0: cqhci: error when idle. IRQ status: 0x00000006 cmd error 0 data error 0 TERRI: 0x00000000
[   79.300621] ------------[ cut here ]------------
[   79.305251] WARNING: CPU: 0 PID: 1273 at drivers/mmc/host/cqhci.c:671 cqhci_irq+0x1d0/0x4d0
[   79.313603] Modules linked in:
[   79.316660]
[   79.318151] CPU: 0 PID: 1273 Comm: mmcqd/0 Not tainted 4.9.11-02713-g0aa36a6-dirty #659
[   79.326156] Hardware name: Freescale i.MX8QXP LPDDR4 ARM2 (DT)
[   79.329154] fec 5b040000.ethernet eth0: MDIO read timeout
[   79.337381] task: ffff80083a1a6400 task.stack: ffff80083ae18000
[   79.343299] PC is at cqhci_irq+0x1d0/0x4d0
[   79.347399] LR is at cqhci_irq+0x1d0/0x4d0
[   79.351490] pc : [<ffff0000087f6070>] lr : [<ffff0000087f6070>] pstate: 800001c5
[   79.358882] sp : ffff80083ff6edc0
[   79.362192] x29: ffff80083ff6edc0 x28: 0000000000004000
[   79.367517] x27: ffff80083a807318 x26: ffff80083a807330
[   79.372842] x25: ffff80083a36a008 x24: 0000000000000000
[   79.378167] x23: 0000000000000000 x22: ffff80083a36a000
[   79.383493] x21: ffff80083a807318 x20: 0000000000000000
[   79.388818] x19: 0000000000000006 x18: 0000000000000006
[   79.394144] x17: 0000ffff9792db60 x16: ffff0000081dcef8
[   79.399469] x15: ffff000009083bb5 x14: 20726f7272652061
[   79.404795] x13: 746164203020726f x12: 72726520646d6320
[   79.410120] x11: 3630303030303030 x10: 0000000000000294
[   79.415446] x9 : 747320515249202e x8 : 3030303030783020
[   79.420771] x7 : 3a49525245542030 x6 : ffff000009083c16
[   79.426097] x5 : ffff80083ff6fbb8 x4 : 0000000000000001
[   79.431422] x3 : 0000000000000007 x2 : 0000000000000006
[   79.436748] x1 : ffff80083ae18000 x0 : 000000000000005f
[   79.442072]
[   79.443559] ---[ end trace b7a758ac7d743c5f ]---
[   79.448172] Call trace:
[   79.450616] Exception stack(0xffff80083ff6ebf0 to 0xffff80083ff6ed20)
[   79.457062] ebe0:                                   0000000000000006 0001000000000000
[   79.472735] ec20: ffff80083ff6ec40 ffff000008d20fa8 ffff000009083000 0000000108f9e118
[   79.480571] ec40: ffff80083ff6ece0 ffff000008100038 0000000000000006 0000000000000000
[   79.488407] ec60: ffff80083a807318 ffff80083a36a000 0000000000000000 0000000000000000
[   79.496243] ec80: ffff80083a36a008 ffff80083a807330 000000000000005f ffff80083ae18000
[   79.504079] eca0: 0000000000000006 0000000000000007 0000000000000001 ffff80083ff6fbb8
[   79.511916] ecc0: ffff000009083c16 3a49525245542030 3030303030783020 747320515249202e
[   79.519752] ece0: 0000000000000294 3630303030303030 72726520646d6320 746164203020726f
[   79.527588] ed00: 20726f7272652061 ffff000009083bb5 ffff0000081dcef8 0000ffff9792db60
[   79.535424] [<ffff0000087f6070>] cqhci_irq+0x1d0/0x4d0
[   79.540567] [<ffff0000087f3a30>] esdhc_cqhci_irq+0x50/0x60
[   79.546057] [<ffff0000087e91a8>] sdhci_irq+0xe8/0xbe0
[   79.551116] [<ffff0000081022ec>] __handle_irq_event_percpu+0x9c/0x128
[   79.557559] [<ffff000008102394>] handle_irq_event_percpu+0x1c/0x58
[   79.563744] [<ffff000008102418>] handle_irq_event+0x48/0x78
[   79.569314] [<ffff000008105d28>] handle_fasteoi_irq+0xb8/0x1b0
[   79.575151] [<ffff0000081013e4>] generic_handle_irq+0x24/0x38
[   79.580902] [<ffff000008101a5c>] __handle_domain_irq+0x5c/0xb8
[   79.586742] [<ffff000008081650>] gic_handle_irq+0xc0/0x160
[   79.592229] Exception stack(0xffff80083ae1bb20 to 0xffff80083ae1bc50)
[   79.598669] bb20: ffff80083a36a7b8 0000000000000140 000000000000010c 0000000000000007
[   79.606505] bb40: 0000000000000001 ffff80083ff6fbb8 ffff000009083bf1 515249202c303030
[   79.614341] bb60: 2073757461747320 ffff80083ae1b9b0 0000000000000290 0000000000000006
[   79.622177] bb80: 0000000005f5e0ff 0000000000000000 000000000000028f ffff000009083bb5
[   79.630013] bba0: ffff0000081dcef8 0000ffff9792db60 0000000000000006 ffff80083a36a580
[   79.637850] bbc0: ffff80083a36a7b8 0000000000000140 ffff80083a36a000 ffff80083a5df880
[   79.645686] bbe0: ffff80083b00bd78 0000000000000000 ffff80083ab05720 ffff80083ae18000
[   79.653522] bc00: ffff80083ab05718 ffff80083ae1bc50 ffff0000087e5778 ffff80083ae1bc50
[   79.661358] bc20: ffff0000089f44a8 0000000080000145 ffff80083a36aa00 0000000000000000
[   79.669192] bc40: ffffffffffffffff 00000000000f000f
[   79.674067] [<ffff0000080827b0>] el1_irq+0xb0/0x124
[   79.678952] [<ffff0000089f44a8>] _raw_spin_unlock_irqrestore+0x10/0x48
[   79.685483] [<ffff0000087f3914>] esdhc_cqe_enable+0x74/0x90
[   79.691060] [<ffff0000087f6c58>] cqhci_request+0x588/0x5d0
[   79.696552] [<ffff0000087cccdc>] mmc_cqe_start_req+0x9c/0xf0
[   79.702217] [<ffff0000087e0730>] mmc_blk_cqe_issue_rq+0x1a8/0x290
[   79.708314] [<ffff0000087e10e8>] mmc_cqe_thread+0x1f0/0x330
[   79.713893] [<ffff0000080d9cf8>] kthread+0xd0/0xe8
[   79.718686] [<ffff000008082e80>] ret_from_fork+0x10/0x50
[   79.724002] mmc0: cqhci: TCN: 0x00000000
[   79.727954] random: crng init done
[   79.731484] mmc0: cqhci: tag 0 task descriptor 0x016200102f
[   79.737157] mmc0: starting CQE transfer for tag 1 blkaddr 61071232
[   79.743356] mmc0:     blksz 512 blocks 8 flags 00000200 tsac 150 ms nsac 0
[   79.750260] mmc0: cqhci: tag 1 task descriptor 0x0163a3df800008102f
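
For context on the "Failed to halt" line above: CQE recovery first tries to
halt the engine by setting the HALT bit in CQHCI_CTL and waiting for the
controller to acknowledge it. Ignoring the interrupt-driven wait and locking
the real driver uses, the logic is roughly this (a simplified sketch, not
the cqhci.c source):

static bool cqhci_halt_sketch(struct cqhci_host *cq_host,
			      unsigned int timeout_ms)
{
	u32 ctl = cqhci_readl(cq_host, CQHCI_CTL);

	ctl |= CQHCI_HALT;			/* request a halt */
	cqhci_writel(cq_host, ctl, CQHCI_CTL);

	while (timeout_ms--) {
		/* the HALT bit reads back as set once the CQE has halted */
		if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
			return true;
		mdelay(1);
	}
	return false;				/* "Failed to halt" path */
}

In the dump the Control register reads 0x00000000, i.e. the HALT bit never
sticks, so recovery falls through to CMD12, which then also times out.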

> >
> >
> >> Best Regards,
> >> Haibo Chen


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH V2 00/22] mmc: Add Command Queue support
  2017-07-04 10:21                 ` Bough Chen
@ 2017-07-06 10:48                   ` Adrian Hunter
  0 siblings, 0 replies; 50+ messages in thread
From: Adrian Hunter @ 2017-07-06 10:48 UTC (permalink / raw)
  To: Bough Chen, Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Shawn Lin

On 07/04/2017 01:21 PM, Bough Chen wrote:
>> -----Original Message-----
>> From: Adrian Hunter [mailto:adrian.hunter@intel.com]
>> Sent: Tuesday, June 20, 2017 5:05 PM
>> To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
>> <linus.walleij@linaro.org>
>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>; Mateusz
>> Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
>> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
>> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
>> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
>> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
>> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
>>
>> On 20/06/17 11:01, Bough Chen wrote:
>>>> -----Original Message-----
>>>> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
>>>> owner@vger.kernel.org] On Behalf Of Bough Chen
>>>> Sent: Thursday, June 15, 2017 7:50 PM
>>>> To: Adrian Hunter <adrian.hunter@intel.com>; Linus Walleij
>>>> <linus.walleij@linaro.org>
>>>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>>>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
>>>> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>>>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung <jh80.chung@samsung.com>;
>>>> Dong Aisheng <dongas86@gmail.com>; Das Asutosh
>>>> <asutoshd@codeaurora.org>; Zhangfei Gao <zhangfei.gao@gmail.com>;
>>>> Dorfman Konstantin <kdorfman@codeaurora.org>; David Griego
>>>> <david.griego@linaro.org>; Sahitya Tummala <stummala@codeaurora.org>;
>>>> Harjani Ritesh <riteshh@codeaurora.org>; Venu Byravarasu
>>>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
>>>> Subject: RE: [PATCH V2 00/22] mmc: Add Command Queue support
>>>>
>>>>> -----Original Message-----
>>>>> From: Adrian Hunter [mailto:adrian.hunter@intel.com]
>>>>> Sent: Thursday, June 15, 2017 7:38 PM
>>>>> To: Bough Chen <haibo.chen@nxp.com>; Linus Walleij
>>>>> <linus.walleij@linaro.org>
>>>>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>>>>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
>>>>> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>>>>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung
>>>>> <jh80.chung@samsung.com>; Dong Aisheng <dongas86@gmail.com>; Das
>>>>> Asutosh <asutoshd@codeaurora.org>; Zhangfei Gao
>>>>> <zhangfei.gao@gmail.com>; Dorfman Konstantin
>>>>> <kdorfman@codeaurora.org>; David Griego <david.griego@linaro.org>;
>>>>> Sahitya Tummala <stummala@codeaurora.org>; Harjani Ritesh
>>>>> <riteshh@codeaurora.org>; Venu Byravarasu <vbyravarasu@nvidia.com>;
>>>>> Shawn Lin <shawn.lin@rock-chips.com>
>>>>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
>>>>>
>>>>> On 24/04/17 12:14, Bough Chen wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: linux-mmc-owner@vger.kernel.org [mailto:linux-mmc-
>>>>>>> owner@vger.kernel.org] On Behalf Of Linus Walleij
>>>>>>> Sent: Monday, April 24, 2017 4:13 PM
>>>>>>> To: Adrian Hunter <adrian.hunter@intel.com>
>>>>>>> Cc: Ulf Hansson <ulf.hansson@linaro.org>; linux-mmc <linux-
>>>>>>> mmc@vger.kernel.org>; Alex Lemberg <alex.lemberg@sandisk.com>;
>>>>>>> Mateusz Nowak <mateusz.nowak@intel.com>; Yuliy Izrailov
>>>>>>> <Yuliy.Izrailov@sandisk.com>; Jaehoon Chung
>>>>>>> <jh80.chung@samsung.com>; Dong Aisheng <dongas86@gmail.com>; Das
>>>>>>> Asutosh <asutoshd@codeaurora.org>; Zhangfei Gao
>>>>>>> <zhangfei.gao@gmail.com>; Dorfman Konstantin
>>>>>>> <kdorfman@codeaurora.org>; David Griego <david.griego@linaro.org>;
>>>>>>> Sahitya Tummala <stummala@codeaurora.org>; Harjani Ritesh
>>>>>>> <riteshh@codeaurora.org>; Venu Byravarasu
>>>>>>> <vbyravarasu@nvidia.com>; Shawn Lin <shawn.lin@rock-chips.com>
>>>>>>> Subject: Re: [PATCH V2 00/22] mmc: Add Command Queue support
>>>>>>>
>>>>>>> On Sat, Apr 22, 2017 at 9:45 AM, Adrian Hunter
>>>>>>> <adrian.hunter@intel.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Ulf and Linus have been doing a great job of keeping this moving,
>>>>>>>> but it would be nice to see some others taking more interest.
>>>>>>>> The first command queue patches were posted in February 2014,
>>>>>>>> over 3 years ago!
>>>>>>>
>>>>>>> I agree.
>>>>>>>
>>>>>>> I think both me & Ulf would also be doing more work if we had
>>>>>>> easily accessible hardware with upstream host controller support
>>>>>>> for native command queueing.
>>>>>>> (Hm, was just reading in libata about NCQ ...
>>>>>>> déjà vu ... https://en.wikipedia.org/wiki/Native_Command_Queuing)
>>>>>>>
>>>>>>> Do we have some hardware with host-backed command queueing out
>>>>>>> there that is easily obtained and has upstream support for the
>>>>>>> basic system?
>>>>>>>
>>>>>>
>>>>>> The coming i.MX8 supports hardware CMDQ. I will give it a try when
>>>>>> I get one in May or June.
>>>>>
>>>>> I have sent updated patches.  Will you have a chance to test
>>>>> hardware CMDQ?
>>>>
>>>> Yes, I will try to apply these patches on our local 4.9 branch.  Will
>>>> give you the test result.
>>>>
>>>
>>> Hi Adrian,
>>>
>>> i.MX8 is still not upstream and only works on our local 4.9 branch. To
>>> test your branch, I need to cherry-pick some mmc patches and block
>>> layer patches. I'm doing this now, but it will take some time.
>>
>> Thanks for the update.  I realize backporting anything related to mmc block is
>> now a big task.
>>
> 
> Hi Adrian,
> 
> I finished backporting and gave it a try on our i.MX8 platform; it seems easy to hit the timeout error. I will try to dig into it.
> Here is the detailed log:
> 
> root@imx8qxplpddr4arm2:~# dd if=/dev/mmcblk0 of=/dev/null bs=1M count=200
> [   41.846693] mmc0: starting CQE transfer for tag 1 blkaddr 0
> [   41.852310] mmc0:     blksz 512 blocks 512 flags 00000200 tsac 150 ms nsac 0
> [   41.859396] mmc0: cqhci: tag 1 task descriptor 0x016200102f
> [   68.665171] mmc0: cqhci: recovery needed
> [   68.669108] mmc0: cqhci: timeout for tag 0
> [   68.673205] mmc0: cqhci: ============ CQHCI REGISTER DUMP ===========
> [   68.679644] mmc0: cqhci: Caps:      0x0000310a | Version:  0x00000510
> [   68.686089] mmc0: cqhci: Config:    0x00000001 | Control:  0x00000000
> [   68.692527] mmc0: cqhci: Int stat:  0x00000000 | Int enab: 0x00000006
> [   68.698964] mmc0: cqhci: Int sig:   0x00000006 | Int Coal: 0x00000000
> [   68.705402] mmc0: cqhci: TDL base:  0xd807a000 | TDL up32: 0x00000000
> [   68.711839] mmc0: cqhci: Doorbell:  0x00000003 | TCN:      0x00000000
> [   68.718277] mmc0: cqhci: Dev queue: 0x00000001 | Dev Pend: 0x00000001
> [   68.724714] mmc0: cqhci: Task clr:  0x00000000 | SSC1:     0x00011000
> [   68.731152] mmc0: cqhci: SSC2:      0x00000001 | DCMD rsp: 0x00000000
> [   68.737589] mmc0: cqhci: RED mask:  0xfdf9a080 | TERRI:    0x00000000
> [   68.744026] mmc0: cqhci: Resp idx:  0x0000002e | Resp arg: 0x00000900
> [   68.750463] mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
> [   68.756903] mmc0: sdhci: Sys addr:  0xd5ccc000 | Version:  0x00000002
> [   68.763348] mmc0: sdhci: Blk size:  0x00000200 | Blk cnt:  0x00000001
> [   68.769786] mmc0: sdhci: Argument:  0x40010200 | Trn mode: 0x00000030
> [   68.776231] mmc0: sdhci: Present:   0x010d8a8f | Host ctl: 0x00000030

We had a problem with CQE getting stuck if bit 11 (Buffer Read Enable,
SDHCI_DATA_AVAILABLE) was set in the present state register.  Refer to
glk_cqe_enable() in sdhci-pci-core.c.
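
For reference, the workaround is roughly as follows. This is a minimal
sketch from memory rather than the exact upstream code (the retry count
and delay are illustrative), assuming the usual sdhci.h accessors and
the SDHCI_DATA_AVAILABLE (bit 11) definition:

static void glk_cqe_enable(struct mmc_host *mmc)
{
	struct sdhci_host *host = mmc_priv(mmc);
	u32 present;
	int retries = 10;

	/*
	 * CQE can get stuck if it sees Buffer Read Enable
	 * (SDHCI_DATA_AVAILABLE, bit 11 of the present state
	 * register) still set, e.g. after tuning, so wait for
	 * the buffer to drain before enabling CQE.
	 */
	do {
		present = sdhci_readl(host, SDHCI_PRESENT_STATE);
		if (!(present & SDHCI_DATA_AVAILABLE))
			break;
		udelay(1);
	} while (--retries);

	if (present & SDHCI_DATA_AVAILABLE)
		pr_warn("%s: buffer not drained, CQE may get stuck\n",
			mmc_hostname(mmc));

	sdhci_cqe_enable(mmc);
}

Note that your first register dump above shows Present: 0x010d8a8f,
which does have bit 11 (0x800) set, so a similar drain in
esdhc_cqe_enable() might be worth trying.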

> [   68.782668] mmc0: sdhci: Power:     0x00000002 | Blk gap:  0x00000080
> [   68.789106] mmc0: sdhci: Wake-up:   0x00000008 | Clock:    0x0000000f
> [   68.795543] mmc0: sdhci: Timeout:   0x0000000e | Int stat: 0x00000000
> [   68.801981] mmc0: sdhci: Int enab:  0x107f4000 | Sig enab: 0x107f4000
> [   68.808418] mmc0: sdhci: AC12 err:  0x00000000 | Slot int: 0x00000502
> [   68.814856] mmc0: sdhci: Caps:      0x07eb0000 | Caps_1:   0x8000b407
> [   68.821293] mmc0: sdhci: Cmd:       0x00002c1a | Max curr: 0x00ffffff
> [   68.827730] mmc0: sdhci: Resp[0]:   0x00000900 | Resp[1]:  0xffffffff
> [   68.834168] mmc0: sdhci: Resp[2]:   0x328f5903 | Resp[3]:  0x00d02700
> [   68.840605] mmc0: sdhci: Host ctl2: 0x00000000
> [   68.845046] mmc0: sdhci: ADMA Err:  0x00000003 | ADMA Ptr: 0xd8098004
> [   68.851481] mmc0: sdhci: ============================================
> [   68.857936] mmc0: CQE recovery start
> [   68.861554] mmc0: running CQE recovery
> [   68.865326] mmc0: cqhci: cqhci_recovery_start
> [   68.885159] mmc0: cqhci: Failed to halt
> [   68.889004] mmc0: sdhci: CQE off, IRQ mask 0xff1003, IRQ status 0x4000
> [   68.895574] mmc0: starting CMD12 arg 00000000 flags 00000019
> [   78.905144] mmc0: Timeout waiting for hardware interrupt.
> [   78.910558] mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
> [   78.917003] mmc0: sdhci: Sys addr:  0x00000000 | Version:  0x00000002
> [   78.923440] mmc0: sdhci: Blk size:  0x00000200 | Blk cnt:  0x00000001
> [   78.929877] mmc0: sdhci: Argument:  0x00000000 | Trn mode: 0x00000030
> [   78.936315] mmc0: sdhci: Present:   0x01fd8009 | Host ctl: 0x00000031
> [   78.942752] mmc0: sdhci: Power:     0x00000002 | Blk gap:  0x00000080
> [   78.949189] mmc0: sdhci: Wake-up:   0x00000008 | Clock:    0x0000000f
> [   78.955627] mmc0: sdhci: Timeout:   0x0000000f | Int stat: 0x00004000
> [   78.962064] mmc0: sdhci: Int enab:  0x007f1003 | Sig enab: 0x007f1003
> [   78.968501] mmc0: sdhci: AC12 err:  0x00000000 | Slot int: 0x00000502
> [   78.974939] mmc0: sdhci: Caps:      0x07eb0000 | Caps_1:   0x8000b407
> [   78.981376] mmc0: sdhci: Cmd:       0x00000cd3 | Max curr: 0x00ffffff
> [   78.987814] mmc0: sdhci: Resp[0]:   0x00000900 | Resp[1]:  0xffffffff
> [   78.994251] mmc0: sdhci: Resp[2]:   0x328f5903 | Resp[3]:  0x00d02700
> [   79.000688] mmc0: sdhci: Host ctl2: 0x00000000
> [   79.005129] mmc0: sdhci: ADMA Err:  0x00000000 | ADMA Ptr: 0x00000000
> [   79.011564] mmc0: sdhci: ============================================
> [   79.018522] mmc0: req done (CMD12): -110: 00000000 00000000 00000000 00000000
> [   79.025713] mmc0: starting CMD48 arg 00000001 flags 00000019
> [   79.031424] mmc0: sdhci: IRQ status 0x00004001
> [   79.035868] mmc0: sdhci: IRQ status 0x00004000
> [   79.040312] mmc0: sdhci: IRQ status 0x00004000
> [   79.044751] mmc0: sdhci: IRQ status 0x00004000
> [   79.049190] mmc0: sdhci: IRQ status 0x00004000
> [   79.053630] mmc0: sdhci: IRQ status 0x00004000
> [   79.058069] mmc0: sdhci: IRQ status 0x00004000
> [   79.062508] mmc0: sdhci: IRQ status 0x00004000
> [   79.066948] mmc0: sdhci: IRQ status 0x00004000
> [   79.071387] mmc0: sdhci: IRQ status 0x00004000
> [   79.075827] mmc0: sdhci: IRQ status 0x00004000
> [   79.080266] mmc0: sdhci: IRQ status 0x00004000
> [   79.084705] mmc0: sdhci: IRQ status 0x00004000
> [   79.089144] mmc0: sdhci: IRQ status 0x00004000
> [   79.093583] mmc0: sdhci: IRQ status 0x00004000
> [   79.098023] mmc0: sdhci: IRQ status 0x00004000
> [   79.102463] mmc0: Unexpected interrupt 0x00004000.
> [   79.107248] mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
> [   79.113687] mmc0: sdhci: Sys addr:  0x00000000 | Version:  0x00000002
> [   79.120125] mmc0: sdhci: Blk size:  0x00000200 | Blk cnt:  0x00000001
> [   79.126562] mmc0: sdhci: Argument:  0x00000000 | Trn mode: 0x00000030
> [   79.133000] mmc0: sdhci: Present:   0x01fd8009 | Host ctl: 0x00000031
> [   79.139436] mmc0: sdhci: Power:     0x00000002 | Blk gap:  0x00000080
> [   79.145874] mmc0: sdhci: Wake-up:   0x00000008 | Clock:    0x0000000f
> [   79.152312] mmc0: sdhci: Timeout:   0x0000000f | Int stat: 0x00004000
> [   79.158749] mmc0: sdhci: Int enab:  0x007f1003 | Sig enab: 0x007f1003
> [   79.165186] mmc0: sdhci: AC12 err:  0x00000000 | Slot int: 0x00000502
> [   79.171624] mmc0: sdhci: Caps:      0x07eb0000 | Caps_1:   0x8000b407
> [   79.178061] mmc0: sdhci: Cmd:       0x00002d12 | Max curr: 0x00ffffff
> [   79.184499] mmc0: sdhci: Resp[0]:   0x00400800 | Resp[1]:  0xffffffff
> [   79.190936] mmc0: sdhci: Resp[2]:   0x328f5903 | Resp[3]:  0x00d02700
> [   79.197373] mmc0: sdhci: Host ctl2: 0x00000000
> [   79.201814] mmc0: sdhci: ADMA Err:  0x00000000 | ADMA Ptr: 0x00000000
> [   79.208249] mmc0: sdhci: ============================================
> [   79.214737] mmc0: req done (CMD48): 0: 00400800 00000000 00000000 00000000
> [   79.221722] mmc0: cqhci: cqhci_recovery_finish
> [   79.226198] mmc0: CQE transfer done tag 0
> [   79.230223] mmc0:     0 bytes transferred: -110
> [   79.234796] mmc0: CQE transfer done tag 1
> [   79.238828] mmc0:     0 bytes transferred: 0
> [   79.243364] mmc0: cqhci: recovery done
> [   79.247146] mmc0: CQE recovery done
> [   79.250699] mmc0: starting CQE transfer for tag 0 blkaddr 0
> [   79.256299] mmc0:     blksz 512 blocks 512 flags 00000200 tsac 150 ms nsac 0
> [   79.263368] mmc0: cqhci: CQE on
> [   79.266540] mmc0: sdhci: CQE on, IRQ mask 0x2ff4000, IRQ status 0x4000
> [   79.273074] mmc0: sdhci: IRQ status 0x00004000
> [   79.277517] mmc0: cqhci: IRQ status: 0x00000006
> [   79.282043] mmc0: cqhci: error IRQ status: 0x00000006 cmd error 0 data error 0 TERRI: 0x00000000
> [   79.290835] mmc0: cqhci: error when idle. IRQ status: 0x00000006 cmd error 0 data error 0 TERRI: 0x00000000
> [   79.300621] ------------[ cut here ]------------
> [   79.305251] WARNING: CPU: 0 PID: 1273 at drivers/mmc/host/cqhci.c:671 cqhci_irq+0x1d0/0x4d0
> [   79.313603] Modules linked in:
> [   79.316660]
> [   79.318151] CPU: 0 PID: 1273 Comm: mmcqd/0 Not tainted 4.9.11-02713-g0aa36a6-dirty #659
> [   79.326156] Hardware name: Freescale i.MX8QXP LPDDR4 ARM2 (DT)
> [   79.329154] fec 5b040000.ethernet eth0: MDIO read timeout
> [   79.337381] task: ffff80083a1a6400 task.stack: ffff80083ae18000
> [   79.343299] PC is at cqhci_irq+0x1d0/0x4d0
> [   79.347399] LR is at cqhci_irq+0x1d0/0x4d0
> [   79.351490] pc : [<ffff0000087f6070>] lr : [<ffff0000087f6070>] pstate: 800001c5
> [   79.358882] sp : ffff80083ff6edc0
> [   79.362192] x29: ffff80083ff6edc0 x28: 0000000000004000
> [   79.367517] x27: ffff80083a807318 x26: ffff80083a807330
> [   79.372842] x25: ffff80083a36a008 x24: 0000000000000000
> [   79.378167] x23: 0000000000000000 x22: ffff80083a36a000
> [   79.383493] x21: ffff80083a807318 x20: 0000000000000000
> [   79.388818] x19: 0000000000000006 x18: 0000000000000006
> [   79.394144] x17: 0000ffff9792db60 x16: ffff0000081dcef8
> [   79.399469] x15: ffff000009083bb5 x14: 20726f7272652061
> [   79.404795] x13: 746164203020726f x12: 72726520646d6320
> [   79.410120] x11: 3630303030303030 x10: 0000000000000294
> [   79.415446] x9 : 747320515249202e x8 : 3030303030783020
> [   79.420771] x7 : 3a49525245542030 x6 : ffff000009083c16
> [   79.426097] x5 : ffff80083ff6fbb8 x4 : 0000000000000001
> [   79.431422] x3 : 0000000000000007 x2 : 0000000000000006
> [   79.436748] x1 : ffff80083ae18000 x0 : 000000000000005f
> [   79.442072]
> [   79.443559] ---[ end trace b7a758ac7d743c5f ]---
> [   79.448172] Call trace:
> [   79.450616] Exception stack(0xffff80083ff6ebf0 to 0xffff80083ff6ed20)
> [   79.457062] ebe0:                                   0000000000000006 0001000000000000
> [   79.472735] ec20: ffff80083ff6ec40 ffff000008d20fa8 ffff000009083000 0000000108f9e118
> [   79.480571] ec40: ffff80083ff6ece0 ffff000008100038 0000000000000006 0000000000000000
> [   79.488407] ec60: ffff80083a807318 ffff80083a36a000 0000000000000000 0000000000000000
> [   79.496243] ec80: ffff80083a36a008 ffff80083a807330 000000000000005f ffff80083ae18000
> [   79.504079] eca0: 0000000000000006 0000000000000007 0000000000000001 ffff80083ff6fbb8
> [   79.511916] ecc0: ffff000009083c16 3a49525245542030 3030303030783020 747320515249202e
> [   79.519752] ece0: 0000000000000294 3630303030303030 72726520646d6320 746164203020726f
> [   79.527588] ed00: 20726f7272652061 ffff000009083bb5 ffff0000081dcef8 0000ffff9792db60
> [   79.535424] [<ffff0000087f6070>] cqhci_irq+0x1d0/0x4d0
> [   79.540567] [<ffff0000087f3a30>] esdhc_cqhci_irq+0x50/0x60
> [   79.546057] [<ffff0000087e91a8>] sdhci_irq+0xe8/0xbe0
> [   79.551116] [<ffff0000081022ec>] __handle_irq_event_percpu+0x9c/0x128
> [   79.557559] [<ffff000008102394>] handle_irq_event_percpu+0x1c/0x58
> [   79.563744] [<ffff000008102418>] handle_irq_event+0x48/0x78
> [   79.569314] [<ffff000008105d28>] handle_fasteoi_irq+0xb8/0x1b0
> [   79.575151] [<ffff0000081013e4>] generic_handle_irq+0x24/0x38
> [   79.580902] [<ffff000008101a5c>] __handle_domain_irq+0x5c/0xb8
> [   79.586742] [<ffff000008081650>] gic_handle_irq+0xc0/0x160
> [   79.592229] Exception stack(0xffff80083ae1bb20 to 0xffff80083ae1bc50)
> [   79.598669] bb20: ffff80083a36a7b8 0000000000000140 000000000000010c 0000000000000007
> [   79.606505] bb40: 0000000000000001 ffff80083ff6fbb8 ffff000009083bf1 515249202c303030
> [   79.614341] bb60: 2073757461747320 ffff80083ae1b9b0 0000000000000290 0000000000000006
> [   79.622177] bb80: 0000000005f5e0ff 0000000000000000 000000000000028f ffff000009083bb5
> [   79.630013] bba0: ffff0000081dcef8 0000ffff9792db60 0000000000000006 ffff80083a36a580
> [   79.637850] bbc0: ffff80083a36a7b8 0000000000000140 ffff80083a36a000 ffff80083a5df880
> [   79.645686] bbe0: ffff80083b00bd78 0000000000000000 ffff80083ab05720 ffff80083ae18000
> [   79.653522] bc00: ffff80083ab05718 ffff80083ae1bc50 ffff0000087e5778 ffff80083ae1bc50
> [   79.661358] bc20: ffff0000089f44a8 0000000080000145 ffff80083a36aa00 0000000000000000
> [   79.669192] bc40: ffffffffffffffff 00000000000f000f
> [   79.674067] [<ffff0000080827b0>] el1_irq+0xb0/0x124
> [   79.678952] [<ffff0000089f44a8>] _raw_spin_unlock_irqrestore+0x10/0x48
> [   79.685483] [<ffff0000087f3914>] esdhc_cqe_enable+0x74/0x90
> [   79.691060] [<ffff0000087f6c58>] cqhci_request+0x588/0x5d0
> [   79.696552] [<ffff0000087cccdc>] mmc_cqe_start_req+0x9c/0xf0
> [   79.702217] [<ffff0000087e0730>] mmc_blk_cqe_issue_rq+0x1a8/0x290
> [   79.708314] [<ffff0000087e10e8>] mmc_cqe_thread+0x1f0/0x330
> [   79.713893] [<ffff0000080d9cf8>] kthread+0xd0/0xe8
> [   79.718686] [<ffff000008082e80>] ret_from_fork+0x10/0x50
> [   79.724002] mmc0: cqhci: TCN: 0x00000000
> [   79.727954] random: crng init done
> [   79.731484] mmc0: cqhci: tag 0 task descriptor 0x016200102f
> [   79.737157] mmc0: starting CQE transfer for tag 1 blkaddr 61071232
> [   79.743356] mmc0:     blksz 512 blocks 8 flags 00000200 tsac 150 ms nsac 0
> [   79.750260] mmc0: cqhci: tag 1 task descriptor 0x0163a3df800008102f
> 
>>>
>>>
>>>> Best Regards,
>>>> Haibo Chen
> 


^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2017-07-06 10:54 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-13 12:36 [PATCH V2 00/22] mmc: Add Command Queue support Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 01/22] mmc: block: Fix is_waiting_last_req set incorrectly Adrian Hunter
2017-03-14 16:22   ` Ulf Hansson
2017-03-13 12:36 ` [PATCH V2 02/22] mmc: block: Fix cmd error reset failure path Adrian Hunter
2017-03-14 16:22   ` Ulf Hansson
2017-03-13 12:36 ` [PATCH V2 03/22] mmc: block: Use local var for mqrq_cur Adrian Hunter
2017-04-08 17:37   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 04/22] mmc: block: Introduce queue semantics Adrian Hunter
2017-04-08 17:40   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 05/22] mmc: queue: Share mmc request array between partitions Adrian Hunter
2017-04-08 17:41   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 06/22] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
2017-04-08 17:39   ` Linus Walleij
2017-04-10 11:01   ` Ulf Hansson
2017-04-10 11:11     ` Adrian Hunter
2017-04-10 13:02       ` Ulf Hansson
2017-03-13 12:36 ` [PATCH V2 07/22] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
2017-04-08 17:43   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 08/22] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
2017-04-08 17:44   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 09/22] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
2017-04-10 13:49   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 10/22] mmc: block: Factor out data preparation Adrian Hunter
2017-04-10 13:52   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 11/22] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
2017-04-10 13:53   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 12/22] mmc: core: Factor out mrq preparation " Adrian Hunter
2017-04-10 13:54   ` Linus Walleij
2017-03-13 12:36 ` [PATCH V2 13/22] mmc: core: Add mmc_retune_hold_now() Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 14/22] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 15/22] mmc: host: Add CQE interface Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 16/22] mmc: core: Turn off CQE before sending commands Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 17/22] mmc: core: Add support for handling CQE requests Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 18/22] mmc: mmc: Enable Command Queuing Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 19/22] mmc: mmc: Enable CQE's Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 20/22] mmc: block: Prepare CQE data Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 21/22] mmc: block: Add CQE support Adrian Hunter
2017-03-13 12:36 ` [PATCH V2 22/22] mmc: cqhci: support for command queue enabled host Adrian Hunter
2017-04-08 17:37 ` [PATCH V2 00/22] mmc: Add Command Queue support Linus Walleij
2017-04-10 13:53 ` Ulf Hansson
2017-04-22  7:45   ` Adrian Hunter
2017-04-24  8:12     ` Linus Walleij
2017-04-24  9:14       ` Bough Chen
2017-06-15 11:38         ` Adrian Hunter
2017-06-15 11:49           ` Bough Chen
2017-06-20  8:01             ` Bough Chen
2017-06-20  9:04               ` Adrian Hunter
2017-07-04 10:21                 ` Bough Chen
2017-07-06 10:48                   ` Adrian Hunter
2017-04-25 13:28       ` Paolo Valente
