* [PATCH RFC 00/39] mmc: Add Command Queue support
@ 2017-02-10 12:55 Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur Adrian Hunter
                   ` (38 more replies)
  0 siblings, 39 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Hi

Here are patches to enable the hardware command queue (Command Queue Engine,
or CQE) and the software command queue.

Ritesh Harjani graciously agreed to the use of their driver, which has been
renamed CQHCI to match the JEDEC specification and has had some more
features added.

The first 13 patches enable the software command queue.  Earlier versions of
these patches have been posted before.

The hardware command queue patches are now functionally complete, enabling
all block requests (read/write/discard/flush) as well as re-tuning and
recovery.

The patches can also be found here:

	http://git.infradead.org/users/ahunter/linux-sdhci.git/shortlog/refs/heads/cqe


Adrian Hunter (38):
      mmc: block: Use local var for mqrq_cur
      mmc: queue: Share mmc request array between partitions
      mmc: block: Introduce queue semantics
      mmc: core: Do not prepare a new request twice
      mmc: mmc: Add functions to enable / disable the Command Queue
      mmc: mmc_test: Disable Command Queue while mmc_test is used
      mmc: block: Disable Command Queue while RPMB is used
      mmc: core: Export mmc_retune_hold() and mmc_retune_release()
      mmc: queue: Add a function to control wake-up on new requests
      mmc: block: Change mmc_apply_rel_rw() to get block address from the request
      mmc: block: Factor out data preparation
      mmc: block: Add Software Command Queuing
      mmc: mmc: Enable Software Command Queuing
      mmc: core: Factor out debug prints from mmc_start_request()
      mmc: core: Factor out mrq preparation from mmc_start_request()
      mmc: core: Add mmc_retune_hold_now()
      mmc: core: Add members to mmc_request and mmc_data for CQE's
      mmc: host: Add CQE interface
      mmc: core: Turn off CQE before sending commands
      mmc: core: Add support for handling CQE requests
      mmc: mmc: Enable CQE's
      mmc: block: Prepare CQE data
      mmc: block: Add CQE support
      mmc: sdhci: Improve debug print format
      mmc: sdhci: Add response register to register dump
      mmc: sdhci: Improve register dump print format
      mmc: sdhci: Export sdhci_dumpregs
      mmc: sdhci: Get rid of 'extern' in header file
      mmc: sdhci: Add sdhci_cleanup_host
      mmc: sdhci: Factor out sdhci_set_default_irqs
      mmc: sdhci: Add CQE support
      mmc: sdhci-pci: Let devices define how to add the host
      mmc: sdhci-pci: Do not use suspend/resume callbacks with runtime pm
      mmc: sdhci-pci: Conditionally compile pm sleep functions
      mmc: sdhci-pci: Let suspend/resume callbacks replace default callbacks
      mmc: sdhci-pci: Add runtime suspend/resume callbacks
      mmc: sdhci-pci: Move a function to avoid later forward declaration
      mmc: sdhci-pci: Add CQHCI support for Intel GLK

Venkat Gopalakrishnan (1):
      mmc: cqhci: support for command queue enabled host

 Documentation/mmc/mmc-dev-attrs.txt  |    1 +
 drivers/mmc/core/block.c             | 1021 ++++++++++++++++++++++++++----
 drivers/mmc/core/block.h             |    7 +
 drivers/mmc/core/bus.c               |    7 +
 drivers/mmc/core/core.c              |  212 ++++++-
 drivers/mmc/core/host.c              |    8 +
 drivers/mmc/core/host.h              |    1 +
 drivers/mmc/core/mmc.c               |   41 +-
 drivers/mmc/core/mmc_ops.c           |   28 +
 drivers/mmc/core/mmc_ops.h           |    2 +
 drivers/mmc/core/mmc_test.c          |   22 +-
 drivers/mmc/core/queue.c             |  630 +++++++++++++++----
 drivers/mmc/core/queue.h             |   68 +-
 drivers/mmc/host/Kconfig             |   14 +
 drivers/mmc/host/Makefile            |    1 +
 drivers/mmc/host/cqhci.c             | 1148 ++++++++++++++++++++++++++++++++++
 drivers/mmc/host/cqhci.h             |  228 +++++++
 drivers/mmc/host/sdhci-pci-core.c    |  499 ++++++++++-----
 drivers/mmc/host/sdhci-pci-o2micro.c |    4 +-
 drivers/mmc/host/sdhci-pci.h         |   11 +
 drivers/mmc/host/sdhci.c             |  324 +++++++---
 drivers/mmc/host/sdhci.h             |   54 +-
 include/linux/mmc/card.h             |    8 +
 include/linux/mmc/core.h             |   20 +-
 include/linux/mmc/host.h             |   25 +
 include/trace/events/mmc.h           |   17 +-
 26 files changed, 3871 insertions(+), 530 deletions(-)
 create mode 100644 drivers/mmc/host/cqhci.c
 create mode 100644 drivers/mmc/host/cqhci.h


Regards
Adrian


* [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 12:29   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 02/39] mmc: queue: Share mmc request array between partitions Adrian Hunter
                   ` (37 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
assigning it to a local variable.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 985477cdcb3e..061133f3b0b2 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1605,7 +1605,8 @@ static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
  * @mq: the queue with the card and host to restart
  * @req: a new request that want to be started after the current one
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
+				   struct mmc_queue_req *mqrq)
 {
 	if (!req)
 		return;
@@ -1619,8 +1620,8 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req)
 		return;
 	}
 	/* Else proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, mq->card, 0, mq);
-	mmc_start_areq(mq->card->host, &mq->mqrq_cur->areq, NULL);
+	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	mmc_start_areq(mq->card->host, &mqrq->areq, NULL);
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
@@ -1630,6 +1631,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	struct mmc_blk_request *brq;
 	int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
+	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
 	struct mmc_queue_req *mq_rq;
 	struct request *old_req;
 	struct mmc_async_req *new_areq;
@@ -1653,8 +1655,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				return;
 			}
 
-			mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-			new_areq = &mq->mqrq_cur->areq;
+			mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+			new_areq = &mqrq_cur->areq;
 		} else
 			new_areq = NULL;
 
@@ -1707,11 +1709,11 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 			if (mmc_blk_reset(md, card->host, type)) {
 				mmc_blk_rw_cmd_abort(card, old_req);
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			if (!req_pending) {
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			break;
@@ -1724,7 +1726,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			if (!mmc_blk_reset(md, card->host, type))
 				break;
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		case MMC_BLK_DATA_ERR: {
 			int err;
@@ -1734,7 +1736,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				break;
 			if (err == -ENODEV) {
 				mmc_blk_rw_cmd_abort(card, old_req);
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			/* Fall through */
@@ -1755,19 +1757,19 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			req_pending = blk_end_request(old_req, -EIO,
 						      brq->data.blksz);
 			if (!req_pending) {
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			break;
 		case MMC_BLK_NOMEDIUM:
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		default:
 			pr_err("%s: Unhandled return value (%d)",
 					old_req->rq_disk->disk_name, status);
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		}
 
-- 
1.9.1



* [PATCH RFC 02/39] mmc: queue: Share mmc request array between partitions
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 03/39] mmc: block: Introduce queue semantics Adrian Hunter
                   ` (36 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

eMMC can have multiple internal partitions that are represented as separate
disks / queues. However, switching between partitions is only done when the
queue is empty. Consequently, the array of mmc requests that are queued can
be shared between partitions, saving memory.

Keep a pointer to the mmc request queue on the card, and use that instead
of allocating a new one for each partition.
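
For illustration only (not part of the patch), the resulting call sequence,
condensed from the diff below, is roughly:

	/* Sketch: mmc_blk_probe() allocates the shared array once per card */
	ret = mmc_queue_alloc_shared_queue(card);
	if (ret)
		return ret;

	/* ... and mmc_init_queue() points each partition's queue at it */
	mq->mqrq = card->mqrq;
	mq->qdepth = card->qdepth;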

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c |  11 ++-
 drivers/mmc/core/queue.c | 235 ++++++++++++++++++++++++++++-------------------
 drivers/mmc/core/queue.h |   2 +
 include/linux/mmc/card.h |   5 +
 4 files changed, 156 insertions(+), 97 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 061133f3b0b2..b6d1497c395f 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2187,6 +2187,7 @@ static int mmc_blk_probe(struct mmc_card *card)
 {
 	struct mmc_blk_data *md, *part_md;
 	char cap_str[10];
+	int ret;
 
 	/*
 	 * Check that the card supports the command class(es) we need.
@@ -2196,9 +2197,15 @@ static int mmc_blk_probe(struct mmc_card *card)
 
 	mmc_fixup_device(card, blk_fixups);
 
+	ret = mmc_queue_alloc_shared_queue(card);
+	if (ret)
+		return ret;
+
 	md = mmc_blk_alloc(card);
-	if (IS_ERR(md))
+	if (IS_ERR(md)) {
+		mmc_queue_free_shared_queue(card);
 		return PTR_ERR(md);
+	}
 
 	string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
@@ -2236,6 +2243,7 @@ static int mmc_blk_probe(struct mmc_card *card)
  out:
 	mmc_blk_remove_parts(card, md);
 	mmc_blk_remove_req(md);
+	mmc_queue_free_shared_queue(card);
 	return 0;
 }
 
@@ -2253,6 +2261,7 @@ static void mmc_blk_remove(struct mmc_card *card)
 	pm_runtime_put_noidle(&card->dev);
 	mmc_blk_remove_req(md);
 	dev_set_drvdata(&card->dev, NULL);
+	mmc_queue_free_shared_queue(card);
 }
 
 static int _mmc_blk_suspend(struct mmc_card *card)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 5cb369c2664b..031553c9be04 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -150,17 +150,13 @@ static void mmc_request_fn(struct request_queue *q)
 		wake_up_process(mq->thread);
 }
 
-static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
+static struct scatterlist *mmc_alloc_sg(int sg_len)
 {
 	struct scatterlist *sg;
 
 	sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL);
-	if (!sg)
-		*err = -ENOMEM;
-	else {
-		*err = 0;
+	if (sg)
 		sg_init_table(sg, sg_len);
-	}
 
 	return sg;
 }
@@ -186,80 +182,164 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
+static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
+{
+	kfree(mqrq->bounce_sg);
+	mqrq->bounce_sg = NULL;
+
+	kfree(mqrq->sg);
+	mqrq->sg = NULL;
+
+	kfree(mqrq->bounce_buf);
+	mqrq->bounce_buf = NULL;
+}
+
+static void mmc_queue_reqs_free_bufs(struct mmc_queue_req *mqrq, int qdepth)
+{
+	int i;
+
+	for (i = 0; i < qdepth; i++)
+		mmc_queue_req_free_bufs(&mqrq[i]);
+}
+
+static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth)
+{
+	mmc_queue_reqs_free_bufs(mqrq, qdepth);
+	kfree(mqrq);
+}
+
 #ifdef CONFIG_MMC_BLOCK_BOUNCE
-static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
-					unsigned int bouncesz)
+static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth,
+				       unsigned int bouncesz)
 {
 	int i;
 
-	for (i = 0; i < mq->qdepth; i++) {
-		mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
-		if (!mq->mqrq[i].bounce_buf)
-			goto out_err;
-	}
+	for (i = 0; i < qdepth; i++) {
+		mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+		if (!mqrq[i].bounce_buf)
+			return -ENOMEM;
 
-	return true;
+		mqrq[i].sg = mmc_alloc_sg(1);
+		if (!mqrq[i].sg)
+			return -ENOMEM;
 
-out_err:
-	while (--i >= 0) {
-		kfree(mq->mqrq[i].bounce_buf);
-		mq->mqrq[i].bounce_buf = NULL;
+		mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512);
+		if (!mqrq[i].bounce_sg)
+			return -ENOMEM;
 	}
-	pr_warn("%s: unable to allocate bounce buffers\n",
-		mmc_card_name(mq->card));
-	return false;
+
+	return 0;
 }
 
-static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
-				      unsigned int bouncesz)
+static bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, int qdepth,
+				   unsigned int bouncesz)
 {
-	int i, ret;
+	int ret;
 
-	for (i = 0; i < mq->qdepth; i++) {
-		mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
-		if (ret)
-			return ret;
+	ret = mmc_queue_alloc_bounce_bufs(mqrq, qdepth, bouncesz);
+	if (ret)
+		mmc_queue_reqs_free_bufs(mqrq, qdepth);
 
-		mq->mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
-		if (ret)
-			return ret;
-	}
+	return !ret;
+}
+
+static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
+{
+	unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;
+
+	if (host->max_segs != 1)
+		return 0;
+
+	if (bouncesz > host->max_req_size)
+		bouncesz = host->max_req_size;
+	if (bouncesz > host->max_seg_size)
+		bouncesz = host->max_seg_size;
+	if (bouncesz > host->max_blk_count * 512)
+		bouncesz = host->max_blk_count * 512;
+
+	if (bouncesz <= 512)
+		return 0;
+
+	return bouncesz;
+}
+#else
+static inline bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq,
+					  int qdepth, unsigned int bouncesz)
+{
+	return false;
+}
 
+static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
+{
 	return 0;
 }
 #endif
 
-static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
+static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth,
+			       int max_segs)
 {
-	int i, ret;
+	int i;
 
-	for (i = 0; i < mq->qdepth; i++) {
-		mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
-		if (ret)
-			return ret;
+	for (i = 0; i < qdepth; i++) {
+		mqrq[i].sg = mmc_alloc_sg(max_segs);
+		if (!mqrq[i].sg)
+			return -ENOMEM;
 	}
 
 	return 0;
 }
 
-static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
+void mmc_queue_free_shared_queue(struct mmc_card *card)
 {
-	kfree(mqrq->bounce_sg);
-	mqrq->bounce_sg = NULL;
+	if (card->mqrq) {
+		mmc_queue_free_mqrqs(card->mqrq, card->qdepth);
+		card->mqrq = NULL;
+	}
+}
 
-	kfree(mqrq->sg);
-	mqrq->sg = NULL;
+static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth)
+{
+	struct mmc_host *host = card->host;
+	struct mmc_queue_req *mqrq;
+	unsigned int bouncesz;
+	int ret = 0;
 
-	kfree(mqrq->bounce_buf);
-	mqrq->bounce_buf = NULL;
+	if (card->mqrq)
+		return -EINVAL;
+
+	mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
+	if (!mqrq)
+		return -ENOMEM;
+
+	card->mqrq = mqrq;
+	card->qdepth = qdepth;
+
+	bouncesz = mmc_queue_calc_bouncesz(host);
+
+	if (bouncesz && !mmc_queue_alloc_bounce(mqrq, qdepth, bouncesz)) {
+		bouncesz = 0;
+		pr_warn("%s: unable to allocate bounce buffers\n",
+			mmc_card_name(card));
+	}
+
+	card->bouncesz = bouncesz;
+
+	if (!bouncesz) {
+		ret = mmc_queue_alloc_sgs(mqrq, qdepth, host->max_segs);
+		if (ret)
+			goto out_err;
+	}
+
+	return ret;
+
+out_err:
+	mmc_queue_free_shared_queue(card);
+	return ret;
 }
 
-static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
+int mmc_queue_alloc_shared_queue(struct mmc_card *card)
 {
-	int i;
-
-	for (i = 0; i < mq->qdepth; i++)
-		mmc_queue_req_free_bufs(&mq->mqrq[i]);
+	return __mmc_queue_alloc_shared_queue(card, 2);
 }
 
 /**
@@ -276,7 +356,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 {
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
-	bool bounce = false;
 	int ret = -ENOMEM;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
@@ -287,11 +366,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (!mq->queue)
 		return -ENOMEM;
 
-	mq->qdepth = 2;
-	mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
-			   GFP_KERNEL);
-	if (!mq->mqrq)
-		goto blk_cleanup;
+	mq->mqrq = card->mqrq;
+	mq->qdepth = card->qdepth;
 	mq->mqrq_cur = &mq->mqrq[0];
 	mq->mqrq_prev = &mq->mqrq[1];
 	mq->queue->queuedata = mq;
@@ -302,44 +378,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
 
-#ifdef CONFIG_MMC_BLOCK_BOUNCE
-	if (host->max_segs == 1) {
-		unsigned int bouncesz;
-
-		bouncesz = MMC_QUEUE_BOUNCESZ;
-
-		if (bouncesz > host->max_req_size)
-			bouncesz = host->max_req_size;
-		if (bouncesz > host->max_seg_size)
-			bouncesz = host->max_seg_size;
-		if (bouncesz > (host->max_blk_count * 512))
-			bouncesz = host->max_blk_count * 512;
-
-		if (bouncesz > 512 &&
-		    mmc_queue_alloc_bounce_bufs(mq, bouncesz)) {
-			blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
-			blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
-			blk_queue_max_segments(mq->queue, bouncesz / 512);
-			blk_queue_max_segment_size(mq->queue, bouncesz);
-
-			ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
-			if (ret)
-				goto cleanup_queue;
-			bounce = true;
-		}
-	}
-#endif
-
-	if (!bounce) {
+	if (card->bouncesz) {
+		blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
+		blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
+		blk_queue_max_segments(mq->queue, card->bouncesz / 512);
+		blk_queue_max_segment_size(mq->queue, card->bouncesz);
+	} else {
 		blk_queue_bounce_limit(mq->queue, limit);
 		blk_queue_max_hw_sectors(mq->queue,
 			min(host->max_blk_count, host->max_req_size / 512));
 		blk_queue_max_segments(mq->queue, host->max_segs);
 		blk_queue_max_segment_size(mq->queue, host->max_seg_size);
-
-		ret = mmc_queue_alloc_sgs(mq, host->max_segs);
-		if (ret)
-			goto cleanup_queue;
 	}
 
 	sema_init(&mq->thread_sem, 1);
@@ -354,11 +403,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 
 	return 0;
 
- cleanup_queue:
-	mmc_queue_reqs_free_bufs(mq);
-	kfree(mq->mqrq);
+cleanup_queue:
 	mq->mqrq = NULL;
-blk_cleanup:
 	blk_cleanup_queue(mq->queue);
 	return ret;
 }
@@ -380,10 +426,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	blk_start_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
-	mmc_queue_reqs_free_bufs(mq);
-	kfree(mq->mqrq);
 	mq->mqrq = NULL;
-
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index e298f100101b..298ead2b4245 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -51,6 +51,8 @@ struct mmc_queue {
 	int			qdepth;
 };
 
+extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
+extern void mmc_queue_free_shared_queue(struct mmc_card *card);
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
 			  const char *);
 extern void mmc_cleanup_queue(struct mmc_queue *);
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 77e61e0a216a..119ef8f0155c 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -208,6 +208,7 @@ struct sdio_cis {
 struct mmc_host;
 struct sdio_func;
 struct sdio_func_tuple;
+struct mmc_queue_req;
 
 #define SDIO_MAX_FUNCS		7
 
@@ -300,6 +301,10 @@ struct mmc_card {
 	struct dentry		*debugfs_root;
 	struct mmc_part	part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
 	unsigned int    nr_parts;
+
+	struct mmc_queue_req	*mqrq;		/* Shared queue structure */
+	unsigned int		bouncesz;	/* Bounce buffer size */
+	int			qdepth;		/* Shared queue depth */
 };
 
 static inline bool mmc_large_sector(struct mmc_card *card)
-- 
1.9.1



* [PATCH RFC 03/39] mmc: block: Introduce queue semantics
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 02/39] mmc: queue: Share mmc request array between partitions Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 12:29   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 04/39] mmc: core: Do not prepare a new request twice Adrian Hunter
                   ` (35 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Change from viewing the requests in progress as 'current' and 'previous'
to viewing them as a queue. The current request is allocated to the first
free slot. The presence of incomplete requests is determined from the
count (mq->qcnt) of entries in the queue. Non-read-write requests (i.e.
discards and flushes) are not added to the queue at all and require no
special handling. No special handling is needed for the
MMC_BLK_NEW_REQUEST case either.

As well as allowing an arbitrarily sized queue, the queue thread function
is significantly simpler.
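
In outline (a condensed sketch of the helpers added below, not new code),
the slot accounting is just a bitmap plus a count:

	/* Sketch: claim the first free slot for a new request */
	i = ffz(mq->qslots);		/* first zero bit = free slot */
	if (i >= mq->qdepth)
		return NULL;		/* queue is full */
	mqrq = &mq->mqrq[i];
	mqrq->req = req;
	mq->qcnt += 1;
	__set_bit(mqrq->task_id, &mq->qslots);

	/* Sketch: completion releases the slot again */
	mqrq->req = NULL;
	mq->qcnt -= 1;
	__clear_bit(mqrq->task_id, &mq->qslots);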

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 42 ++++++++++++++++-----------
 drivers/mmc/core/queue.c | 74 ++++++++++++++++++++++++++++++------------------
 drivers/mmc/core/queue.h | 10 +++++--
 3 files changed, 79 insertions(+), 47 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index b6d1497c395f..7568e576bba3 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -134,6 +134,13 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      struct mmc_blk_data *md);
 static int get_card_status(struct mmc_card *card, u32 *status, int retries);
 
+static void mmc_blk_requeue(struct request_queue *q, struct request *req)
+{
+	spin_lock_irq(q->queue_lock);
+	blk_requeue_request(q, req);
+	spin_unlock_irq(q->queue_lock);
+}
+
 static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
 {
 	struct mmc_blk_data *md;
@@ -1631,14 +1638,23 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	struct mmc_blk_request *brq;
 	int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
-	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+	struct mmc_queue_req *mqrq_cur = NULL;
 	struct mmc_queue_req *mq_rq;
 	struct request *old_req;
 	struct mmc_async_req *new_areq;
 	struct mmc_async_req *old_areq;
 	bool req_pending = true;
 
-	if (!new_req && !mq->mqrq_prev->req)
+	if (new_req) {
+		mqrq_cur = mmc_queue_req_find(mq, new_req);
+		if (!mqrq_cur) {
+			WARN_ON(1);
+			mmc_blk_requeue(mq->queue, new_req);
+			new_req = NULL;
+		}
+	}
+
+	if (!mq->qcnt)
 		return;
 
 	do {
@@ -1667,8 +1683,6 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			 * and there is nothing more to do until it is
 			 * complete.
 			 */
-			if (status == MMC_BLK_NEW_REQUEST)
-				mq->new_request = true;
 			return;
 		}
 
@@ -1785,6 +1799,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			mq_rq->brq.retune_retry_done = retune_retry_done;
 		}
 	} while (req_pending);
+
+	mmc_queue_req_free(mq, mq_rq);
 }
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
@@ -1792,9 +1808,8 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	int ret;
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
-	bool req_is_special = mmc_req_is_special(req);
 
-	if (req && !mq->mqrq_prev->req)
+	if (req && !mq->qcnt)
 		/* claim host only for the first request */
 		mmc_get_card(card);
 
@@ -1806,20 +1821,19 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		goto out;
 	}
 
-	mq->new_request = false;
 	if (req && req_op(req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
-		if (card->host->areq)
+		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		mmc_blk_issue_discard_rq(mq, req);
 	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
 		/* complete ongoing async transfer before issuing secure erase*/
-		if (card->host->areq)
+		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		mmc_blk_issue_secdiscard_rq(mq, req);
 	} else if (req && req_op(req) == REQ_OP_FLUSH) {
 		/* complete ongoing async transfer before issuing flush */
-		if (card->host->areq)
+		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		mmc_blk_issue_flush(mq, req);
 	} else {
@@ -1827,13 +1841,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	}
 
 out:
-	if ((!req && !mq->new_request) || req_is_special)
-		/*
-		 * Release host when there are no more requests
-		 * and after special request(discard, flush) is done.
-		 * In case sepecial request, there is no reentry to
-		 * the 'mmc_blk_issue_rq' with 'mqrq_prev->req'.
-		 */
+	if (!mq->qcnt)
 		mmc_put_card(card);
 }
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 031553c9be04..479638bfb092 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -49,6 +49,35 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
+struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
+					 struct request *req)
+{
+	struct mmc_queue_req *mqrq;
+	int i = ffz(mq->qslots);
+
+	if (i >= mq->qdepth)
+		return NULL;
+
+	mqrq = &mq->mqrq[i];
+	WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
+		test_bit(mqrq->task_id, &mq->qslots));
+	mqrq->req = req;
+	mq->qcnt += 1;
+	__set_bit(mqrq->task_id, &mq->qslots);
+
+	return mqrq;
+}
+
+void mmc_queue_req_free(struct mmc_queue *mq,
+			struct mmc_queue_req *mqrq)
+{
+	WARN_ON(!mqrq->req || mq->qcnt < 1 ||
+		!test_bit(mqrq->task_id, &mq->qslots));
+	mqrq->req = NULL;
+	mq->qcnt -= 1;
+	__clear_bit(mqrq->task_id, &mq->qslots);
+}
+
 static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
@@ -59,7 +88,7 @@ static int mmc_queue_thread(void *d)
 
 	down(&mq->thread_sem);
 	do {
-		struct request *req = NULL;
+		struct request *req;
 
 		spin_lock_irq(q->queue_lock);
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -72,38 +101,17 @@ static int mmc_queue_thread(void *d)
 			 * Dispatch queue is empty so set flags for
 			 * mmc_request_fn() to wake us up.
 			 */
-			if (mq->mqrq_prev->req)
+			if (mq->qcnt)
 				cntx->is_waiting_last_req = true;
 			else
 				mq->asleep = true;
 		}
-		mq->mqrq_cur->req = req;
 		spin_unlock_irq(q->queue_lock);
 
-		if (req || mq->mqrq_prev->req) {
-			bool req_is_special = mmc_req_is_special(req);
-
+		if (req || mq->qcnt) {
 			set_current_state(TASK_RUNNING);
 			mmc_blk_issue_rq(mq, req);
 			cond_resched();
-			if (mq->new_request) {
-				mq->new_request = false;
-				continue; /* fetch again */
-			}
-
-			/*
-			 * Current request becomes previous request
-			 * and vice versa.
-			 * In case of special requests, current request
-			 * has been finished. Do not assign it to previous
-			 * request.
-			 */
-			if (req_is_special)
-				mq->mqrq_cur->req = NULL;
-
-			mq->mqrq_prev->brq.mrq.data = NULL;
-			mq->mqrq_prev->req = NULL;
-			swap(mq->mqrq_prev, mq->mqrq_cur);
 		} else {
 			if (kthread_should_stop()) {
 				set_current_state(TASK_RUNNING);
@@ -208,6 +216,20 @@ static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth)
 	kfree(mqrq);
 }
 
+static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
+{
+	struct mmc_queue_req *mqrq;
+	int i;
+
+	mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
+	if (mqrq) {
+		for (i = 0; i < qdepth; i++)
+			mqrq[i].task_id = i;
+	}
+
+	return mqrq;
+}
+
 #ifdef CONFIG_MMC_BLOCK_BOUNCE
 static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth,
 				       unsigned int bouncesz)
@@ -307,7 +329,7 @@ static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth)
 	if (card->mqrq)
 		return -EINVAL;
 
-	mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
+	mqrq = mmc_queue_alloc_mqrqs(qdepth);
 	if (!mqrq)
 		return -ENOMEM;
 
@@ -368,8 +390,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 
 	mq->mqrq = card->mqrq;
 	mq->qdepth = card->qdepth;
-	mq->mqrq_cur = &mq->mqrq[0];
-	mq->mqrq_prev = &mq->mqrq[1];
 	mq->queue->queuedata = mq;
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 298ead2b4245..871796c3f406 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -34,21 +34,21 @@ struct mmc_queue_req {
 	struct scatterlist	*bounce_sg;
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
+	int			task_id;
 };
 
 struct mmc_queue {
 	struct mmc_card		*card;
 	struct task_struct	*thread;
 	struct semaphore	thread_sem;
-	bool			new_request;
 	bool			suspended;
 	bool			asleep;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
 	struct mmc_queue_req	*mqrq;
-	struct mmc_queue_req	*mqrq_cur;
-	struct mmc_queue_req	*mqrq_prev;
 	int			qdepth;
+	int			qcnt;
+	unsigned long		qslots;
 };
 
 extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
@@ -66,4 +66,8 @@ extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
 
 extern int mmc_access_rpmb(struct mmc_queue *);
 
+extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
+						struct request *);
+extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
+
 #endif
-- 
1.9.1



* [PATCH RFC 04/39] mmc: core: Do not prepare a new request twice
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (2 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 03/39] mmc: block: Introduce queue semantics Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 12:49   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
                   ` (34 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

mmc_start_areq() assumes it is never called with the new request already
prepared. That is true if the queue consists of only 2 requests, but it is
not true for a longer queue. e.g. mmc_start_areq() has a current and a
previous request but still exits to queue a new request if the queue size
is greater than 2. In that case, when mmc_start_areq() is called again, the
current request will have been prepared already. Fix by flagging whether
the request has been prepared.

That also means ensuring that struct mmc_async_req is always initialized
to zero, which wasn't the case in mmc_test.
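
In outline, the fix is just a guard flag around the prepare step (condensed
from the diff below):

	/* Prepare the new request only if it has not been prepared already */
	if (areq && !areq->pre_req_done) {
		areq->pre_req_done = true;
		mmc_pre_req(host, areq->mrq);
	}

	/* Clear the flag again when the old request is post-processed */
	host->areq->pre_req_done = false;
	mmc_post_req(host, host->areq->mrq, 0);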

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
---
 drivers/mmc/core/core.c     | 12 +++++++++---
 drivers/mmc/core/mmc_test.c |  8 ++++----
 include/linux/mmc/host.h    |  1 +
 3 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 926e0fde07d7..d97b8f2cf3cb 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -686,8 +686,10 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 	struct mmc_async_req *data = host->areq;
 
 	/* Prepare a new request */
-	if (areq)
+	if (areq && !areq->pre_req_done) {
+		areq->pre_req_done = true;
 		mmc_pre_req(host, areq->mrq);
+	}
 
 	/* Finalize previous request */
 	status = mmc_finalize_areq(host);
@@ -704,12 +706,16 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 		start_err = __mmc_start_data_req(host, areq->mrq);
 
 	/* Postprocess the old request at this point */
-	if (host->areq)
+	if (host->areq) {
+		host->areq->pre_req_done = false;
 		mmc_post_req(host, host->areq->mrq, 0);
+	}
 
 	/* Cancel a prepared request if it was not started. */
-	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
+	if ((status != MMC_BLK_SUCCESS || start_err) && areq) {
+		areq->pre_req_done = false;
 		mmc_post_req(host, areq->mrq, -EINVAL);
+	}
 
 	if (status != MMC_BLK_SUCCESS)
 		host->areq = NULL;
diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
index f99ac3123fd2..45e623cc1624 100644
--- a/drivers/mmc/core/mmc_test.c
+++ b/drivers/mmc/core/mmc_test.c
@@ -831,7 +831,10 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
 	struct mmc_command stop2;
 	struct mmc_data data2;
 
-	struct mmc_test_async_req test_areq[2];
+	struct mmc_test_async_req test_areq[2] = {
+		{ .test = test },
+		{ .test = test },
+	};
 	struct mmc_async_req *done_areq;
 	struct mmc_async_req *cur_areq = &test_areq[0].areq;
 	struct mmc_async_req *other_areq = &test_areq[1].areq;
@@ -839,9 +842,6 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
 	int i;
 	int ret = RESULT_OK;
 
-	test_areq[0].test = test;
-	test_areq[1].test = test;
-
 	mmc_test_nonblock_reset(&mrq1, &cmd1, &stop1, &data1);
 	mmc_test_nonblock_reset(&mrq2, &cmd2, &stop2, &data2);
 
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 83f1c4a9f03b..f339cd64f0f6 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -168,6 +168,7 @@ struct mmc_async_req {
 	 * Returns 0 if success otherwise non zero.
 	 */
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
+	bool pre_req_done;
 };
 
 /**
-- 
1.9.1



* [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (3 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 04/39] mmc: core: Do not prepare a new request twice Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 12:52   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 06/39] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
                   ` (33 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add helper functions to enable or disable the Command Queue.
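
A sketch of the expected usage (the actual callers appear in later patches
of this series; the error handling here is only illustrative):

	mmc_claim_host(card->host);
	err = mmc_cmdq_disable(card);	/* CMD6 clears EXT_CSD_CMDQ_MODE_EN */
	mmc_release_host(card->host);
	if (err)
		return err;

	/* ... later, restore the previous state ... */
	mmc_claim_host(card->host);
	err = mmc_cmdq_enable(card);
	mmc_release_host(card->host);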

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 Documentation/mmc/mmc-dev-attrs.txt |  1 +
 drivers/mmc/core/mmc.c              |  2 ++
 drivers/mmc/core/mmc_ops.c          | 28 ++++++++++++++++++++++++++++
 drivers/mmc/core/mmc_ops.h          |  2 ++
 include/linux/mmc/card.h            |  1 +
 5 files changed, 34 insertions(+)

diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
index 404a0e9e92b0..dcd1252877fb 100644
--- a/Documentation/mmc/mmc-dev-attrs.txt
+++ b/Documentation/mmc/mmc-dev-attrs.txt
@@ -30,6 +30,7 @@ All attributes are read-only.
 	rel_sectors		Reliable write sector count
 	ocr 			Operation Conditions Register
 	dsr			Driver Stage Register
+	cmdq_en			Command Queue enabled: 1 => enabled, 0 => not enabled
 
 Note on Erase Size and Preferred Erase Size:
 
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index e01e70c24ca2..df154b951a10 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -800,6 +800,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
 MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
 MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
 MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
+MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
 
 static ssize_t mmc_fwrev_show(struct device *dev,
 			      struct device_attribute *attr,
@@ -855,6 +856,7 @@ static ssize_t mmc_dsr_show(struct device *dev,
 	&dev_attr_rel_sectors.attr,
 	&dev_attr_ocr.attr,
 	&dev_attr_dsr.attr,
+	&dev_attr_cmdq_en.attr,
 	NULL,
 };
 ATTRIBUTE_GROUPS(mmc_std);
diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index fe80f26d6971..24c58d24c19a 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -838,3 +838,31 @@ int mmc_can_ext_csd(struct mmc_card *card)
 {
 	return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
 }
+
+static int mmc_cmdq_switch(struct mmc_card *card, bool enable)
+{
+	u8 val = enable ? EXT_CSD_CMDQ_MODE_ENABLED : 0;
+	int err;
+
+	if (!card->ext_csd.cmdq_support)
+		return -EOPNOTSUPP;
+
+	err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
+			 val, card->ext_csd.generic_cmd6_time);
+	if (!err)
+		card->ext_csd.cmdq_en = enable;
+
+	return err;
+}
+
+int mmc_cmdq_enable(struct mmc_card *card)
+{
+	return mmc_cmdq_switch(card, true);
+}
+EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
+
+int mmc_cmdq_disable(struct mmc_card *card)
+{
+	return mmc_cmdq_switch(card, false);
+}
+EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
diff --git a/drivers/mmc/core/mmc_ops.h b/drivers/mmc/core/mmc_ops.h
index 74beea8a9c7e..978bd2e60f8a 100644
--- a/drivers/mmc/core/mmc_ops.h
+++ b/drivers/mmc/core/mmc_ops.h
@@ -46,6 +46,8 @@ int mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 void mmc_start_bkops(struct mmc_card *card, bool from_exception);
 int mmc_can_reset(struct mmc_card *card);
 int mmc_flush_cache(struct mmc_card *card);
+int mmc_cmdq_enable(struct mmc_card *card);
+int mmc_cmdq_disable(struct mmc_card *card);
 
 #endif
 
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 119ef8f0155c..94637796b99c 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -89,6 +89,7 @@ struct mmc_ext_csd {
 	unsigned int		boot_ro_lock;		/* ro lock support */
 	bool			boot_ro_lockable;
 	bool			ffu_capable;	/* Firmware upgrade support */
+	bool			cmdq_en;	/* Command Queue enabled */
 	bool			cmdq_support;	/* Command Queue supported */
 	unsigned int		cmdq_depth;	/* Command Queue depth */
 #define MMC_FIRMWARE_LEN 8
-- 
1.9.1



* [PATCH RFC 06/39] mmc: mmc_test: Disable Command Queue while mmc_test is used
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (4 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 07/39] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
                   ` (32 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Normal read and write commands may not be used while the command queue is
enabled. Disable the Command Queue when mmc_test is probed and re-enable it
when it is removed.
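
In outline (condensed from the diff below), the remove path re-enables the
queue only if it was enabled when the card was initialized:

	if (card->reenable_cmdq) {	/* saved from ext_csd.cmdq_en at init */
		mmc_claim_host(card->host);
		mmc_cmdq_enable(card);
		mmc_release_host(card->host);
	}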

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
---
 drivers/mmc/core/mmc.c      |  7 +++++++
 drivers/mmc/core/mmc_test.c | 14 ++++++++++++++
 include/linux/mmc/card.h    |  2 ++
 3 files changed, 23 insertions(+)

diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index df154b951a10..8c290264cf2d 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1800,6 +1800,13 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	}
 
 	/*
+	 * In some cases (e.g. RPMB or mmc_test), the Command Queue must be
+	 * disabled for a time, so a flag is needed to indicate to re-enable the
+	 * Command Queue.
+	 */
+	card->reenable_cmdq = card->ext_csd.cmdq_en;
+
+	/*
 	 * The mandatory minimum values are defined for packed command.
 	 * read: 5, write: 3
 	 */
diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
index 45e623cc1624..e6f1c3fae849 100644
--- a/drivers/mmc/core/mmc_test.c
+++ b/drivers/mmc/core/mmc_test.c
@@ -26,6 +26,7 @@
 #include "card.h"
 #include "host.h"
 #include "bus.h"
+#include "mmc_ops.h"
 
 #define RESULT_OK		0
 #define RESULT_FAIL		1
@@ -3264,6 +3265,14 @@ static int mmc_test_probe(struct mmc_card *card)
 	if (ret)
 		return ret;
 
+	if (card->ext_csd.cmdq_en) {
+		mmc_claim_host(card->host);
+		ret = mmc_cmdq_disable(card);
+		mmc_release_host(card->host);
+		if (ret)
+			return ret;
+	}
+
 	dev_info(&card->dev, "Card claimed for testing.\n");
 
 	return 0;
@@ -3271,6 +3280,11 @@ static int mmc_test_probe(struct mmc_card *card)
 
 static void mmc_test_remove(struct mmc_card *card)
 {
+	if (card->reenable_cmdq) {
+		mmc_claim_host(card->host);
+		mmc_cmdq_enable(card);
+		mmc_release_host(card->host);
+	}
 	mmc_test_free_result(card);
 	mmc_test_free_dbgfs_file(card);
 }
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 94637796b99c..85b5f2bc8bb9 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -269,6 +269,8 @@ struct mmc_card {
 #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
 #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
 
+	bool			reenable_cmdq;	/* Re-enable Command Queue */
+
 	unsigned int		erase_size;	/* erase size in sectors */
  	unsigned int		erase_shift;	/* if erase unit is power 2 */
  	unsigned int		pref_erase;	/* in sectors */
-- 
1.9.1



* [PATCH RFC 07/39] mmc: block: Disable Command Queue while RPMB is used
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (5 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 06/39] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 08/39] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
                   ` (31 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

RPMB does not allow Command Queue commands. Disable and re-enable the
Command Queue when switching.

Note that the driver only switches partitions when the queue is empty.
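
Condensed from the diff below, the partition switch is now bracketed like
this:

	/* before switching into RPMB: quiesce CMDQ and pause re-tuning */
	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
		if (card->ext_csd.cmdq_en)
			ret = mmc_cmdq_disable(card);
		mmc_retune_pause(card->host);
	}

	/* ... CMD6 partition switch ... */

	/* after switching back out of RPMB: undo both, in reverse order */
	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
		mmc_retune_unpause(card->host);
		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
			ret = mmc_cmdq_enable(card);
	}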

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
---
 drivers/mmc/core/block.c | 46 ++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 7568e576bba3..1b1396e86c24 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -733,10 +733,41 @@ static int mmc_blk_compat_ioctl(struct block_device *bdev, fmode_t mode,
 #endif
 };
 
+static int mmc_blk_part_switch_pre(struct mmc_card *card,
+				   unsigned int part_type)
+{
+	int ret = 0;
+
+	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+		if (card->ext_csd.cmdq_en) {
+			ret = mmc_cmdq_disable(card);
+			if (ret)
+				return ret;
+		}
+		mmc_retune_pause(card->host);
+	}
+
+	return ret;
+}
+
+static int mmc_blk_part_switch_post(struct mmc_card *card,
+				    unsigned int part_type)
+{
+	int ret = 0;
+
+	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+		mmc_retune_unpause(card->host);
+		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+			ret = mmc_cmdq_enable(card);
+	}
+
+	return ret;
+}
+
 static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      struct mmc_blk_data *md)
 {
-	int ret;
+	int ret = 0;
 	struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
 
 	if (main_md->part_curr == md->part_type)
@@ -745,8 +776,9 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 	if (mmc_card_mmc(card)) {
 		u8 part_config = card->ext_csd.part_config;
 
-		if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
-			mmc_retune_pause(card->host);
+		ret = mmc_blk_part_switch_pre(card, md->part_type);
+		if (ret)
+			return ret;
 
 		part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
 		part_config |= md->part_type;
@@ -755,19 +787,17 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				 EXT_CSD_PART_CONFIG, part_config,
 				 card->ext_csd.part_time);
 		if (ret) {
-			if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
-				mmc_retune_unpause(card->host);
+			mmc_blk_part_switch_post(card, md->part_type);
 			return ret;
 		}
 
 		card->ext_csd.part_config = part_config;
 
-		if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
-			mmc_retune_unpause(card->host);
+		ret = mmc_blk_part_switch_post(card, main_md->part_curr);
 	}
 
 	main_md->part_curr = md->part_type;
-	return 0;
+	return ret;
 }
 
 static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
-- 
1.9.1



* [PATCH RFC 08/39] mmc: core: Export mmc_retune_hold() and mmc_retune_release()
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (6 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 07/39] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 09/39] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
                   ` (30 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Re-tuning can only be done when the Command Queue is empty, which means
holding and releasing re-tuning from the block driver, so export those
functions.
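
A sketch of the intended use from the block driver (the actual caller is
added later in this series):

	mmc_retune_hold(host);		/* no re-tuning while requests are queued */
	/* ... issue and complete the queued requests ... */
	mmc_retune_release(host);	/* queue empty: re-tuning allowed again */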

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
---
 drivers/mmc/core/host.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 3f8c85d5aa09..ee3d9d92481f 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -109,6 +109,7 @@ void mmc_retune_hold(struct mmc_host *host)
 		host->retune_now = 1;
 	host->hold_retune += 1;
 }
+EXPORT_SYMBOL(mmc_retune_hold);
 
 void mmc_retune_release(struct mmc_host *host)
 {
@@ -117,6 +118,7 @@ void mmc_retune_release(struct mmc_host *host)
 	else
 		WARN_ON(1);
 }
+EXPORT_SYMBOL(mmc_retune_release);
 
 int mmc_retune(struct mmc_host *host)
 {
-- 
1.9.1



* [PATCH RFC 09/39] mmc: queue: Add a function to control wake-up on new requests
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (7 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 08/39] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:07   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 10/39] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
                   ` (29 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add a function to control wake-up on new requests. This will enable
Software Command Queuing to choose whether to queue new requests
immediately or to wait for the current task to complete.
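
A sketch of the intended use (the Software Command Queuing caller is added
later in this series):

	/* queue has free slots: wake the thread as soon as work arrives */
	mmc_queue_set_wake(mq, true);

	/* queue is full: do not interrupt, wait for the current task */
	mmc_queue_set_wake(mq, false);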

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/queue.c | 16 ++++++++++++++++
 drivers/mmc/core/queue.h |  2 ++
 2 files changed, 18 insertions(+)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 479638bfb092..9e7e4eb9250c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -97,6 +97,7 @@ static int mmc_queue_thread(void *d)
 		cntx->is_waiting_last_req = false;
 		cntx->is_new_req = false;
 		if (!req) {
+			mq->is_new_req = false;
 			/*
 			 * Dispatch queue is empty so set flags for
 			 * mmc_request_fn() to wake us up.
@@ -147,6 +148,8 @@ static void mmc_request_fn(struct request_queue *q)
 		return;
 	}
 
+	mq->is_new_req = true;
+
 	cntx = &mq->card->host->context_info;
 
 	if (cntx->is_waiting_last_req) {
@@ -158,6 +161,19 @@ static void mmc_request_fn(struct request_queue *q)
 		wake_up_process(mq->thread);
 }
 
+void mmc_queue_set_wake(struct mmc_queue *mq, bool wake_me)
+{
+	struct mmc_context_info *cntx = &mq->card->host->context_info;
+	struct request_queue *q = mq->queue;
+
+	if (cntx->is_waiting_last_req != wake_me) {
+		spin_lock_irq(q->queue_lock);
+		cntx->is_waiting_last_req = wake_me;
+		cntx->is_new_req = wake_me && mq->is_new_req;
+		spin_unlock_irq(q->queue_lock);
+	}
+}
+
 static struct scatterlist *mmc_alloc_sg(int sg_len)
 {
 	struct scatterlist *sg;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 871796c3f406..c9ce0172dd08 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -43,6 +43,7 @@ struct mmc_queue {
 	struct semaphore	thread_sem;
 	bool			suspended;
 	bool			asleep;
+	bool			is_new_req;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
 	struct mmc_queue_req	*mqrq;
@@ -69,5 +70,6 @@ extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
 extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
 						struct request *);
 extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
+extern void mmc_queue_set_wake(struct mmc_queue *, bool);
 
 #endif
-- 
1.9.1



* [PATCH RFC 10/39] mmc: block: Change mmc_apply_rel_rw() to get block address from the request
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (8 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 09/39] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:09   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 11/39] mmc: block: Factor out data preparation Adrian Hunter
                   ` (28 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

mmc_apply_rel_rw() will also be used by Software Command Queuing. In that
case the command argument is not the block address, so change
mmc_apply_rel_rw() to get the block address from the request.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 1b1396e86c24..d6dfaebb84ee 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1314,7 +1314,7 @@ static inline void mmc_apply_rel_rw(struct mmc_blk_request *brq,
 {
 	if (!(card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN)) {
 		/* Legacy mode imposes restrictions on transfers. */
-		if (!IS_ALIGNED(brq->cmd.arg, card->ext_csd.rel_sectors))
+		if (!IS_ALIGNED(blk_rq_pos(req), card->ext_csd.rel_sectors))
 			brq->data.blocks = 1;
 
 		if (brq->data.blocks > card->ext_csd.rel_sectors)
-- 
1.9.1



* [PATCH RFC 11/39] mmc: block: Factor out data preparation
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (9 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 10/39] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:11   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 12/39] mmc: block: Add Software Command Queuing Adrian Hunter
                   ` (27 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Factor out data preparation into a separate function mmc_blk_data_prep()
which can be re-used for command queuing.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 151 +++++++++++++++++++++++++----------------------
 1 file changed, 82 insertions(+), 69 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index d6dfaebb84ee..17c19387bf84 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1438,36 +1438,39 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card,
 	return MMC_BLK_SUCCESS;
 }
 
-static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-			       struct mmc_card *card,
-			       int disable_multi,
-			       struct mmc_queue *mq)
+static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+			      int disable_multi, bool *do_rel_wr,
+			      bool *do_data_tag)
 {
-	u32 readcmd, writecmd;
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = md->queue.card;
 	struct mmc_blk_request *brq = &mqrq->brq;
 	struct request *req = mqrq->req;
-	struct mmc_blk_data *md = mq->blkdata;
-	bool do_data_tag;
 
 	/*
 	 * Reliable writes are used to implement Forced Unit Access and
 	 * are supported only on MMCs.
 	 */
-	bool do_rel_wr = (req->cmd_flags & REQ_FUA) &&
-		(rq_data_dir(req) == WRITE) &&
-		(md->flags & MMC_BLK_REL_WR);
+	*do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+		     rq_data_dir(req) == WRITE &&
+		     (md->flags & MMC_BLK_REL_WR);
 
 	memset(brq, 0, sizeof(struct mmc_blk_request));
-	brq->mrq.cmd = &brq->cmd;
+
 	brq->mrq.data = &brq->data;
 
-	brq->cmd.arg = blk_rq_pos(req);
-	if (!mmc_card_blockaddr(card))
-		brq->cmd.arg <<= 9;
-	brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-	brq->data.blksz = 512;
 	brq->stop.opcode = MMC_STOP_TRANSMISSION;
 	brq->stop.arg = 0;
+
+	if (rq_data_dir(req) == READ) {
+		brq->data.flags = MMC_DATA_READ;
+		brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC;
+	} else {
+		brq->data.flags = MMC_DATA_WRITE;
+		brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+	}
+
+	brq->data.blksz = 512;
 	brq->data.blocks = blk_rq_sectors(req);
 
 	/*
@@ -1498,6 +1501,68 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 						brq->data.blocks);
 	}
 
+	if (*do_rel_wr)
+		mmc_apply_rel_rw(brq, card, req);
+
+	/*
+	 * Data tag is used only during writing meta data to speed
+	 * up write and any subsequent read of this meta data
+	 */
+	*do_data_tag = card->ext_csd.data_tag_unit_size &&
+		       (req->cmd_flags & REQ_META) &&
+		       (rq_data_dir(req) == WRITE) &&
+		       ((brq->data.blocks * brq->data.blksz) >=
+			card->ext_csd.data_tag_unit_size);
+
+	mmc_set_data_timeout(&brq->data, card);
+
+	brq->data.sg = mqrq->sg;
+	brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
+
+	/*
+	 * Adjust the sg list so it is the same size as the
+	 * request.
+	 */
+	if (brq->data.blocks != blk_rq_sectors(req)) {
+		int i, data_size = brq->data.blocks << 9;
+		struct scatterlist *sg;
+
+		for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
+			data_size -= sg->length;
+			if (data_size <= 0) {
+				sg->length += data_size;
+				i++;
+				break;
+			}
+		}
+		brq->data.sg_len = i;
+	}
+
+	mqrq->areq.mrq = &brq->mrq;
+
+	mmc_queue_bounce_pre(mqrq);
+}
+
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+			       struct mmc_card *card,
+			       int disable_multi,
+			       struct mmc_queue *mq)
+{
+	u32 readcmd, writecmd;
+	struct mmc_blk_request *brq = &mqrq->brq;
+	struct request *req = mqrq->req;
+	struct mmc_blk_data *md = mq->blkdata;
+	bool do_rel_wr, do_data_tag;
+
+	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
+
+	brq->mrq.cmd = &brq->cmd;
+
+	brq->cmd.arg = blk_rq_pos(req);
+	if (!mmc_card_blockaddr(card))
+		brq->cmd.arg <<= 9;
+	brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+
 	if (brq->data.blocks > 1 || do_rel_wr) {
 		/* SPI multiblock writes terminate using a special
 		 * token, not a STOP_TRANSMISSION request.
@@ -1512,32 +1577,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 		readcmd = MMC_READ_SINGLE_BLOCK;
 		writecmd = MMC_WRITE_BLOCK;
 	}
-	if (rq_data_dir(req) == READ) {
-		brq->cmd.opcode = readcmd;
-		brq->data.flags = MMC_DATA_READ;
-		if (brq->mrq.stop)
-			brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 |
-					MMC_CMD_AC;
-	} else {
-		brq->cmd.opcode = writecmd;
-		brq->data.flags = MMC_DATA_WRITE;
-		if (brq->mrq.stop)
-			brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B |
-					MMC_CMD_AC;
-	}
-
-	if (do_rel_wr)
-		mmc_apply_rel_rw(brq, card, req);
-
-	/*
-	 * Data tag is used only during writing meta data to speed
-	 * up write and any subsequent read of this meta data
-	 */
-	do_data_tag = (card->ext_csd.data_tag_unit_size) &&
-		(req->cmd_flags & REQ_META) &&
-		(rq_data_dir(req) == WRITE) &&
-		((brq->data.blocks * brq->data.blksz) >=
-		 card->ext_csd.data_tag_unit_size);
+	brq->cmd.opcode = rq_data_dir(req) == READ ? readcmd : writecmd;
 
 	/*
 	 * Pre-defined multi-block transfers are preferable to
@@ -1568,34 +1608,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 		brq->mrq.sbc = &brq->sbc;
 	}
 
-	mmc_set_data_timeout(&brq->data, card);
-
-	brq->data.sg = mqrq->sg;
-	brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
-
-	/*
-	 * Adjust the sg list so it is the same size as the
-	 * request.
-	 */
-	if (brq->data.blocks != blk_rq_sectors(req)) {
-		int i, data_size = brq->data.blocks << 9;
-		struct scatterlist *sg;
-
-		for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
-			data_size -= sg->length;
-			if (data_size <= 0) {
-				sg->length += data_size;
-				i++;
-				break;
-			}
-		}
-		brq->data.sg_len = i;
-	}
-
-	mqrq->areq.mrq = &brq->mrq;
 	mqrq->areq.err_check = mmc_blk_err_check;
-
-	mmc_queue_bounce_pre(mqrq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 12/39] mmc: block: Add Software Command Queuing
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (10 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 11/39] mmc: block: Factor out data preparation Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:34   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 13/39] mmc: mmc: Enable " Adrian Hunter
                   ` (26 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

eMMC Command Queuing is a feature added in version 5.1.  The card maintains
a queue of up to 32 data transfers.  Commands CMD44/CMD45 are sent to queue
up transfers in advance, and then one of the transfers is selected to
"execute" by CMD46/CMD47, at which point the data transfer actually begins.

The advantage of command queuing is that the card can prepare for transfers
in advance. That makes a big difference in the case of random reads because
the card can start reading into its cache in advance.

A v5.1 host controller can manage the command queue itself, but it is also
possible for software to manage the queue using a non-v5.1 host controller
- that is what Software Command Queuing is.

Refer to the JEDEC (http://www.jedec.org/) eMMC v5.1 Specification for more
information about Command Queuing.
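
As a rough sketch (illustrative only: the helper name and the task_id,
nr_blocks and block_addr parameters are placeholders, while the argument
layout follows mmc_swcmdq_prep() in the patch below), queuing one read task
looks like:

	static int example_swcmdq_queue_read(struct mmc_host *host, int task_id,
					     u32 nr_blocks, u32 block_addr)
	{
		struct mmc_command cmd = {0};
		int err;

		cmd.opcode = MMC_QUE_TASK_PARAMS;		/* CMD44 */
		cmd.arg = nr_blocks | 1 << 30 | task_id << 16;	/* read dir */
		cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
		err = mmc_wait_for_cmd(host, &cmd, 0);
		if (err)
			return err;

		cmd.opcode = MMC_QUE_TASK_ADDR;			/* CMD45 */
		cmd.arg = block_addr;
		cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
		err = mmc_wait_for_cmd(host, &cmd, 0);
		if (err)
			return err;

		/*
		 * The task is now queued.  Poll the Queue Status Register
		 * (CMD13 with bit 15 set in the argument) until the task is
		 * ready, then start the transfer with CMD46 / CMD47
		 * (MMC_EXECUTE_READ_TASK / MMC_EXECUTE_WRITE_TASK), with
		 * arg = task_id << 16.
		 */
		return 0;
	}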

Two important aspects of Command Queuing that affect the implementation
are:
 - only read/write requests are queued
 - the queue must be empty to send other commands, including re-tuning

To support Software Command Queuing, a separate function is provided to
issue read/write requests (i.e. mmc_swcmdq_issue_rw_rq()) and the
mmc_blk_request structure is amended to cater for the additional commands
CMD44 and CMD45. There is a separate function (mmc_swcmdq_prep()) to
prepare the needed commands, but transfers are started by mmc_start_areq()
as normal.

mmc_swcmdq_issue_rw_rq() enqueues the new request and then executes tasks
until the queue is empty or mmc_swcmdq_execute() asks for a new request.
This puts mmc_swcmdq_execute() in control of the decision whether to queue
more requests or wait for the active one.

Recovery is invoked if anything goes wrong. Recovery has 2 options:
 1. Discard the queue and re-queue all requests. If that fails, fall back
    to option 2.
 2. Reset and re-queue all requests. If that fails, error out all the
    requests.
In either case, re-tuning will be done if needed after the queue becomes
empty because re-tuning is released at that point.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 525 ++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/mmc/core/queue.c |  10 +-
 drivers/mmc/core/queue.h |  11 +-
 include/linux/mmc/core.h |   1 +
 4 files changed, 544 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 17c19387bf84..0c7d41967281 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -113,6 +113,7 @@ struct mmc_blk_data {
 #define MMC_BLK_WRITE		BIT(1)
 #define MMC_BLK_DISCARD		BIT(2)
 #define MMC_BLK_SECDISCARD	BIT(3)
+#define MMC_BLK_SWCMDQ		BIT(4)
 
 	/*
 	 * Only set in main mmc_blk_data associated
@@ -1674,7 +1675,518 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
 	mmc_start_areq(mq->card->host, &mqrq->areq, NULL);
 }
 
-static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
+static enum mmc_blk_status mmc_swcmdq_err_check(struct mmc_card *card,
+						struct mmc_async_req *areq)
+{
+	struct mmc_queue_req *mqrq = container_of(areq, struct mmc_queue_req,
+						  areq);
+	struct mmc_blk_request *brq = &mqrq->brq;
+	struct request *req = mqrq->req;
+	int err;
+
+	err = brq->data.error;
+	/* In the case of data errors, send stop */
+	if (err)
+		mmc_wait_for_cmd(card->host, &brq->stop, 0);
+	else
+		err = brq->cmd.error;
+
+	/* In the case of CRC errors when re-tuning is needed, retry */
+	if (err == -EILSEQ && card->host->need_retune)
+		return MMC_BLK_RETRY;
+
+	/* For other errors abort */
+	if (err)
+		return MMC_BLK_ABORT;
+
+	if (blk_rq_bytes(req) != brq->data.bytes_xfered)
+		return MMC_BLK_PARTIAL;
+
+	return MMC_BLK_SUCCESS;
+}
+
+static void mmc_swcmdq_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
+{
+	struct mmc_blk_request *brq = &mqrq->brq;
+	struct request *req = mqrq->req;
+	bool do_rel_wr, do_data_tag;
+
+	mmc_blk_data_prep(mq, mqrq, 0, &do_rel_wr, &do_data_tag);
+
+	brq->mrq.cmd = &brq->cmd;
+	brq->mrq.cap_cmd_during_tfr = true;
+
+	brq->cmd.opcode = rq_data_dir(req) == READ ? MMC_EXECUTE_READ_TASK :
+						     MMC_EXECUTE_WRITE_TASK;
+	brq->cmd.arg = mqrq->task_id << 16;
+	brq->cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
+
+	brq->cmd44.opcode = MMC_QUE_TASK_PARAMS;
+	brq->cmd44.arg = brq->data.blocks |
+			 (do_rel_wr ? (1 << 31) : 0) |
+			 ((rq_data_dir(req) == READ) ? (1 << 30) : 0) |
+			 (do_data_tag ? (1 << 29) : 0) |
+			 mqrq->task_id << 16;
+	brq->cmd44.flags = MMC_RSP_R1 | MMC_CMD_AC;
+
+	brq->cmd45.opcode = MMC_QUE_TASK_ADDR;
+	brq->cmd45.arg = blk_rq_pos(req);
+	brq->cmd45.flags = MMC_RSP_R1 | MMC_CMD_AC;
+
+	mqrq->areq.err_check = mmc_swcmdq_err_check;
+}
+
+static int mmc_swcmdq_blk_err(struct mmc_card *card, int err)
+{
+	/* Re-try after CRC errors when re-tuning is needed */
+	if (err == -EILSEQ && card->host->need_retune)
+		return MMC_BLK_RETRY;
+
+	if (err)
+		return MMC_BLK_ABORT;
+
+	return 0;
+}
+
+#define SWCMDQ_ENQUEUE_ERR (	\
+	R1_OUT_OF_RANGE |	\
+	R1_ADDRESS_ERROR |	\
+	R1_BLOCK_LEN_ERROR |	\
+	R1_WP_VIOLATION |	\
+	R1_CC_ERROR |		\
+	R1_ERROR |		\
+	R1_COM_CRC_ERROR |	\
+	R1_ILLEGAL_COMMAND)
+
+static int __mmc_swcmdq_enqueue(struct mmc_queue *mq,
+				struct mmc_queue_req *mqrq)
+{
+	struct mmc_card *card = mq->card;
+	int err;
+
+	mmc_swcmdq_prep(mq, mqrq);
+
+	err = mmc_wait_for_cmd(card->host, &mqrq->brq.cmd44, 0);
+	if (err)
+		goto out;
+
+	err = mmc_wait_for_cmd(card->host, &mqrq->brq.cmd45, 0);
+	if (err)
+		goto out;
+
+	/*
+	 * Don't assume the task is queued if there are any error bits set in
+	 * the response.
+	 */
+	if (mqrq->brq.cmd45.resp[0] & SWCMDQ_ENQUEUE_ERR)
+		return MMC_BLK_ABORT;
+out:
+	return mmc_swcmdq_blk_err(card, err);
+}
+
+static int mmc_swcmdq_enqueue(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq;
+
+	mqrq = mmc_queue_req_find(mq, req);
+	if (!mqrq) {
+		WARN_ON(1);
+		mmc_blk_requeue(mq->queue, req);
+		return 0;
+	}
+
+	/* Need to hold re-tuning so long as the queue is not empty */
+	if (mq->qcnt == 1)
+		mmc_retune_hold(mq->card->host);
+
+	return __mmc_swcmdq_enqueue(mq, mqrq);
+}
+
+static struct mmc_async_req *mmc_swcmdq_next(struct mmc_queue *mq)
+{
+	int i = __ffs(mq->qsr);
+
+	__clear_bit(i, &mq->qsr);
+
+	if (i >= mq->qdepth)
+		return NULL;
+
+	return &mq->mqrq[i].areq;
+}
+
+static int mmc_get_qsr(struct mmc_card *card, u32 *qsr)
+{
+	struct mmc_command cmd = {0};
+	int err, retries = 3;
+
+	cmd.opcode = MMC_SEND_STATUS;
+	cmd.arg = card->rca << 16 | 1 << 15;
+	cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
+	err = mmc_wait_for_cmd(card->host, &cmd, retries);
+	if (err)
+		return err;
+
+	*qsr = cmd.resp[0];
+
+	return 0;
+}
+
+static int mmc_await_qsr(struct mmc_card *card, u32 *qsr)
+{
+	unsigned long timeout;
+	u32 status = 0;
+	int err;
+
+	timeout = jiffies + msecs_to_jiffies(10 * 1000);
+	while (!status) {
+		err = mmc_get_qsr(card, &status);
+		if (err)
+			return err;
+		if (time_after(jiffies, timeout)) {
+			pr_err("%s: Card stuck with queued tasks\n",
+			       mmc_hostname(card->host));
+			return -ETIMEDOUT;
+		}
+	}
+
+	*qsr = status;
+
+	return 0;
+}
+
+static int mmc_swcmdq_await_qsr(struct mmc_queue *mq, struct mmc_card *card,
+				bool wait)
+{
+	struct mmc_queue_req *mqrq;
+	u32 qsr;
+	int err;
+
+	if (wait)
+		err = mmc_await_qsr(card, &qsr);
+	else
+		err = mmc_get_qsr(card, &qsr);
+	if (err)
+		goto out_err;
+
+	mq->qsr = qsr;
+
+	if (card->host->areq) {
+		/*
+		 * The active request remains in the QSR until completed. Remove
+		 * it so that mq->qsr only contains ones that are ready but not
+		 * executed.
+		 */
+		mqrq = container_of(card->host->areq, struct mmc_queue_req,
+				    areq);
+		__clear_bit(mqrq->task_id, &mq->qsr);
+	}
+
+	if (mq->qsr)
+		mq->qsr_err = false;
+out_err:
+	if (err) {
+		/* Don't repeatedly retry if no progress is made */
+		if (mq->qsr_err)
+			return MMC_BLK_ABORT;
+		mq->qsr_err = true;
+	}
+
+	return mmc_swcmdq_blk_err(card, err);
+}
+
+static int mmc_swcmdq_execute(struct mmc_queue *mq, bool flush, bool requeuing,
+			      bool new_req)
+{
+	struct mmc_card *card = mq->card;
+	struct mmc_async_req *next = NULL, *prev;
+	struct mmc_blk_request *brq;
+	struct mmc_queue_req *mqrq;
+	enum mmc_blk_status status;
+	struct request *req;
+	int active = card->host->areq ? 1 : 0;
+	int ret;
+
+	if (mq->prepared_areq) {
+		/*
+		 * A request that has been prepared before (i.e. passed to
+		 * mmc_start_areq()) but not started because another new request
+		 * turned up.
+		 */
+		next = mq->prepared_areq;
+	} else if (requeuing) {
+		/* Just finish the active request */
+		next = NULL;
+	} else if (mq->qsr) {
+		/* Get the next task from the Queue Status Register */
+		next = mmc_swcmdq_next(mq);
+	} else if (mq->qcnt > active) {
+		/*
+		 * There are queued tasks so read the Queue Status Register to
+		 * see if any are ready. Wait for a ready task only if there is
+		 * no active request and no new request.
+		 */
+		ret = mmc_swcmdq_await_qsr(mq, card, !active && !new_req);
+		if (ret)
+			return ret;
+		if (mq->qsr)
+			next = mmc_swcmdq_next(mq);
+	}
+
+	if (next) {
+		/*
+		 * Don't wake for a new request when waiting for the active
+		 * request if there is another request ready to start.
+		 */
+		if (active)
+			mmc_queue_set_wake(mq, false);
+	} else {
+		if (!active)
+			return 0;
+		/*
+		 * Don't wake for a new request when flushing or the queue is
+		 * full.
+		 */
+		if (flush || mq->qcnt == mq->qdepth)
+			mmc_queue_set_wake(mq, false);
+		else
+			mmc_queue_set_wake(mq, true);
+	}
+
+	prev = mmc_start_areq(card->host, next, &status);
+
+	if (status == MMC_BLK_NEW_REQUEST) {
+		mq->prepared_areq = next;
+		return status;
+	}
+
+	mq->prepared_areq = NULL;
+
+	if (!prev)
+		return 0;
+
+	mqrq = container_of(prev, struct mmc_queue_req, areq);
+	brq = &mqrq->brq;
+	req = mqrq->req;
+
+	mmc_queue_bounce_post(mqrq);
+
+	switch (status) {
+	case MMC_BLK_SUCCESS:
+	case MMC_BLK_PARTIAL:
+	case MMC_BLK_SUCCESS_ERR:
+		mmc_blk_reset_success(mq->blkdata, MMC_BLK_SWCMDQ);
+		ret = blk_end_request(req, 0, brq->data.bytes_xfered);
+		if (ret) {
+			if (!requeuing)
+				return __mmc_swcmdq_enqueue(mq, mqrq);
+			return 0;
+		}
+		break;
+	case MMC_BLK_NEW_REQUEST:
+		return status;
+	default:
+		if (mqrq->retry_cnt++) {
+			blk_end_request_all(req, -EIO);
+			break;
+		}
+		return status;
+	}
+
+	mmc_queue_req_free(mq, mqrq);
+
+	/* Release re-tuning when queue is empty */
+	if (!mq->qcnt)
+		mmc_retune_release(card->host);
+
+	return 0;
+}
+
+static enum mmc_blk_status mmc_swcmdq_requeue_check(struct mmc_card *card,
+						    struct mmc_async_req *areq)
+{
+	enum mmc_blk_status ret = mmc_swcmdq_err_check(card, areq);
+
+	/*
+	 * In the case of success, prevent mmc_start_areq() from starting
+	 * another request by returning MMC_BLK_SUCCESS_ERR.
+	 */
+	return ret == MMC_BLK_SUCCESS ? MMC_BLK_SUCCESS_ERR : ret;
+}
+
+static int mmc_swcmdq_await_active(struct mmc_queue *mq)
+{
+	struct mmc_async_req *areq = mq->card->host->areq;
+	int err;
+
+	if (!areq)
+		return 0;
+
+	areq->err_check = mmc_swcmdq_requeue_check;
+
+	err = mmc_swcmdq_execute(mq, true, true, false);
+
+	/* The request will be requeued anyway, so ignore 'retry' */
+	if (err == MMC_BLK_RETRY)
+		err = 0;
+
+	return err;
+}
+
+static int mmc_swcmdq_discard_queue(struct mmc_queue *mq)
+{
+	struct mmc_command cmd = {0};
+
+	if (!mq->qcnt)
+		return 0;
+
+	mq->qsr = 0;
+
+	cmd.opcode = MMC_CMDQ_TASK_MGMT;
+	cmd.arg = 1; /* Discard entire queue */
+	cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
+	/* This is for recovery and the response is not needed, so ignore CRC */
+	cmd.flags &= ~MMC_RSP_CRC;
+
+	return mmc_wait_for_cmd(mq->card->host, &cmd, 0);
+}
+
+static int __mmc_swcmdq_requeue(struct mmc_queue *mq)
+{
+	unsigned long i, qslots = mq->qslots;
+	int err;
+
+	if (qslots) {
+		/* Cause re-tuning before next command, if needed */
+		mmc_retune_release(mq->card->host);
+		mmc_retune_hold(mq->card->host);
+	}
+
+	while (qslots) {
+		i = __ffs(qslots);
+		err = __mmc_swcmdq_enqueue(mq, &mq->mqrq[i]);
+		if (err)
+			return err;
+		__clear_bit(i, &qslots);
+	}
+
+	return 0;
+}
+
+static void __mmc_swcmdq_error_out(struct mmc_queue *mq)
+{
+	unsigned long i, qslots = mq->qslots;
+	struct request *req;
+
+	if (qslots)
+		mmc_retune_release(mq->card->host);
+
+	while (qslots) {
+		i = __ffs(qslots);
+		req = mq->mqrq[i].req;
+		blk_end_request_all(req, -EIO);
+		mq->mqrq[i].req = NULL;
+		__clear_bit(i, &qslots);
+	}
+
+	mq->qslots = 0;
+	mq->qcnt = 0;
+}
+
+static int mmc_swcmdq_requeue(struct mmc_queue *mq)
+{
+	int err;
+
+	/* Wait for active request */
+	err = mmc_swcmdq_await_active(mq);
+	if (err)
+		return err;
+
+	err = mmc_swcmdq_discard_queue(mq);
+	if (err)
+		return err;
+
+	return __mmc_swcmdq_requeue(mq);
+}
+
+static void mmc_swcmdq_reset(struct mmc_queue *mq)
+{
+	/* Wait for active request ignoring errors */
+	mmc_swcmdq_await_active(mq);
+
+	/* Ensure the queue is discarded */
+	mmc_swcmdq_discard_queue(mq);
+
+	/* Reset and requeue else error out all requests */
+	if (mmc_blk_reset(mq->blkdata, mq->card->host, MMC_BLK_SWCMDQ) ||
+	    __mmc_swcmdq_requeue(mq))
+		__mmc_swcmdq_error_out(mq);
+}
+
+/*
+ * Recovery has 2 options:
+ * 1. Discard the queue and re-queue all requests. If that fails, fall back to
+ *    option 2.
+ * 2. Reset and re-queue all requests. If that fails, error out all the
+ *    requests.
+ * In either case, re-tuning will be done if needed after the queue becomes
+ * empty because re-tuning is released at that point.
+ */
+static void mmc_swcmdq_recovery(struct mmc_queue *mq, int err)
+{
+	/*
+	 * Recovery is expected to happen seldom, if at all, but it reduces
+	 * performance, so make sure it is not completely silent.
+	 */
+	pr_warn("%s: running software command queue recovery\n",
+		mmc_hostname(mq->card->host));
+
+	switch (err) {
+	case MMC_BLK_RETRY:
+		err = mmc_swcmdq_requeue(mq);
+		if (!err)
+			break;
+		/* Fall through */
+	default:
+		mmc_swcmdq_reset(mq);
+	}
+}
+
+static void mmc_swcmdq_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_context_info *cntx = &mq->card->host->context_info;
+	bool flush = !req && !cntx->is_waiting_last_req;
+	int err;
+
+	/* Enqueue new requests */
+	if (req) {
+		err = mmc_swcmdq_enqueue(mq, req);
+		if (err)
+			mmc_swcmdq_recovery(mq, err);
+	}
+
+	/*
+	 * Keep executing queued requests until the queue is empty or
+	 * mmc_swcmdq_execute() asks for new requests by returning
+	 * MMC_BLK_NEW_REQUEST.
+	 */
+	while (mq->qcnt) {
+		/*
+		 * Re-tuning can only be done when the queue is empty. Recovery
+		 * for MMC_BLK_RETRY will discard the queue and re-queue all
+		 * requests. At the point the queue is empty, re-tuning is
+		 * released and will be done automatically before the next
+		 * mmc_request.
+		 */
+		if (mq->card->host->need_retune)
+			mmc_swcmdq_recovery(mq, MMC_BLK_RETRY);
+		err = mmc_swcmdq_execute(mq, flush, false, !!req);
+		if (err == MMC_BLK_NEW_REQUEST)
+			return;
+		if (err)
+			mmc_swcmdq_recovery(mq, err);
+	}
+}
+
+static void __mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 {
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
@@ -1846,6 +2358,17 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	mmc_queue_req_free(mq, mq_rq);
 }
 
+static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = md->queue.card;
+
+	if (card->ext_csd.cmdq_en)
+		mmc_swcmdq_issue_rw_rq(mq, req);
+	else
+		__mmc_blk_issue_rw_rq(mq, req);
+}
+
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 {
 	int ret;
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 9e7e4eb9250c..57ebced79fb1 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -64,6 +64,7 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 	mqrq->req = req;
 	mq->qcnt += 1;
 	__set_bit(mqrq->task_id, &mq->qslots);
+	mqrq->retry_cnt = 0;
 
 	return mqrq;
 }
@@ -377,7 +378,14 @@ static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth)
 
 int mmc_queue_alloc_shared_queue(struct mmc_card *card)
 {
-	return __mmc_queue_alloc_shared_queue(card, 2);
+	int qdepth;
+
+	if (card->ext_csd.cmdq_en)
+		qdepth = card->ext_csd.cmdq_depth;
+	else
+		qdepth = 2;
+
+	return __mmc_queue_alloc_shared_queue(card, qdepth);
 }
 
 /**
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index c9ce0172dd08..273bba434070 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -19,9 +19,13 @@ static inline bool mmc_req_is_special(struct request *req)
 
 struct mmc_blk_request {
 	struct mmc_request	mrq;
-	struct mmc_command	sbc;
+	union {
+		struct mmc_command	sbc;
+		struct mmc_command	cmd44;
+	};
 	struct mmc_command	cmd;
 	struct mmc_command	stop;
+	struct mmc_command	cmd45;
 	struct mmc_data		data;
 	int			retune_retry_done;
 };
@@ -35,6 +39,7 @@ struct mmc_queue_req {
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
 	int			task_id;
+	unsigned int		retry_cnt;
 };
 
 struct mmc_queue {
@@ -50,6 +55,10 @@ struct mmc_queue {
 	int			qdepth;
 	int			qcnt;
 	unsigned long		qslots;
+	/* Following are defined for Software Command Queuing */
+	unsigned long		qsr;
+	struct mmc_async_req	*prepared_areq;
+	bool			qsr_err;
 };
 
 extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index a0c63ea28796..a44e1d79574c 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -24,6 +24,7 @@ enum mmc_blk_status {
 	MMC_BLK_ECC_ERR,
 	MMC_BLK_NOMEDIUM,
 	MMC_BLK_NEW_REQUEST,
+	MMC_BLK_SUCCESS_ERR, /* Success but prevent starting another request */
 };
 
 struct mmc_command {
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 13/39] mmc: mmc: Enable Software Command Queuing
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (11 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 12/39] mmc: block: Add Software Command Queuing Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:35   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 14/39] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
                   ` (25 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Enable the Command Queue if the host controller supports Software Command
Queuing. It is not compatible with Packed Commands, so do not enable Packed
Commands at the same time.
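
For reference, "enabling the Command Queue" boils down to a CMD6 switch of
the CMDQ_MODE_EN EXT_CSD byte. A minimal sketch of what mmc_cmdq_enable()
(added earlier in this series) is assumed to do:

	static int example_cmdq_on(struct mmc_card *card)
	{
		int err;

		/* EXT_CSD byte 15 (CMDQ_MODE_EN): 1 enables the queue */
		err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
				 EXT_CSD_CMDQ_MODE_EN, 1,
				 card->ext_csd.generic_cmd6_time);
		if (!err)
			card->ext_csd.cmdq_en = true;

		return err;
	}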

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/mmc.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 8c290264cf2d..3af218c8f5b9 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1799,6 +1799,20 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 		}
 	}
 
+	/* Enable Command Queue if supported */
+	card->ext_csd.cmdq_en = false;
+	if (card->ext_csd.cmdq_support && host->caps & MMC_CAP_CMD_DURING_TFR) {
+		err = mmc_cmdq_enable(card);
+		if (err && err != -EBADMSG)
+			goto free_card;
+		if (err) {
+			pr_warn("%s: Enabling CMDQ failed\n",
+				mmc_hostname(card->host));
+			card->ext_csd.cmdq_support = false;
+			card->ext_csd.cmdq_depth = 0;
+			err = 0;
+		}
+	}
 	/*
 	 * In some cases (e.g. RPMB or mmc_test), the Command Queue must be
 	 * disabled for a time, so a flag is needed to indicate to re-enable the
@@ -1812,7 +1826,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	 */
 	if (card->ext_csd.max_packed_writes >= 3 &&
 	    card->ext_csd.max_packed_reads >= 5 &&
-	    host->caps2 & MMC_CAP2_PACKED_CMD) {
+	    host->caps2 & MMC_CAP2_PACKED_CMD &&
+	    !card->ext_csd.cmdq_en) {
 		err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 				EXT_CSD_EXP_EVENTS_CTRL,
 				EXT_CSD_PACKED_EVENT_EN,
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 14/39] mmc: core: Factor out debug prints from mmc_start_request()
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (12 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 13/39] mmc: mmc: Enable " Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:38   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 15/39] mmc: core: Factor out mrq preparation " Adrian Hunter
                   ` (24 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

In preparation for reusing the code for CQE support.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index d97b8f2cf3cb..de6872010ec8 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -262,26 +262,19 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	host->ops->request(host, mrq);
 }
 
-static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
 {
-#ifdef CONFIG_MMC_DEBUG
-	unsigned int i, sz;
-	struct scatterlist *sg;
-#endif
-	mmc_retune_hold(host);
-
-	if (mmc_card_removed(host->card))
-		return -ENOMEDIUM;
-
 	if (mrq->sbc) {
 		pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n",
 			 mmc_hostname(host), mrq->sbc->opcode,
 			 mrq->sbc->arg, mrq->sbc->flags);
 	}
 
-	pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
-		 mmc_hostname(host), mrq->cmd->opcode,
-		 mrq->cmd->arg, mrq->cmd->flags);
+	if (mrq->cmd) {
+		pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
+			 mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->arg,
+			 mrq->cmd->flags);
+	}
 
 	if (mrq->data) {
 		pr_debug("%s:     blksz %d blocks %d flags %08x "
@@ -297,6 +290,20 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 			 mmc_hostname(host), mrq->stop->opcode,
 			 mrq->stop->arg, mrq->stop->flags);
 	}
+}
+
+static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+{
+#ifdef CONFIG_MMC_DEBUG
+	unsigned int i, sz;
+	struct scatterlist *sg;
+#endif
+	mmc_retune_hold(host);
+
+	if (mmc_card_removed(host->card))
+		return -ENOMEDIUM;
+
+	mmc_mrq_pr_debug(host, mrq);
 
 	WARN_ON(!host->claimed);
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 15/39] mmc: core: Factor out mrq preparation from mmc_start_request()
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (13 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 14/39] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:39   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 16/39] mmc: core: Add mmc_retune_hold_now() Adrian Hunter
                   ` (23 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

In preparation for reusing the code for CQE support.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index de6872010ec8..7f2e75db93d4 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -292,23 +292,18 @@ static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
 	}
 }
 
-static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
 {
 #ifdef CONFIG_MMC_DEBUG
 	unsigned int i, sz;
 	struct scatterlist *sg;
 #endif
-	mmc_retune_hold(host);
-
-	if (mmc_card_removed(host->card))
-		return -ENOMEDIUM;
-
-	mmc_mrq_pr_debug(host, mrq);
-
-	WARN_ON(!host->claimed);
 
-	mrq->cmd->error = 0;
-	mrq->cmd->mrq = mrq;
+	if (mrq->cmd) {
+		mrq->cmd->error = 0;
+		mrq->cmd->mrq = mrq;
+		mrq->cmd->data = mrq->data;
+	}
 	if (mrq->sbc) {
 		mrq->sbc->error = 0;
 		mrq->sbc->mrq = mrq;
@@ -325,8 +320,6 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 		if (sz != mrq->data->blocks * mrq->data->blksz)
 			return -EINVAL;
 #endif
-
-		mrq->cmd->data = mrq->data;
 		mrq->data->error = 0;
 		mrq->data->mrq = mrq;
 		if (mrq->stop) {
@@ -335,6 +328,27 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 			mrq->stop->mrq = mrq;
 		}
 	}
+
+	return 0;
+}
+
+static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int err;
+
+	mmc_retune_hold(host);
+
+	if (mmc_card_removed(host->card))
+		return -ENOMEDIUM;
+
+	mmc_mrq_pr_debug(host, mrq);
+
+	WARN_ON(!host->claimed);
+
+	err = mmc_mrq_prep(host, mrq);
+	if (err)
+		return err;
+
 	led_trigger_event(host->led, LED_FULL);
 	__mmc_start_request(host, mrq);
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 16/39] mmc: core: Add mmc_retune_hold_now()
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (14 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 15/39] mmc: core: Factor out mrq preparation " Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 17/39] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
                   ` (22 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

In preparation for CQE support.
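
For context, mmc_retune_hold() (shown here as in mainline at the time;
treat it as an approximation) arms re-tuning to run before the very next
command, which a command queue user cannot allow while requests are in
flight. The new helper takes the hold without arming it:

	/* Existing helper: holds re-tuning, but triggers it first */
	void mmc_retune_hold(struct mmc_host *host)
	{
		if (!host->hold_retune)
			host->retune_now = 1;
		host->hold_retune += 1;
	}

	/* New helper: holds re-tuning without triggering it */
	void mmc_retune_hold_now(struct mmc_host *host)
	{
		host->retune_now = 0;
		host->hold_retune += 1;
	}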

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/host.c | 6 ++++++
 drivers/mmc/core/host.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index ee3d9d92481f..7db581480f23 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -111,6 +111,12 @@ void mmc_retune_hold(struct mmc_host *host)
 }
 EXPORT_SYMBOL(mmc_retune_hold);
 
+void mmc_retune_hold_now(struct mmc_host *host)
+{
+	host->retune_now = 0;
+	host->hold_retune += 1;
+}
+
 void mmc_retune_release(struct mmc_host *host)
 {
 	if (host->hold_retune)
diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
index fb6a76a03833..77d6f60d1bf9 100644
--- a/drivers/mmc/core/host.h
+++ b/drivers/mmc/core/host.h
@@ -19,6 +19,7 @@
 void mmc_retune_enable(struct mmc_host *host);
 void mmc_retune_disable(struct mmc_host *host);
 void mmc_retune_hold(struct mmc_host *host);
+void mmc_retune_hold_now(struct mmc_host *host);
 void mmc_retune_release(struct mmc_host *host);
 int mmc_retune(struct mmc_host *host);
 void mmc_retune_pause(struct mmc_host *host);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 17/39] mmc: core: Add members to mmc_request and mmc_data for CQE's
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (15 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 16/39] mmc: core: Add mmc_retune_hold_now() Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:42   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 18/39] mmc: host: Add CQE interface Adrian Hunter
                   ` (21 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Most of the information needed to issue requests to a CQE is already in
struct mmc_request and struct mmc_data. Add data block address, some flags,
and the task id (tag).
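
A hypothetical consumer shows the intent; everything below except the new
mmc_request/mmc_data members is invented for illustration:

	/* Sketch: a CQE host driver copying the new members into a
	 * hardware task descriptor (struct foo_desc and the FOO_ATTR_*
	 * bits are hypothetical) */
	struct foo_desc {
		int		tag;
		unsigned int	blk_addr;
		unsigned int	blk_count;
		unsigned int	attr;
	};

	#define FOO_ATTR_PRIO	BIT(0)
	#define FOO_ATTR_REL_WR	BIT(1)
	#define FOO_ATTR_QBR	BIT(2)

	static void foo_cqe_fill_desc(struct foo_desc *desc,
				      struct mmc_request *mrq)
	{
		struct mmc_data *data = mrq->data;

		desc->tag = mrq->tag;		/* task id */
		desc->blk_addr = data->blk_addr;
		desc->blk_count = data->blocks;
		desc->attr = 0;

		if (data->flags & MMC_DATA_PRIO)
			desc->attr |= FOO_ATTR_PRIO;
		if (data->flags & MMC_DATA_REL_WR)
			desc->attr |= FOO_ATTR_REL_WR;
		if (data->flags & MMC_DATA_QBR)
			desc->attr |= FOO_ATTR_QBR;
	}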

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 include/linux/mmc/core.h   | 13 +++++++++++--
 include/trace/events/mmc.h | 17 ++++++++++++-----
 2 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index a44e1d79574c..da82a6dbe9fb 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -123,11 +123,18 @@ struct mmc_data {
 	unsigned int		timeout_clks;	/* data timeout (in clocks) */
 	unsigned int		blksz;		/* data block size */
 	unsigned int		blocks;		/* number of blocks */
+	unsigned int		blk_addr;	/* block address */
 	int			error;		/* data error */
 	unsigned int		flags;
 
-#define MMC_DATA_WRITE	(1 << 8)
-#define MMC_DATA_READ	(1 << 9)
+#define MMC_DATA_WRITE		BIT(8)
+#define MMC_DATA_READ		BIT(9)
+/* Extra flags used by CQE */
+#define MMC_DATA_QBR		BIT(10)		/* CQE queue barrier */
+#define MMC_DATA_PRIO		BIT(11)		/* CQE high priority */
+#define MMC_DATA_REL_WR		BIT(12)		/* Reliable write */
+#define MMC_DATA_DAT_TAG	BIT(13)		/* Tag request */
+#define MMC_DATA_FORCED_PRG	BIT(14)		/* Forced programming */
 
 	unsigned int		bytes_xfered;
 
@@ -154,6 +161,8 @@ struct mmc_request {
 
 	/* Allow other commands during this ongoing data transfer or busy wait */
 	bool			cap_cmd_during_tfr;
+
+	int			tag;
 };
 
 struct mmc_card;
diff --git a/include/trace/events/mmc.h b/include/trace/events/mmc.h
index a72f9b94c80b..baf96d15a184 100644
--- a/include/trace/events/mmc.h
+++ b/include/trace/events/mmc.h
@@ -29,8 +29,10 @@
 		__field(unsigned int,		sbc_flags)
 		__field(unsigned int,		sbc_retries)
 		__field(unsigned int,		blocks)
+		__field(unsigned int,		blk_addr)
 		__field(unsigned int,		blksz)
 		__field(unsigned int,		data_flags)
+		__field(int,			tag)
 		__field(unsigned int,		can_retune)
 		__field(unsigned int,		doing_retune)
 		__field(unsigned int,		retune_now)
@@ -56,7 +58,9 @@
 		__entry->sbc_retries = mrq->sbc ? mrq->sbc->retries : 0;
 		__entry->blksz = mrq->data ? mrq->data->blksz : 0;
 		__entry->blocks = mrq->data ? mrq->data->blocks : 0;
+		__entry->blk_addr = mrq->data ? mrq->data->blk_addr : 0;
 		__entry->data_flags = mrq->data ? mrq->data->flags : 0;
+		__entry->tag = mrq->tag;
 		__entry->can_retune = host->can_retune;
 		__entry->doing_retune = host->doing_retune;
 		__entry->retune_now = host->retune_now;
@@ -71,8 +75,8 @@
 		  "cmd_opcode=%u cmd_arg=0x%x cmd_flags=0x%x cmd_retries=%u "
 		  "stop_opcode=%u stop_arg=0x%x stop_flags=0x%x stop_retries=%u "
 		  "sbc_opcode=%u sbc_arg=0x%x sbc_flags=0x%x sbc_retires=%u "
-		  "blocks=%u block_size=%u data_flags=0x%x "
-		  "can_retune=%u doing_retune=%u retune_now=%u "
+		  "blocks=%u block_size=%u blk_addr=%u data_flags=0x%x "
+		  "tag=%d can_retune=%u doing_retune=%u retune_now=%u "
 		  "need_retune=%d hold_retune=%d retune_period=%u",
 		  __get_str(name), __entry->mrq,
 		  __entry->cmd_opcode, __entry->cmd_arg,
@@ -81,7 +85,8 @@
 		  __entry->stop_flags, __entry->stop_retries,
 		  __entry->sbc_opcode, __entry->sbc_arg,
 		  __entry->sbc_flags, __entry->sbc_retries,
-		  __entry->blocks, __entry->blksz, __entry->data_flags,
+		  __entry->blocks, __entry->blksz,
+		  __entry->blk_addr, __entry->data_flags, __entry->tag,
 		  __entry->can_retune, __entry->doing_retune,
 		  __entry->retune_now, __entry->need_retune,
 		  __entry->hold_retune, __entry->retune_period)
@@ -108,6 +113,7 @@
 		__field(unsigned int,		sbc_retries)
 		__field(unsigned int,		bytes_xfered)
 		__field(int,			data_err)
+		__field(int,			tag)
 		__field(unsigned int,		can_retune)
 		__field(unsigned int,		doing_retune)
 		__field(unsigned int,		retune_now)
@@ -139,6 +145,7 @@
 		__entry->sbc_retries = mrq->sbc ? mrq->sbc->retries : 0;
 		__entry->bytes_xfered = mrq->data ? mrq->data->bytes_xfered : 0;
 		__entry->data_err = mrq->data ? mrq->data->error : 0;
+		__entry->tag = mrq->tag;
 		__entry->can_retune = host->can_retune;
 		__entry->doing_retune = host->doing_retune;
 		__entry->retune_now = host->retune_now;
@@ -154,7 +161,7 @@
 		  "cmd_retries=%u stop_opcode=%u stop_err=%d "
 		  "stop_resp=0x%x 0x%x 0x%x 0x%x stop_retries=%u "
 		  "sbc_opcode=%u sbc_err=%d sbc_resp=0x%x 0x%x 0x%x 0x%x "
-		  "sbc_retries=%u bytes_xfered=%u data_err=%d "
+		  "sbc_retries=%u bytes_xfered=%u data_err=%d tag=%d "
 		  "can_retune=%u doing_retune=%u retune_now=%u need_retune=%d "
 		  "hold_retune=%d retune_period=%u",
 		  __get_str(name), __entry->mrq,
@@ -170,7 +177,7 @@
 		  __entry->sbc_resp[0], __entry->sbc_resp[1],
 		  __entry->sbc_resp[2], __entry->sbc_resp[3],
 		  __entry->sbc_retries,
-		  __entry->bytes_xfered, __entry->data_err,
+		  __entry->bytes_xfered, __entry->data_err, __entry->tag,
 		  __entry->can_retune, __entry->doing_retune,
 		  __entry->retune_now, __entry->need_retune,
 		  __entry->hold_retune, __entry->retune_period)
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 18/39] mmc: host: Add CQE interface
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (16 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 17/39] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 19/39] mmc: core: Turn off CQE before sending commands Adrian Hunter
                   ` (20 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add CQE host operations, capabilities, and host members.
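
A host driver would wire these up at probe time, roughly as follows (the
foo_* driver names are invented; only the mmc_host members and caps come
from this patch):

	/* Hypothetical driver callbacks, prototypes only for the sketch */
	static int foo_cqe_enable(struct mmc_host *host, struct mmc_card *card);
	static void foo_cqe_disable(struct mmc_host *host);
	static int foo_cqe_request(struct mmc_host *host, struct mmc_request *mrq);
	static void foo_cqe_off(struct mmc_host *host);

	static const struct mmc_cqe_ops foo_cqe_ops = {
		.cqe_enable	= foo_cqe_enable,
		.cqe_disable	= foo_cqe_disable,
		.cqe_request	= foo_cqe_request,
		.cqe_off	= foo_cqe_off,
	};

	static void foo_init_cqe(struct mmc_host *mmc)
	{
		mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;
		mmc->cqe_ops = &foo_cqe_ops;
		mmc->cqe_qdepth = 32;	/* eMMC CQ: up to 32 queued tasks */
	}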

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 include/linux/mmc/host.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index f339cd64f0f6..a563a8122f5d 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -160,6 +160,19 @@ struct mmc_host_ops {
 				  unsigned int direction, int blk_size);
 };
 
+struct mmc_cqe_ops {
+	int	(*cqe_enable)(struct mmc_host *host, struct mmc_card *card);
+	void	(*cqe_disable)(struct mmc_host *host);
+	int	(*cqe_request)(struct mmc_host *host, struct mmc_request *mrq);
+	void	(*cqe_post_req)(struct mmc_host *host, struct mmc_request *mrq);
+	void	(*cqe_off)(struct mmc_host *host);
+	int	(*cqe_wait_for_idle)(struct mmc_host *host);
+	bool	(*cqe_timeout)(struct mmc_host *host, struct mmc_request *mrq,
+			       bool *recovery_needed);
+	void	(*cqe_recovery_start)(struct mmc_host *host);
+	void	(*cqe_recovery_finish)(struct mmc_host *host, bool forget_reqs);
+};
+
 struct mmc_async_req {
 	/* active mmc request */
 	struct mmc_request	*mrq;
@@ -304,6 +317,8 @@ struct mmc_host {
 #define MMC_CAP2_HS400_ES	(1 << 20)	/* Host supports enhanced strobe */
 #define MMC_CAP2_NO_SD		(1 << 21)	/* Do not send SD commands during initialization */
 #define MMC_CAP2_NO_MMC		(1 << 22)	/* Do not send (e)MMC commands during initialization */
+#define MMC_CAP2_CQE		(1 << 23)	/* Has eMMC command queue engine */
+#define MMC_CAP2_CQE_DCMD	(1 << 24)	/* CQE can issue a direct command */
 
 	mmc_pm_flag_t		pm_caps;	/* supported pm features */
 
@@ -389,6 +404,15 @@ struct mmc_host {
 	int			dsr_req;	/* DSR value is valid */
 	u32			dsr;	/* optional driver stage (DSR) value */
 
+	/* Command Queue Engine (CQE) support */
+	const struct mmc_cqe_ops *cqe_ops;
+	void			*cqe_private;
+	void			(*cqe_recovery_notifier)(struct mmc_host *,
+							 struct mmc_request *);
+	int			cqe_qdepth;
+	bool			cqe_enabled;
+	bool			cqe_on;
+
 	unsigned long		private[0] ____cacheline_aligned;
 };
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 19/39] mmc: core: Turn off CQE before sending commands
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (17 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 18/39] mmc: host: Add CQE interface Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:42   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 20/39] mmc: core: Add support for handling CQE requests Adrian Hunter
                   ` (19 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Turn off the CQE before sending commands, and ensure it is off in any reset
or power management paths.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 7f2e75db93d4..916bae093929 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -259,6 +259,9 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 
 	trace_mmc_request_start(host, mrq);
 
+	if (host->cqe_on)
+		host->cqe_ops->cqe_off(host);
+
 	host->ops->request(host, mrq);
 }
 
@@ -1229,6 +1232,9 @@ void mmc_set_bus_width(struct mmc_host *host, unsigned int width)
  */
 void mmc_set_initial_state(struct mmc_host *host)
 {
+	if (host->cqe_on)
+		host->cqe_ops->cqe_off(host);
+
 	mmc_retune_disable(host);
 
 	if (mmc_host_is_spi(host))
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 20/39] mmc: core: Add support for handling CQE requests
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (18 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 19/39] mmc: core: Turn off CQE before sending commands Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:44   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 21/39] mmc: mmc: Enable CQE's Adrian Hunter
                   ` (18 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add core support for handling CQE requests, including starting, completing
and recovering them.
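
The expected calling convention for a CQE host driver, as a sketch (the
interrupt handler and foo_cqe_completed_mrq() are hypothetical):

	/* Hypothetical: pop the next completed task's mrq, NULL when done */
	static struct mmc_request *foo_cqe_completed_mrq(struct mmc_host *host);

	static irqreturn_t foo_cqe_irq(int irq, void *dev_id)
	{
		struct mmc_host *host = dev_id;
		struct mmc_request *mrq;

		while ((mrq = foo_cqe_completed_mrq(host)))
			mmc_cqe_request_done(host, mrq); /* invokes mrq->done() */

		return IRQ_HANDLED;
	}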

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/core.c  | 147 +++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/mmc/core.h |   6 ++
 2 files changed, 148 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 916bae093929..2eda589db53d 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -265,7 +265,8 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	host->ops->request(host, mrq);
 }
 
-static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
+static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq,
+			     bool cqe)
 {
 	if (mrq->sbc) {
 		pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n",
@@ -274,9 +275,12 @@ static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
 	}
 
 	if (mrq->cmd) {
-		pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
-			 mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->arg,
-			 mrq->cmd->flags);
+		pr_debug("%s: starting %sCMD%u arg %08x flags %08x\n",
+			 mmc_hostname(host), cqe ? "CQE direct " : "",
+			 mrq->cmd->opcode, mrq->cmd->arg, mrq->cmd->flags);
+	} else if (cqe) {
+		pr_debug("%s: starting CQE transfer for tag %d blkaddr %u\n",
+			 mmc_hostname(host), mrq->tag, mrq->data->blk_addr);
 	}
 
 	if (mrq->data) {
@@ -344,7 +348,7 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	if (mmc_card_removed(host->card))
 		return -ENOMEDIUM;
 
-	mmc_mrq_pr_debug(host, mrq);
+	mmc_mrq_pr_debug(host, mrq, false);
 
 	WARN_ON(!host->claimed);
 
@@ -358,6 +362,139 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	return 0;
 }
 
+int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int err;
+
+	/* Caller must hold retuning while CQE is in use */
+	err = mmc_retune(host);
+	if (err)
+		goto out_err;
+
+	mrq->host = host;
+
+	mmc_mrq_pr_debug(host, mrq, true);
+
+	err = mmc_mrq_prep(host, mrq);
+	if (err)
+		goto out_err;
+
+	err = host->cqe_ops->cqe_request(host, mrq);
+	if (err)
+		goto out_err;
+
+	trace_mmc_request_start(host, mrq);
+
+	return 0;
+
+out_err:
+	if (mrq->cmd) {
+		pr_debug("%s: failed to start CQE direct CMD%u, error %d\n",
+			 mmc_hostname(host), mrq->cmd->opcode, err);
+	} else {
+		pr_debug("%s: failed to start CQE transfer for tag %d, error %d\n",
+			 mmc_hostname(host), mrq->tag, err);
+	}
+	return err;
+}
+EXPORT_SYMBOL(mmc_cqe_start_req);
+
+void __mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq)
+{
+	mmc_should_fail_request(host, mrq);
+
+	/* Flag re-tuning needed on CRC errors */
+	if ((mrq->cmd && mrq->cmd->error == -EILSEQ) ||
+	    (mrq->data && mrq->data->error == -EILSEQ))
+		mmc_retune_needed(host);
+
+	trace_mmc_request_done(host, mrq);
+
+	if (mrq->cmd) {
+		pr_debug("%s: CQE req done (direct CMD%u): %d\n",
+			 mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->error);
+	} else {
+		pr_debug("%s: CQE transfer done tag %d\n",
+			 mmc_hostname(host), mrq->tag);
+	}
+
+	if (mrq->data) {
+		pr_debug("%s:     %d bytes transferred: %d\n",
+			 mmc_hostname(host),
+			 mrq->data->bytes_xfered, mrq->data->error);
+	}
+}
+EXPORT_SYMBOL(__mmc_cqe_request_done);
+
+/**
+ *	mmc_cqe_request_done - CQE has finished processing an MMC request
+ *	@host: MMC host which completed request
+ *	@mrq: MMC request which completed
+ *
+ *	CQE drivers should call this function when they have completed
+ *	their processing of a request.
+ */
+void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq)
+{
+	__mmc_cqe_request_done(host, mrq);
+
+	mrq->done(mrq);
+}
+EXPORT_SYMBOL(mmc_cqe_request_done);
+
+/**
+ *	mmc_cqe_post_req - CQE post process of a completed MMC request
+ *	@host: MMC host
+ *	@mrq: MMC request to be processed
+ */
+void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	if (host->cqe_ops->cqe_post_req)
+		host->cqe_ops->cqe_post_req(host, mrq);
+}
+EXPORT_SYMBOL(mmc_cqe_post_req);
+
+/* Arbitrary 1 second timeout */
+#define MMC_CQE_RECOVERY_TIMEOUT	1000
+
+int mmc_cqe_recovery(struct mmc_host *host, bool forget_reqs)
+{
+	struct mmc_command cmd;
+	int err;
+
+	mmc_retune_hold_now(host);
+
+	/*
+	 * Recovery is expected to happen seldom, if at all, but it reduces
+	 * performance, so make sure it is not completely silent.
+	 */
+	pr_warn("%s: running CQE recovery\n", mmc_hostname(host));
+
+	host->cqe_ops->cqe_recovery_start(host);
+
+	memset(&cmd, 0, sizeof(cmd));
+	cmd.opcode       = MMC_STOP_TRANSMISSION;
+	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC;
+	cmd.flags       &= ~MMC_RSP_CRC; /* Ignore CRC */
+	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+	mmc_wait_for_cmd(host, &cmd, 0);
+
+	memset(&cmd, 0, sizeof(cmd));
+	cmd.opcode       = MMC_CMDQ_TASK_MGMT;
+	cmd.arg          = 1; /* Discard entire queue */
+	cmd.flags        = MMC_RSP_R1B | MMC_CMD_AC;
+	cmd.flags       &= ~MMC_RSP_CRC; /* Ignore CRC */
+	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+	err = mmc_wait_for_cmd(host, &cmd, 0);
+
+	host->cqe_ops->cqe_recovery_finish(host, forget_reqs);
+
+	mmc_retune_release(host);
+
+	return err;
+}
+EXPORT_SYMBOL(mmc_cqe_recovery);
+
 /**
  *	mmc_start_bkops - start BKOPS for supported cards
  *	@card: MMC card to start BKOPS
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index da82a6dbe9fb..4f7848ba2f04 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -175,6 +175,12 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd,
 		int retries);
 
+int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq);
+void __mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq);
+void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq);
+void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq);
+int mmc_cqe_recovery(struct mmc_host *host, bool forget_reqs);
+
 int mmc_hw_reset(struct mmc_host *host);
 void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card);
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 21/39] mmc: mmc: Enable CQE's
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (19 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 20/39] mmc: core: Add support for handling CQE requests Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:45   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 22/39] mmc: block: Prepare CQE data Adrian Hunter
                   ` (17 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Enable or disable CQE when a card is added or removed respectively.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/bus.c |  7 +++++++
 drivers/mmc/core/mmc.c | 17 ++++++++++++++++-
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index 301246513a37..a4b49e25fe96 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -369,10 +369,17 @@ int mmc_add_card(struct mmc_card *card)
  */
 void mmc_remove_card(struct mmc_card *card)
 {
+	struct mmc_host *host = card->host;
+
 #ifdef CONFIG_DEBUG_FS
 	mmc_remove_card_debugfs(card);
 #endif
 
+	if (host->cqe_enabled) {
+		host->cqe_ops->cqe_disable(host);
+		host->cqe_enabled = false;
+	}
+
 	if (mmc_card_present(card)) {
 		if (mmc_host_is_spi(card->host)) {
 			pr_info("%s: SPI card removed\n",
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 3af218c8f5b9..f7c2bcb60c52 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1801,7 +1801,9 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 
 	/* Enable Command Queue if supported */
 	card->ext_csd.cmdq_en = false;
-	if (card->ext_csd.cmdq_support && host->caps & MMC_CAP_CMD_DURING_TFR) {
+	if (card->ext_csd.cmdq_support &&
+	    (host->caps & MMC_CAP_CMD_DURING_TFR ||
+	     host->caps2 & MMC_CAP2_CQE)) {
 		err = mmc_cmdq_enable(card);
 		if (err && err != -EBADMSG)
 			goto free_card;
@@ -1820,6 +1822,19 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 	 */
 	card->reenable_cmdq = card->ext_csd.cmdq_en;
 
+	if (card->ext_csd.cmdq_en && (host->caps2 & MMC_CAP2_CQE) &&
+	    !host->cqe_enabled) {
+		err = host->cqe_ops->cqe_enable(host, card);
+		if (err) {
+			pr_err("%s: Failed to enable CQE, error %d\n",
+				mmc_hostname(host), err);
+		} else {
+			host->cqe_enabled = true;
+			pr_info("%s: Command Queue Engine enabled\n",
+				mmc_hostname(host));
+		}
+	}
+
 	/*
 	 * The mandatory minimum values are defined for packed command.
 	 * read: 5, write: 3
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 22/39] mmc: block: Prepare CQE data
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (20 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 21/39] mmc: mmc: Enable CQE's Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-15 13:49   ` Linus Walleij
  2017-02-10 12:55 ` [PATCH RFC 23/39] mmc: block: Add CQE support Adrian Hunter
                   ` (16 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Enhance mmc_blk_data_prep() to support CQE requests.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 40 +++++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 0c7d41967281..8119c5533a91 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -36,6 +36,7 @@
 #include <linux/compat.h>
 #include <linux/pm_runtime.h>
 #include <linux/idr.h>
+#include <linux/ioprio.h>
 
 #include <linux/mmc/ioctl.h>
 #include <linux/mmc/card.h>
@@ -1440,25 +1441,27 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card,
 }
 
 static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
-			      int disable_multi, bool *do_rel_wr,
-			      bool *do_data_tag)
+			      int disable_multi, bool *do_rel_wr_p,
+			      bool *do_data_tag_p)
 {
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	struct mmc_blk_request *brq = &mqrq->brq;
 	struct request *req = mqrq->req;
+	bool do_rel_wr, do_data_tag;
 
 	/*
 	 * Reliable writes are used to implement Forced Unit Access and
 	 * are supported only on MMCs.
 	 */
-	*do_rel_wr = (req->cmd_flags & REQ_FUA) &&
-		     rq_data_dir(req) == WRITE &&
-		     (md->flags & MMC_BLK_REL_WR);
+	do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+		    rq_data_dir(req) == WRITE &&
+		    (md->flags & MMC_BLK_REL_WR);
 
 	memset(brq, 0, sizeof(struct mmc_blk_request));
 
 	brq->mrq.data = &brq->data;
+	brq->mrq.tag = req->tag;
 
 	brq->stop.opcode = MMC_STOP_TRANSMISSION;
 	brq->stop.arg = 0;
@@ -1473,6 +1476,10 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 
 	brq->data.blksz = 512;
 	brq->data.blocks = blk_rq_sectors(req);
+	brq->data.blk_addr = blk_rq_pos(req);
+
+	if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
+		brq->data.flags |= MMC_DATA_PRIO;
 
 	/*
 	 * The block layer doesn't support all sector count
@@ -1502,18 +1509,23 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 						brq->data.blocks);
 	}
 
-	if (*do_rel_wr)
+	if (do_rel_wr) {
 		mmc_apply_rel_rw(brq, card, req);
+		brq->data.flags |= MMC_DATA_REL_WR;
+	}
 
 	/*
 	 * Data tag is used only during writing meta data to speed
 	 * up write and any subsequent read of this meta data
 	 */
-	*do_data_tag = card->ext_csd.data_tag_unit_size &&
-		       (req->cmd_flags & REQ_META) &&
-		       (rq_data_dir(req) == WRITE) &&
-		       ((brq->data.blocks * brq->data.blksz) >=
-			card->ext_csd.data_tag_unit_size);
+	do_data_tag = card->ext_csd.data_tag_unit_size &&
+		      (req->cmd_flags & REQ_META) &&
+		      (rq_data_dir(req) == WRITE) &&
+		      ((brq->data.blocks * brq->data.blksz) >=
+		       card->ext_csd.data_tag_unit_size);
+
+	if (do_data_tag)
+		brq->data.flags |= MMC_DATA_DAT_TAG;
 
 	mmc_set_data_timeout(&brq->data, card);
 
@@ -1542,6 +1554,12 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 	mqrq->areq.mrq = &brq->mrq;
 
 	mmc_queue_bounce_pre(mqrq);
+
+	if (do_rel_wr_p)
+		*do_rel_wr_p = do_rel_wr;
+
+	if (do_data_tag_p)
+		*do_data_tag_p = do_data_tag;
 }
 
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-- 
1.9.1



* [PATCH RFC 23/39] mmc: block: Add CQE support
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (21 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 22/39] mmc: block: Prepare CQE data Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 24/39] mmc: cqhci: support for command queue enabled host Adrian Hunter
                   ` (15 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add CQE support to the block driver, including:
	- optionally using DCMD for flush requests
	- manually issuing discard requests
	- issuing read / write requests to the CQE
	- supporting block-layer timeouts
	- handling recovery
	- supporting re-tuning
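
In outline, the new issue path classifies each request and dispatches
it accordingly. A condensed, non-authoritative sketch of
mmc_blk_cqe_issue_rq() as added by this patch:

	switch (mmc_cqe_issue_type(host, req)) {
	case MMC_ISSUE_SYNC:
		/*
		 * Discard, secure erase, or flush without DCMD: wait for
		 * the CQE to go idle, then issue the request as before.
		 */
		break;
	case MMC_ISSUE_DCMD:
		/* Flush issued as a DCMD task */
		/* Fall through */
	case MMC_ISSUE_ASYNC:
		/*
		 * Read / write started on the CQE; completion arrives via
		 * mmc_blk_cqe_req_done() -> blk_complete_request().
		 */
		break;
	}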

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 202 ++++++++++++++++++++++++++++++-
 drivers/mmc/core/block.h |   7 ++
 drivers/mmc/core/queue.c | 301 ++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/mmc/core/queue.h |  43 ++++++-
 4 files changed, 546 insertions(+), 7 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 8119c5533a91..3182e5994d4d 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -115,6 +115,7 @@ struct mmc_blk_data {
 #define MMC_BLK_DISCARD		BIT(2)
 #define MMC_BLK_SECDISCARD	BIT(3)
 #define MMC_BLK_SWCMDQ		BIT(4)
+#define MMC_BLK_CQE_RECOVERY	BIT(5)
 
 	/*
 	 * Only set in main mmc_blk_data associated
@@ -2376,6 +2377,205 @@ static void __mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	mmc_queue_req_free(mq, mq_rq);
 }
 
+#define MMC_CQE_RETRIES 2
+
+void mmc_blk_cqe_complete_rq(struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+	struct mmc_request *mrq = &mqrq->brq.mrq;
+	struct request_queue *q = req->q;
+	struct mmc_queue *mq = q->queuedata;
+	struct mmc_host *host = mq->card->host;
+	unsigned long flags;
+	bool put_card;
+	int err;
+
+	mmc_cqe_post_req(host, mrq);
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	mq->cqe_in_flight[mmc_cqe_issue_type(host, req)] -= 1;
+
+	put_card = mmc_cqe_tot_in_flight(mq) == 0;
+
+	mmc_queue_clr_special(req);
+
+	if (mrq->cmd && mrq->cmd->error)
+		err = mrq->cmd->error;
+	else if (mrq->data && mrq->data->error)
+		err = mrq->data->error;
+	else
+		err = 0;
+
+	if (err) {
+		/*
+		 * A req->retries of zero means we have not seen this request
+		 * before, so initialize it to MMC_CQE_RETRIES + 1; the request
+		 * is then retried while the decremented count is at least 1.
+		 */
+		if (!req->retries)
+			req->retries = MMC_CQE_RETRIES + 1;
+		if (--req->retries >= 1)
+			blk_requeue_request(q, req);
+		else
+			__blk_end_request_all(req, -EIO);
+	} else if (mrq->data) {
+		if (__blk_end_request(req, 0, mrq->data->bytes_xfered))
+			blk_requeue_request(q, req);
+	} else {
+		__blk_end_request_all(req, 0);
+	}
+
+	mmc_cqe_kick_queue(mq);
+
+	spin_unlock_irqrestore(q->queue_lock, flags);
+
+	if (put_card)
+		mmc_put_card(mq->card);
+}
+
+void mmc_blk_cqe_recovery(struct mmc_queue *mq)
+{
+	struct mmc_card *card = mq->card;
+	struct mmc_host *host = card->host;
+	int err, i;
+
+	mmc_get_card(card);
+
+	pr_debug("%s: CQE recovery start\n", mmc_hostname(host));
+
+	/*
+	 * Block layer timeouts race with completions, which means the normal
+	 * completion path cannot be used, so tell the CQE to forget the requests.
+	 */
+	err = mmc_cqe_recovery(host, true);
+
+	/* Then complete all requests directly */
+	for (i = 0; i < mq->qdepth; i++) {
+		struct mmc_queue_req *mqrq = &mq->mqrq[i];
+
+		if (mqrq->req) {
+			__mmc_cqe_request_done(host, &mqrq->brq.mrq);
+			mmc_blk_cqe_complete_rq(mqrq->req);
+		}
+	}
+
+	if (err)
+		mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY);
+	else
+		mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
+
+	pr_debug("%s: CQE recovery done\n", mmc_hostname(host));
+
+	mmc_put_card(card);
+}
+
+static void mmc_blk_cqe_req_done(struct mmc_request *mrq)
+{
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+
+	blk_complete_request(mqrq->req);
+}
+
+static int mmc_blk_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	mrq->done = mmc_blk_cqe_req_done;
+	return mmc_cqe_start_req(host, mrq);
+}
+
+static struct mmc_request *mmc_blk_cqe_prep_dcmd(struct mmc_queue_req *mqrq)
+{
+	struct mmc_blk_request *brq = &mqrq->brq;
+
+	memset(brq, 0, sizeof(*brq));
+
+	brq->mrq.cmd = &brq->cmd;
+	brq->mrq.tag = mqrq->req->tag;
+
+	return &brq->mrq;
+}
+
+int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+	struct mmc_request *mrq = mmc_blk_cqe_prep_dcmd(mqrq);
+
+	mrq->cmd->opcode = MMC_SWITCH;
+	mrq->cmd->arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
+			(EXT_CSD_FLUSH_CACHE << 16) |
+			(1 << 8) |
+			EXT_CSD_CMD_SET_NORMAL;
+	mrq->cmd->flags = MMC_CMD_AC | MMC_RSP_R1B;
+
+	return mmc_blk_cqe_start_req(mq->card->host, mrq);
+}
+
+static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+
+	mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
+
+	return mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+}
+
+enum mmc_issued mmc_blk_cqe_issue_rq(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = md->queue.card;
+	struct mmc_host *host = card->host;
+	int ret;
+
+	ret = mmc_blk_part_switch(card, md);
+	if (ret)
+		return MMC_REQ_FAILED_TO_START;
+
+	switch (mmc_cqe_issue_type(host, req)) {
+	case MMC_ISSUE_SYNC:
+		ret = host->cqe_ops->cqe_wait_for_idle(host);
+		if (ret)
+			return MMC_REQ_BUSY;
+		switch (req_op(req)) {
+		case REQ_OP_DISCARD:
+			mmc_blk_issue_discard_rq(mq, req);
+			break;
+		case REQ_OP_SECURE_ERASE:
+			mmc_blk_issue_secdiscard_rq(mq, req);
+			break;
+		case REQ_OP_FLUSH:
+			mmc_blk_issue_flush(mq, req);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			return MMC_REQ_FAILED_TO_START;
+		}
+		return MMC_REQ_FINISHED;
+	case MMC_ISSUE_DCMD:
+	case MMC_ISSUE_ASYNC:
+		mmc_queue_set_special(mq, req);
+		switch (req_op(req)) {
+		case REQ_OP_FLUSH:
+			ret = mmc_blk_cqe_issue_flush(mq, req);
+			break;
+		case REQ_OP_READ:
+		case REQ_OP_WRITE:
+			ret = mmc_blk_cqe_issue_rw_rq(mq, req);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			ret = -EINVAL;
+		}
+		if (!ret)
+			return MMC_REQ_STARTED;
+		mmc_queue_clr_special(req);
+		return ret == -EBUSY ? MMC_REQ_BUSY : MMC_REQ_FAILED_TO_START;
+	default:
+		WARN_ON_ONCE(1);
+		return MMC_REQ_FAILED_TO_START;
+	}
+}
+
 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_blk_data *md = mq->blkdata;
@@ -2473,7 +2673,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	INIT_LIST_HEAD(&md->part);
 	md->usage = 1;
 
-	ret = mmc_init_queue(&md->queue, card, &md->lock, subname);
+	ret = mmc_init_queue(&md->queue, card, &md->lock, subname, area_type);
 	if (ret)
 		goto err_putdisk;
 
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 860ca7c8df86..d7b3d7008b00 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -6,4 +6,11 @@
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
 
+enum mmc_issued;
+
+enum mmc_issued mmc_blk_cqe_issue_rq(struct mmc_queue *mq,
+				     struct request *req);
+void mmc_blk_cqe_complete_rq(struct request *rq);
+void mmc_blk_cqe_recovery(struct mmc_queue *mq);
+
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 57ebced79fb1..32941ffa29e2 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -79,6 +79,274 @@ void mmc_queue_req_free(struct mmc_queue *mq,
 	__clear_bit(mqrq->task_id, &mq->qslots);
 }
 
+static void mmc_cqe_request_fn(struct request_queue *q)
+{
+	struct mmc_queue *mq = q->queuedata;
+	struct request *req;
+
+	if (!mq) {
+		while ((req = blk_fetch_request(q)) != NULL) {
+			req->rq_flags |= RQF_QUIET;
+			__blk_end_request_all(req, -EIO);
+		}
+		return;
+	}
+
+	if (mq->asleep && !mq->cqe_busy)
+		wake_up_process(mq->thread);
+}
+
+static inline bool mmc_cqe_dcmd_busy(struct mmc_queue *mq)
+{
+	/* Allow only 1 DCMD at a time */
+	return mq->cqe_in_flight[MMC_ISSUE_DCMD];
+}
+
+static inline bool mmc_cqe_queue_full(struct mmc_queue *mq)
+{
+	return mmc_cqe_qcnt(mq) >= mq->qdepth;
+}
+
+void mmc_cqe_kick_queue(struct mmc_queue *mq)
+{
+	if ((mq->cqe_busy & MMC_CQE_DCMD_BUSY) && !mmc_cqe_dcmd_busy(mq))
+		mq->cqe_busy &= ~MMC_CQE_DCMD_BUSY;
+
+	if ((mq->cqe_busy & MMC_CQE_QUEUE_FULL) && !mmc_cqe_queue_full(mq))
+		mq->cqe_busy &= ~MMC_CQE_QUEUE_FULL;
+
+	if (mq->asleep && !mq->cqe_busy)
+		__blk_run_queue(mq->queue);
+}
+
+static inline bool mmc_cqe_can_dcmd(struct mmc_host *host)
+{
+	return host->caps2 & MMC_CAP2_CQE_DCMD;
+}
+
+enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
+				       struct request *req)
+{
+	switch (req_op(req)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
+		return MMC_ISSUE_SYNC;
+	case REQ_OP_FLUSH:
+		return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC;
+	default:
+		return MMC_ISSUE_ASYNC;
+	}
+}
+
+void mmc_queue_set_special(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mqrq = &mq->mqrq[req->tag];
+
+	mqrq->req = req;
+	req->special = mqrq;
+}
+
+void mmc_queue_clr_special(struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+
+	if (!mqrq)
+		return;
+
+	mqrq->req = NULL;
+	req->special = NULL;
+}
+
+static void __mmc_cqe_recovery_notifier(struct mmc_queue *mq)
+{
+	if (!mq->cqe_recovery_needed) {
+		mq->cqe_recovery_needed = true;
+		wake_up_process(mq->thread);
+	}
+}
+
+static void mmc_cqe_recovery_notifier(struct mmc_host *host,
+				      struct mmc_request *mrq)
+{
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+	struct request *req = mqrq->req;
+	struct request_queue *q = req->q;
+	struct mmc_queue *mq = q->queuedata;
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	__mmc_cqe_recovery_notifier(mq);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+static int mmc_cqe_thread(void *d)
+{
+	struct mmc_queue *mq = d;
+	struct request_queue *q = mq->queue;
+	struct mmc_card *card = mq->card;
+	struct mmc_host *host = card->host;
+	unsigned long flags;
+	int get_put = 0;
+
+	current->flags |= PF_MEMALLOC;
+
+	down(&mq->thread_sem);
+	spin_lock_irqsave(q->queue_lock, flags);
+	while (1) {
+		struct request *req = NULL;
+		enum mmc_issue_type issue_type;
+		bool retune_ok = false;
+
+		if (mq->cqe_recovery_needed) {
+			spin_unlock_irqrestore(q->queue_lock, flags);
+			mmc_blk_cqe_recovery(mq);
+			spin_lock_irqsave(q->queue_lock, flags);
+			mq->cqe_recovery_needed = false;
+		}
+
+		set_current_state(TASK_INTERRUPTIBLE);
+
+		if (!kthread_should_stop())
+			req = blk_peek_request(q);
+
+		if (req) {
+			issue_type = mmc_cqe_issue_type(host, req);
+			switch (issue_type) {
+			case MMC_ISSUE_DCMD:
+				if (mmc_cqe_dcmd_busy(mq)) {
+					mq->cqe_busy |= MMC_CQE_DCMD_BUSY;
+					req = NULL;
+					break;
+				}
+				/* Fall through */
+			case MMC_ISSUE_ASYNC:
+				if (blk_queue_start_tag(q, req)) {
+					mq->cqe_busy |= MMC_CQE_QUEUE_FULL;
+					req = NULL;
+				}
+				break;
+			default:
+				/*
+				 * Timeouts are handled by mmc core, so set a
+				 * large value to avoid races.
+				 */
+				req->timeout = 600 * HZ;
+				req->special = NULL;
+				blk_start_request(req);
+				break;
+			}
+			if (req) {
+				mq->cqe_in_flight[issue_type] += 1;
+				if (mmc_cqe_tot_in_flight(mq) == 1)
+					get_put += 1;
+				if (mmc_cqe_qcnt(mq) == 1)
+					retune_ok = true;
+			}
+		}
+
+		mq->asleep = !req;
+
+		spin_unlock_irqrestore(q->queue_lock, flags);
+
+		if (req) {
+			enum mmc_issued issued;
+
+			set_current_state(TASK_RUNNING);
+
+			if (get_put) {
+				get_put = 0;
+				mmc_get_card(card);
+			}
+
+			if (host->need_retune && retune_ok &&
+			    !host->hold_retune)
+				host->retune_now = true;
+			else
+				host->retune_now = false;
+
+			issued = mmc_blk_cqe_issue_rq(mq, req);
+
+			cond_resched();
+
+			spin_lock_irqsave(q->queue_lock, flags);
+
+			switch (issued) {
+			case MMC_REQ_STARTED:
+				break;
+			case MMC_REQ_BUSY:
+				blk_requeue_request(q, req);
+				goto finished;
+			case MMC_REQ_FAILED_TO_START:
+				__blk_end_request_all(req, -EIO);
+				/* Fall through */
+			case MMC_REQ_FINISHED:
+finished:
+				mq->cqe_in_flight[issue_type] -= 1;
+				if (mmc_cqe_tot_in_flight(mq) == 0)
+					get_put = -1;
+			}
+		} else {
+			if (get_put < 0) {
+				get_put = 0;
+				mmc_put_card(card);
+			}
+			/*
+			 * Do not stop with requests in flight in case recovery
+			 * is needed.
+			 */
+			if (kthread_should_stop() &&
+			    !mmc_cqe_tot_in_flight(mq)) {
+				set_current_state(TASK_RUNNING);
+				break;
+			}
+			up(&mq->thread_sem);
+			schedule();
+			down(&mq->thread_sem);
+			spin_lock_irqsave(q->queue_lock, flags);
+		}
+	} /* loop */
+	up(&mq->thread_sem);
+
+	return 0;
+}
+
+enum blk_eh_timer_return __mmc_cqe_timed_out(struct request *req)
+{
+	struct mmc_queue_req *mqrq = req->special;
+	struct mmc_request *mrq = &mqrq->brq.mrq;
+	struct mmc_queue *mq = req->q->queuedata;
+	struct mmc_host *host = mq->card->host;
+	enum mmc_issue_type issue_type = mmc_cqe_issue_type(host, req);
+	bool recovery_needed = false;
+
+	switch (issue_type) {
+	case MMC_ISSUE_ASYNC:
+	case MMC_ISSUE_DCMD:
+		if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
+			if (recovery_needed)
+				__mmc_cqe_recovery_notifier(mq);
+			return BLK_EH_RESET_TIMER;
+		} else {
+			/* No timeout */
+			return BLK_EH_HANDLED;
+		}
+	default:
+		/* Timeout is handled by mmc core */
+		return BLK_EH_RESET_TIMER;
+	}
+}
+
+enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
+{
+	struct mmc_queue *mq = req->q->queuedata;
+
+	if (!req->special || mq->cqe_recovery_needed)
+		return BLK_EH_RESET_TIMER;
+
+	return __mmc_cqe_timed_out(req);
+}
+
 static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
@@ -398,20 +666,43 @@ int mmc_queue_alloc_shared_queue(struct mmc_card *card)
  * Initialise a MMC card request queue.
  */
 int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
-		   spinlock_t *lock, const char *subname)
+		   spinlock_t *lock, const char *subname, int area_type)
 {
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
 	int ret = -ENOMEM;
+	bool use_cqe = host->cqe_enabled && area_type != MMC_BLK_DATA_AREA_RPMB;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	mq->card = card;
-	mq->queue = blk_init_queue(mmc_request_fn, lock);
+
+	mq->queue = blk_init_queue(use_cqe ?
+				   mmc_cqe_request_fn : mmc_request_fn, lock);
 	if (!mq->queue)
 		return -ENOMEM;
 
+	if (use_cqe) {
+		int q_depth = card->ext_csd.cmdq_depth;
+
+		if (q_depth > host->cqe_qdepth)
+			q_depth = host->cqe_qdepth;
+		if (q_depth > card->qdepth)
+			q_depth = card->qdepth;
+
+		ret = blk_queue_init_tags(mq->queue, q_depth, NULL,
+					  BLK_TAG_ALLOC_FIFO);
+		if (ret)
+			goto cleanup_queue;
+
+		blk_queue_softirq_done(mq->queue, mmc_blk_cqe_complete_rq);
+		blk_queue_rq_timed_out(mq->queue, mmc_cqe_timed_out);
+		blk_queue_rq_timeout(mq->queue, 60 * HZ);
+
+		host->cqe_recovery_notifier = mmc_cqe_recovery_notifier;
+	}
+
 	mq->mqrq = card->mqrq;
 	mq->qdepth = card->qdepth;
 	mq->queue->queuedata = mq;
@@ -437,9 +728,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 
 	sema_init(&mq->thread_sem, 1);
 
-	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
-		host->index, subname ? subname : "");
-
+	mq->thread = kthread_run(use_cqe ? mmc_cqe_thread : mmc_queue_thread,
+				 mq, "mmcqd/%d%s", host->index,
+				 subname ? subname : "");
 	if (IS_ERR(mq->thread)) {
 		ret = PTR_ERR(mq->thread);
 		goto cleanup_queue;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 273bba434070..d92805971b05 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -14,6 +14,13 @@ static inline bool mmc_req_is_special(struct request *req)
 		 req_op(req) == REQ_OP_SECURE_ERASE);
 }
 
+enum mmc_issued {
+	MMC_REQ_STARTED,
+	MMC_REQ_BUSY,
+	MMC_REQ_FAILED_TO_START,
+	MMC_REQ_FINISHED,
+};
+
 struct task_struct;
 struct mmc_blk_data;
 
@@ -42,6 +49,13 @@ struct mmc_queue_req {
 	unsigned int		retry_cnt;
 };
 
+enum mmc_issue_type {
+	MMC_ISSUE_SYNC,
+	MMC_ISSUE_DCMD,
+	MMC_ISSUE_ASYNC,
+	MMC_ISSUE_MAX,
+};
+
 struct mmc_queue {
 	struct mmc_card		*card;
 	struct task_struct	*thread;
@@ -59,12 +73,18 @@ struct mmc_queue {
 	unsigned long		qsr;
 	struct mmc_async_req	*prepared_areq;
 	bool			qsr_err;
+	/* Following are defined for a Command Queue Engine */
+	int			cqe_in_flight[MMC_ISSUE_MAX];
+	unsigned int		cqe_busy;
+	bool			cqe_recovery_needed;
+#define MMC_CQE_DCMD_BUSY	BIT(0)
+#define MMC_CQE_QUEUE_FULL	BIT(1)
 };
 
 extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
 extern void mmc_queue_free_shared_queue(struct mmc_card *card);
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
-			  const char *);
+			  const char *, int);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);
@@ -81,4 +101,25 @@ extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
 extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
 extern void mmc_queue_set_wake(struct mmc_queue *, bool);
 
+void mmc_queue_set_special(struct mmc_queue *mq, struct request *req);
+void mmc_queue_clr_special(struct request *req);
+
+void mmc_cqe_kick_queue(struct mmc_queue *mq);
+
+enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
+				       struct request *req);
+
+static inline int mmc_cqe_tot_in_flight(struct mmc_queue *mq)
+{
+	return mq->cqe_in_flight[MMC_ISSUE_SYNC] +
+	       mq->cqe_in_flight[MMC_ISSUE_DCMD] +
+	       mq->cqe_in_flight[MMC_ISSUE_ASYNC];
+}
+
+static inline int mmc_cqe_qcnt(struct mmc_queue *mq)
+{
+	return mq->cqe_in_flight[MMC_ISSUE_DCMD] +
+	       mq->cqe_in_flight[MMC_ISSUE_ASYNC];
+}
+
 #endif
-- 
1.9.1



* [PATCH RFC 24/39] mmc: cqhci: support for command queue enabled host
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (22 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 23/39] mmc: block: Add CQE support Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 25/39] mmc: sdhci: Improve debug print format Adrian Hunter
                   ` (14 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

From: Venkat Gopalakrishnan <venkatg@codeaurora.org>

This patch adds CMDQ support for command-queue compatible
hosts.

Command queueing was added in the eMMC 5.1 specification. It
enables the controller to process up to 32 requests at
a time.

Adrian Hunter contributed renaming to cqhci, recovery, suspend
and resume, cqhci_off, cqhci_wait_for_idle, and external timeout
handling.
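
For host drivers the interface is intentionally small: allocate and
initialise a cqhci_host, then forward interrupts. The sketch below is
hypothetical glue (the my_* names are placeholders, not part of this
patch); only the cqhci_* calls and mmc fields come from this series:

	static irqreturn_t my_irq_handler(int irq, void *dev_id)
	{
		struct mmc_host *mmc = dev_id;
		/* my_read_int_status() stands in for however the host
		 * controller reads its interrupt status register */
		u32 intmask = my_read_int_status(mmc);

		if (mmc->cqe_on)
			return cqhci_irq(mmc, intmask, 0, 0);

		/* ... non-CQE interrupt handling ... */
		return IRQ_HANDLED;
	}

	static int my_probe(struct platform_device *pdev)
	{
		struct mmc_host *mmc = my_get_mmc(pdev);  /* placeholder */
		struct cqhci_host *cq_host;

		cq_host = cqhci_pltfm_init(pdev);  /* maps "cqhci_mem" */
		if (IS_ERR(cq_host))
			return PTR_ERR(cq_host);

		cq_host->ops = &my_cqhci_host_ops;  /* placeholder ops */
		mmc->caps2 |= MMC_CAP2_CQE;  /* let the core enable CQE */

		/* dma64 selects 128-bit task/link descriptors */
		return cqhci_init(cq_host, mmc, false);
	}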

Signed-off-by: Asutosh Das <asutoshd@codeaurora.org>
Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>
Signed-off-by: Ritesh Harjani <riteshh@codeaurora.org>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/Kconfig  |   13 +
 drivers/mmc/host/Makefile |    1 +
 drivers/mmc/host/cqhci.c  | 1148 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/host/cqhci.h  |  228 +++++++++
 4 files changed, 1390 insertions(+)
 create mode 100644 drivers/mmc/host/cqhci.c
 create mode 100644 drivers/mmc/host/cqhci.h

diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index f08691a58d7e..fceba81b8f37 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -794,6 +794,19 @@ config MMC_SUNXI
 	  This selects support for the SD/MMC Host Controller on
 	  Allwinner sunxi SoCs.
 
+config MMC_CQHCI
+	tristate "Command Queue Host Controller Interface support"
+	depends on HAS_DMA
+	help
+	  This selects the Command Queue Host Controller Interface (CQHCI)
+	  support present in host controllers of Qualcomm Technologies, Inc
+	  amongst others.
+	  It is used with eMMC devices that support command queueing.
+
+	  If you have a controller with this interface, say Y or M here.
+
+	  If unsure, say N.
+
 config MMC_TOSHIBA_PCI
 	tristate "Toshiba Type A SD/MMC Card Interface Driver"
 	depends on PCI
diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile
index 6d548c4ee2fa..b78964d3cbea 100644
--- a/drivers/mmc/host/Makefile
+++ b/drivers/mmc/host/Makefile
@@ -79,6 +79,7 @@ obj-$(CONFIG_MMC_SDHCI_MSM)		+= sdhci-msm.o
 obj-$(CONFIG_MMC_SDHCI_ST)		+= sdhci-st.o
 obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32)	+= sdhci-pic32.o
 obj-$(CONFIG_MMC_SDHCI_BRCMSTB)		+= sdhci-brcmstb.o
+obj-$(CONFIG_MMC_CQHCI)			+= cqhci.o
 
 ifeq ($(CONFIG_CB710_DEBUG),y)
 	CFLAGS-cb710-mmc	+= -DDEBUG
diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
new file mode 100644
index 000000000000..bdf2626bdc16
--- /dev/null
+++ b/drivers/mmc/host/cqhci.c
@@ -0,0 +1,1148 @@
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/scatterlist.h>
+#include <linux/platform_device.h>
+#include <linux/ktime.h>
+
+#include <linux/mmc/mmc.h>
+#include <linux/mmc/host.h>
+#include <linux/mmc/card.h>
+
+#include "cqhci.h"
+
+#define DCMD_SLOT 31
+#define NUM_SLOTS 32
+
+struct cqhci_slot {
+	struct mmc_request *mrq;
+	unsigned int flags;
+#define CQHCI_EXTERNAL_TIMEOUT	BIT(0)
+#define CQHCI_COMPLETED		BIT(1)
+#define CQHCI_HOST_CRC		BIT(2)
+#define CQHCI_HOST_TIMEOUT	BIT(3)
+#define CQHCI_HOST_OTHER	BIT(4)
+};
+
+static inline u8 *get_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	return cq_host->desc_base + (tag * cq_host->slot_sz);
+}
+
+static inline u8 *get_link_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	u8 *desc = get_desc(cq_host, tag);
+
+	return desc + cq_host->task_desc_len;
+}
+
+static inline dma_addr_t get_trans_desc_dma(struct cqhci_host *cq_host, u8 tag)
+{
+	return cq_host->trans_desc_dma_base +
+		(cq_host->mmc->max_segs * tag *
+		 cq_host->trans_desc_len);
+}
+
+static inline u8 *get_trans_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	return cq_host->trans_desc_base +
+		(cq_host->trans_desc_len * cq_host->mmc->max_segs * tag);
+}
+
+static void setup_trans_desc(struct cqhci_host *cq_host, u8 tag)
+{
+	u8 *link_temp;
+	dma_addr_t trans_temp;
+
+	link_temp = get_link_desc(cq_host, tag);
+	trans_temp = get_trans_desc_dma(cq_host, tag);
+
+	memset(link_temp, 0, cq_host->link_desc_len);
+	if (cq_host->link_desc_len > 8)
+		*(link_temp + 8) = 0;
+
+	if (tag == DCMD_SLOT) {
+		*link_temp = CQHCI_VALID(0) | CQHCI_ACT(0) | CQHCI_END(1);
+		return;
+	}
+
+	*link_temp = CQHCI_VALID(1) | CQHCI_ACT(0x6) | CQHCI_END(0);
+
+	if (cq_host->dma64) {
+		__le64 *data_addr = (__le64 __force *)(link_temp + 4);
+
+		data_addr[0] = cpu_to_le64(trans_temp);
+	} else {
+		__le32 *data_addr = (__le32 __force *)(link_temp + 4);
+
+		data_addr[0] = cpu_to_le32(trans_temp);
+	}
+}
+
+static void cqhci_set_irqs(struct cqhci_host *cq_host, u32 set)
+{
+	u32 ier;
+
+	ier = cqhci_readl(cq_host, CQHCI_ISTE);
+	ier |= set;
+	cqhci_writel(cq_host, ier, CQHCI_ISTE);
+	cqhci_writel(cq_host, ier, CQHCI_ISGE);
+}
+
+#define DRV_NAME "cqhci"
+
+#define CQHCI_DUMP(f, x...) \
+	pr_err("%s: " DRV_NAME ": " f, mmc_hostname(mmc), ## x)
+
+static void cqhci_dumpregs(struct cqhci_host *cq_host)
+{
+	struct mmc_host *mmc = cq_host->mmc;
+
+	CQHCI_DUMP("============ CQHCI REGISTER DUMP ===========\n");
+
+	CQHCI_DUMP("Caps:      0x%08x | Version:  0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_CAP),
+		   cqhci_readl(cq_host, CQHCI_VER));
+	CQHCI_DUMP("Config:    0x%08x | Control:  0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_CFG),
+		   cqhci_readl(cq_host, CQHCI_CTL));
+	CQHCI_DUMP("Int stat:  0x%08x | Int enab: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_IS),
+		   cqhci_readl(cq_host, CQHCI_ISTE));
+	CQHCI_DUMP("Int sig:   0x%08x | Int Coal: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_ISGE),
+		   cqhci_readl(cq_host, CQHCI_IC));
+	CQHCI_DUMP("TDL base:  0x%08x | TDL up32: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_TDLBA),
+		   cqhci_readl(cq_host, CQHCI_TDLBAU));
+	CQHCI_DUMP("Doorbell:  0x%08x | TCN:      0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_TDBR),
+		   cqhci_readl(cq_host, CQHCI_TCN));
+	CQHCI_DUMP("Dev queue: 0x%08x | Dev Pend: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_DQS),
+		   cqhci_readl(cq_host, CQHCI_DPT));
+	CQHCI_DUMP("Task clr:  0x%08x | SSC1:     0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_TCLR),
+		   cqhci_readl(cq_host, CQHCI_SSC1));
+	CQHCI_DUMP("SSC2:      0x%08x | DCMD rsp: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_SSC2),
+		   cqhci_readl(cq_host, CQHCI_CRDCT));
+	CQHCI_DUMP("RED mask:  0x%08x | TERRI:    0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_RMEM),
+		   cqhci_readl(cq_host, CQHCI_TERRI));
+	CQHCI_DUMP("Resp idx:  0x%08x | Resp arg: 0x%08x\n",
+		   cqhci_readl(cq_host, CQHCI_CRI),
+		   cqhci_readl(cq_host, CQHCI_CRA));
+
+	if (cq_host->ops->dumpregs)
+		cq_host->ops->dumpregs(mmc);
+	else
+		CQHCI_DUMP(": ===========================================\n");
+}
+
+/*
+ * The allocated descriptor table for task, link & transfer descriptors
+ * looks like:
+ * |----------|
+ * |task desc |  |->|----------|
+ * |----------|  |  |trans desc|
+ * |link desc-|->|  |----------|
+ * |----------|          .
+ *      .                .
+ *  no. of slots      max-segs
+ *      .           |----------|
+ * |----------|
+ * The idea here is to create the [task+trans] table and mark & point the
+ * link desc to the transfer desc table on a per slot basis.
+ */
+static int cqhci_host_alloc_tdl(struct cqhci_host *cq_host)
+{
+	int i = 0;
+
+	/* task descriptor can be 64/128 bit irrespective of arch */
+	if (cq_host->caps & CQHCI_TASK_DESC_SZ_128) {
+		cqhci_writel(cq_host, cqhci_readl(cq_host, CQHCI_CFG) |
+			       CQHCI_TASK_DESC_SZ, CQHCI_CFG);
+		cq_host->task_desc_len = 16;
+	} else {
+		cq_host->task_desc_len = 8;
+	}
+
+	/*
+	 * The transfer descriptor is 96 bits instead of 128 bits for some
+	 * hosts, which means ADMA expects the next valid descriptor at the
+	 * 96th bit or the 128th bit accordingly.
+	 */
+	if (cq_host->dma64) {
+		if (cq_host->quirks & CQHCI_QUIRK_SHORT_TXFR_DESC_SZ)
+			cq_host->trans_desc_len = 12;
+		else
+			cq_host->trans_desc_len = 16;
+		cq_host->link_desc_len = 16;
+	} else {
+		cq_host->trans_desc_len = 8;
+		cq_host->link_desc_len = 8;
+	}
+
+	/* total size of a slot: 1 task & 1 transfer (link) */
+	cq_host->slot_sz = cq_host->task_desc_len + cq_host->link_desc_len;
+
+	cq_host->desc_size = cq_host->slot_sz * cq_host->num_slots;
+
+	cq_host->data_size = cq_host->trans_desc_len * cq_host->mmc->max_segs *
+		(cq_host->num_slots - 1);
+
+	pr_debug("%s: cqhci: desc_size: %zu data_sz: %zu slot-sz: %d\n",
+		 mmc_hostname(cq_host->mmc), cq_host->desc_size, cq_host->data_size,
+		 cq_host->slot_sz);
+
+	/*
+	 * Allocate one DMA-mapped chunk of memory for the task/link
+	 * descriptor table and another for the transfer descriptors,
+	 * then set up each slot's link descriptor to point at its
+	 * transfer descriptor area.
+	 */
+	cq_host->desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc),
+						 cq_host->desc_size,
+						 &cq_host->desc_dma_base,
+						 GFP_KERNEL);
+	cq_host->trans_desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc),
+					      cq_host->data_size,
+					      &cq_host->trans_desc_dma_base,
+					      GFP_KERNEL);
+	if (!cq_host->desc_base || !cq_host->trans_desc_base)
+		return -ENOMEM;
+
+	pr_debug("%s: cqhci: desc-base: 0x%p trans-base: 0x%p\n desc_dma 0x%llx trans_dma: 0x%llx\n",
+		 mmc_hostname(cq_host->mmc), cq_host->desc_base, cq_host->trans_desc_base,
+		(unsigned long long)cq_host->desc_dma_base,
+		(unsigned long long)cq_host->trans_desc_dma_base);
+
+	for (; i < (cq_host->num_slots); i++)
+		setup_trans_desc(cq_host, i);
+
+	return 0;
+}
+
+static void __cqhci_enable(struct cqhci_host *cq_host)
+{
+	struct mmc_host *mmc = cq_host->mmc;
+	u32 cqcfg;
+
+	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+
+	/* Configuration must not be changed while enabled */
+	if (cqcfg & CQHCI_ENABLE) {
+		cqcfg &= ~CQHCI_ENABLE;
+		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+	}
+
+	cqcfg &= ~(CQHCI_DCMD | CQHCI_TASK_DESC_SZ);
+
+	if (mmc->caps2 & MMC_CAP2_CQE_DCMD)
+		cqcfg |= CQHCI_DCMD;
+
+	if (cq_host->caps & CQHCI_TASK_DESC_SZ_128)
+		cqcfg |= CQHCI_TASK_DESC_SZ;
+
+	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+
+	cqhci_writel(cq_host, lower_32_bits(cq_host->desc_dma_base),
+		     CQHCI_TDLBA);
+	cqhci_writel(cq_host, upper_32_bits(cq_host->desc_dma_base),
+		     CQHCI_TDLBAU);
+
+	cqhci_writel(cq_host, cq_host->rca, CQHCI_SSC2);
+
+	cqhci_set_irqs(cq_host, 0);
+
+	cqcfg |= CQHCI_ENABLE;
+
+	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+
+	mmc->cqe_on = true;
+
+	if (cq_host->ops->enable)
+		cq_host->ops->enable(mmc);
+
+	/* Ensure all writes are done before interrupts are enabled */
+	wmb();
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_MASK);
+
+	cq_host->activated = true;
+}
+
+static void __cqhci_disable(struct cqhci_host *cq_host)
+{
+	u32 cqcfg;
+
+	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+	cqcfg &= ~CQHCI_ENABLE;
+	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+
+	cq_host->mmc->cqe_on = false;
+
+	cq_host->activated = false;
+}
+
+int cqhci_suspend(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (cq_host->enabled)
+		__cqhci_disable(cq_host);
+
+	return 0;
+}
+EXPORT_SYMBOL(cqhci_suspend);
+
+int cqhci_resume(struct mmc_host *mmc)
+{
+	/* Re-enable is done upon first request */
+	return 0;
+}
+EXPORT_SYMBOL(cqhci_resume);
+
+static int cqhci_enable(struct mmc_host *mmc, struct mmc_card *card)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	int err;
+
+	if (cq_host->enabled)
+		return 0;
+
+	cq_host->rca = card->rca;
+
+	err = cqhci_host_alloc_tdl(cq_host);
+	if (err)
+		return err;
+
+	__cqhci_enable(cq_host);
+
+	cq_host->enabled = true;
+
+#ifdef DEBUG
+	cqhci_dumpregs(cq_host);
+#endif
+	return 0;
+}
+
+/* CQHCI is idle and should halt immediately, so set a small timeout */
+#define CQHCI_OFF_TIMEOUT 100
+
+static void cqhci_off(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	ktime_t timeout;
+	bool timed_out;
+	u32 reg;
+
+	if (!cq_host->enabled || !mmc->cqe_on || cq_host->recovery_halt)
+		return;
+
+	if (cq_host->ops->disable)
+		cq_host->ops->disable(mmc, false);
+
+	cqhci_writel(cq_host, CQHCI_HALT, CQHCI_CTL);
+
+	timeout = ktime_add_us(ktime_get(), CQHCI_OFF_TIMEOUT);
+	while (1) {
+		timed_out = ktime_compare(ktime_get(), timeout) > 0;
+		reg = cqhci_readl(cq_host, CQHCI_CTL);
+		if ((reg & CQHCI_HALT) || timed_out)
+			break;
+	}
+
+	if (timed_out)
+		pr_err("%s: cqhci: CQE stuck on\n", mmc_hostname(mmc));
+	else
+		pr_debug("%s: cqhci: CQE off\n", mmc_hostname(mmc));
+
+	mmc->cqe_on = false;
+}
+
+static void cqhci_disable(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (!cq_host->enabled)
+		return;
+
+	cqhci_off(mmc);
+
+	__cqhci_disable(cq_host);
+
+	dmam_free_coherent(mmc_dev(mmc), cq_host->data_size,
+			   cq_host->trans_desc_base,
+			   cq_host->trans_desc_dma_base);
+
+	dmam_free_coherent(mmc_dev(mmc), cq_host->desc_size,
+			   cq_host->desc_base,
+			   cq_host->desc_dma_base);
+
+	cq_host->trans_desc_base = NULL;
+	cq_host->desc_base = NULL;
+
+	cq_host->enabled = false;
+}
+
+static void cqhci_prep_task_desc(struct mmc_request *mrq,
+					u64 *data, bool intr)
+{
+	u32 req_flags = mrq->data->flags;
+
+	*data = CQHCI_VALID(1) |
+		CQHCI_END(1) |
+		CQHCI_INT(intr) |
+		CQHCI_ACT(0x5) |
+		CQHCI_FORCED_PROG(!!(req_flags & MMC_DATA_FORCED_PRG)) |
+		CQHCI_DATA_TAG(!!(req_flags & MMC_DATA_DAT_TAG)) |
+		CQHCI_DATA_DIR(!!(req_flags & MMC_DATA_READ)) |
+		CQHCI_PRIORITY(!!(req_flags & MMC_DATA_PRIO)) |
+		CQHCI_QBAR(!!(req_flags & MMC_DATA_QBR)) |
+		CQHCI_REL_WRITE(!!(req_flags & MMC_DATA_REL_WR)) |
+		CQHCI_BLK_COUNT(mrq->data->blocks) |
+		CQHCI_BLK_ADDR((u64)mrq->data->blk_addr);
+
+	pr_debug("%s: cqhci: tag %d task descriptor 0x016%llx\n",
+		 mmc_hostname(mrq->host), mrq->tag, (unsigned long long)*data);
+}
+
+static int cqhci_dma_map(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int sg_count;
+	struct mmc_data *data = mrq->data;
+
+	if (!data)
+		return -EINVAL;
+
+	sg_count = dma_map_sg(mmc_dev(host), data->sg,
+			      data->sg_len,
+			      (data->flags & MMC_DATA_WRITE) ?
+			      DMA_TO_DEVICE : DMA_FROM_DEVICE);
+	if (!sg_count) {
+		pr_err("%s: sg-len: %d\n", __func__, data->sg_len);
+		return -ENOMEM;
+	}
+
+	return sg_count;
+}
+
+static void cqhci_set_tran_desc(u8 *desc,
+				 dma_addr_t addr, int len, bool end)
+{
+	__le64 *dataddr = (__le64 __force *)(desc + 4);
+	__le32 *attr = (__le32 __force *)desc;
+
+	*attr = (CQHCI_VALID(1) |
+		 CQHCI_END(end ? 1 : 0) |
+		 CQHCI_INT(0) |
+		 CQHCI_ACT(0x4) |
+		 CQHCI_DAT_LENGTH(len));
+
+	dataddr[0] = cpu_to_le64(addr);
+}
+
+static int cqhci_prep_tran_desc(struct mmc_request *mrq,
+			       struct cqhci_host *cq_host, int tag)
+{
+	struct mmc_data *data = mrq->data;
+	int i, sg_count, len;
+	bool end = false;
+	dma_addr_t addr;
+	u8 *desc;
+	struct scatterlist *sg;
+
+	sg_count = cqhci_dma_map(mrq->host, mrq);
+	if (sg_count < 0) {
+		pr_err("%s: %s: unable to map sg lists, %d\n",
+				mmc_hostname(mrq->host), __func__, sg_count);
+		return sg_count;
+	}
+
+	desc = get_trans_desc(cq_host, tag);
+
+	for_each_sg(data->sg, sg, sg_count, i) {
+		addr = sg_dma_address(sg);
+		len = sg_dma_len(sg);
+
+		if ((i+1) == sg_count)
+			end = true;
+		cqhci_set_tran_desc(desc, addr, len, end);
+		desc += cq_host->trans_desc_len;
+	}
+
+	return 0;
+}
+
+static void cqhci_prep_dcmd_desc(struct mmc_host *mmc,
+				   struct mmc_request *mrq)
+{
+	u64 *task_desc = NULL;
+	u64 data = 0;
+	u8 resp_type;
+	u8 *desc;
+	__le64 *dataddr;
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	u8 timing;
+
+	if (!(mrq->cmd->flags & MMC_RSP_PRESENT)) {
+		resp_type = 0x0;
+		timing = 0x1;
+	} else {
+		if (mrq->cmd->flags & MMC_RSP_R1B) {
+			resp_type = 0x3;
+			timing = 0x0;
+		} else {
+			resp_type = 0x2;
+			timing = 0x1;
+		}
+	}
+
+	task_desc = (__le64 __force *)get_desc(cq_host, cq_host->dcmd_slot);
+	memset(task_desc, 0, cq_host->task_desc_len);
+	data |= (CQHCI_VALID(1) |
+		 CQHCI_END(1) |
+		 CQHCI_INT(1) |
+		 CQHCI_QBAR(1) |
+		 CQHCI_ACT(0x5) |
+		 CQHCI_CMD_INDEX(mrq->cmd->opcode) |
+		 CQHCI_CMD_TIMING(timing) | CQHCI_RESP_TYPE(resp_type));
+	*task_desc |= data;
+	desc = (u8 *)task_desc;
+	pr_debug("%s: cqhci: dcmd: cmd: %d timing: %d resp: %d\n",
+		 mmc_hostname(mmc), mrq->cmd->opcode, timing, resp_type);
+	dataddr = (__le64 __force *)(desc + 4);
+	dataddr[0] = cpu_to_le64((u64)mrq->cmd->arg);
+
+}
+
+static void cqhci_post_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	struct mmc_data *data = mrq->data;
+
+	if (data) {
+		dma_unmap_sg(mmc_dev(host), data->sg, data->sg_len,
+			     (data->flags & MMC_DATA_READ) ?
+			     DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	}
+}
+
+static inline int cqhci_tag(struct mmc_request *mrq)
+{
+	return mrq->cmd ? DCMD_SLOT : mrq->tag;
+}
+
+static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+	int err = 0;
+	u64 data = 0;
+	u64 *task_desc = NULL;
+	int tag = cqhci_tag(mrq);
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	unsigned long flags;
+
+	if (!cq_host->enabled) {
+		pr_err("%s: cqhci: not enabled\n", mmc_hostname(mmc));
+		return -EINVAL;
+	}
+
+	/* First request after resume has to re-enable */
+	if (!cq_host->activated)
+		__cqhci_enable(cq_host);
+
+	if (!mmc->cqe_on) {
+		cqhci_writel(cq_host, 0, CQHCI_CTL);
+		mmc->cqe_on = true;
+		pr_debug("%s: cqhci: CQE on\n", mmc_hostname(mmc));
+		if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) {
+			pr_err("%s: cqhci: CQE failed to exit halt state\n",
+			       mmc_hostname(mmc));
+		}
+		if (cq_host->ops->enable)
+			cq_host->ops->enable(mmc);
+	}
+
+	if (mrq->data) {
+		task_desc = (__le64 __force *)get_desc(cq_host, tag);
+		cqhci_prep_task_desc(mrq, &data, 1);
+		*task_desc = cpu_to_le64(data);
+		err = cqhci_prep_tran_desc(mrq, cq_host, tag);
+		if (err) {
+			pr_err("%s: cqhci: failed to setup tx desc: %d\n",
+			       mmc_hostname(mmc), err);
+			return err;
+		}
+	} else {
+		cqhci_prep_dcmd_desc(mmc, mrq);
+	}
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+
+	if (cq_host->recovery_halt) {
+		err = -EBUSY;
+		goto out_unlock;
+	}
+
+	cq_host->slot[tag].mrq = mrq;
+	cq_host->slot[tag].flags = 0;
+
+	cq_host->qcnt += 1;
+
+	cqhci_writel(cq_host, 1 << tag, CQHCI_TDBR);
+	if (!(cqhci_readl(cq_host, CQHCI_TDBR) & (1 << tag)))
+		pr_debug("%s: cqhci: doorbell not set for tag %d\n",
+			 mmc_hostname(mmc), tag);
+out_unlock:
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	if (err)
+		cqhci_post_req(mmc, mrq);
+
+	return err;
+}
+
+static void cqhci_recovery_needed(struct mmc_host *mmc, struct mmc_request *mrq,
+				  bool notify)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (!cq_host->recovery_halt) {
+		cq_host->recovery_halt = true;
+		pr_debug("%s: cqhci: recovery needed\n", mmc_hostname(mmc));
+		wake_up(&cq_host->wait_queue);
+		if (notify && mmc->cqe_recovery_notifier)
+			mmc->cqe_recovery_notifier(mmc, mrq);
+	}
+}
+
+static unsigned int cqhci_error_flags(int error1, int error2)
+{
+	int error = error1 ? error1 : error2;
+
+	switch (error) {
+	case -EILSEQ:
+		return CQHCI_HOST_CRC;
+	case -ETIMEDOUT:
+		return CQHCI_HOST_TIMEOUT;
+	default:
+		return CQHCI_HOST_OTHER;
+	}
+}
+
+static void cqhci_error_irq(struct mmc_host *mmc, u32 status, int cmd_error,
+			    int data_error)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	struct cqhci_slot *slot;
+	u32 terri;
+	int tag;
+
+	spin_lock(&cq_host->lock);
+
+	terri = cqhci_readl(cq_host, CQHCI_TERRI);
+
+	pr_debug("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
+		 mmc_hostname(mmc), status, cmd_error, data_error, terri);
+
+	/* Forget about errors when recovery has already been triggered */
+	if (cq_host->recovery_halt)
+		goto out_unlock;
+
+	if (!cq_host->qcnt) {
+		WARN_ONCE(1, "%s: cqhci: error when idle. IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
+			  mmc_hostname(mmc), status, cmd_error, data_error,
+			  terri);
+		goto out_unlock;
+	}
+
+	if (CQHCI_TERRI_C_VALID(terri)) {
+		tag = CQHCI_TERRI_C_TASK(terri);
+		slot = &cq_host->slot[tag];
+		if (slot->mrq) {
+			slot->flags = cqhci_error_flags(cmd_error, data_error);
+			cqhci_recovery_needed(mmc, slot->mrq, true);
+		}
+	}
+
+	if (CQHCI_TERRI_D_VALID(terri)) {
+		tag = CQHCI_TERRI_D_TASK(terri);
+		slot = &cq_host->slot[tag];
+		if (slot->mrq) {
+			slot->flags = cqhci_error_flags(data_error, cmd_error);
+			cqhci_recovery_needed(mmc, slot->mrq, true);
+		}
+	}
+
+	if (!cq_host->recovery_halt) {
+		/*
+		 * The only way to guarantee forward progress is to mark at
+		 * least one task in error, so if none is indicated, pick one.
+		 */
+		for (tag = 0; tag < NUM_SLOTS; tag++) {
+			slot = &cq_host->slot[tag];
+			if (!slot->mrq)
+				continue;
+			slot->flags = cqhci_error_flags(data_error, cmd_error);
+			cqhci_recovery_needed(mmc, slot->mrq, true);
+			break;
+		}
+	}
+
+out_unlock:
+	spin_unlock(&cq_host->lock);
+}
+
+static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	struct cqhci_slot *slot = &cq_host->slot[tag];
+	struct mmc_request *mrq = slot->mrq;
+	struct mmc_data *data;
+
+	if (!mrq) {
+		WARN_ONCE(1, "%s: cqhci: spurious TCN for tag %d\n",
+			  mmc_hostname(mmc), tag);
+		return;
+	}
+
+	/* No completions allowed during recovery */
+	if (cq_host->recovery_halt) {
+		slot->flags |= CQHCI_COMPLETED;
+		return;
+	}
+
+	slot->mrq = NULL;
+
+	cq_host->qcnt -= 1;
+
+	data = mrq->data;
+	if (data) {
+		if (data->error)
+			data->bytes_xfered = 0;
+		else
+			data->bytes_xfered = data->blksz * data->blocks;
+	}
+
+	mmc_cqe_request_done(mmc, mrq);
+}
+
+irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
+		      int data_error)
+{
+	u32 status;
+	unsigned long tag = 0, comp_status;
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	status = cqhci_readl(cq_host, CQHCI_IS);
+	cqhci_writel(cq_host, status, CQHCI_IS);
+
+	pr_debug("%s: cqhci: IRQ status: 0x%08x\n", mmc_hostname(mmc), status);
+
+	if ((status & CQHCI_IS_RED) || cmd_error || data_error)
+		cqhci_error_irq(mmc, status, cmd_error, data_error);
+
+	if (status & CQHCI_IS_TCC) {
+		/* read TCN and complete the request */
+		comp_status = cqhci_readl(cq_host, CQHCI_TCN);
+		cqhci_writel(cq_host, comp_status, CQHCI_TCN);
+		pr_debug("%s: cqhci: TCN: 0x%08lx\n",
+			 mmc_hostname(mmc), comp_status);
+
+		spin_lock(&cq_host->lock);
+
+		for_each_set_bit(tag, &comp_status, cq_host->num_slots) {
+			/* complete the corresponding mrq */
+			pr_debug("%s: cqhci: completing tag %lu\n",
+				 mmc_hostname(mmc), tag);
+			cqhci_finish_mrq(mmc, tag);
+		}
+
+		if (cq_host->waiting_for_idle && !cq_host->qcnt) {
+			cq_host->waiting_for_idle = false;
+			wake_up(&cq_host->wait_queue);
+		}
+
+		spin_unlock(&cq_host->lock);
+	}
+
+	if (status & CQHCI_IS_TCL)
+		wake_up(&cq_host->wait_queue);
+
+	if (status & CQHCI_IS_HAC)
+		wake_up(&cq_host->wait_queue);
+
+	return IRQ_HANDLED;
+}
+EXPORT_SYMBOL(cqhci_irq);
+
+static bool cqhci_is_idle(struct cqhci_host *cq_host, int *ret)
+{
+	unsigned long flags;
+	bool is_idle;
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+	is_idle = !cq_host->qcnt || cq_host->recovery_halt;
+	*ret = cq_host->recovery_halt ? -EBUSY : 0;
+	cq_host->waiting_for_idle = !is_idle;
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	return is_idle;
+}
+
+static int cqhci_wait_for_idle(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	int ret;
+
+	wait_event(cq_host->wait_queue, cqhci_is_idle(cq_host, &ret));
+
+	return ret;
+}
+
+static bool cqhci_timeout(struct mmc_host *mmc, struct mmc_request *mrq,
+			  bool *recovery_needed)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	int tag = cqhci_tag(mrq);
+	struct cqhci_slot *slot = &cq_host->slot[tag];
+	unsigned long flags;
+	bool timed_out;
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+	timed_out = slot->mrq == mrq;
+	if (timed_out) {
+		slot->flags |= CQHCI_EXTERNAL_TIMEOUT;
+		cqhci_recovery_needed(mmc, mrq, false);
+		*recovery_needed = cq_host->recovery_halt;
+	}
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	if (timed_out) {
+		pr_err("%s: cqhci: timeout for tag %d\n",
+		       mmc_hostname(mmc), tag);
+		cqhci_dumpregs(cq_host);
+	}
+
+	return timed_out;
+}
+
+static bool cqhci_tasks_cleared(struct cqhci_host *cq_host)
+{
+	return !(cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_CLEAR_ALL_TASKS);
+}
+
+static bool cqhci_clear_all_tasks(struct mmc_host *mmc, unsigned int timeout)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	bool ret;
+	u32 ctl;
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_TCL);
+
+	ctl = cqhci_readl(cq_host, CQHCI_CTL);
+	ctl |= CQHCI_CLEAR_ALL_TASKS;
+	cqhci_writel(cq_host, ctl, CQHCI_CTL);
+
+	wait_event_timeout(cq_host->wait_queue, cqhci_tasks_cleared(cq_host),
+			   msecs_to_jiffies(timeout) + 1);
+
+	cqhci_set_irqs(cq_host, 0);
+
+	ret = cqhci_tasks_cleared(cq_host);
+
+	if (!ret)
+		pr_debug("%s: cqhci: Failed to clear tasks\n",
+			 mmc_hostname(mmc));
+
+	return ret;
+}
+
+static bool cqhci_halted(struct cqhci_host *cq_host)
+{
+	return cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT;
+}
+
+static bool cqhci_halt(struct mmc_host *mmc, unsigned int timeout)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	bool ret;
+	u32 ctl;
+
+	if (cqhci_halted(cq_host))
+		return true;
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_HAC);
+
+	ctl = cqhci_readl(cq_host, CQHCI_CTL);
+	ctl |= CQHCI_HALT;
+	cqhci_writel(cq_host, ctl, CQHCI_CTL);
+
+	wait_event_timeout(cq_host->wait_queue, cqhci_halted(cq_host),
+			   msecs_to_jiffies(timeout) + 1);
+
+	cqhci_set_irqs(cq_host, 0);
+
+	ret = cqhci_halted(cq_host);
+
+	if (!ret)
+		pr_debug("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
+
+	return ret;
+}
+
+/*
+ * After halting we expect to be able to use the command line. We interpret the
+ * failure to halt to mean the data lines might still be in use (and the upper
+ * layers will need to send a STOP command), so we set the timeout based on a
+ * generous command timeout.
+ */
+#define CQHCI_START_HALT_TIMEOUT	5
+
+static void cqhci_recovery_start(struct mmc_host *mmc)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	pr_debug("%s: cqhci: %s\n", mmc_hostname(mmc), __func__);
+
+	WARN_ON(!cq_host->recovery_halt);
+
+	cqhci_halt(mmc, CQHCI_START_HALT_TIMEOUT);
+
+	if (cq_host->ops->disable)
+		cq_host->ops->disable(mmc, true);
+
+	mmc->cqe_on = false;
+}
+
+static int cqhci_error_from_flags(unsigned int flags)
+{
+	if (!flags)
+		return 0;
+
+	/* CRC errors might indicate re-tuning so prefer to report that */
+	if (flags & CQHCI_HOST_CRC)
+		return -EILSEQ;
+
+	if (flags & (CQHCI_EXTERNAL_TIMEOUT | CQHCI_HOST_TIMEOUT))
+		return -ETIMEDOUT;
+
+	return -EIO;
+}
+
+static void cqhci_recover_mrq(struct cqhci_host *cq_host, unsigned int tag,
+			      bool forget_reqs)
+{
+	struct cqhci_slot *slot = &cq_host->slot[tag];
+	struct mmc_request *mrq = slot->mrq;
+	struct mmc_data *data;
+
+	if (!mrq)
+		return;
+
+	slot->mrq = NULL;
+
+	cq_host->qcnt -= 1;
+
+	data = mrq->data;
+	if (data) {
+		data->bytes_xfered = 0;
+		data->error = cqhci_error_from_flags(slot->flags);
+	} else {
+		mrq->cmd->error = cqhci_error_from_flags(slot->flags);
+	}
+
+	if (!forget_reqs)
+		mmc_cqe_request_done(cq_host->mmc, mrq);
+}
+
+static void cqhci_recover_mrqs(struct cqhci_host *cq_host, bool forget_reqs)
+{
+	int i;
+
+	for (i = 0; i < cq_host->num_slots; i++)
+		cqhci_recover_mrq(cq_host, i, forget_reqs);
+}
+
+/*
+ * By now the command and data lines should be unused so there is no reason for
+ * CQHCI to take a long time to halt, but if it doesn't halt there could be
+ * problems clearing tasks, so be generous.
+ */
+#define CQHCI_FINISH_HALT_TIMEOUT	20
+
+/* CQHCI could be expected to clear its internal state pretty quickly */
+#define CQHCI_CLEAR_TIMEOUT		20
+
+static void cqhci_recovery_finish(struct mmc_host *mmc, bool forget_reqs)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+	unsigned long flags;
+	u32 cqcfg;
+	bool ok;
+
+	pr_debug("%s: cqhci: %s\n", mmc_hostname(mmc), __func__);
+
+	WARN_ON(!cq_host->recovery_halt);
+
+	ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
+
+	if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
+		ok = false;
+
+	/*
+	 * The specification contradicts itself: it says tasks cannot be
+	 * cleared if CQHCI does not halt, but also that a CQHCI that does
+	 * not halt should be disabled and re-enabled, while disabling
+	 * before clearing tasks is not allowed. Have a go anyway.
+	 */
+	if (!ok) {
+		pr_debug("%s: cqhci: disable / re-enable\n", mmc_hostname(mmc));
+		cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
+		cqcfg &= ~CQHCI_ENABLE;
+		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+		cqcfg |= CQHCI_ENABLE;
+		cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+		/* Be sure that there are no tasks */
+		ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
+		if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
+			ok = false;
+		WARN_ON(!ok);
+	}
+
+	cqhci_recover_mrqs(cq_host, forget_reqs);
+
+	WARN_ON(cq_host->qcnt);
+
+	spin_lock_irqsave(&cq_host->lock, flags);
+	cq_host->qcnt = 0;
+	cq_host->recovery_halt = false;
+	mmc->cqe_on = false;
+	spin_unlock_irqrestore(&cq_host->lock, flags);
+
+	/* Ensure all writes are done before interrupts are re-enabled */
+	wmb();
+
+	cqhci_writel(cq_host, CQHCI_IS_HAC | CQHCI_IS_TCL, CQHCI_IS);
+
+	cqhci_set_irqs(cq_host, CQHCI_IS_MASK);
+
+	pr_debug("%s: cqhci: recovery done\n", mmc_hostname(mmc));
+}
+
+static const struct mmc_cqe_ops cqhci_cqe_ops = {
+	.cqe_enable = cqhci_enable,
+	.cqe_disable = cqhci_disable,
+	.cqe_request = cqhci_request,
+	.cqe_post_req = cqhci_post_req,
+	.cqe_off = cqhci_off,
+	.cqe_wait_for_idle = cqhci_wait_for_idle,
+	.cqe_timeout = cqhci_timeout,
+	.cqe_recovery_start = cqhci_recovery_start,
+	.cqe_recovery_finish = cqhci_recovery_finish,
+};
+
+struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev)
+{
+	struct cqhci_host *cq_host;
+	struct resource *cqhci_memres = NULL;
+
+	/* check and set up the CMDQ interface */
+	cqhci_memres = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						   "cqhci_mem");
+	if (!cqhci_memres) {
+		dev_dbg(&pdev->dev, "CMDQ not supported\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	cq_host = devm_kzalloc(&pdev->dev, sizeof(*cq_host), GFP_KERNEL);
+	if (!cq_host)
+		return ERR_PTR(-ENOMEM);
+	cq_host->mmio = devm_ioremap(&pdev->dev,
+				     cqhci_memres->start,
+				     resource_size(cqhci_memres));
+	if (!cq_host->mmio) {
+		dev_err(&pdev->dev, "failed to remap cqhci regs\n");
+		return ERR_PTR(-EBUSY);
+	}
+	dev_dbg(&pdev->dev, "CMDQ ioremap: done\n");
+
+	return cq_host;
+}
+EXPORT_SYMBOL(cqhci_pltfm_init);
+
+static unsigned int cqhci_ver_major(struct cqhci_host *cq_host)
+{
+	return CQHCI_VER_MAJOR(cqhci_readl(cq_host, CQHCI_VER));
+}
+
+static unsigned int cqhci_ver_minor(struct cqhci_host *cq_host)
+{
+	u32 ver = cqhci_readl(cq_host, CQHCI_VER);
+
+	return CQHCI_VER_MINOR1(ver) * 10 + CQHCI_VER_MINOR2(ver);
+}
+
+int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc,
+	      bool dma64)
+{
+	int err;
+
+	cq_host->dma64 = dma64;
+	cq_host->mmc = mmc;
+	cq_host->mmc->cqe_private = cq_host;
+
+	cq_host->num_slots = NUM_SLOTS;
+	cq_host->dcmd_slot = DCMD_SLOT;
+
+	mmc->cqe_ops = &cqhci_cqe_ops;
+
+	mmc->cqe_qdepth = NUM_SLOTS;
+	if (mmc->caps2 & MMC_CAP2_CQE_DCMD)
+		mmc->cqe_qdepth -= 1;
+
+	cq_host->slot = devm_kcalloc(mmc_dev(mmc), cq_host->num_slots,
+				     sizeof(*cq_host->slot), GFP_KERNEL);
+	if (!cq_host->slot) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+
+	spin_lock_init(&cq_host->lock);
+
+	init_completion(&cq_host->halt_comp);
+	init_waitqueue_head(&cq_host->wait_queue);
+
+	pr_info("%s: CQHCI version %u.%02u\n",
+		mmc_hostname(mmc), cqhci_ver_major(cq_host),
+		cqhci_ver_minor(cq_host));
+
+	return 0;
+
+out_err:
+	pr_err("%s: CQHCI version %u.%02u failed to initialize, error %d\n",
+	       mmc_hostname(mmc), cqhci_ver_major(cq_host),
+	       cqhci_ver_minor(cq_host), err);
+	return err;
+}
+EXPORT_SYMBOL(cqhci_init);
+
+MODULE_AUTHOR("Venkat Gopalakrishnan <venkatg@codeaurora.org>");
+MODULE_DESCRIPTION("Command Queue Host Controller Interface driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
new file mode 100644
index 000000000000..1c6ce6b0ca7d
--- /dev/null
+++ b/drivers/mmc/host/cqhci.h
@@ -0,0 +1,228 @@
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#ifndef LINUX_MMC_CQHCI_H
+#define LINUX_MMC_CQHCI_H
+#include <linux/mmc/core.h>
+
+/* registers */
+/* version */
+#define CQHCI_VER			0x00
+#define CQHCI_VER_MAJOR(x)		(((x) & GENMASK(11, 8)) >> 8)
+#define CQHCI_VER_MINOR1(x)		(((x) & GENMASK(7, 4)) >> 4)
+#define CQHCI_VER_MINOR2(x)		((x) & GENMASK(3, 0))
+
+/* capabilities */
+#define CQHCI_CAP			0x04
+/* configuration */
+#define CQHCI_CFG			0x08
+#define CQHCI_DCMD			0x00001000
+#define CQHCI_TASK_DESC_SZ		0x00000100
+#define CQHCI_ENABLE			0x00000001
+
+/* control */
+#define CQHCI_CTL			0x0C
+#define CQHCI_CLEAR_ALL_TASKS		0x00000100
+#define CQHCI_HALT			0x00000001
+
+/* interrupt status */
+#define CQHCI_IS			0x10
+#define CQHCI_IS_HAC			BIT(0)
+#define CQHCI_IS_TCC			BIT(1)
+#define CQHCI_IS_RED			BIT(2)
+#define CQHCI_IS_TCL			BIT(3)
+
+#define CQHCI_IS_MASK (CQHCI_IS_TCC | CQHCI_IS_RED)
+
+/* interrupt status enable */
+#define CQHCI_ISTE			0x14
+
+/* interrupt signal enable */
+#define CQHCI_ISGE			0x18
+
+/* interrupt coalescing */
+#define CQHCI_IC			0x1C
+#define CQHCI_IC_ENABLE			BIT(31)
+#define CQHCI_IC_RESET			BIT(16)
+#define CQHCI_IC_ICCTHWEN		BIT(15)
+#define CQHCI_IC_ICCTH(x)		((x & 0x1F) << 8)
+#define CQHCI_IC_ICTOVALWEN		BIT(7)
+#define CQHCI_IC_ICTOVAL(x)		(x & 0x7F)
+
+/* task list base address */
+#define CQHCI_TDLBA			0x20
+
+/* task list base address upper */
+#define CQHCI_TDLBAU			0x24
+
+/* door-bell */
+#define CQHCI_TDBR			0x28
+
+/* task completion notification */
+#define CQHCI_TCN			0x2C
+
+/* device queue status */
+#define CQHCI_DQS			0x30
+
+/* device pending tasks */
+#define CQHCI_DPT			0x34
+
+/* task clear */
+#define CQHCI_TCLR			0x38
+
+/* send status config 1 */
+#define CQHCI_SSC1			0x40
+
+/* send status config 2 */
+#define CQHCI_SSC2			0x44
+
+/* response for dcmd */
+#define CQHCI_CRDCT			0x48
+
+/* response mode error mask */
+#define CQHCI_RMEM			0x50
+
+/* task error info */
+#define CQHCI_TERRI			0x54
+
+#define CQHCI_TERRI_C_INDEX(x)		((x) & GENMASK(5, 0))
+#define CQHCI_TERRI_C_TASK(x)		(((x) & GENMASK(12, 8)) >> 8)
+#define CQHCI_TERRI_C_VALID(x)		((x) & BIT(15))
+#define CQHCI_TERRI_D_INDEX(x)		(((x) & GENMASK(21, 16)) >> 16)
+#define CQHCI_TERRI_D_TASK(x)		(((x) & GENMASK(28, 24)) >> 24)
+#define CQHCI_TERRI_D_VALID(x)		((x) & BIT(31))
+
+/* command response index */
+#define CQHCI_CRI			0x58
+
+/* command response argument */
+#define CQHCI_CRA			0x5C
+
+#define CQHCI_INT_ALL			0xF
+#define CQHCI_IC_DEFAULT_ICCTH		31
+#define CQHCI_IC_DEFAULT_ICTOVAL	1
+
+/* attribute fields */
+#define CQHCI_VALID(x)			(((x) & 1) << 0)
+#define CQHCI_END(x)			(((x) & 1) << 1)
+#define CQHCI_INT(x)			(((x) & 1) << 2)
+#define CQHCI_ACT(x)			(((x) & 0x7) << 3)
+
+/* data command task descriptor fields */
+#define CQHCI_FORCED_PROG(x)		(((x) & 1) << 6)
+#define CQHCI_CONTEXT(x)		(((x) & 0xF) << 7)
+#define CQHCI_DATA_TAG(x)		(((x) & 1) << 11)
+#define CQHCI_DATA_DIR(x)		(((x) & 1) << 12)
+#define CQHCI_PRIORITY(x)		(((x) & 1) << 13)
+#define CQHCI_QBAR(x)			(((x) & 1) << 14)
+#define CQHCI_REL_WRITE(x)		(((x) & 1) << 15)
+#define CQHCI_BLK_COUNT(x)		(((x) & 0xFFFF) << 16)
+#define CQHCI_BLK_ADDR(x)		(((u64)(x) & 0xFFFFFFFF) << 32)
+
+/* direct command task descriptor fields */
+#define CQHCI_CMD_INDEX(x)		(((x) & 0x3F) << 16)
+#define CQHCI_CMD_TIMING(x)		(((x) & 1) << 22)
+#define CQHCI_RESP_TYPE(x)		(((x) & 0x3) << 23)
+
+/* transfer descriptor fields */
+#define CQHCI_DAT_LENGTH(x)		(((x) & 0xFFFF) << 16)
+#define CQHCI_DAT_ADDR_LO(x)		(((u64)(x) & 0xFFFFFFFF) << 32)
+#define CQHCI_DAT_ADDR_HI(x)		(((x) & 0xFFFFFFFF) << 0)
+
+struct cqhci_slot;
+
+struct cqhci_host {
+	const struct cqhci_host_ops *ops;
+	void __iomem *mmio;
+	struct mmc_host *mmc;
+
+	spinlock_t lock;
+
+	/* relative card address of device */
+	unsigned int rca;
+
+	/* 64 bit DMA */
+	bool dma64;
+	int num_slots;
+	int qcnt;
+
+	u32 dcmd_slot;
+	u32 caps;
+#define CQHCI_TASK_DESC_SZ_128		0x1
+
+	u32 quirks;
+#define CQHCI_QUIRK_SHORT_TXFR_DESC_SZ	0x1
+
+	bool enabled;
+	bool halted;
+	bool init_done;
+	bool activated;
+	bool waiting_for_idle;
+	bool recovery_halt;
+
+	size_t desc_size;
+	size_t data_size;
+
+	u8 *desc_base;
+
+	/* total descriptor size */
+	u8 slot_sz;
+
+	/* 64/128 bit depends on CQHCI_CFG */
+	u8 task_desc_len;
+
+	/* 64 bit on 32-bit arch, 128 bit on 64-bit */
+	u8 link_desc_len;
+
+	u8 *trans_desc_base;
+	/* same length as transfer descriptor */
+	u8 trans_desc_len;
+
+	dma_addr_t desc_dma_base;
+	dma_addr_t trans_desc_dma_base;
+
+	struct completion halt_comp;
+	wait_queue_head_t wait_queue;
+	struct cqhci_slot *slot;
+};
+
+struct cqhci_host_ops {
+	void (*dumpregs)(struct mmc_host *mmc);
+	void (*write_l)(struct cqhci_host *host, u32 val, int reg);
+	u32 (*read_l)(struct cqhci_host *host, int reg);
+	void (*enable)(struct mmc_host *mmc);
+	void (*disable)(struct mmc_host *mmc, bool recovery);
+};
+
+static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
+{
+	if (unlikely(host->ops->write_l))
+		host->ops->write_l(host, val, reg);
+	else
+		writel_relaxed(val, host->mmio + reg);
+}
+
+static inline u32 cqhci_readl(struct cqhci_host *host, int reg)
+{
+	if (unlikely(host->ops->read_l))
+		return host->ops->read_l(host, reg);
+	else
+		return readl_relaxed(host->mmio + reg);
+}
+
+irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
+		      int data_error);
+int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc, bool dma64);
+struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev);
+int cqhci_suspend(struct mmc_host *mmc);
+int cqhci_resume(struct mmc_host *mmc);
+
+#endif
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread
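
For orientation, the attribute and task descriptor fields above are OR'd
together into the 64-bit task descriptors (128-bit when
CQHCI_TASK_DESC_SZ is set) written to the task list at CQHCI_TDLBA before
ringing the doorbell (CQHCI_TDBR). A minimal sketch, not part of the
series, of composing a read task; "nr_blocks" and "blk_addr" are
stand-ins, and 0x5 is the action code used for data transfer tasks:

	u64 task_desc = CQHCI_VALID(1) | CQHCI_END(1) | CQHCI_INT(1) |
			CQHCI_ACT(0x5) |		/* data transfer task */
			CQHCI_DATA_DIR(1) |		/* 1 = read, 0 = write */
			CQHCI_BLK_COUNT(nr_blocks) |
			CQHCI_BLK_ADDR(blk_addr);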

* [PATCH RFC 25/39] mmc: sdhci: Improve debug print format
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (23 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 24/39] mmc: cqhci: support for command queue enabled host Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-17 12:30   ` Ulf Hansson
  2017-02-10 12:55 ` [PATCH RFC 26/39] mmc: sdhci: Add response register to register dump Adrian Hunter
                   ` (13 subsequent siblings)
  38 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Ensure all debug prints start with the mmc host name.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 6fdd7a70f229..8aa8541a349a 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -37,7 +37,7 @@
 #define DRIVER_NAME "sdhci"
 
 #define DBG(f, x...) \
-	pr_debug(DRIVER_NAME " [%s()]: " f, __func__,## x)
+	pr_debug("%s: " DRIVER_NAME ": " f, mmc_hostname(host->mmc), ## x)
 
 #define MAX_TUNING_LOOP 40
 
@@ -715,8 +715,8 @@ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
 	}
 
 	if (count >= 0xF) {
-		DBG("%s: Too large timeout 0x%x requested for CMD%d!\n",
-		    mmc_hostname(host->mmc), count, cmd->opcode);
+		DBG("Too large timeout 0x%x requested for CMD%d!\n",
+		    count, cmd->opcode);
 		count = 0xE;
 	}
 
@@ -2509,7 +2509,6 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask)
 #ifdef CONFIG_MMC_DEBUG
 static void sdhci_adma_show_error(struct sdhci_host *host)
 {
-	const char *name = mmc_hostname(host->mmc);
 	void *desc = host->adma_table;
 
 	sdhci_dumpregs(host);
@@ -2518,14 +2517,14 @@ static void sdhci_adma_show_error(struct sdhci_host *host)
 		struct sdhci_adma2_64_desc *dma_desc = desc;
 
 		if (host->flags & SDHCI_USE_64_BIT_DMA)
-			DBG("%s: %p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
-			    name, desc, le32_to_cpu(dma_desc->addr_hi),
+			DBG("%p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
+			    desc, le32_to_cpu(dma_desc->addr_hi),
 			    le32_to_cpu(dma_desc->addr_lo),
 			    le16_to_cpu(dma_desc->len),
 			    le16_to_cpu(dma_desc->cmd));
 		else
-			DBG("%s: %p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
-			    name, desc, le32_to_cpu(dma_desc->addr_lo),
+			DBG("%p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
+			    desc, le32_to_cpu(dma_desc->addr_lo),
 			    le16_to_cpu(dma_desc->len),
 			    le16_to_cpu(dma_desc->cmd));
 
@@ -2641,10 +2640,8 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
 				~(SDHCI_DEFAULT_BOUNDARY_SIZE - 1)) +
 				SDHCI_DEFAULT_BOUNDARY_SIZE;
 			host->data->bytes_xfered = dmanow - dmastart;
-			DBG("%s: DMA base 0x%08x, transferred 0x%06x bytes,"
-				" next 0x%08x\n",
-				mmc_hostname(host->mmc), dmastart,
-				host->data->bytes_xfered, dmanow);
+			DBG("DMA base 0x%08x, transferred 0x%06x bytes, next 0x%08x\n",
+			    dmastart, host->data->bytes_xfered, dmanow);
 			sdhci_writel(host, dmanow, SDHCI_DMA_ADDRESS);
 		}
 
@@ -2689,8 +2686,7 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
 				  SDHCI_INT_BUS_POWER);
 		sdhci_writel(host, mask, SDHCI_INT_STATUS);
 
-		DBG("*** %s got interrupt: 0x%08x\n",
-			mmc_hostname(host->mmc), intmask);
+		DBG("IRQ status 0x%08x\n", intmask);
 
 		if (intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE)) {
 			u32 present = sdhci_readl(host, SDHCI_PRESENT_STATE) &
@@ -3324,9 +3320,9 @@ int sdhci_setup_host(struct sdhci_host *host)
 	     !(host->flags & SDHCI_USE_SDMA)) &&
 	     !(host->quirks2 & SDHCI_QUIRK2_ACMD23_BROKEN)) {
 		host->flags |= SDHCI_AUTO_CMD23;
-		DBG("%s: Auto-CMD23 available\n", mmc_hostname(mmc));
+		DBG("Auto-CMD23 available\n");
 	} else {
-		DBG("%s: Auto-CMD23 unavailable\n", mmc_hostname(mmc));
+		DBG("Auto-CMD23 unavailable\n");
 	}
 
 	/*
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread
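
To see the effect at a call site (this one is touched by the patch
itself), the macro now supplies the host name, so:

	DBG("Auto-CMD23 available\n");

prints, via pr_debug() when enabled, something like
"mmc1: sdhci: Auto-CMD23 available". Note the macro assumes a variable
named "host" is in scope at the call site.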

* [PATCH RFC 26/39] mmc: sdhci: Add response register to register dump
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (24 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 25/39] mmc: sdhci: Improve debug print format Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 27/39] mmc: sdhci: Improve register dump print format Adrian Hunter
                   ` (12 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add the response registers to the register dump.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 8aa8541a349a..ee7e8c0519e6 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -86,6 +86,13 @@ static void sdhci_dumpregs(struct sdhci_host *host)
 	pr_err(DRIVER_NAME ": Cmd:      0x%08x | Max curr: 0x%08x\n",
 	       sdhci_readw(host, SDHCI_COMMAND),
 	       sdhci_readl(host, SDHCI_MAX_CURRENT));
+	pr_err(DRIVER_NAME ": Resp[0]:  0x%08x | Resp[1]:  0x%08x\n",
+		   sdhci_readl(host, SDHCI_RESPONSE),
+		   sdhci_readl(host, SDHCI_RESPONSE + 4));
+	pr_err(DRIVER_NAME ": Resp[2]:  0x%08x | Resp[3]:  0x%08x\n",
+		   sdhci_readl(host, SDHCI_RESPONSE + 8),
+		   sdhci_readl(host, SDHCI_RESPONSE + 12));
+
 	pr_err(DRIVER_NAME ": Host ctl2: 0x%08x\n",
 	       sdhci_readw(host, SDHCI_HOST_CONTROL2));
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 27/39] mmc: sdhci: Improve register dump print format
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (25 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 26/39] mmc: sdhci: Add response register to register dump Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 28/39] mmc: sdhci: Export sdhci_dumpregs Adrian Hunter
                   ` (11 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Ensure all prints start with the mmc host name, and that the text lines up.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 104 ++++++++++++++++++++++++-----------------------
 1 file changed, 53 insertions(+), 51 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index ee7e8c0519e6..91da20acd068 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -39,6 +39,9 @@
 #define DBG(f, x...) \
 	pr_debug("%s: " DRIVER_NAME ": " f, mmc_hostname(host->mmc), ## x)
 
+#define SDHCI_DUMP(f, x...) \
+	pr_err("%s: " DRIVER_NAME ": " f, mmc_hostname(host->mmc), ## x)
+
 #define MAX_TUNING_LOOP 40
 
 static unsigned int debug_quirks = 0;
@@ -50,65 +53,64 @@
 
 static void sdhci_dumpregs(struct sdhci_host *host)
 {
-	pr_err(DRIVER_NAME ": =========== REGISTER DUMP (%s)===========\n",
-	       mmc_hostname(host->mmc));
-
-	pr_err(DRIVER_NAME ": Sys addr: 0x%08x | Version:  0x%08x\n",
-	       sdhci_readl(host, SDHCI_DMA_ADDRESS),
-	       sdhci_readw(host, SDHCI_HOST_VERSION));
-	pr_err(DRIVER_NAME ": Blk size: 0x%08x | Blk cnt:  0x%08x\n",
-	       sdhci_readw(host, SDHCI_BLOCK_SIZE),
-	       sdhci_readw(host, SDHCI_BLOCK_COUNT));
-	pr_err(DRIVER_NAME ": Argument: 0x%08x | Trn mode: 0x%08x\n",
-	       sdhci_readl(host, SDHCI_ARGUMENT),
-	       sdhci_readw(host, SDHCI_TRANSFER_MODE));
-	pr_err(DRIVER_NAME ": Present:  0x%08x | Host ctl: 0x%08x\n",
-	       sdhci_readl(host, SDHCI_PRESENT_STATE),
-	       sdhci_readb(host, SDHCI_HOST_CONTROL));
-	pr_err(DRIVER_NAME ": Power:    0x%08x | Blk gap:  0x%08x\n",
-	       sdhci_readb(host, SDHCI_POWER_CONTROL),
-	       sdhci_readb(host, SDHCI_BLOCK_GAP_CONTROL));
-	pr_err(DRIVER_NAME ": Wake-up:  0x%08x | Clock:    0x%08x\n",
-	       sdhci_readb(host, SDHCI_WAKE_UP_CONTROL),
-	       sdhci_readw(host, SDHCI_CLOCK_CONTROL));
-	pr_err(DRIVER_NAME ": Timeout:  0x%08x | Int stat: 0x%08x\n",
-	       sdhci_readb(host, SDHCI_TIMEOUT_CONTROL),
-	       sdhci_readl(host, SDHCI_INT_STATUS));
-	pr_err(DRIVER_NAME ": Int enab: 0x%08x | Sig enab: 0x%08x\n",
-	       sdhci_readl(host, SDHCI_INT_ENABLE),
-	       sdhci_readl(host, SDHCI_SIGNAL_ENABLE));
-	pr_err(DRIVER_NAME ": AC12 err: 0x%08x | Slot int: 0x%08x\n",
-	       sdhci_readw(host, SDHCI_ACMD12_ERR),
-	       sdhci_readw(host, SDHCI_SLOT_INT_STATUS));
-	pr_err(DRIVER_NAME ": Caps:     0x%08x | Caps_1:   0x%08x\n",
-	       sdhci_readl(host, SDHCI_CAPABILITIES),
-	       sdhci_readl(host, SDHCI_CAPABILITIES_1));
-	pr_err(DRIVER_NAME ": Cmd:      0x%08x | Max curr: 0x%08x\n",
-	       sdhci_readw(host, SDHCI_COMMAND),
-	       sdhci_readl(host, SDHCI_MAX_CURRENT));
-	pr_err(DRIVER_NAME ": Resp[0]:  0x%08x | Resp[1]:  0x%08x\n",
+	SDHCI_DUMP("============ SDHCI REGISTER DUMP ===========\n");
+
+	SDHCI_DUMP("Sys addr:  0x%08x | Version:  0x%08x\n",
+		   sdhci_readl(host, SDHCI_DMA_ADDRESS),
+		   sdhci_readw(host, SDHCI_HOST_VERSION));
+	SDHCI_DUMP("Blk size:  0x%08x | Blk cnt:  0x%08x\n",
+		   sdhci_readw(host, SDHCI_BLOCK_SIZE),
+		   sdhci_readw(host, SDHCI_BLOCK_COUNT));
+	SDHCI_DUMP("Argument:  0x%08x | Trn mode: 0x%08x\n",
+		   sdhci_readl(host, SDHCI_ARGUMENT),
+		   sdhci_readw(host, SDHCI_TRANSFER_MODE));
+	SDHCI_DUMP("Present:   0x%08x | Host ctl: 0x%08x\n",
+		   sdhci_readl(host, SDHCI_PRESENT_STATE),
+		   sdhci_readb(host, SDHCI_HOST_CONTROL));
+	SDHCI_DUMP("Power:     0x%08x | Blk gap:  0x%08x\n",
+		   sdhci_readb(host, SDHCI_POWER_CONTROL),
+		   sdhci_readb(host, SDHCI_BLOCK_GAP_CONTROL));
+	SDHCI_DUMP("Wake-up:   0x%08x | Clock:    0x%08x\n",
+		   sdhci_readb(host, SDHCI_WAKE_UP_CONTROL),
+		   sdhci_readw(host, SDHCI_CLOCK_CONTROL));
+	SDHCI_DUMP("Timeout:   0x%08x | Int stat: 0x%08x\n",
+		   sdhci_readb(host, SDHCI_TIMEOUT_CONTROL),
+		   sdhci_readl(host, SDHCI_INT_STATUS));
+	SDHCI_DUMP("Int enab:  0x%08x | Sig enab: 0x%08x\n",
+		   sdhci_readl(host, SDHCI_INT_ENABLE),
+		   sdhci_readl(host, SDHCI_SIGNAL_ENABLE));
+	SDHCI_DUMP("AC12 err:  0x%08x | Slot int: 0x%08x\n",
+		   sdhci_readw(host, SDHCI_ACMD12_ERR),
+		   sdhci_readw(host, SDHCI_SLOT_INT_STATUS));
+	SDHCI_DUMP("Caps:      0x%08x | Caps_1:   0x%08x\n",
+		   sdhci_readl(host, SDHCI_CAPABILITIES),
+		   sdhci_readl(host, SDHCI_CAPABILITIES_1));
+	SDHCI_DUMP("Cmd:       0x%08x | Max curr: 0x%08x\n",
+		   sdhci_readw(host, SDHCI_COMMAND),
+		   sdhci_readl(host, SDHCI_MAX_CURRENT));
+	SDHCI_DUMP("Resp[0]:   0x%08x | Resp[1]:  0x%08x\n",
 		   sdhci_readl(host, SDHCI_RESPONSE),
 		   sdhci_readl(host, SDHCI_RESPONSE + 4));
-	pr_err(DRIVER_NAME ": Resp[2]:  0x%08x | Resp[3]:  0x%08x\n",
+	SDHCI_DUMP("Resp[2]:   0x%08x | Resp[3]:  0x%08x\n",
 		   sdhci_readl(host, SDHCI_RESPONSE + 8),
 		   sdhci_readl(host, SDHCI_RESPONSE + 12));
-
-	pr_err(DRIVER_NAME ": Host ctl2: 0x%08x\n",
-	       sdhci_readw(host, SDHCI_HOST_CONTROL2));
+	SDHCI_DUMP("Host ctl2: 0x%08x\n",
+		   sdhci_readw(host, SDHCI_HOST_CONTROL2));
 
 	if (host->flags & SDHCI_USE_ADMA) {
-		if (host->flags & SDHCI_USE_64_BIT_DMA)
-			pr_err(DRIVER_NAME ": ADMA Err: 0x%08x | ADMA Ptr: 0x%08x%08x\n",
-			       readl(host->ioaddr + SDHCI_ADMA_ERROR),
-			       readl(host->ioaddr + SDHCI_ADMA_ADDRESS_HI),
-			       readl(host->ioaddr + SDHCI_ADMA_ADDRESS));
-		else
-			pr_err(DRIVER_NAME ": ADMA Err: 0x%08x | ADMA Ptr: 0x%08x\n",
-			       readl(host->ioaddr + SDHCI_ADMA_ERROR),
-			       readl(host->ioaddr + SDHCI_ADMA_ADDRESS));
+		if (host->flags & SDHCI_USE_64_BIT_DMA) {
+			SDHCI_DUMP("ADMA Err:  0x%08x | ADMA Ptr: 0x%08x%08x\n",
+				   readl(host->ioaddr + SDHCI_ADMA_ERROR),
+				   readl(host->ioaddr + SDHCI_ADMA_ADDRESS_HI),
+				   readl(host->ioaddr + SDHCI_ADMA_ADDRESS));
+		} else {
+			SDHCI_DUMP("ADMA Err:  0x%08x | ADMA Ptr: 0x%08x\n",
+				   readl(host->ioaddr + SDHCI_ADMA_ERROR),
+				   readl(host->ioaddr + SDHCI_ADMA_ADDRESS));
+		}
 	}
 
-	pr_err(DRIVER_NAME ": ===========================================\n");
+	SDHCI_DUMP("============================================\n");
 }
 
 /*****************************************************************************\
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 28/39] mmc: sdhci: Export sdhci_dumpregs
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (26 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 27/39] mmc: sdhci: Improve register dump print format Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 29/39] mmc: sdhci: Get rid of 'extern' in header file Adrian Hunter
                   ` (10 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Export sdhci_dumpregs so that it can be called by drivers.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 3 ++-
 drivers/mmc/host/sdhci.h | 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 91da20acd068..5aa96ada1314 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -51,7 +51,7 @@
 
 static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable);
 
-static void sdhci_dumpregs(struct sdhci_host *host)
+void sdhci_dumpregs(struct sdhci_host *host)
 {
 	SDHCI_DUMP("============ SDHCI REGISTER DUMP ===========\n");
 
@@ -112,6 +112,7 @@ static void sdhci_dumpregs(struct sdhci_host *host)
 
 	SDHCI_DUMP("============================================\n");
 }
+EXPORT_SYMBOL_GPL(sdhci_dumpregs);
 
 /*****************************************************************************\
  *                                                                           *
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index edf3adfbc213..61753492a8e8 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -702,4 +702,6 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
 extern int sdhci_runtime_resume_host(struct sdhci_host *host);
 #endif
 
+void sdhci_dumpregs(struct sdhci_host *host);
+
 #endif /* __SDHCI_HW_H */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread
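
A minimal sketch, with a hypothetical function name, of what the export
enables for code outside sdhci.c:

	static void my_error_handler(struct sdhci_host *host)
	{
		pr_err("%s: request failed\n", mmc_hostname(host->mmc));
		sdhci_dumpregs(host);	/* callable now that it is exported */
	}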

* [PATCH RFC 29/39] mmc: sdhci: Get rid of 'extern' in header file
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (27 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 28/39] mmc: sdhci: Export sdhci_dumpregs Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 30/39] mmc: sdhci: Add sdhci_cleanup_host Adrian Hunter
                   ` (9 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Get rid of the unnecessary 'extern' keyword on function declarations in the
header file.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.h | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 61753492a8e8..8735c9325e9e 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -652,24 +652,22 @@ static inline u8 sdhci_readb(struct sdhci_host *host, int reg)
 
 #endif /* CONFIG_MMC_SDHCI_IO_ACCESSORS */
 
-extern struct sdhci_host *sdhci_alloc_host(struct device *dev,
-	size_t priv_size);
-extern void sdhci_free_host(struct sdhci_host *host);
+struct sdhci_host *sdhci_alloc_host(struct device *dev, size_t priv_size);
+void sdhci_free_host(struct sdhci_host *host);
 
 static inline void *sdhci_priv(struct sdhci_host *host)
 {
 	return host->private;
 }
 
-extern void sdhci_card_detect(struct sdhci_host *host);
-extern void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps,
-			      u32 *caps1);
-extern int sdhci_setup_host(struct sdhci_host *host);
-extern int __sdhci_add_host(struct sdhci_host *host);
-extern int sdhci_add_host(struct sdhci_host *host);
-extern void sdhci_remove_host(struct sdhci_host *host, int dead);
-extern void sdhci_send_command(struct sdhci_host *host,
-				struct mmc_command *cmd);
+void sdhci_card_detect(struct sdhci_host *host);
+void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps,
+		       u32 *caps1);
+int sdhci_setup_host(struct sdhci_host *host);
+int __sdhci_add_host(struct sdhci_host *host);
+int sdhci_add_host(struct sdhci_host *host);
+void sdhci_remove_host(struct sdhci_host *host, int dead);
+void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd);
 
 static inline void sdhci_read_caps(struct sdhci_host *host)
 {
@@ -695,11 +693,11 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
 int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode);
 
 #ifdef CONFIG_PM
-extern int sdhci_suspend_host(struct sdhci_host *host);
-extern int sdhci_resume_host(struct sdhci_host *host);
-extern void sdhci_enable_irq_wakeups(struct sdhci_host *host);
-extern int sdhci_runtime_suspend_host(struct sdhci_host *host);
-extern int sdhci_runtime_resume_host(struct sdhci_host *host);
+int sdhci_suspend_host(struct sdhci_host *host);
+int sdhci_resume_host(struct sdhci_host *host);
+void sdhci_enable_irq_wakeups(struct sdhci_host *host);
+int sdhci_runtime_suspend_host(struct sdhci_host *host);
+int sdhci_runtime_resume_host(struct sdhci_host *host);
 #endif
 
 void sdhci_dumpregs(struct sdhci_host *host);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 30/39] mmc: sdhci: Add sdhci_cleanup_host
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (28 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 29/39] mmc: sdhci: Get rid of 'extern' in header file Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 31/39] mmc: sdhci: Factor out sdhci_set_default_irqs Adrian Hunter
                   ` (8 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add sdhci_cleanup_host() to clean up after a failed __sdhci_add_host(), so
that drivers calling sdhci_setup_host() and __sdhci_add_host() separately
can unwind correctly.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 37 ++++++++++++++++++++++++++-----------
 drivers/mmc/host/sdhci.h |  1 +
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 5aa96ada1314..cd8a7eb5431d 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -3596,6 +3596,22 @@ int sdhci_setup_host(struct sdhci_host *host)
 }
 EXPORT_SYMBOL_GPL(sdhci_setup_host);
 
+void sdhci_cleanup_host(struct sdhci_host *host)
+{
+	struct mmc_host *mmc = host->mmc;
+
+	if (!IS_ERR(mmc->supply.vqmmc))
+		regulator_disable(mmc->supply.vqmmc);
+
+	if (host->align_buffer)
+		dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
+				  host->adma_table_sz, host->align_buffer,
+				  host->align_addr);
+	host->adma_table = NULL;
+	host->align_buffer = NULL;
+}
+EXPORT_SYMBOL_GPL(sdhci_cleanup_host);
+
 int __sdhci_add_host(struct sdhci_host *host)
 {
 	struct mmc_host *mmc = host->mmc;
@@ -3660,16 +3676,6 @@ int __sdhci_add_host(struct sdhci_host *host)
 untasklet:
 	tasklet_kill(&host->finish_tasklet);
 
-	if (!IS_ERR(mmc->supply.vqmmc))
-		regulator_disable(mmc->supply.vqmmc);
-
-	if (host->align_buffer)
-		dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
-				  host->adma_table_sz, host->align_buffer,
-				  host->align_addr);
-	host->adma_table = NULL;
-	host->align_buffer = NULL;
-
 	return ret;
 }
 EXPORT_SYMBOL_GPL(__sdhci_add_host);
@@ -3682,7 +3688,16 @@ int sdhci_add_host(struct sdhci_host *host)
 	if (ret)
 		return ret;
 
-	return __sdhci_add_host(host);
+	ret = __sdhci_add_host(host);
+	if (ret)
+		goto cleanup;
+
+	return 0;
+
+cleanup:
+	sdhci_cleanup_host(host);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(sdhci_add_host);
 
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 8735c9325e9e..afa2cc394ae9 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -664,6 +664,7 @@ static inline void *sdhci_priv(struct sdhci_host *host)
 void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps,
 		       u32 *caps1);
 int sdhci_setup_host(struct sdhci_host *host);
+void sdhci_cleanup_host(struct sdhci_host *host);
 int __sdhci_add_host(struct sdhci_host *host);
 int sdhci_add_host(struct sdhci_host *host);
 void sdhci_remove_host(struct sdhci_host *host, int dead);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread
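
The point of the split shows up in a probe sequence that does
driver-specific work between sdhci_setup_host() and __sdhci_add_host().
A minimal sketch, with an illustrative label and placeholder init:

	ret = sdhci_setup_host(host);
	if (ret)
		return ret;

	/* driver-specific initialization that may fail goes here */

	ret = __sdhci_add_host(host);
	if (ret)
		goto cleanup;

	return 0;

cleanup:
	sdhci_cleanup_host(host);	/* undoes sdhci_setup_host() */
	return ret;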

* [PATCH RFC 31/39] mmc: sdhci: Factor out sdhci_set_default_irqs
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (29 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 30/39] mmc: sdhci: Add sdhci_cleanup_host Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 32/39] mmc: sdhci: Add CQE support Adrian Hunter
                   ` (7 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Factor out sdhci_set_default_irqs() from sdhci_init().

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index cd8a7eb5431d..79ab36a37484 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -225,15 +225,8 @@ static void sdhci_do_reset(struct sdhci_host *host, u8 mask)
 	}
 }
 
-static void sdhci_init(struct sdhci_host *host, int soft)
+static void sdhci_set_default_irqs(struct sdhci_host *host)
 {
-	struct mmc_host *mmc = host->mmc;
-
-	if (soft)
-		sdhci_do_reset(host, SDHCI_RESET_CMD|SDHCI_RESET_DATA);
-	else
-		sdhci_do_reset(host, SDHCI_RESET_ALL);
-
 	host->ier = SDHCI_INT_BUS_POWER | SDHCI_INT_DATA_END_BIT |
 		    SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_TIMEOUT |
 		    SDHCI_INT_INDEX | SDHCI_INT_END_BIT | SDHCI_INT_CRC |
@@ -246,6 +239,18 @@ static void sdhci_init(struct sdhci_host *host, int soft)
 
 	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
 	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+}
+
+static void sdhci_init(struct sdhci_host *host, int soft)
+{
+	struct mmc_host *mmc = host->mmc;
+
+	if (soft)
+		sdhci_do_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
+	else
+		sdhci_do_reset(host, SDHCI_RESET_ALL);
+
+	sdhci_set_default_irqs(host);
 
 	if (soft) {
 		/* force clock reconfiguration */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 32/39] mmc: sdhci: Add CQE support
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (30 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 31/39] mmc: sdhci: Factor out sdhci_set_default_irqs Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 33/39] mmc: sdhci-pci: Let devices define how to add the host Adrian Hunter
                   ` (6 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add an interrupt hook and helper functions for enabling, disabling and
delivering interrupts to a CQE.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 130 +++++++++++++++++++++++++++++++++++++++++++++--
 drivers/mmc/host/sdhci.h |  19 +++++++
 2 files changed, 146 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 79ab36a37484..865e0e278aa2 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -252,6 +252,8 @@ static void sdhci_init(struct sdhci_host *host, int soft)
 
 	sdhci_set_default_irqs(host);
 
+	host->cqe_on = false;
+
 	if (soft) {
 		/* force clock reconfiguration */
 		host->clock = 0;
@@ -2696,13 +2698,19 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
 	}
 
 	do {
+		DBG("IRQ status 0x%08x\n", intmask);
+
+		if (host->ops->irq) {
+			intmask = host->ops->irq(host, intmask);
+			if (!intmask)
+				goto cont;
+		}
+
 		/* Clear selected interrupts. */
 		mask = intmask & (SDHCI_INT_CMD_MASK | SDHCI_INT_DATA_MASK |
 				  SDHCI_INT_BUS_POWER);
 		sdhci_writel(host, mask, SDHCI_INT_STATUS);
 
-		DBG("IRQ status 0x%08x\n", intmask);
-
 		if (intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE)) {
 			u32 present = sdhci_readl(host, SDHCI_PRESENT_STATE) &
 				      SDHCI_CARD_PRESENT;
@@ -2762,7 +2770,7 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
 			unexpected |= intmask;
 			sdhci_writel(host, intmask, SDHCI_INT_STATUS);
 		}
-
+cont:
 		if (result == IRQ_NONE)
 			result = IRQ_HANDLED;
 
@@ -2995,6 +3003,119 @@ int sdhci_runtime_resume_host(struct sdhci_host *host)
 
 /*****************************************************************************\
  *                                                                           *
+ * Command Queue Engine (CQE) helpers                                        *
+ *                                                                           *
+\*****************************************************************************/
+
+void sdhci_cqe_enable(struct mmc_host *mmc)
+{
+	struct sdhci_host *host = mmc_priv(mmc);
+	unsigned long flags;
+	u8 ctrl;
+
+	spin_lock_irqsave(&host->lock, flags);
+
+	ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+	ctrl &= ~SDHCI_CTRL_DMA_MASK;
+	if (host->flags & SDHCI_USE_64_BIT_DMA)
+		ctrl |= SDHCI_CTRL_ADMA64;
+	else
+		ctrl |= SDHCI_CTRL_ADMA32;
+	sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
+
+	sdhci_writew(host, SDHCI_MAKE_BLKSZ(SDHCI_DEFAULT_BOUNDARY_ARG, 512),
+		     SDHCI_BLOCK_SIZE);
+
+	/* Set maximum timeout */
+	sdhci_writeb(host, 0xE, SDHCI_TIMEOUT_CONTROL);
+
+	host->ier = host->cqe_ier;
+
+	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+
+	host->cqe_on = true;
+
+	pr_debug("%s: sdhci: CQE on, IRQ mask %#x, IRQ status %#x\n",
+		 mmc_hostname(mmc), host->ier,
+		 sdhci_readl(host, SDHCI_INT_STATUS));
+
+	mmiowb();
+	spin_unlock_irqrestore(&host->lock, flags);
+}
+EXPORT_SYMBOL_GPL(sdhci_cqe_enable);
+
+void sdhci_cqe_disable(struct mmc_host *mmc, bool recovery)
+{
+	struct sdhci_host *host = mmc_priv(mmc);
+	unsigned long flags;
+
+	spin_lock_irqsave(&host->lock, flags);
+
+	sdhci_set_default_irqs(host);
+
+	host->cqe_on = false;
+
+	if (recovery) {
+		sdhci_do_reset(host, SDHCI_RESET_CMD);
+		sdhci_do_reset(host, SDHCI_RESET_DATA);
+	}
+
+	pr_debug("%s: sdhci: CQE off, IRQ mask %#x, IRQ status %#x\n",
+		 mmc_hostname(mmc), host->ier,
+		 sdhci_readl(host, SDHCI_INT_STATUS));
+
+	mmiowb();
+	spin_unlock_irqrestore(&host->lock, flags);
+}
+EXPORT_SYMBOL_GPL(sdhci_cqe_disable);
+
+bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error,
+		   int *data_error)
+{
+	u32 mask;
+
+	if (!host->cqe_on)
+		return false;
+
+	if (intmask & (SDHCI_INT_INDEX | SDHCI_INT_END_BIT | SDHCI_INT_CRC))
+		*cmd_error = -EILSEQ;
+	else if (intmask & SDHCI_INT_TIMEOUT)
+		*cmd_error = -ETIMEDOUT;
+	else
+		*cmd_error = 0;
+
+	if (intmask & (SDHCI_INT_DATA_END_BIT | SDHCI_INT_DATA_CRC))
+		*data_error = -EILSEQ;
+	else if (intmask & SDHCI_INT_DATA_TIMEOUT)
+		*data_error = -ETIMEDOUT;
+	else if (intmask & SDHCI_INT_ADMA_ERROR)
+		*data_error = -EIO;
+	else
+		*data_error = 0;
+
+	/* Clear selected interrupts. */
+	mask = intmask & host->cqe_ier;
+	sdhci_writel(host, mask, SDHCI_INT_STATUS);
+
+	if (intmask & SDHCI_INT_BUS_POWER)
+		pr_err("%s: Card is consuming too much power!\n",
+		       mmc_hostname(host->mmc));
+
+	intmask &= ~(host->cqe_ier | SDHCI_INT_ERROR);
+	if (intmask) {
+		sdhci_writel(host, intmask, SDHCI_INT_STATUS);
+		pr_err("%s: CQE: Unexpected interrupt 0x%08x.\n",
+		       mmc_hostname(host->mmc), intmask);
+		sdhci_dumpregs(host);
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(sdhci_cqe_irq);
+
+/*****************************************************************************\
+ *                                                                           *
  * Device allocation/registration                                            *
  *                                                                           *
 \*****************************************************************************/
@@ -3018,6 +3139,9 @@ struct sdhci_host *sdhci_alloc_host(struct device *dev,
 
 	host->flags = SDHCI_SIGNALING_330;
 
+	host->cqe_ier     = SDHCI_CQE_INT_MASK;
+	host->cqe_err_ier = SDHCI_CQE_INT_ERR_MASK;
+
 	return host;
 }
 
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index afa2cc394ae9..c50df362223f 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -134,6 +134,7 @@
 #define  SDHCI_INT_CARD_REMOVE	0x00000080
 #define  SDHCI_INT_CARD_INT	0x00000100
 #define  SDHCI_INT_RETUNE	0x00001000
+#define  SDHCI_INT_CQE		0x00004000
 #define  SDHCI_INT_ERROR	0x00008000
 #define  SDHCI_INT_TIMEOUT	0x00010000
 #define  SDHCI_INT_CRC		0x00020000
@@ -158,6 +159,13 @@
 		SDHCI_INT_BLK_GAP)
 #define SDHCI_INT_ALL_MASK	((unsigned int)-1)
 
+#define SDHCI_CQE_INT_ERR_MASK ( \
+	SDHCI_INT_ADMA_ERROR | SDHCI_INT_BUS_POWER | SDHCI_INT_DATA_END_BIT | \
+	SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_INDEX | \
+	SDHCI_INT_END_BIT | SDHCI_INT_CRC | SDHCI_INT_TIMEOUT)
+
+#define SDHCI_CQE_INT_MASK (SDHCI_CQE_INT_ERR_MASK | SDHCI_INT_CQE)
+
 #define SDHCI_ACMD12_ERR	0x3C
 
 #define SDHCI_HOST_CONTROL2		0x3E
@@ -518,6 +526,10 @@ struct sdhci_host {
 	/* cached registers */
 	u32			ier;
 
+	bool			cqe_on;		/* CQE is operating */
+	u32			cqe_ier;	/* CQE interrupt mask */
+	u32			cqe_err_ier;	/* CQE error interrupt mask */
+
 	wait_queue_head_t	buf_ready_int;	/* Waitqueue for Buffer Read Ready interrupt */
 	unsigned int		tuning_done;	/* Condition flag set when CMD19 succeeds */
 
@@ -544,6 +556,8 @@ struct sdhci_ops {
 	void	(*set_power)(struct sdhci_host *host, unsigned char mode,
 			     unsigned short vdd);
 
+	u32		(*irq)(struct sdhci_host *host, u32 intmask);
+
 	int		(*enable_dma)(struct sdhci_host *host);
 	unsigned int	(*get_max_clock)(struct sdhci_host *host);
 	unsigned int	(*get_min_clock)(struct sdhci_host *host);
@@ -701,6 +715,11 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
 int sdhci_runtime_resume_host(struct sdhci_host *host);
 #endif
 
+void sdhci_cqe_enable(struct mmc_host *mmc);
+void sdhci_cqe_disable(struct mmc_host *mmc, bool recovery);
+bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error,
+		   int *data_error);
+
 void sdhci_dumpregs(struct sdhci_host *host);
 
 #endif /* __SDHCI_HW_H */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread
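
A rough sketch, with hypothetical names, of how these helpers are meant
to be glued to the CQHCI driver from patch 24 (the real wiring for Intel
GLK arrives in patch 39):

	static u32 my_cqhci_irq(struct sdhci_host *host, u32 intmask)
	{
		int cmd_error = 0;
		int data_error = 0;

		if (!sdhci_cqe_irq(host, intmask, &cmd_error, &data_error))
			return intmask;	/* not CQE: normal sdhci handling */

		cqhci_irq(host->mmc, intmask, cmd_error, data_error);

		return 0;		/* consumed by the CQE */
	}

A driver points the new ->irq sdhci_ops hook at such a function, and
sdhci_cqe_enable() / sdhci_cqe_disable() match the cqhci_host_ops
->enable / ->disable signatures directly.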

* [PATCH RFC 33/39] mmc: sdhci-pci: Let devices define how to add the host
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (31 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 32/39] mmc: sdhci: Add CQE support Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 34/39] mmc: sdhci-pci: Do not use suspend/resume callbacks with runtime pm Adrian Hunter
                   ` (5 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

SDHCI provides more flexibility than simply calling sdhci_add_host(). Make
that available by allowing devices to specify their own ->add_host()
function.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci-core.c | 5 ++++-
 drivers/mmc/host/sdhci-pci.h      | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 982b3e349426..8315522fb2bb 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -1915,7 +1915,10 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
 		}
 	}
 
-	ret = sdhci_add_host(host);
+	if (chip->fixes && chip->fixes->add_host)
+		ret = chip->fixes->add_host(slot);
+	else
+		ret = sdhci_add_host(host);
 	if (ret)
 		goto remove;
 
diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
index 36f743464fcc..fcf0f04cb90d 100644
--- a/drivers/mmc/host/sdhci-pci.h
+++ b/drivers/mmc/host/sdhci-pci.h
@@ -64,6 +64,7 @@ struct sdhci_pci_fixes {
 	int			(*probe) (struct sdhci_pci_chip *);
 
 	int			(*probe_slot) (struct sdhci_pci_slot *);
+	int			(*add_host) (struct sdhci_pci_slot *);
 	void			(*remove_slot) (struct sdhci_pci_slot *, int);
 
 	int			(*suspend) (struct sdhci_pci_chip *);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread
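
A minimal sketch, with hypothetical names, of the shape this enables
(patch 39 uses it to interpose CQE setup for Intel GLK):

	static int my_add_host(struct sdhci_pci_slot *slot)
	{
		/* e.g. sdhci_setup_host(), extra setup, __sdhci_add_host() */
		return sdhci_add_host(slot->host);
	}

	static const struct sdhci_pci_fixes sdhci_my_device = {
		.add_host	= my_add_host,
	};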

* [PATCH RFC 34/39] mmc: sdhci-pci: Do not use suspend/resume callbacks with runtime pm
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (32 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 33/39] mmc: sdhci-pci: Let devices define how to add the host Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 35/39] mmc: sdhci-pci: Conditionally compile pm sleep functions Adrian Hunter
                   ` (4 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Do not use the suspend/resume callbacks with runtime pm. Mixing the two
doesn't make sense, and the facility isn't being used, so remove the calls.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci-core.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 8315522fb2bb..cdf959bb782e 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -1750,12 +1750,6 @@ static int sdhci_pci_runtime_suspend(struct device *dev)
 			goto err_pci_runtime_suspend;
 	}
 
-	if (chip->fixes && chip->fixes->suspend) {
-		ret = chip->fixes->suspend(chip);
-		if (ret)
-			goto err_pci_runtime_suspend;
-	}
-
 	return 0;
 
 err_pci_runtime_suspend:
@@ -1775,12 +1769,6 @@ static int sdhci_pci_runtime_resume(struct device *dev)
 	if (!chip)
 		return 0;
 
-	if (chip->fixes && chip->fixes->resume) {
-		ret = chip->fixes->resume(chip);
-		if (ret)
-			return ret;
-	}
-
 	for (i = 0; i < chip->num_slots; i++) {
 		slot = chip->slots[i];
 		if (!slot)
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 35/39] mmc: sdhci-pci: Conditionally compile pm sleep functions
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (33 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 34/39] mmc: sdhci-pci: Do not use suspend/resume callbacks with runtime pm Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 36/39] mmc: sdhci-pci: Let suspend/resume callbacks replace default callbacks Adrian Hunter
                   ` (3 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

It is confusing to have some parts of suspend / resume under conditional
compilation and some parts not. Use conditional compilation everywhere.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci-core.c    | 10 ++++++++++
 drivers/mmc/host/sdhci-pci-o2micro.c |  2 ++
 drivers/mmc/host/sdhci-pci.h         |  2 ++
 3 files changed, 14 insertions(+)

diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index cdf959bb782e..6ea6a5bce7ac 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -71,6 +71,7 @@ static int ricoh_mmc_probe_slot(struct sdhci_pci_slot *slot)
 	return 0;
 }
 
+#ifdef CONFIG_PM_SLEEP
 static int ricoh_mmc_resume(struct sdhci_pci_chip *chip)
 {
 	/* Apply a delay to allow controller to settle */
@@ -79,6 +80,7 @@ static int ricoh_mmc_resume(struct sdhci_pci_chip *chip)
 	msleep(500);
 	return 0;
 }
+#endif
 
 static const struct sdhci_pci_fixes sdhci_ricoh = {
 	.probe		= ricoh_probe,
@@ -89,7 +91,9 @@ static int ricoh_mmc_resume(struct sdhci_pci_chip *chip)
 
 static const struct sdhci_pci_fixes sdhci_ricoh_mmc = {
 	.probe_slot	= ricoh_mmc_probe_slot,
+#ifdef CONFIG_PM_SLEEP
 	.resume		= ricoh_mmc_resume,
+#endif
 	.quirks		= SDHCI_QUIRK_32BIT_DMA_ADDR |
 			  SDHCI_QUIRK_CLOCK_BEFORE_RESET |
 			  SDHCI_QUIRK_NO_CARD_NO_RESET |
@@ -715,6 +719,7 @@ static void jmicron_remove_slot(struct sdhci_pci_slot *slot, int dead)
 		jmicron_enable_mmc(slot->host, 0);
 }
 
+#ifdef CONFIG_PM_SLEEP
 static int jmicron_suspend(struct sdhci_pci_chip *chip)
 {
 	int i;
@@ -746,13 +751,16 @@ static int jmicron_resume(struct sdhci_pci_chip *chip)
 
 	return 0;
 }
+#endif
 
 static const struct sdhci_pci_fixes sdhci_o2 = {
 	.probe = sdhci_pci_o2_probe,
 	.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
 	.quirks2 = SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD,
 	.probe_slot = sdhci_pci_o2_probe_slot,
+#ifdef CONFIG_PM_SLEEP
 	.resume = sdhci_pci_o2_resume,
+#endif
 };
 
 static const struct sdhci_pci_fixes sdhci_jmicron = {
@@ -761,8 +769,10 @@ static int jmicron_resume(struct sdhci_pci_chip *chip)
 	.probe_slot	= jmicron_probe_slot,
 	.remove_slot	= jmicron_remove_slot,
 
+#ifdef CONFIG_PM_SLEEP
 	.suspend	= jmicron_suspend,
 	.resume		= jmicron_resume,
+#endif
 };
 
 /* SysKonnect CardBus2SDIO extra registers */
diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
index d48f03104b5b..0ea34e2c7cba 100644
--- a/drivers/mmc/host/sdhci-pci-o2micro.c
+++ b/drivers/mmc/host/sdhci-pci-o2micro.c
@@ -384,8 +384,10 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
 	return 0;
 }
 
+#ifdef CONFIG_PM_SLEEP
 int sdhci_pci_o2_resume(struct sdhci_pci_chip *chip)
 {
 	sdhci_pci_o2_probe(chip);
 	return 0;
 }
+#endif
diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
index fcf0f04cb90d..47869bc20c59 100644
--- a/drivers/mmc/host/sdhci-pci.h
+++ b/drivers/mmc/host/sdhci-pci.h
@@ -67,8 +67,10 @@ struct sdhci_pci_fixes {
 	int			(*add_host) (struct sdhci_pci_slot *);
 	void			(*remove_slot) (struct sdhci_pci_slot *, int);
 
+#ifdef CONFIG_PM_SLEEP
 	int			(*suspend) (struct sdhci_pci_chip *);
 	int			(*resume) (struct sdhci_pci_chip *);
+#endif
 
 	const struct sdhci_ops	*ops;
 };
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 36/39] mmc: sdhci-pci: Let suspend/resume callbacks replace default callbacks
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (34 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 35/39] mmc: sdhci-pci: Conditionally compile pm sleep functions Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 37/39] mmc: sdhci-pci: Add runtime suspend/resume callbacks Adrian Hunter
                   ` (2 subsequent siblings)
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

The suspend / resume callbacks lack the flexibility to let a device
specify a different function entirely. Change them around so that the
device-specific functions are called directly, and they in turn can call
the default implementations if needed.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci-core.c    | 158 +++++++++++++++++++++--------------
 drivers/mmc/host/sdhci-pci-o2micro.c |   2 +-
 drivers/mmc/host/sdhci-pci.h         |   4 +
 3 files changed, 98 insertions(+), 66 deletions(-)

diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 6ea6a5bce7ac..7c3c63a03dfc 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -41,6 +41,82 @@ static int sdhci_pci_select_drive_strength(struct sdhci_host *host,
 					   unsigned int max_dtr, int host_drv,
 					   int card_drv, int *drv_type);
 
+#ifdef CONFIG_PM_SLEEP
+static int __sdhci_pci_suspend_host(struct sdhci_pci_chip *chip)
+{
+	int i, ret;
+
+	for (i = 0; i < chip->num_slots; i++) {
+		struct sdhci_pci_slot *slot = chip->slots[i];
+
+		if (!slot)
+			continue;
+
+		ret = sdhci_suspend_host(slot->host);
+		if (ret)
+			goto err_pci_suspend;
+
+		if (slot->host->mmc->pm_flags & MMC_PM_WAKE_SDIO_IRQ)
+			sdhci_enable_irq_wakeups(slot->host);
+	}
+
+	return 0;
+
+err_pci_suspend:
+	while (--i >= 0)
+		sdhci_resume_host(chip->slots[i]->host);
+	return ret;
+}
+
+static int sdhci_pci_init_wakeup(struct sdhci_pci_chip *chip)
+{
+	mmc_pm_flag_t pm_flags = 0;
+	int i;
+
+	for (i = 0; i < chip->num_slots; i++) {
+		struct sdhci_pci_slot *slot = chip->slots[i];
+
+		if (slot)
+			pm_flags |= slot->host->mmc->pm_flags;
+	}
+
+	return device_init_wakeup(&chip->pdev->dev,
+				  (pm_flags & MMC_PM_KEEP_POWER) &&
+				  (pm_flags & MMC_PM_WAKE_SDIO_IRQ));
+}
+
+static int sdhci_pci_suspend_host(struct sdhci_pci_chip *chip)
+{
+	int ret;
+
+	ret = __sdhci_pci_suspend_host(chip);
+	if (ret)
+		return ret;
+
+	sdhci_pci_init_wakeup(chip);
+
+	return 0;
+}
+
+int sdhci_pci_resume_host(struct sdhci_pci_chip *chip)
+{
+	struct sdhci_pci_slot *slot;
+	int i, ret;
+
+	for (i = 0; i < chip->num_slots; i++) {
+		slot = chip->slots[i];
+		if (!slot)
+			continue;
+
+		ret = sdhci_resume_host(slot->host);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+#endif
+
 /*****************************************************************************\
  *                                                                           *
  * Hardware specific quirk handling                                          *
@@ -78,7 +154,7 @@ static int ricoh_mmc_resume(struct sdhci_pci_chip *chip)
 	/* Otherwise it becomes confused if card state changed
 		during suspend */
 	msleep(500);
-	return 0;
+	return sdhci_pci_resume_host(chip);
 }
 #endif
 
@@ -722,7 +798,11 @@ static void jmicron_remove_slot(struct sdhci_pci_slot *slot, int dead)
 #ifdef CONFIG_PM_SLEEP
 static int jmicron_suspend(struct sdhci_pci_chip *chip)
 {
-	int i;
+	int i, ret;
+
+	ret = __sdhci_pci_suspend_host(chip);
+	if (ret)
+		return ret;
 
 	if (chip->pdev->device == PCI_DEVICE_ID_JMICRON_JMB38X_MMC ||
 	    chip->pdev->device == PCI_DEVICE_ID_JMICRON_JMB388_ESD) {
@@ -730,6 +810,8 @@ static int jmicron_suspend(struct sdhci_pci_chip *chip)
 			jmicron_enable_mmc(chip->slots[i]->host, 0);
 	}
 
+	sdhci_pci_init_wakeup(chip);
+
 	return 0;
 }
 
@@ -749,7 +831,7 @@ static int jmicron_resume(struct sdhci_pci_chip *chip)
 		return ret;
 	}
 
-	return 0;
+	return sdhci_pci_resume_host(chip);
 }
 #endif
 
@@ -1657,83 +1739,29 @@ static int sdhci_pci_select_drive_strength(struct sdhci_host *host,
 static int sdhci_pci_suspend(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	struct sdhci_pci_chip *chip;
-	struct sdhci_pci_slot *slot;
-	mmc_pm_flag_t slot_pm_flags;
-	mmc_pm_flag_t pm_flags = 0;
-	int i, ret;
+	struct sdhci_pci_chip *chip = pci_get_drvdata(pdev);
 
-	chip = pci_get_drvdata(pdev);
 	if (!chip)
 		return 0;
 
-	for (i = 0; i < chip->num_slots; i++) {
-		slot = chip->slots[i];
-		if (!slot)
-			continue;
-
-		ret = sdhci_suspend_host(slot->host);
+	if (chip->fixes && chip->fixes->suspend)
+		return chip->fixes->suspend(chip);
 
-		if (ret)
-			goto err_pci_suspend;
-
-		slot_pm_flags = slot->host->mmc->pm_flags;
-		if (slot_pm_flags & MMC_PM_WAKE_SDIO_IRQ)
-			sdhci_enable_irq_wakeups(slot->host);
-
-		pm_flags |= slot_pm_flags;
-	}
-
-	if (chip->fixes && chip->fixes->suspend) {
-		ret = chip->fixes->suspend(chip);
-		if (ret)
-			goto err_pci_suspend;
-	}
-
-	if (pm_flags & MMC_PM_KEEP_POWER) {
-		if (pm_flags & MMC_PM_WAKE_SDIO_IRQ)
-			device_init_wakeup(dev, true);
-		else
-			device_init_wakeup(dev, false);
-	} else
-		device_init_wakeup(dev, false);
-
-	return 0;
-
-err_pci_suspend:
-	while (--i >= 0)
-		sdhci_resume_host(chip->slots[i]->host);
-	return ret;
+	return sdhci_pci_suspend_host(chip);
 }
 
 static int sdhci_pci_resume(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	struct sdhci_pci_chip *chip;
-	struct sdhci_pci_slot *slot;
-	int i, ret;
+	struct sdhci_pci_chip *chip = pci_get_drvdata(pdev);
 
-	chip = pci_get_drvdata(pdev);
 	if (!chip)
 		return 0;
 
-	if (chip->fixes && chip->fixes->resume) {
-		ret = chip->fixes->resume(chip);
-		if (ret)
-			return ret;
-	}
-
-	for (i = 0; i < chip->num_slots; i++) {
-		slot = chip->slots[i];
-		if (!slot)
-			continue;
+	if (chip->fixes && chip->fixes->resume)
+		return chip->fixes->resume(chip);
 
-		ret = sdhci_resume_host(slot->host);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
+	return sdhci_pci_resume_host(chip);
 }
 #endif
 
diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
index 0ea34e2c7cba..14273ca00641 100644
--- a/drivers/mmc/host/sdhci-pci-o2micro.c
+++ b/drivers/mmc/host/sdhci-pci-o2micro.c
@@ -388,6 +388,6 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
 int sdhci_pci_o2_resume(struct sdhci_pci_chip *chip)
 {
 	sdhci_pci_o2_probe(chip);
-	return 0;
+	return sdhci_pci_resume_host(chip);
 }
 #endif
diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
index 47869bc20c59..182c95894768 100644
--- a/drivers/mmc/host/sdhci-pci.h
+++ b/drivers/mmc/host/sdhci-pci.h
@@ -106,4 +106,8 @@ struct sdhci_pci_chip {
 	struct sdhci_pci_slot	*slots[MAX_SLOTS]; /* Pointers to host slots */
 };
 
+#ifdef CONFIG_PM_SLEEP
+int sdhci_pci_resume_host(struct sdhci_pci_chip *chip);
+#endif
+
 #endif /* __SDHCI_PCI_H */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 37/39] mmc: sdhci-pci: Add runtime suspend/resume callbacks
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (35 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 36/39] mmc: sdhci-pci: Let suspend/resume callbacks replace default callbacks Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 38/39] mmc: sdhci-pci: Move a function to avoid later forward declaration Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 39/39] mmc: sdhci-pci: Add CQHCI support for Intel GLK Adrian Hunter
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add runtime suspend/resume callbacks to match the suspend/resume callbacks.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci-core.c | 85 +++++++++++++++++++++++----------------
 drivers/mmc/host/sdhci-pci.h      |  4 ++
 2 files changed, 55 insertions(+), 34 deletions(-)

diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 7c3c63a03dfc..ca7c06508dcd 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -117,6 +117,49 @@ int sdhci_pci_resume_host(struct sdhci_pci_chip *chip)
 }
 #endif
 
+#ifdef CONFIG_PM
+static int sdhci_pci_runtime_suspend_host(struct sdhci_pci_chip *chip)
+{
+	struct sdhci_pci_slot *slot;
+	int i, ret;
+
+	for (i = 0; i < chip->num_slots; i++) {
+		slot = chip->slots[i];
+		if (!slot)
+			continue;
+
+		ret = sdhci_runtime_suspend_host(slot->host);
+		if (ret)
+			goto err_pci_runtime_suspend;
+	}
+
+	return 0;
+
+err_pci_runtime_suspend:
+	while (--i >= 0)
+		sdhci_runtime_resume_host(chip->slots[i]->host);
+	return ret;
+}
+
+static int sdhci_pci_runtime_resume_host(struct sdhci_pci_chip *chip)
+{
+	struct sdhci_pci_slot *slot;
+	int i, ret;
+
+	for (i = 0; i < chip->num_slots; i++) {
+		slot = chip->slots[i];
+		if (!slot)
+			continue;
+
+		ret = sdhci_runtime_resume_host(slot->host);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+#endif
+
 /*****************************************************************************\
  *                                                                           *
  * Hardware specific quirk handling                                          *
@@ -1769,55 +1812,29 @@ static int sdhci_pci_resume(struct device *dev)
 static int sdhci_pci_runtime_suspend(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	struct sdhci_pci_chip *chip;
-	struct sdhci_pci_slot *slot;
-	int i, ret;
+	struct sdhci_pci_chip *chip = pci_get_drvdata(pdev);
 
-	chip = pci_get_drvdata(pdev);
 	if (!chip)
 		return 0;
 
-	for (i = 0; i < chip->num_slots; i++) {
-		slot = chip->slots[i];
-		if (!slot)
-			continue;
-
-		ret = sdhci_runtime_suspend_host(slot->host);
-
-		if (ret)
-			goto err_pci_runtime_suspend;
-	}
-
-	return 0;
+	if (chip->fixes && chip->fixes->runtime_suspend)
+		return chip->fixes->runtime_suspend(chip);
 
-err_pci_runtime_suspend:
-	while (--i >= 0)
-		sdhci_runtime_resume_host(chip->slots[i]->host);
-	return ret;
+	return sdhci_pci_runtime_suspend_host(chip);
 }
 
 static int sdhci_pci_runtime_resume(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	struct sdhci_pci_chip *chip;
-	struct sdhci_pci_slot *slot;
-	int i, ret;
+	struct sdhci_pci_chip *chip = pci_get_drvdata(pdev);
 
-	chip = pci_get_drvdata(pdev);
 	if (!chip)
 		return 0;
 
-	for (i = 0; i < chip->num_slots; i++) {
-		slot = chip->slots[i];
-		if (!slot)
-			continue;
-
-		ret = sdhci_runtime_resume_host(slot->host);
-		if (ret)
-			return ret;
-	}
+	if (chip->fixes && chip->fixes->runtime_resume)
+		return chip->fixes->runtime_resume(chip);
 
-	return 0;
+	return sdhci_pci_runtime_resume_host(chip);
 }
 #endif
 
diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
index 182c95894768..edf9b6963fd6 100644
--- a/drivers/mmc/host/sdhci-pci.h
+++ b/drivers/mmc/host/sdhci-pci.h
@@ -71,6 +71,10 @@ struct sdhci_pci_fixes {
 	int			(*suspend) (struct sdhci_pci_chip *);
 	int			(*resume) (struct sdhci_pci_chip *);
 #endif
+#ifdef CONFIG_PM
+	int			(*runtime_suspend) (struct sdhci_pci_chip *);
+	int			(*runtime_resume) (struct sdhci_pci_chip *);
+#endif
 
 	const struct sdhci_ops	*ops;
 };
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH RFC 38/39] mmc: sdhci-pci: Move a function to avoid later forward declaration
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (36 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 37/39] mmc: sdhci-pci: Add runtime suspend/resume callbacks Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  2017-02-10 12:55 ` [PATCH RFC 39/39] mmc: sdhci-pci: Add CQHCI support for Intel GLK Adrian Hunter
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Move a function to avoid having to forward declare it in a subsequent
patch.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci-core.c | 78 +++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index ca7c06508dcd..5c98a7bb8ed6 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -482,6 +482,45 @@ static int bxt_get_cd(struct mmc_host *mmc)
 	return ret;
 }
 
+#define SDHCI_INTEL_PWR_TIMEOUT_CNT	20
+#define SDHCI_INTEL_PWR_TIMEOUT_UDELAY	100
+
+static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
+				  unsigned short vdd)
+{
+	int cntr;
+	u8 reg;
+
+	sdhci_set_power(host, mode, vdd);
+
+	if (mode == MMC_POWER_OFF)
+		return;
+
+	/*
+	 * Bus power might not enable after D3 -> D0 transition due to the
+	 * present state not yet having propagated. Retry for up to 2ms.
+	 */
+	for (cntr = 0; cntr < SDHCI_INTEL_PWR_TIMEOUT_CNT; cntr++) {
+		reg = sdhci_readb(host, SDHCI_POWER_CONTROL);
+		if (reg & SDHCI_POWER_ON)
+			break;
+		udelay(SDHCI_INTEL_PWR_TIMEOUT_UDELAY);
+		reg |= SDHCI_POWER_ON;
+		sdhci_writeb(host, reg, SDHCI_POWER_CONTROL);
+	}
+}
+
+static const struct sdhci_ops sdhci_intel_byt_ops = {
+	.set_clock		= sdhci_set_clock,
+	.set_power		= sdhci_intel_set_power,
+	.enable_dma		= sdhci_pci_enable_dma,
+	.set_bus_width		= sdhci_pci_set_bus_width,
+	.reset			= sdhci_reset,
+	.set_uhs_signaling	= sdhci_set_uhs_signaling,
+	.hw_reset		= sdhci_pci_hw_reset,
+	.select_drive_strength	= sdhci_pci_select_drive_strength,
+};
+
 static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
 {
 	slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
@@ -560,45 +599,6 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
 	return 0;
 }
 
-#define SDHCI_INTEL_PWR_TIMEOUT_CNT	20
-#define SDHCI_INTEL_PWR_TIMEOUT_UDELAY	100
-
-static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
-				  unsigned short vdd)
-{
-	int cntr;
-	u8 reg;
-
-	sdhci_set_power(host, mode, vdd);
-
-	if (mode == MMC_POWER_OFF)
-		return;
-
-	/*
-	 * Bus power might not enable after D3 -> D0 transition due to the
-	 * present state not yet having propagated. Retry for up to 2ms.
-	 */
-	for (cntr = 0; cntr < SDHCI_INTEL_PWR_TIMEOUT_CNT; cntr++) {
-		reg = sdhci_readb(host, SDHCI_POWER_CONTROL);
-		if (reg & SDHCI_POWER_ON)
-			break;
-		udelay(SDHCI_INTEL_PWR_TIMEOUT_UDELAY);
-		reg |= SDHCI_POWER_ON;
-		sdhci_writeb(host, reg, SDHCI_POWER_CONTROL);
-	}
-}
-
-static const struct sdhci_ops sdhci_intel_byt_ops = {
-	.set_clock		= sdhci_set_clock,
-	.set_power		= sdhci_intel_set_power,
-	.enable_dma		= sdhci_pci_enable_dma,
-	.set_bus_width		= sdhci_pci_set_bus_width,
-	.reset			= sdhci_reset,
-	.set_uhs_signaling	= sdhci_set_uhs_signaling,
-	.hw_reset		= sdhci_pci_hw_reset,
-	.select_drive_strength	= sdhci_pci_select_drive_strength,
-};
-
 static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = {
 	.allow_runtime_pm = true,
 	.probe_slot	= byt_emmc_probe_slot,
-- 
1.9.1



* [PATCH RFC 39/39] mmc: sdhci-pci: Add CQHCI support for Intel GLK
  2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
                   ` (37 preceding siblings ...)
  2017-02-10 12:55 ` [PATCH RFC 38/39] mmc: sdhci-pci: Move a function to avoid later forward declaration Adrian Hunter
@ 2017-02-10 12:55 ` Adrian Hunter
  38 siblings, 0 replies; 63+ messages in thread
From: Adrian Hunter @ 2017-02-10 12:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

Add CQHCI initialization and implement CQHCI operations for Intel GLK.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/Kconfig          |   1 +
 drivers/mmc/host/sdhci-pci-core.c | 151 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 151 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index fceba81b8f37..d964681579b4 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -72,6 +72,7 @@ config MMC_SDHCI_BIG_ENDIAN_32BIT_BYTE_SWAPPER
 config MMC_SDHCI_PCI
 	tristate "SDHCI support on PCI bus"
 	depends on MMC_SDHCI && PCI
+	select MMC_CQHCI
 	help
 	  This selects the PCI Secure Digital Host Controller Interface.
 	  Most controllers found today are PCI devices.
diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 5c98a7bb8ed6..f6d47cfc8ba5 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -29,6 +29,8 @@
 #include <linux/mmc/sdhci-pci-data.h>
 #include <linux/acpi.h>
 
+#include "cqhci.h"
+
 #include "sdhci.h"
 #include "sdhci-pci.h"
 #include "sdhci-pci-o2micro.h"
@@ -115,6 +117,28 @@ int sdhci_pci_resume_host(struct sdhci_pci_chip *chip)
 
 	return 0;
 }
+
+static int sdhci_cqhci_suspend(struct sdhci_pci_chip *chip)
+{
+	int ret;
+
+	ret = cqhci_suspend(chip->slots[0]->host->mmc);
+	if (ret)
+		return ret;
+
+	return sdhci_pci_suspend_host(chip);
+}
+
+static int sdhci_cqhci_resume(struct sdhci_pci_chip *chip)
+{
+	int ret;
+
+	ret = sdhci_pci_resume_host(chip);
+	if (ret)
+		return ret;
+
+	return cqhci_resume(chip->slots[0]->host->mmc);
+}
 #endif
 
 #ifdef CONFIG_PM
@@ -158,8 +182,54 @@ static int sdhci_pci_runtime_resume_host(struct sdhci_pci_chip *chip)
 
 	return 0;
 }
+
+static int sdhci_cqhci_runtime_suspend(struct sdhci_pci_chip *chip)
+{
+	int ret;
+
+	ret = cqhci_suspend(chip->slots[0]->host->mmc);
+	if (ret)
+		return ret;
+
+	return sdhci_pci_runtime_suspend_host(chip);
+}
+
+static int sdhci_cqhci_runtime_resume(struct sdhci_pci_chip *chip)
+{
+	int ret;
+
+	ret = sdhci_pci_runtime_resume_host(chip);
+	if (ret)
+		return ret;
+
+	return cqhci_resume(chip->slots[0]->host->mmc);
+}
 #endif
 
+static u32 sdhci_cqhci_irq(struct sdhci_host *host, u32 intmask)
+{
+	int cmd_error = 0;
+	int data_error = 0;
+
+	if (!sdhci_cqe_irq(host, intmask, &cmd_error, &data_error))
+		return intmask;
+
+	cqhci_irq(host->mmc, intmask, cmd_error, data_error);
+
+	return 0;
+}
+
+static void sdhci_pci_dumpregs(struct mmc_host *mmc)
+{
+	sdhci_dumpregs(mmc_priv(mmc));
+}
+
+static const struct cqhci_host_ops sdhci_cqhci_ops = {
+	.enable		= sdhci_cqe_enable,
+	.disable	= sdhci_cqe_disable,
+	.dumpregs	= sdhci_pci_dumpregs,
+};
+
 /*****************************************************************************\
  *                                                                           *
  * Hardware specific quirk handling                                          *
@@ -521,6 +591,18 @@ static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
 	.select_drive_strength	= sdhci_pci_select_drive_strength,
 };
 
+static const struct sdhci_ops sdhci_intel_glk_ops = {
+	.set_clock		= sdhci_set_clock,
+	.set_power		= sdhci_intel_set_power,
+	.enable_dma		= sdhci_pci_enable_dma,
+	.set_bus_width		= sdhci_pci_set_bus_width,
+	.reset			= sdhci_reset,
+	.set_uhs_signaling	= sdhci_set_uhs_signaling,
+	.hw_reset		= sdhci_pci_hw_reset,
+	.select_drive_strength	= sdhci_pci_select_drive_strength,
+	.irq			= sdhci_cqhci_irq,
+};
+
 static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
 {
 	slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
@@ -538,6 +620,54 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
 	return 0;
 }
 
+static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
+{
+	int ret = byt_emmc_probe_slot(slot);
+
+	slot->host->mmc->caps2 |= MMC_CAP2_CQE;
+
+	return ret;
+}
+
+static int glk_emmc_add_host(struct sdhci_pci_slot *slot)
+{
+	struct device *dev = &slot->chip->pdev->dev;
+	struct sdhci_host *host = slot->host;
+	struct cqhci_host *cq_host;
+	bool dma64;
+	int ret;
+
+	ret = sdhci_setup_host(host);
+	if (ret)
+		return ret;
+
+	cq_host = devm_kzalloc(dev, sizeof(*cq_host), GFP_KERNEL);
+	if (!cq_host) {
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	cq_host->mmio = host->ioaddr + 0x200;
+	cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
+	cq_host->quirks |= CQHCI_QUIRK_SHORT_TXFR_DESC_SZ;
+	cq_host->ops = &sdhci_cqhci_ops;
+
+	dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
+	ret = cqhci_init(cq_host, host->mmc, dma64);
+	if (ret)
+		goto cleanup;
+
+	ret = __sdhci_add_host(host);
+	if (ret)
+		goto cleanup;
+
+	return 0;
+
+cleanup:
+	sdhci_cleanup_host(host);
+	return ret;
+}
+
 #ifdef CONFIG_ACPI
 static int ni_set_max_freq(struct sdhci_pci_slot *slot)
 {
@@ -609,6 +739,25 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
 	.ops		= &sdhci_intel_byt_ops,
 };
 
+static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
+	.allow_runtime_pm	= true,
+	.probe_slot		= glk_emmc_probe_slot,
+	.add_host		= glk_emmc_add_host,
+#ifdef CONFIG_PM_SLEEP
+	.suspend		= sdhci_cqhci_suspend,
+	.resume			= sdhci_cqhci_resume,
+#endif
+#ifdef CONFIG_PM
+	.runtime_suspend	= sdhci_cqhci_runtime_suspend,
+	.runtime_resume		= sdhci_cqhci_runtime_resume,
+#endif
+	.quirks			= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
+	.quirks2		= SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+				  SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400 |
+				  SDHCI_QUIRK2_STOP_WITH_TC,
+	.ops			= &sdhci_intel_glk_ops,
+};
+
 static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = {
 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
 	.quirks2	= SDHCI_QUIRK2_HOST_OFF_CARD_ON |
@@ -1560,7 +1709,7 @@ static int amd_probe(struct sdhci_pci_chip *chip)
 		.device		= PCI_DEVICE_ID_INTEL_GLK_EMMC,
 		.subvendor	= PCI_ANY_ID,
 		.subdevice	= PCI_ANY_ID,
-		.driver_data	= (kernel_ulong_t)&sdhci_intel_byt_emmc,
+		.driver_data	= (kernel_ulong_t)&sdhci_intel_glk_emmc,
 	},
 
 	{
-- 
1.9.1



* Re: [PATCH RFC 03/39] mmc: block: Introduce queue semantics
  2017-02-10 12:55 ` [PATCH RFC 03/39] mmc: block: Introduce queue semantics Adrian Hunter
@ 2017-02-15 12:29   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 12:29 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Change from viewing the requests in progress as 'current' and 'previous',
> to viewing them as a queue. The current request is allocated to the first
> free slot. The presence of incomplete requests is determined from the
> count (mq->qcnt) of entries in the queue. Non-read-write requests (i.e.
> discards and flushes) are not added to the queue at all and require no
> special handling. Also no special handling is needed for the
> MMC_BLK_NEW_REQUEST case.
>
> As well as allowing an arbitrarily sized queue, the queue thread function
> is significantly simpler.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

This patch appears to be doing the same thing as my patch
"mmc: queue: get/put struct mmc_queue_req"

I just happen to feel that mmc_queue_req_get() vs
mmc_queue_req_put() is a more intuitive naming. If I wanted to
be overspecific I would say "mmc_queue_req_pool_get()"
and "mmc_queue_req_pool_put()".

On top of that, this patch adds the task_id and qcount business
to see if things are complete. This is closely related to the
mechanism of a polling loop that my patch set is getting rid
of: I stop issuing NULL requests altogether and this
way we get rid of a big chunk of the problems you are
addressing here, like this:

> -                       if (mq->mqrq_prev->req)
> +                       if (mq->qcnt)
>                                 cntx->is_waiting_last_req = true;
(...)
> -               if (req || mq->mqrq_prev->req) {
> -                       bool req_is_special = mmc_req_is_special(req);
> -
> +               if (req || mq->qcnt) {

As you will see, I take it further: the above will become just
if (req).

Yours,
Linus Walleij


* Re: [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur
  2017-02-10 12:55 ` [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur Adrian Hunter
@ 2017-02-15 12:29   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 12:29 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
> assigning it to a local variable.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

My patch series also totally kill off mq->mqrq_cur and prev instead using
a pool (with queue depth items) of pre-allocated struct mmc_queue_req, see
"mmc: queue: get/put struct mmc_queue_req"

So I guess getting rid of the prev/cur thing is something we can agree
to do as a first step?

Yours,
Linus Walleij


* Re: [PATCH RFC 04/39] mmc: core: Do not prepare a new request twice
  2017-02-10 12:55 ` [PATCH RFC 04/39] mmc: core: Do not prepare a new request twice Adrian Hunter
@ 2017-02-15 12:49   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 12:49 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> mmc_start_req() assumes it is never called with the new request already
> prepared. That is true if the queue consists of only 2 requests, but is not
> true for a longer queue. e.g. mmc_start_req() has a current and previous
> request but still exits to queue a new request if the queue size is
> greater than 2. In that case, when mmc_start_req() is called again, the
> current request will have been prepared already. Fix by flagging if the
> request has been prepared.
>
> That also means ensuring that struct mmc_async_req is always initialized
> to zero, which wasn't the case in mmc_test.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>

I disagree with this patch, because it makes things even more complex,
with .pre_req_done:

> +       if (areq && !areq->pre_req_done) {
> +               areq->pre_req_done = true;
(...)
> +       if (host->areq) {
> +               host->areq->pre_req_done = false;
(...)
> +       if ((status != MMC_BLK_SUCCESS || start_err) && areq) {
> +               areq->pre_req_done = false;
> @@ -168,6 +168,7 @@ struct mmc_async_req {
> +       bool pre_req_done;

We already have a crazy amount of states, which is something I try to
address in my patch set where I kill off the context and instead model
things in a different way:

Today what happens is that we call mmc_pre_req(), wait for the
previous request to complete, get the status, and if it happens
to be bad, we call mmc_post_req() and return to the block
queue.

But why do we do that? Why are we not just leaving the request
prepared? After all it is just a cache flush that happens in the
mmc_pre_req()... so you are addressing that in the patch, which
is nice in a way.

But in my patch set, we get rid of the whole problem, because I
just redefine things by introducing a completion in the areq.
When that completion is complete, we can always issue a new
request, because that completion is not done until all the retrying,
single-write fallback and error handling etc. is already done.
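
Schematically (my sketch of the idea, not the actual code):

	#include <linux/completion.h>

	struct mmc_async_req {
		/* ... */
		struct completion complete;	/* completed only once all
						 * retries and error handling
						 * are finished */
	};

	/* issuer side: */
	wait_for_completion(&host->areq->complete);
	/* now it is always safe to start the next prepared request */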

What I don't like about this is that it takes the dataflow and states
that I'm trying to simplify by removing the NULL flushing, and
makes it even more complex with yet another state.

I haven't read all patches yet, but I feel it is better to do
command queueing on top of my patches, that is, except
for the last one that switches to using blk-mq - it's just an RFC.

Maybe I'll change my mind when I've read it all through, but I
generally feel my series simplifies the data flow while this one
complicates it, though I admit that position is a bit subjective.

Yours,
Linus Walleij


* Re: [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-02-10 12:55 ` [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
@ 2017-02-15 12:52   ` Linus Walleij
  2017-02-17 12:21     ` Ulf Hansson
  0 siblings, 1 reply; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 12:52 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Add helper functions to enable or disable the Command Queue.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

I think I asked this before but why?

If the card has this feature, certainly we should just exploit it?

If some card has a problem with it, I guess it is broken and should
simply have this operation blacklisted.

Yours,
Linus Walleij


* Re: [PATCH RFC 09/39] mmc: queue: Add a function to control wake-up on new requests
  2017-02-10 12:55 ` [PATCH RFC 09/39] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
@ 2017-02-15 13:07   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:07 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Add a function to control wake-up on new requests. This will enable
> Software Command Queuing to choose whether or not to queue new
> requests immediately or wait for the current task to complete.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

I think my patch set killed off the context and this whole thing
"just works" instead of needing more states and hacks like this.

The proliferation of states is reaching an all-time high here:

+       if (cntx->is_waiting_last_req != wake_me) {
+               spin_lock_irq(q->queue_lock);
+               cntx->is_waiting_last_req = wake_me;
+               cntx->is_new_req = wake_me && mq->is_new_req;
(...)
@@ -43,6 +43,7 @@ struct mmc_queue {
(...)
+       bool                    is_new_req;

Now there is an is_new_req in the context AND an identically
named state variable in struct mmc_queue, and when reading the
code I suspect a reader would like to know the difference
between the two .is_new_req variables, one in the state
of the host and one in the state of the queue.

This is a cognitive nightmare, sorry. At least this would have to
have more specific names.

I think all these states come from the polling loop construction
and this constant spinning around and checking for errors, doing
and undoing the pre/post/bounce stuff, and compulsively flushing
the pipeline with NULL and so on. I was trying to
get rid of that stuff, but this is expanding it instead.

Hm, maybe I'm getting a bit complainy here, sorry if so...

Yours,
Linus Walleij


* Re: [PATCH RFC 10/39] mmc: block: Change mmc_apply_rel_rw() to get block address from the request
  2017-02-10 12:55 ` [PATCH RFC 10/39] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
@ 2017-02-15 13:09   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:09 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> mmc_apply_rel_rw() will be used by Software Command Queuing also. In that
> case the command argument is not the block address so change
> mmc_apply_rel_rw() to get block address from the request.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

This looks right. Isn't this something we can just apply right off?
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 11/39] mmc: block: Factor out data preparation
  2017-02-10 12:55 ` [PATCH RFC 11/39] mmc: block: Factor out data preparation Adrian Hunter
@ 2017-02-15 13:11   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:11 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Factor out data preparation into a separate function mmc_blk_data_prep()
> which can be re-used for command queuing.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

This patch is a good and valid refactoring as it stands, alone.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 12/39] mmc: block: Add Software Command Queuing
  2017-02-10 12:55 ` [PATCH RFC 12/39] mmc: block: Add Software Command Queuing Adrian Hunter
@ 2017-02-15 13:34   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:34 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> eMMC Command Queuing is a feature added in version 5.1.  The card maintains
> a queue of up to 32 data transfers.  Commands CMD44/CMD45 are sent to queue
> up transfers in advance, and then one of the transfers is selected to
> "execute" by CMD46/CMD47 at which point data transfer actually begins.
(...)
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Overall this looks nice.
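
For reference, the per-task flow described above is roughly (my reading
of the eMMC 5.1 spec, not taken from this patch):

	CMD44 (QUEUED_TASK_PARAMS)  - queue the task: id, direction, block count
	CMD45 (QUEUED_TASK_ADDRESS) - set the task's data block address
	CMD13 (QSR)                 - poll the Queue Status Register for readiness
	CMD46/CMD47                 - execute a ready read/write task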

I have of course objections about the things where I just patched
around.

> +static int mmc_swcmdq_await_qsr(struct mmc_queue *mq, struct mmc_card *card,
> +                               bool wait)
(...)
> +       if (card->host->areq) {
> +               /*
> +                * The active request remains in the QSR until completed. Remove
> +                * it so that mq->qsr only contains ones that are ready but not
> +                * executed.
> +                */
> +               mqrq = container_of(card->host->areq, struct mmc_queue_req,
> +                                   areq);
> +               __clear_bit(mqrq->task_id, &mq->qsr);
> +       }

Clever. I see that you might need that task_id field in mqrq for this.
Which is fine: this is the per-request state container and should be used
for all states pertaining to a certain request.

> +       prev = mmc_start_areq(card->host, next, &status);
> +
> +       if (status == MMC_BLK_NEW_REQUEST) {
> +               mq->prepared_areq = next;
> +               return status;
> +       }
> +
> +       mq->prepared_areq = NULL;
> +
> +       if (!prev)
> +               return 0;
> +
> +       mqrq = container_of(prev, struct mmc_queue_req, areq);
> +       brq = &mqrq->brq;
> +       req = mqrq->req;
> +
> +       mmc_queue_bounce_post(mqrq);

So this is duplicating hairy submission semantics that I've tried to get rid of.

> +       switch (status) {
> +       case MMC_BLK_SUCCESS:
> +       case MMC_BLK_PARTIAL:
> +       case MMC_BLK_SUCCESS_ERR:
> +               mmc_blk_reset_success(mq->blkdata, MMC_BLK_SWCMDQ);
> +               ret = blk_end_request(req, 0, brq->data.bytes_xfered);
> +               if (ret) {
> +                       if (!requeuing)
> +                               return __mmc_swcmdq_enqueue(mq, mqrq);
> +                       return 0;
> +               }
> +               break;
> +       case MMC_BLK_NEW_REQUEST:
> +               return status;
> +       default:
> +               if (mqrq->retry_cnt++) {
> +                       blk_end_request_all(req, -EIO);
> +                       break;
> +               }
> +               return status;
> +       }

And this is the successful path where I nowadays try to split the return
path in two in the last non-RFC patch:
"mmc: queue: issue requests in massive parallel"

It is not good that the command queue code is making a local copy of
the error handling code, but I guess it actually needs to handle
errors in a totally different way?

> +static void __mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)

For this and all other functions that got a __prefix all of a sudden: please
try to avoid that and instead come up with a proper new name.

The code is confusing enough as it is and in "my"
subsystems I have tried to cut down on all __prefixed functions in
favor of more exact naming. This is just very painful for the mind.

Yours,
Linus Walleij


* Re: [PATCH RFC 13/39] mmc: mmc: Enable Software Command Queuing
  2017-02-10 12:55 ` [PATCH RFC 13/39] mmc: mmc: Enable " Adrian Hunter
@ 2017-02-15 13:35   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:35 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Enable the Command Queue if the host controller supports Software Command
> Queuing. It is not compatible with Packed Commands, so do not enable that
> at the same time.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

This looks right.
Acked-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 14/39] mmc: core: Factor out debug prints from mmc_start_request()
  2017-02-10 12:55 ` [PATCH RFC 14/39] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
@ 2017-02-15 13:38   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:38 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> In preparation to reuse the code for CQE support.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 15/39] mmc: core: Factor out mrq preparation from mmc_start_request()
  2017-02-10 12:55 ` [PATCH RFC 15/39] mmc: core: Factor out mrq preparation " Adrian Hunter
@ 2017-02-15 13:39   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:39 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> In preparation to reuse the code for CQE support.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 17/39] mmc: core: Add members to mmc_request and mmc_data for CQE's
  2017-02-10 12:55 ` [PATCH RFC 17/39] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
@ 2017-02-15 13:42   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:42 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Most of the information needed to issue requests to a CQE is already in
> struct mmc_request and struct mmc_data. Add data block address, some flags,
> and the task id (tag).
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Acked-by: Linus Walleij <linus.walleij@linaro.org>

I can't claim to know exactly why tag has to be here as opposed to being
only internal to our mmc_blk_req, but I assume there is a good reason for
that.

Yours,
Linus Walleij


* Re: [PATCH RFC 19/39] mmc: core: Turn off CQE before sending commands
  2017-02-10 12:55 ` [PATCH RFC 19/39] mmc: core: Turn off CQE before sending commands Adrian Hunter
@ 2017-02-15 13:42   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:42 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Turn off the CQE before sending commands, and ensure it is off in any reset
> or power management paths.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Acked-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 20/39] mmc: core: Add support for handling CQE requests
  2017-02-10 12:55 ` [PATCH RFC 20/39] mmc: core: Add support for handling CQE requests Adrian Hunter
@ 2017-02-15 13:44   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:44 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Add core support for handling CQE requests, including starting, completing
> and recovering.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

So this adds a new command path for the CQE. I guess it's unavoidable, and
looks good to me.
Acked-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 21/39] mmc: mmc: Enable CQE's
  2017-02-10 12:55 ` [PATCH RFC 21/39] mmc: mmc: Enable CQE's Adrian Hunter
@ 2017-02-15 13:45   ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:45 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Enable or disable CQE when a card is added or removed respectively.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH RFC 22/39] mmc: block: Prepare CQE data
  2017-02-10 12:55 ` [PATCH RFC 22/39] mmc: block: Prepare CQE data Adrian Hunter
@ 2017-02-15 13:49   ` Linus Walleij
  2017-03-03 12:22     ` Adrian Hunter
  0 siblings, 1 reply; 63+ messages in thread
From: Linus Walleij @ 2017-02-15 13:49 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> Enhance mmc_blk_data_prep() to support CQE requests.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

Hey:

> +#include <linux/ioprio.h>
(...)
> +       if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
> +               brq->data.flags |= MMC_DATA_PRIO;

What is this?

Why are we not already doing this and why is it not a separate patch?

>         /*
>          * Data tag is used only during writing meta data to speed
>          * up write and any subsequent read of this meta data
>          */

Is this comment still true?

> -       *do_data_tag = card->ext_csd.data_tag_unit_size &&
> -                      (req->cmd_flags & REQ_META) &&
> -                      (rq_data_dir(req) == WRITE) &&
> -                      ((brq->data.blocks * brq->data.blksz) >=
> -                       card->ext_csd.data_tag_unit_size);
> +       do_data_tag = card->ext_csd.data_tag_unit_size &&
> +                     (req->cmd_flags & REQ_META) &&
> +                     (rq_data_dir(req) == WRITE) &&
> +                     ((brq->data.blocks * brq->data.blksz) >=
> +                      card->ext_csd.data_tag_unit_size);
> +
> +       if (do_data_tag)
> +               brq->data.flags |= MMC_DATA_DAT_TAG;

Because it seems to be used more unconditionally now.

Yours,
Linus Walleij


* Re: [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-02-15 12:52   ` Linus Walleij
@ 2017-02-17 12:21     ` Ulf Hansson
  2017-02-23 14:54       ` Linus Walleij
  0 siblings, 1 reply; 63+ messages in thread
From: Ulf Hansson @ 2017-02-17 12:21 UTC (permalink / raw)
  To: Linus Walleij, Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On 15 February 2017 at 13:52, Linus Walleij <linus.walleij@linaro.org> wrote:
> On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>
>> Add helper functions to enable or disable the Command Queue.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>
> I think I asked this before but why?

There are certain other requests, like those for RPMB, which require
CMDQ to be "temporarily" disabled.

Seems like a little more information in the change log would be
appreciated, to understand why we need to add these functions.

>
> If the card has this feature, certainly we should just exploit it?
>
> If some card has a problem with it, I guess it is broken and should
> simply have this operation blacklisted.
>
> Yours,
> Linus Walleij

Kind regards
Uffe


* Re: [PATCH RFC 25/39] mmc: sdhci: Improve debug print format
  2017-02-10 12:55 ` [PATCH RFC 25/39] mmc: sdhci: Improve debug print format Adrian Hunter
@ 2017-02-17 12:30   ` Ulf Hansson
  0 siblings, 0 replies; 63+ messages in thread
From: Ulf Hansson @ 2017-02-17 12:30 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu, Linus Walleij

On 10 February 2017 at 13:55, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Ensure all debug prints start with the mmc host name.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

There seem to be a bunch of sdhci improvements/cleanups here, not
closely related to the CMDQ support. May I suggest you send these in a
separate series, so we can apply them first.

Like this one.

Kind regards
Uffe

> ---
>  drivers/mmc/host/sdhci.c | 28 ++++++++++++----------------
>  1 file changed, 12 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
> index 6fdd7a70f229..8aa8541a349a 100644
> --- a/drivers/mmc/host/sdhci.c
> +++ b/drivers/mmc/host/sdhci.c
> @@ -37,7 +37,7 @@
>  #define DRIVER_NAME "sdhci"
>
>  #define DBG(f, x...) \
> -       pr_debug(DRIVER_NAME " [%s()]: " f, __func__,## x)
> +       pr_debug("%s: " DRIVER_NAME ": " f, mmc_hostname(host->mmc), ## x)
>
>  #define MAX_TUNING_LOOP 40
>
> @@ -715,8 +715,8 @@ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
>         }
>
>         if (count >= 0xF) {
> -               DBG("%s: Too large timeout 0x%x requested for CMD%d!\n",
> -                   mmc_hostname(host->mmc), count, cmd->opcode);
> +               DBG("Too large timeout 0x%x requested for CMD%d!\n",
> +                   count, cmd->opcode);
>                 count = 0xE;
>         }
>
> @@ -2509,7 +2509,6 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask)
>  #ifdef CONFIG_MMC_DEBUG
>  static void sdhci_adma_show_error(struct sdhci_host *host)
>  {
> -       const char *name = mmc_hostname(host->mmc);
>         void *desc = host->adma_table;
>
>         sdhci_dumpregs(host);
> @@ -2518,14 +2517,14 @@ static void sdhci_adma_show_error(struct sdhci_host *host)
>                 struct sdhci_adma2_64_desc *dma_desc = desc;
>
>                 if (host->flags & SDHCI_USE_64_BIT_DMA)
> -                       DBG("%s: %p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
> -                           name, desc, le32_to_cpu(dma_desc->addr_hi),
> +                       DBG("%p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
> +                           desc, le32_to_cpu(dma_desc->addr_hi),
>                             le32_to_cpu(dma_desc->addr_lo),
>                             le16_to_cpu(dma_desc->len),
>                             le16_to_cpu(dma_desc->cmd));
>                 else
> -                       DBG("%s: %p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
> -                           name, desc, le32_to_cpu(dma_desc->addr_lo),
> +                       DBG("%p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
> +                           desc, le32_to_cpu(dma_desc->addr_lo),
>                             le16_to_cpu(dma_desc->len),
>                             le16_to_cpu(dma_desc->cmd));
>
> @@ -2641,10 +2640,8 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
>                                 ~(SDHCI_DEFAULT_BOUNDARY_SIZE - 1)) +
>                                 SDHCI_DEFAULT_BOUNDARY_SIZE;
>                         host->data->bytes_xfered = dmanow - dmastart;
> -                       DBG("%s: DMA base 0x%08x, transferred 0x%06x bytes,"
> -                               " next 0x%08x\n",
> -                               mmc_hostname(host->mmc), dmastart,
> -                               host->data->bytes_xfered, dmanow);
> +                       DBG("DMA base 0x%08x, transferred 0x%06x bytes, next 0x%08x\n",
> +                           dmastart, host->data->bytes_xfered, dmanow);
>                         sdhci_writel(host, dmanow, SDHCI_DMA_ADDRESS);
>                 }
>
> @@ -2689,8 +2686,7 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
>                                   SDHCI_INT_BUS_POWER);
>                 sdhci_writel(host, mask, SDHCI_INT_STATUS);
>
> -               DBG("*** %s got interrupt: 0x%08x\n",
> -                       mmc_hostname(host->mmc), intmask);
> +               DBG("IRQ status 0x%08x\n", intmask);
>
>                 if (intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE)) {
>                         u32 present = sdhci_readl(host, SDHCI_PRESENT_STATE) &
> @@ -3324,9 +3320,9 @@ int sdhci_setup_host(struct sdhci_host *host)
>              !(host->flags & SDHCI_USE_SDMA)) &&
>              !(host->quirks2 & SDHCI_QUIRK2_ACMD23_BROKEN)) {
>                 host->flags |= SDHCI_AUTO_CMD23;
> -               DBG("%s: Auto-CMD23 available\n", mmc_hostname(mmc));
> +               DBG("Auto-CMD23 available\n");
>         } else {
> -               DBG("%s: Auto-CMD23 unavailable\n", mmc_hostname(mmc));
> +               DBG("Auto-CMD23 unavailable\n");
>         }
>
>         /*
> --
> 1.9.1
>


* Re: [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue
  2017-02-17 12:21     ` Ulf Hansson
@ 2017-02-23 14:54       ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-02-23 14:54 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Adrian Hunter, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Feb 17, 2017 at 1:21 PM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> On 15 February 2017 at 13:52, Linus Walleij <linus.walleij@linaro.org> wrote:
>> On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>
>>> Add helper functions to enable or disable the Command Queue.
>>>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>
>> I think I asked this before but why?
>
> There are certain other requests, like for RPMB, which requires CMDQ
> to be "temporary" disabled.

Ah my comment is stupid.

I specifically want to know why we should be able to disable
it from sysfs.

I don't see the point in that, just giving userspace another gun
to shoot itself in the foot.

Yours,
Linus Walleij


* Re: [PATCH RFC 22/39] mmc: block: Prepare CQE data
  2017-02-15 13:49   ` Linus Walleij
@ 2017-03-03 12:22     ` Adrian Hunter
  2017-03-09 22:39       ` Linus Walleij
  0 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-03-03 12:22 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On 15/02/17 15:49, Linus Walleij wrote:
> On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> 
>> Enhance mmc_blk_data_prep() to support CQE requests.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> 
> Hey:
> 
>> +#include <linux/ioprio.h>
> (...)
>> +       if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
>> +               brq->data.flags |= MMC_DATA_PRIO;
> 
> What is this?

It is the command queue priority.

The command queue supports 2 priorities: "high" (1) and "simple" (0).  The
eMMC will give "high" priority tasks priority over "simple" priority tasks.

So here we give priority to IOPRIO_CLASS_RT which seems appropriate.
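
For example, I/O issued under "ionice -c 1" (IOPRIO_CLASS_RT) would get
MMC_DATA_PRIO set and be queued in the eMMC as a "high" priority task.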

> 
> Why are we not already doing this and why is it not a separate patch?

I think adding a comment is better since it is a command queue feature.

> 
>>         /*
>>          * Data tag is used only during writing meta data to speed
>>          * up write and any subsequent read of this meta data
>>          */
> 
> Is this comment still true?

AFAIK

> 
>> -       *do_data_tag = card->ext_csd.data_tag_unit_size &&
>> -                      (req->cmd_flags & REQ_META) &&
>> -                      (rq_data_dir(req) == WRITE) &&
>> -                      ((brq->data.blocks * brq->data.blksz) >=
>> -                       card->ext_csd.data_tag_unit_size);
>> +       do_data_tag = card->ext_csd.data_tag_unit_size &&
>> +                     (req->cmd_flags & REQ_META) &&
>> +                     (rq_data_dir(req) == WRITE) &&
>> +                     ((brq->data.blocks * brq->data.blksz) >=
>> +                      card->ext_csd.data_tag_unit_size);
>> +
>> +       if (do_data_tag)
>> +               brq->data.flags |= MMC_DATA_DAT_TAG;
> 
> Because it seems to be more unconditionally used now.

The key ingredient is (req->cmd_flags & REQ_META), i.e. meta-data gets tagged.




* Re: [PATCH RFC 22/39] mmc: block: Prepare CQE data
  2017-03-03 12:22     ` Adrian Hunter
@ 2017-03-09 22:39       ` Linus Walleij
  2017-03-10  8:29         ` Adrian Hunter
  0 siblings, 1 reply; 63+ messages in thread
From: Linus Walleij @ 2017-03-09 22:39 UTC (permalink / raw)
  To: Adrian Hunter, linux-block, Paolo Valente, Jens Axboe
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Mar 3, 2017 at 1:22 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> On 15/02/17 15:49, Linus Walleij wrote:
>> On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>
>>> Enhance mmc_blk_data_prep() to support CQE requests.
>>>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>
>> Hey:
>>
>>> +#include <linux/ioprio.h>
>> (...)
>>> +       if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
>>> +               brq->data.flags |= MMC_DATA_PRIO;
>>
>> What is this?
>
> It is the command queue priority.
>
> The command queue supports 2 priorities: "high" (1) and "simple" (0).  The
> eMMC will give "high" priority tasks priority over "simple" priority tasks.
>
> So here we give priority to IOPRIO_CLASS_RT which seems appropriate.

So if I understand correctly, you are obtaining the block layer scheduling
priorities, which can (only?) be set by ionice from the command line?

We need to discuss this with the block maintainers.

I'm not so sure about the future of this. The IOPRIO is only used with the CFQ
scheduler, only two other sites in the kernel use it, and MQ and its schedulers
surely do not have ionice handling as far as I know.

The BFQ does not use it; AFAIK it is using different heuristics to prioritize
block traffic, and that does not include using ionice hints.

Is ionice:ing something we're really going to do going forward?
Should this be repurposed so that the block scheduler can use this prio to
communicate to the driver layer to prioritize certain traffic?

Yours,
Linus Walleij


* Re: [PATCH RFC 22/39] mmc: block: Prepare CQE data
  2017-03-09 22:39       ` Linus Walleij
@ 2017-03-10  8:29         ` Adrian Hunter
  2017-03-28  7:57           ` Linus Walleij
  0 siblings, 1 reply; 63+ messages in thread
From: Adrian Hunter @ 2017-03-10  8:29 UTC (permalink / raw)
  To: Linus Walleij, linux-block, Paolo Valente, Jens Axboe
  Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On 10/03/17 00:39, Linus Walleij wrote:
> On Fri, Mar 3, 2017 at 1:22 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>> On 15/02/17 15:49, Linus Walleij wrote:
>>> On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>>
>>>> Enhance mmc_blk_data_prep() to support CQE requests.
>>>>
>>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>>
>>> Hey:
>>>
>>>> +#include <linux/ioprio.h>
>>> (...)
>>>> +       if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
>>>> +               brq->data.flags |= MMC_DATA_PRIO;
>>>
>>> What is this?
>>
>> It is the command queue priority.
>>
>> The command queue supports 2 priorities: "high" (1) and "simple" (0).  The
>> eMMC will give "high" priority tasks priority over "simple" priority tasks.
>>
>> So here we give priority to IOPRIO_CLASS_RT which seems appropriate.
> 
> So if I understand correctly, you are obtaining the block layer scheduling
> priorities, which can (only?) be set by ionice from the command line?

AFAICS it is the ioprio_set() system call.
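
A minimal userspace sketch (there is no glibc wrapper, so it goes via
syscall(2); constants as in include/linux/ioprio.h):

	#include <sys/syscall.h>
	#include <unistd.h>

	#define IOPRIO_WHO_PROCESS	1
	#define IOPRIO_CLASS_RT		1
	#define IOPRIO_CLASS_SHIFT	13

	/* mark the calling process's I/O as realtime, level 0 */
	syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		(IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) | 0);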

> 
> We need to discuss this with the block maintainers.
> 
> I'm not so sure about the future of this. The IOPRIO is only used with the CFQ
> scheduler, only two other sites in the kernel use this and MQ and its schedulers
> surely does not have ionice handling as far as I know.
> 
> The BFQ does not use it, AFAIK it is using different heuristics to prioritize
> block traffic, and that does not include using ionice hints.
> 
> Is ionice:ing something we're really going to do going forward?
> Should this be repurposed so that the block scheduler can use this prio to
> communicate to the driver layer to prioritize certain traffic?

That seems like a separate issue.  At the moment, I/O priorities are what we
have, and giving priority to RT seems appropriate.


* Re: [PATCH RFC 22/39] mmc: block: Prepare CQE data
  2017-03-10  8:29         ` Adrian Hunter
@ 2017-03-28  7:57           ` Linus Walleij
  0 siblings, 0 replies; 63+ messages in thread
From: Linus Walleij @ 2017-03-28  7:57 UTC (permalink / raw)
  To: Adrian Hunter, Paolo Valente, Jens Axboe
  Cc: linux-block, Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
	Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
	Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
	Harjani Ritesh, Venu Byravarasu

On Fri, Mar 10, 2017 at 9:29 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> On 10/03/17 00:39, Linus Walleij wrote:
>> On Fri, Mar 3, 2017 at 1:22 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>> On 15/02/17 15:49, Linus Walleij wrote:
>>>> On Fri, Feb 10, 2017 at 1:55 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>>>
>>>>> Enhance mmc_blk_data_prep() to support CQE requests.
>>>>>
>>>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>>>
>>>> Hey:
>>>>
>>>>> +#include <linux/ioprio.h>
>>>> (...)
>>>>> +       if (IOPRIO_PRIO_CLASS(req_get_ioprio(req)) == IOPRIO_CLASS_RT)
>>>>> +               brq->data.flags |= MMC_DATA_PRIO;
>>>>
>>>> What is this?
>>>
>>> It is the command queue priority.
>>>
>>> The command queue supports 2 priorities: "high" (1) and "simple" (0).  The
>>> eMMC will give "high" priority tasks priority over "simple" priority tasks.
>>>
>>> So here we give priority to IOPRIO_CLASS_RT which seems appropriate.
>>
>> So if I understand correctly, you are obtaining the block layer scheduling
>> priorities, which can (only?) be set by ionice from the command line?
>
> AFAICS it is the ioprio_set() system call .

OK, to be clear: correct, it is set by that system call.

And as far as I can tell there is exactly one program in the world that issues
that syscall, and that is ionice(1).

It is described to the world as a way for users to give hints to the
I/O scheduler about how to manage requests. For example, a user can hint
that some IO is a backup job and not important, and some IO is a media
player and really important.

And since the block layer only handles this hint in the legacy blk CFQ
scheduler, I want to know from the block layer developers if this priority
has any future. BFQ does not use this hint at all, I think Paolo described
the ionice command as problematic, and we need to understand if
user-provided IO scheduling hints are something that is going to be used
in the future.

Yours,
Linus Walleij


end of thread

Thread overview: 63+ messages
2017-02-10 12:55 [PATCH RFC 00/39] mmc: Add Command Queue support Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 01/39] mmc: block: Use local var for mqrq_cur Adrian Hunter
2017-02-15 12:29   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 02/39] mmc: queue: Share mmc request array between partitions Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 03/39] mmc: block: Introduce queue semantics Adrian Hunter
2017-02-15 12:29   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 04/39] mmc: core: Do not prepare a new request twice Adrian Hunter
2017-02-15 12:49   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 05/39] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
2017-02-15 12:52   ` Linus Walleij
2017-02-17 12:21     ` Ulf Hansson
2017-02-23 14:54       ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 06/39] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 07/39] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 08/39] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 09/39] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
2017-02-15 13:07   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 10/39] mmc: block: Change mmc_apply_rel_rw() to get block address from the request Adrian Hunter
2017-02-15 13:09   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 11/39] mmc: block: Factor out data preparation Adrian Hunter
2017-02-15 13:11   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 12/39] mmc: block: Add Software Command Queuing Adrian Hunter
2017-02-15 13:34   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 13/39] mmc: mmc: Enable " Adrian Hunter
2017-02-15 13:35   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 14/39] mmc: core: Factor out debug prints from mmc_start_request() Adrian Hunter
2017-02-15 13:38   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 15/39] mmc: core: Factor out mrq preparation " Adrian Hunter
2017-02-15 13:39   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 16/39] mmc: core: Add mmc_retune_hold_now() Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 17/39] mmc: core: Add members to mmc_request and mmc_data for CQE's Adrian Hunter
2017-02-15 13:42   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 18/39] mmc: host: Add CQE interface Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 19/39] mmc: core: Turn off CQE before sending commands Adrian Hunter
2017-02-15 13:42   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 20/39] mmc: core: Add support for handling CQE requests Adrian Hunter
2017-02-15 13:44   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 21/39] mmc: mmc: Enable CQE's Adrian Hunter
2017-02-15 13:45   ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 22/39] mmc: block: Prepare CQE data Adrian Hunter
2017-02-15 13:49   ` Linus Walleij
2017-03-03 12:22     ` Adrian Hunter
2017-03-09 22:39       ` Linus Walleij
2017-03-10  8:29         ` Adrian Hunter
2017-03-28  7:57           ` Linus Walleij
2017-02-10 12:55 ` [PATCH RFC 23/39] mmc: block: Add CQE support Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 24/39] mmc: cqhci: support for command queue enabled host Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 25/39] mmc: sdhci: Improve debug print format Adrian Hunter
2017-02-17 12:30   ` Ulf Hansson
2017-02-10 12:55 ` [PATCH RFC 26/39] mmc: sdhci: Add response register to register dump Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 27/39] mmc: sdhci: Improve register dump print format Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 28/39] mmc: sdhci: Export sdhci_dumpregs Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 29/39] mmc: sdhci: Get rid of 'extern' in header file Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 30/39] mmc: sdhci: Add sdhci_cleanup_host Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 31/39] mmc: sdhci: Factor out sdhci_set_default_irqs Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 32/39] mmc: sdhci: Add CQE support Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 33/39] mmc: sdhci-pci: Let devices define how to add the host Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 34/39] mmc: sdhci-pci: Do not use suspend/resume callbacks with runtime pm Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 35/39] mmc: sdhci-pci: Conditionally compile pm sleep functions Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 36/39] mmc: sdhci-pci: Let suspend/resume callbacks replace default callbacks Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 37/39] mmc: sdhci-pci: Add runtime suspend/resume callbacks Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 38/39] mmc: sdhci-pci: Move a function to avoid later forward declaration Adrian Hunter
2017-02-10 12:55 ` [PATCH RFC 39/39] mmc: sdhci-pci: Add CQHCI support for Intel GLK Adrian Hunter
