linux-kernel.vger.kernel.org archive mirror
* [PATCH PoC 0/7] mmc: switch to blk-mq
@ 2016-09-22 13:57 Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 1/7] mmc-mq: add debug printks Bartlomiej Zolnierkiewicz
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Hi,

Since Linus Walleij is also working on this and I
probably won't have time to touch this code until the
end of the upcoming month, here it is (basically a code
dump of my proof-of-concept work).  I hope it will be
useful to somebody.

It is extremely ugly and full of bogus debug code, but
it boots fine on my Odroid-XU3 and benchmarks can be run.

The patchset is based on top of the patches up to
"[PATCH v3 24/30] mmc: block: Introduce queue semantics"
from the "[PATCH V3 00/30] mmc: mmc: Add Software
Command Queuing" series by Adrian Hunter:

  http://www.spinics.net/lists/linux-mmc/msg38013.html

[ It was commit f3ba397441825f99edb4833a5f52dd2355d253c7
  in the swcmdq branch from:

  http://git.infradead.org/users/ahunter/linux-sdhci.git

  Unfortunately, it was later rebased for V4 -- my
  patchset probably still applies and works fine, but I
  have not tested this yet. ]

PS Linus, I got async request support working -- see
patch #7.  You may be able to use this method, depending
on the hackiness level of your patches. ;)
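
[ A note for readers not familiar with blk-mq: the conversion
  itself lands in patch #6, whose diff is not included in this
  excerpt.  As a rough sketch only -- written against the
  ~4.8-era block API this series targets, with made-up names
  (mmc_mq_ops, mmc_mq_queue_rq, mq->tag_set) that are not taken
  from the patch -- a minimal blk-mq conversion has this shape:

	static int mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
	{
		struct mmc_queue *mq = hctx->queue->queuedata;

		blk_mq_start_request(bd->rq);
		/* hand bd->rq over to mq's issue path here */
		return BLK_MQ_RQ_QUEUE_OK;
	}

	static struct blk_mq_ops mmc_mq_ops = {
		.queue_rq	= mmc_mq_queue_rq,
	};

	/* at queue-init time, replacing blk_init_queue(): */
	mq->tag_set.ops = &mmc_mq_ops;
	mq->tag_set.nr_hw_queues = 1;
	mq->tag_set.queue_depth = 2;
	mq->tag_set.numa_node = NUMA_NO_NODE;
	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
	ret = blk_mq_alloc_tag_set(&mq->tag_set);
	if (!ret)
		mq->queue = blk_mq_init_queue(&mq->tag_set); ]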


Initial benchmark results:

   root@target:~# time dd if=/dev/mmcblk0 of=/dev/null bs=4k

   vanilla - performance governor - 1300MHz/1800MHz
   15758000128 bytes (16 GB) copied, 69.8334 s, 226 MB/s

   blk-mq - performance governor - 1300MHz/1800MHz
   15758000128 bytes (16 GB) copied, 79.0868 s, 199 MB/s

   vanilla - performance governor - 200MHz/200MHz
   15758000128 bytes (16 GB) copied, 228.014 s, 69.1 MB/s

   blk-mq - performance governor - 200MHz/200MHz
   15758000128 bytes (16 GB) copied, 208.536 s, 75.6 MB/s

   vanilla - on-demand governor - 1300MHz/1800MHz
   15758000128 bytes (16 GB) copied, 77.0968 s, 204 MB/s

   blk-mq - on-demand governor - 1300MHz/1800MHz
   15758000128 bytes (16 GB) copied, 96.351 s, 164 MB/s


   root@target:~# time dd if=/dev/mmcblk0 of=/dev/null

   vanilla - on-demand governor - 1300MHz/1800MHz
   15758000128 bytes (16 GB) copied, 155.149 s, 102 MB/s

   blk-mq - on-demand governor - 1300MHz/1800MHz
   15758000128 bytes (16 GB) copied, 106.122 s, 148 MB/s
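
   [ Sanity check on the units: the MB/s column is just
     bytes/seconds as printed by dd, e.g.
     15758000128 B / 69.8334 s ~= 225.6 MB/s, which dd
     reports as 226 MB/s. ]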


Bartlomiej Zolnierkiewicz (7):
  mmc-mq: add debug printks
  mmc-mq: remove async requests support
  mmc-mq: request completion fixes
  mmc-mq: implement checking for queue busy condition
  mmc-mq: remove some debug printks
  mmc-mq: initial blk-mq support
  mmc-mq: async request support for blk-mq mode

 drivers/mmc/card/block.c    | 174 +++++++++++++-------------
 drivers/mmc/card/mmc_test.c |   8 +-
 drivers/mmc/card/queue.c    | 262 ++++++++++++++++++++++++++------------
 drivers/mmc/card/queue.h    |  50 +++++++-
 drivers/mmc/core/bus.c      |   2 -
 drivers/mmc/core/core.c     | 299 +++++++++++++++++++++++++++++---------------
 drivers/mmc/core/core.h     |   2 -
 drivers/mmc/core/mmc_ops.c  |   9 ++
 drivers/mmc/host/dw_mmc.c   |   3 +-
 include/linux/mmc/card.h    |   1 -
 include/linux/mmc/core.h    |   6 +-
 include/linux/mmc/host.h    |  15 ---
 12 files changed, 529 insertions(+), 302 deletions(-)

-- 
1.9.1


* [PATCH PoC 1/7] mmc-mq: add debug printks
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 2/7] mmc-mq: remove async requests support Bartlomiej Zolnierkiewicz
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/block.c | 18 ++++++++++++++++--
 drivers/mmc/core/core.c  | 32 ++++++++++++++++++++++++++++++--
 2 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 7d733d0..ce56930 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1995,6 +1995,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 	const u8 packed_nr = 2;
 	u8 reqs = 0;
 
+	pr_info("%s: enter\n", __func__);
 	if (rqc) {
 		mqrq_cur = mmc_queue_req_find(mq, rqc);
 		if (!mqrq_cur) {
@@ -2004,8 +2005,10 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 		}
 	}
 
-	if (!mq->qcnt)
+	if (!mq->qcnt) {
+		pr_info("%s: exit (0) (!mq->qcnt)\n", __func__);
 		return 0;
+	}
 
 	if (mqrq_cur)
 		reqs = mmc_blk_prep_packed_list(mq, mqrq_cur);
@@ -2035,8 +2038,10 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 		} else
 			areq = NULL;
 		areq = mmc_start_req(card->host, areq, (int *) &status);
-		if (!areq)
+		if (!areq) {
+			pr_info("%s: exit (0) (!areq)\n", __func__);
 			return 0;
+		}
 
 		mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
 		brq = &mq_rq->brq;
@@ -2150,6 +2155,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 
 	mmc_queue_req_free(mq, mq_rq);
 
+	pr_info("%s: exit (1)\n", __func__);
 	return 1;
 
  cmd_abort:
@@ -2184,6 +2190,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 
 	mmc_queue_req_free(mq, mq_rq);
 
+	pr_info("%s: exit (0)\n", __func__);
 	return 0;
 }
 
@@ -2194,10 +2201,13 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	struct mmc_card *card = md->queue.card;
 	unsigned int cmd_flags = req ? req->cmd_flags : 0;
 
+	pr_info("%s: enter\n", __func__);
+
 	if (req && !mq->qcnt)
 		/* claim host only for the first request */
 		mmc_get_card(card);
 
+	pr_info("%s: mmc_blk_part_switch\n", __func__);
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
 		if (req) {
@@ -2208,6 +2218,7 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	}
 
 	if (cmd_flags & REQ_DISCARD) {
+		pr_info("%s: DISCARD rq\n", __func__);
 		/* complete ongoing async transfer before issuing discard */
 		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
@@ -2216,11 +2227,13 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		else
 			ret = mmc_blk_issue_discard_rq(mq, req);
 	} else if (cmd_flags & REQ_FLUSH) {
+		pr_info("%s: FLUSH rq\n", __func__);
 		/* complete ongoing async transfer before issuing flush */
 		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
 		ret = mmc_blk_issue_flush(mq, req);
 	} else {
+		pr_info("%s: RW rq\n", __func__);
 		ret = mmc_blk_issue_rw_rq(mq, req);
 	}
 
@@ -2228,6 +2241,7 @@ out:
 	/* Release host when there are no more requests */
 	if (!mq->qcnt)
 		mmc_put_card(card);
+	pr_info("%s: exit\n", __func__);
 	return ret;
 }
 
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 9be42691..d2d8d9b 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -219,6 +219,8 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 {
 	int err;
 
+	pr_info("%s: enter\n", __func__);
+
 	/* Assumes host controller has been runtime resumed by mmc_claim_host */
 	err = mmc_retune(host);
 	if (err) {
@@ -256,6 +258,8 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	trace_mmc_request_start(host, mrq);
 
 	host->ops->request(host, mrq);
+
+	pr_info("%s: exit\n", __func__);
 }
 
 static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
@@ -264,6 +268,7 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	unsigned int i, sz;
 	struct scatterlist *sg;
 #endif
+	pr_info("%s: enter\n", __func__);
 	mmc_retune_hold(host);
 
 	if (mmc_card_removed(host->card))
@@ -327,6 +332,7 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	led_trigger_event(host->led, LED_FULL);
 	__mmc_start_request(host, mrq);
 
+	pr_info("%s: exit\n", __func__);
 	return 0;
 }
 
@@ -466,6 +472,8 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 {
 	int err;
 
+	pr_info("%s: enter\n", __func__);
+
 	mmc_wait_ongoing_tfr_cmd(host);
 
 	init_completion(&mrq->completion);
@@ -480,6 +488,8 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 		complete(&mrq->completion);
 	}
 
+	pr_info("%s: exit\n", __func__);
+
 	return err;
 }
 
@@ -502,10 +512,14 @@ static int mmc_wait_for_data_req_done(struct mmc_host *host,
 	struct mmc_context_info *context_info = &host->context_info;
 	int err;
 
+	pr_info("%s: enter\n", __func__);
+
 	while (1) {
 		wait_event_interruptible(context_info->wait,
+//				context_info->is_done_rcv);
 				(context_info->is_done_rcv ||
 				 context_info->is_new_req));
+		pr_info("%s: waiting done\n", __func__);
 		context_info->is_waiting_last_req = false;
 		if (context_info->is_done_rcv) {
 			context_info->is_done_rcv = false;
@@ -527,11 +541,14 @@ static int mmc_wait_for_data_req_done(struct mmc_host *host,
 				continue; /* wait for done/new event again */
 			}
 		} else if (context_info->is_new_req) {
-			if (!next_req)
+			if (!next_req) {
+				pr_info("%s: exit (!next_req)\n", __func__);
 				return MMC_BLK_NEW_REQUEST;
+			}
 		}
 	}
 	mmc_retune_release(host);
+	pr_info("%s: exit (err=%d)\n", __func__, err);
 	return err;
 }
 
@@ -539,8 +556,11 @@ void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq)
 {
 	struct mmc_command *cmd;
 
+	pr_info("%s: enter\n", __func__);
+
 	while (1) {
 		wait_for_completion(&mrq->completion);
+		pr_info("%s: waiting done\n", __func__);
 
 		cmd = mrq->cmd;
 
@@ -567,7 +587,7 @@ void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq)
 
 		mmc_retune_recheck(host);
 
-		pr_debug("%s: req failed (CMD%u): %d, retrying...\n",
+		pr_info("%s: req failed (CMD%u): %d, retrying...\n",
 			 mmc_hostname(host), cmd->opcode, cmd->error);
 		cmd->retries--;
 		cmd->error = 0;
@@ -575,6 +595,8 @@ void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq)
 	}
 
 	mmc_retune_release(host);
+
+	pr_info("%s: exit\n", __func__);
 }
 EXPORT_SYMBOL(mmc_wait_for_req_done);
 
@@ -656,6 +678,10 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 	int start_err = 0;
 	struct mmc_async_req *data = host->areq;
 
+	pr_info("%s: enter\n", __func__);
+
+	pr_info("%s: areq=%p host->areq=%p\n", __func__, areq, host->areq);
+
 	/* Prepare a new request */
 	if (areq && !areq->pre_req_done) {
 		areq->pre_req_done = true;
@@ -671,6 +697,7 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 			 * The previous request was not completed,
 			 * nothing to return
 			 */
+			pr_info("%s: exit (NULL)\n", __func__);
 			return NULL;
 		}
 		/*
@@ -714,6 +741,7 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 
 	if (error)
 		*error = err;
+	pr_info("%s: exit (data=%p)\n", __func__, data);
 	return data;
 }
 EXPORT_SYMBOL(mmc_start_req);
-- 
1.9.1


* [PATCH PoC 2/7] mmc-mq: remove async requests support
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 1/7] mmc-mq: add debug printks Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 3/7] mmc-mq: request completion fixes Bartlomiej Zolnierkiewicz
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/block.c |  62 ++++++++++++--------------
 drivers/mmc/card/queue.c | 113 +++++++++++++++--------------------------------
 drivers/mmc/card/queue.h |   5 +--
 drivers/mmc/core/bus.c   |   2 -
 drivers/mmc/core/core.c  | 110 +++++++++------------------------------------
 drivers/mmc/core/core.h  |   2 -
 include/linux/mmc/card.h |   1 -
 include/linux/mmc/host.h |  15 -------
 8 files changed, 85 insertions(+), 225 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index ce56930..1d4a09f 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1192,7 +1192,7 @@ int mmc_access_rpmb(struct mmc_queue *mq)
 	return false;
 }
 
-static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req, struct mmc_queue_req *mqrq)
 {
 	struct mmc_blk_data *md = mq->data;
 	struct mmc_card *card = md->queue.card;
@@ -1230,13 +1230,14 @@ out:
 		goto retry;
 	if (!err)
 		mmc_blk_reset_success(md, type);
+	mmc_queue_req_free(mq, mqrq);
 	blk_end_request(req, err, blk_rq_bytes(req));
 
 	return err ? 0 : 1;
 }
 
 static int mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
-				       struct request *req)
+				       struct request *req, struct mmc_queue_req *mqrq)
 {
 	struct mmc_blk_data *md = mq->data;
 	struct mmc_card *card = md->queue.card;
@@ -1297,12 +1298,13 @@ out_retry:
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
+	mmc_queue_req_free(mq, mqrq);
 	blk_end_request(req, err, blk_rq_bytes(req));
 
 	return err ? 0 : 1;
 }
 
-static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req, struct mmc_queue_req *mqrq)
 {
 	struct mmc_blk_data *md = mq->data;
 	struct mmc_card *card = md->queue.card;
@@ -1312,6 +1314,7 @@ static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
 	if (ret)
 		ret = -EIO;
 
+	mmc_queue_req_free(mq, mqrq);
 	blk_end_request_all(req, ret);
 
 	return ret ? 0 : 1;
@@ -1918,6 +1921,7 @@ static int mmc_blk_end_packed_req(struct mmc_queue_req *mq_rq)
 	int idx = packed->idx_failure, i = 0;
 	int ret = 0;
 
+	BUG();
 	BUG_ON(!packed);
 
 	while (!list_empty(&packed->list)) {
@@ -1981,7 +1985,7 @@ static void mmc_blk_revert_packed_req(struct mmc_queue *mq,
 	mmc_blk_clear_packed(mq_rq);
 }
 
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
+static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 {
 	struct mmc_blk_data *md = mq->data;
 	struct mmc_card *card = md->queue.card;
@@ -1990,20 +1994,14 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 	enum mmc_blk_status status;
 	struct mmc_queue_req *mqrq_cur = NULL;
 	struct mmc_queue_req *mq_rq;
-	struct request *req;
+	struct request *rqc = NULL, *req;
 	struct mmc_async_req *areq;
 	const u8 packed_nr = 2;
 	u8 reqs = 0;
 
 	pr_info("%s: enter\n", __func__);
-	if (rqc) {
-		mqrq_cur = mmc_queue_req_find(mq, rqc);
-		if (!mqrq_cur) {
-			WARN_ON(1);
-			mmc_blk_requeue(mq->queue, rqc);
-			rqc = NULL;
-		}
-	}
+	mqrq_cur = mqrq;
+	rqc = mqrq_cur->req;
 
 	if (!mq->qcnt) {
 		pr_info("%s: exit (0) (!mq->qcnt)\n", __func__);
@@ -2059,10 +2057,14 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 
 			if (mmc_packed_cmd(mq_rq->cmd_type)) {
 				ret = mmc_blk_end_packed_req(mq_rq);
+				mmc_queue_req_free(mq, mq_rq); //
+				pr_info("%s: freeing mqrq (packed)\n", __func__); //
 				break;
 			} else {
-				ret = blk_end_request(req, 0,
-						brq->data.bytes_xfered);
+				int bytes = brq->data.bytes_xfered;
+				mmc_queue_req_free(mq, mq_rq); //
+				pr_info("%s: freeing mqrq\n", __func__); //
+				ret = blk_end_request(req, 0, bytes);
 			}
 
 			/*
@@ -2153,9 +2155,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 		}
 	} while (ret);
 
-	mmc_queue_req_free(mq, mq_rq);
-
-	pr_info("%s: exit (1)\n", __func__);
+	pr_info("%s: exit (1==ok)\n", __func__);
 	return 1;
 
  cmd_abort:
@@ -2194,7 +2194,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 	return 0;
 }
 
-static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mmc_queue_req *mqrq)
 {
 	int ret;
 	struct mmc_blk_data *md = mq->data;
@@ -2203,9 +2203,10 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 
 	pr_info("%s: enter\n", __func__);
 
-	if (req && !mq->qcnt)
-		/* claim host only for the first request */
-		mmc_get_card(card);
+	BUG_ON(!req);
+
+	/* claim host only for the first request */
+	mmc_get_card(card);
 
 	pr_info("%s: mmc_blk_part_switch\n", __func__);
 	ret = mmc_blk_part_switch(card, md);
@@ -2219,28 +2220,21 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 
 	if (cmd_flags & REQ_DISCARD) {
 		pr_info("%s: DISCARD rq\n", __func__);
-		/* complete ongoing async transfer before issuing discard */
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
 		if (req->cmd_flags & REQ_SECURE)
-			ret = mmc_blk_issue_secdiscard_rq(mq, req);
+			ret = mmc_blk_issue_secdiscard_rq(mq, req, mqrq);
 		else
-			ret = mmc_blk_issue_discard_rq(mq, req);
+			ret = mmc_blk_issue_discard_rq(mq, req, mqrq);
 	} else if (cmd_flags & REQ_FLUSH) {
 		pr_info("%s: FLUSH rq\n", __func__);
-		/* complete ongoing async transfer before issuing flush */
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		ret = mmc_blk_issue_flush(mq, req);
+		ret = mmc_blk_issue_flush(mq, req, mqrq);
 	} else {
 		pr_info("%s: RW rq\n", __func__);
-		ret = mmc_blk_issue_rw_rq(mq, req);
+		ret = mmc_blk_issue_rw_rq(mq, mqrq);
 	}
 
 out:
 	/* Release host when there are no more requests */
-	if (!mq->qcnt)
-		mmc_put_card(card);
+	mmc_put_card(card);
 	pr_info("%s: exit\n", __func__);
 	return ret;
 }
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 5a016ce..e9c9bbf 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -52,15 +52,22 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 	struct mmc_queue_req *mqrq;
 	int i = ffz(mq->qslots);
 
+	pr_info("%s: enter (%d)\n", __func__, i);
+
+	WARN_ON(i >= mq->qdepth);
 	if (i >= mq->qdepth)
 		return NULL;
 
+////	spin_lock_irq(req->q->queue_lock);
 	mqrq = &mq->mqrq[i];
 	WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
 		test_bit(mqrq->task_id, &mq->qslots));
 	mqrq->req = req;
 	mq->qcnt += 1;
 	__set_bit(mqrq->task_id, &mq->qslots);
+////	spin_unlock_irq(req->q->queue_lock);
+
+	pr_info("%s: exit\n", __func__);
 
 	return mqrq;
 }
@@ -68,60 +75,17 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 void mmc_queue_req_free(struct mmc_queue *mq,
 			struct mmc_queue_req *mqrq)
 {
+	struct request *req;
+	pr_info("%s: enter\n", __func__);
+	req = mqrq->req;
+	spin_lock_irq(req->q->queue_lock);
 	WARN_ON(!mqrq->req || mq->qcnt < 1 ||
 		!test_bit(mqrq->task_id, &mq->qslots));
 	mqrq->req = NULL;
 	mq->qcnt -= 1;
 	__clear_bit(mqrq->task_id, &mq->qslots);
-}
-
-static int mmc_queue_thread(void *d)
-{
-	struct mmc_queue *mq = d;
-	struct request_queue *q = mq->queue;
-	struct mmc_context_info *cntx = &mq->card->host->context_info;
-
-	current->flags |= PF_MEMALLOC;
-
-	down(&mq->thread_sem);
-	do {
-		struct request *req;
-
-		spin_lock_irq(q->queue_lock);
-		set_current_state(TASK_INTERRUPTIBLE);
-		req = blk_fetch_request(q);
-		mq->asleep = false;
-		cntx->is_waiting_last_req = false;
-		cntx->is_new_req = false;
-		if (!req) {
-			/*
-			 * Dispatch queue is empty so set flags for
-			 * mmc_request_fn() to wake us up.
-			 */
-			if (mq->qcnt)
-				cntx->is_waiting_last_req = true;
-			else
-				mq->asleep = true;
-		}
-		spin_unlock_irq(q->queue_lock);
-
-		if (req || mq->qcnt) {
-			set_current_state(TASK_RUNNING);
-			mq->issue_fn(mq, req);
-			cond_resched();
-		} else {
-			if (kthread_should_stop()) {
-				set_current_state(TASK_RUNNING);
-				break;
-			}
-			up(&mq->thread_sem);
-			schedule();
-			down(&mq->thread_sem);
-		}
-	} while (1);
-	up(&mq->thread_sem);
-
-	return 0;
+	spin_unlock_irq(req->q->queue_lock);
+	pr_info("%s: exit\n", __func__);
 }
 
 /*
@@ -134,7 +98,7 @@ static void mmc_request_fn(struct request_queue *q)
 {
 	struct mmc_queue *mq = q->queuedata;
 	struct request *req;
-	struct mmc_context_info *cntx;
+	struct mmc_queue_req *mqrq_cur = NULL;
 
 	if (!mq) {
 		while ((req = blk_fetch_request(q)) != NULL) {
@@ -143,16 +107,28 @@ static void mmc_request_fn(struct request_queue *q)
 		}
 		return;
 	}
-
-	cntx = &mq->card->host->context_info;
-
-	if (cntx->is_waiting_last_req) {
-		cntx->is_new_req = true;
-		wake_up_interruptible(&cntx->wait);
+repeat:
+	req = blk_fetch_request(q);
+	if (req && req->cmd_type == REQ_TYPE_FS) {
+		mqrq_cur = mmc_queue_req_find(mq, req);
+		if (!mqrq_cur) {
+			pr_info("%s: command already queued (%d)\n", __func__, mq->qcnt);
+//			WARN_ON(1);
+//			spin_unlock_irq(q->queue_lock);
+			blk_requeue_request(mq->queue, req);
+//			spin_lock_irq(q->queue_lock);
+			req = NULL;
+		}
 	}
-
-	if (mq->asleep)
-		wake_up_process(mq->thread);
+	if (!req) {
+		pr_info("%s: no request\n", __func__);
+		return;
+	}
+	spin_unlock_irq(q->queue_lock);
+	mq->issue_fn(mq, req, mqrq_cur);
+	spin_lock_irq(q->queue_lock);
+	goto repeat;
+//#endif
 }
 
 static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
@@ -305,7 +281,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (!mq->queue)
 		return -ENOMEM;
 
-	mq->qdepth = 2;
+	mq->qdepth = 1;
 	mq->mqrq = mmc_queue_alloc_mqrqs(mq, mq->qdepth);
 	if (!mq->mqrq)
 		goto blk_cleanup;
@@ -357,16 +333,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 			goto cleanup_queue;
 	}
 
-	sema_init(&mq->thread_sem, 1);
-
-	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
-		host->index, subname ? subname : "");
-
-	if (IS_ERR(mq->thread)) {
-		ret = PTR_ERR(mq->thread);
-		goto cleanup_queue;
-	}
-
 	return 0;
 
  cleanup_queue:
@@ -386,9 +352,6 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	/* Make sure the queue isn't suspended, as that will deadlock */
 	mmc_queue_resume(mq);
 
-	/* Then terminate our worker thread */
-	kthread_stop(mq->thread);
-
 	/* Empty the queue */
 	spin_lock_irqsave(q->queue_lock, flags);
 	q->queuedata = NULL;
@@ -468,8 +431,6 @@ void mmc_queue_suspend(struct mmc_queue *mq)
 		spin_lock_irqsave(q->queue_lock, flags);
 		blk_stop_queue(q);
 		spin_unlock_irqrestore(q->queue_lock, flags);
-
-		down(&mq->thread_sem);
 	}
 }
 
@@ -485,8 +446,6 @@ void mmc_queue_resume(struct mmc_queue *mq)
 	if (mq->flags & MMC_QUEUE_SUSPENDED) {
 		mq->flags &= ~MMC_QUEUE_SUSPENDED;
 
-		up(&mq->thread_sem);
-
 		spin_lock_irqsave(q->queue_lock, flags);
 		blk_start_queue(q);
 		spin_unlock_irqrestore(q->queue_lock, flags);
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 1afd5da..c52fa88 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -47,13 +47,10 @@ struct mmc_queue_req {
 
 struct mmc_queue {
 	struct mmc_card		*card;
-	struct task_struct	*thread;
-	struct semaphore	thread_sem;
 	unsigned int		flags;
 #define MMC_QUEUE_SUSPENDED	(1 << 0)
-	bool			asleep;
 
-	int			(*issue_fn)(struct mmc_queue *, struct request *);
+	int			(*issue_fn)(struct mmc_queue *, struct request *, struct mmc_queue_req *mqrq);
 	void			*data;
 	struct request_queue	*queue;
 	struct mmc_queue_req	*mqrq;
diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index c64266f..949e569 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -346,8 +346,6 @@ int mmc_add_card(struct mmc_card *card)
 #ifdef CONFIG_DEBUG_FS
 	mmc_add_card_debugfs(card);
 #endif
-	mmc_init_context_info(card->host);
-
 	card->dev.of_node = mmc_of_find_child_device(card->host, 0);
 
 	device_enable_async_suspend(&card->dev);
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index d2d8d9b..7496c22 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -29,6 +29,7 @@
 #include <linux/random.h>
 #include <linux/slab.h>
 #include <linux/of.h>
+#include <linux/kernel.h>
 
 #include <linux/mmc/card.h>
 #include <linux/mmc/host.h>
@@ -44,6 +45,7 @@
 #include "host.h"
 #include "sdio_bus.h"
 #include "pwrseq.h"
+#include "../card/queue.h"
 
 #include "mmc_ops.h"
 #include "sd_ops.h"
@@ -407,19 +409,11 @@ out:
 EXPORT_SYMBOL(mmc_start_bkops);
 
 /*
- * mmc_wait_data_done() - done callback for data request
- * @mrq: done data request
+ * mmc_wait_done() - done callback for request
+ * @mrq: done request
  *
  * Wakes up mmc context, passed as a callback to host controller driver
  */
-static void mmc_wait_data_done(struct mmc_request *mrq)
-{
-	struct mmc_context_info *context_info = &mrq->host->context_info;
-
-	context_info->is_done_rcv = true;
-	wake_up_interruptible(&context_info->wait);
-}
-
 static void mmc_wait_done(struct mmc_request *mrq)
 {
 	complete(&mrq->completion);
@@ -438,36 +432,15 @@ static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
 }
 
 /*
- *__mmc_start_data_req() - starts data request
+ *__mmc_start_req() - starts request
  * @host: MMC host to start the request
- * @mrq: data request to start
+ * @mrq: request to start
  *
  * Sets the done callback to be called when request is completed by the card.
- * Starts data mmc request execution
+ * Starts mmc request execution
  * If an ongoing transfer is already in progress, wait for the command line
  * to become available before sending another command.
  */
-static int __mmc_start_data_req(struct mmc_host *host, struct mmc_request *mrq)
-{
-	int err;
-
-	mmc_wait_ongoing_tfr_cmd(host);
-
-	mrq->done = mmc_wait_data_done;
-	mrq->host = host;
-
-	init_completion(&mrq->cmd_completion);
-
-	err = mmc_start_request(host, mrq);
-	if (err) {
-		mrq->cmd->error = err;
-		mmc_complete_cmd(mrq);
-		mmc_wait_data_done(mrq);
-	}
-
-	return err;
-}
-
 static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 {
 	int err;
@@ -478,6 +451,7 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 
 	init_completion(&mrq->completion);
 	mrq->done = mmc_wait_done;
+	mrq->host = host;
 
 	init_completion(&mrq->cmd_completion);
 
@@ -485,7 +459,7 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 	if (err) {
 		mrq->cmd->error = err;
 		mmc_complete_cmd(mrq);
-		complete(&mrq->completion);
+		mmc_wait_done(mrq);
 	}
 
 	pr_info("%s: exit\n", __func__);
@@ -508,21 +482,17 @@ static int mmc_wait_for_data_req_done(struct mmc_host *host,
 				      struct mmc_request *mrq,
 				      struct mmc_async_req *next_req)
 {
+	struct mmc_queue_req *mq_mrq = container_of(next_req, struct mmc_queue_req,
+						    mmc_active);
 	struct mmc_command *cmd;
-	struct mmc_context_info *context_info = &host->context_info;
 	int err;
 
 	pr_info("%s: enter\n", __func__);
 
 	while (1) {
-		wait_event_interruptible(context_info->wait,
-//				context_info->is_done_rcv);
-				(context_info->is_done_rcv ||
-				 context_info->is_new_req));
+		wait_for_completion(&mrq->completion);
 		pr_info("%s: waiting done\n", __func__);
-		context_info->is_waiting_last_req = false;
-		if (context_info->is_done_rcv) {
-			context_info->is_done_rcv = false;
+		if (1) {
 			cmd = mrq->cmd;
 
 			if (!cmd->error || !cmd->retries ||
@@ -540,11 +510,6 @@ static int mmc_wait_for_data_req_done(struct mmc_host *host,
 				__mmc_start_request(host, mrq);
 				continue; /* wait for done/new event again */
 			}
-		} else if (context_info->is_new_req) {
-			if (!next_req) {
-				pr_info("%s: exit (!next_req)\n", __func__);
-				return MMC_BLK_NEW_REQUEST;
-			}
 		}
 	}
 	mmc_retune_release(host);
@@ -614,10 +579,7 @@ EXPORT_SYMBOL(mmc_wait_for_req_done);
  */
 bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq)
 {
-	if (host->areq)
-		return host->context_info.is_done_rcv;
-	else
-		return completion_done(&mrq->completion);
+	return completion_done(&mrq->completion);
 }
 EXPORT_SYMBOL(mmc_is_req_done);
 
@@ -688,18 +650,12 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 		mmc_pre_req(host, areq->mrq, !host->areq);
 	}
 
+	if (areq) //
+		start_err = __mmc_start_req(host, areq->mrq); //
+
+	host->areq = areq; //
 	if (host->areq) {
 		err = mmc_wait_for_data_req_done(host, host->areq->mrq,	areq);
-		if (err == MMC_BLK_NEW_REQUEST) {
-			if (error)
-				*error = err;
-			/*
-			 * The previous request was not completed,
-			 * nothing to return
-			 */
-			pr_info("%s: exit (NULL)\n", __func__);
-			return NULL;
-		}
 		/*
 		 * Check BKOPS urgency for each R1 response
 		 */
@@ -720,24 +676,14 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 		}
 	}
 
-	if (!err && areq)
-		start_err = __mmc_start_data_req(host, areq->mrq);
-
 	if (host->areq) {
 		host->areq->pre_req_done = false;
 		mmc_post_req(host, host->areq->mrq, 0);
 	}
 
-	 /* Cancel a prepared request if it was not started. */
-	if ((err || start_err) && areq) {
-		areq->pre_req_done = false;
-		mmc_post_req(host, areq->mrq, -EINVAL);
-	}
 
-	if (err)
-		host->areq = NULL;
-	else
-		host->areq = areq;
+	data = host->areq; //
+	host->areq = NULL; //
 
 	if (error)
 		*error = err;
@@ -2960,22 +2906,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
 }
 #endif
 
-/**
- * mmc_init_context_info() - init synchronization context
- * @host: mmc host
- *
- * Init struct context_info needed to implement asynchronous
- * request mechanism, used by mmc core, host driver and mmc requests
- * supplier.
- */
-void mmc_init_context_info(struct mmc_host *host)
-{
-	host->context_info.is_new_req = false;
-	host->context_info.is_done_rcv = false;
-	host->context_info.is_waiting_last_req = false;
-	init_waitqueue_head(&host->context_info.wait);
-}
-
 static int __init mmc_init(void)
 {
 	int ret;
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 0fa86a2..34e664b 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -84,8 +84,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
 void mmc_add_card_debugfs(struct mmc_card *card);
 void mmc_remove_card_debugfs(struct mmc_card *card);
 
-void mmc_init_context_info(struct mmc_host *host);
-
 int mmc_execute_tuning(struct mmc_card *card);
 int mmc_hs200_to_hs400(struct mmc_card *card);
 int mmc_hs400_to_hs200(struct mmc_card *card);
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 3d7434e..810f318 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -219,7 +219,6 @@ enum mmc_blk_status {
 	MMC_BLK_DATA_ERR,
 	MMC_BLK_ECC_ERR,
 	MMC_BLK_NOMEDIUM,
-	MMC_BLK_NEW_REQUEST,
 };
 
 /* The number of MMC physical partitions.  These consist of:
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 9fb00b7..1a46cbd 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -193,20 +193,6 @@ struct mmc_slot {
 	void *handler_priv;
 };
 
-/**
- * mmc_context_info - synchronization details for mmc context
- * @is_done_rcv		wake up reason was done request
- * @is_new_req		wake up reason was new request
- * @is_waiting_last_req	mmc context waiting for single running request
- * @wait		wait queue
- */
-struct mmc_context_info {
-	bool			is_done_rcv;
-	bool			is_new_req;
-	bool			is_waiting_last_req;
-	wait_queue_head_t	wait;
-};
-
 struct regulator;
 struct mmc_pwrseq;
 
@@ -380,7 +366,6 @@ struct mmc_host {
 	struct dentry		*debugfs_root;
 
 	struct mmc_async_req	*areq;		/* active async req */
-	struct mmc_context_info	context_info;	/* async synchronization info */
 
 	/* Ongoing data transfer that allows commands during transfer */
 	struct mmc_request	*ongoing_mrq;
-- 
1.9.1

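[ A note on the net effect of this patch: the per-queue mmcqd
  kthread and the mmc_context_info wait/wake machinery are gone,
  and requests are now dispatched synchronously from the block
  layer's request_fn.  Consolidated from the hunks above, the
  new dispatch loop in mmc_request_fn() reads roughly:

	repeat:
		req = blk_fetch_request(q);
		if (req && req->cmd_type == REQ_TYPE_FS) {
			mqrq_cur = mmc_queue_req_find(mq, req);
			if (!mqrq_cur) {
				/* no free slot: put it back, retry later */
				blk_requeue_request(mq->queue, req);
				req = NULL;
			}
		}
		if (!req)
			return;
		spin_unlock_irq(q->queue_lock);
		mq->issue_fn(mq, req, mqrq_cur);
		spin_lock_irq(q->queue_lock);
		goto repeat; ]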

* [PATCH PoC 3/7] mmc-mq: request completion fixes
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 1/7] mmc-mq: add debug printks Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 2/7] mmc-mq: remove async requests support Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 4/7] mmc-mq: implement checking for queue busy condition Bartlomiej Zolnierkiewicz
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/block.c    |  52 ++++++++++++--------
 drivers/mmc/card/mmc_test.c |   8 +--
 drivers/mmc/card/queue.c    |  11 ++++-
 drivers/mmc/card/queue.h    |   4 ++
 drivers/mmc/core/core.c     | 117 +++++++++++++++++++++++++++++++++++++++-----
 drivers/mmc/core/mmc_ops.c  |   9 ++++
 drivers/mmc/host/dw_mmc.c   |   3 +-
 include/linux/mmc/core.h    |   6 ++-
 8 files changed, 169 insertions(+), 41 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 1d4a09f..3c2bdc2 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1057,9 +1057,12 @@ static int mmc_blk_cmd_recovery(struct mmc_card *card, struct request *req,
 	 * we can't be sure the returned status is for the r/w command.
 	 */
 	for (retry = 2; retry >= 0; retry--) {
-		err = get_card_status(card, &status, 0);
-		if (!err)
-			break;
+		mdelay(100);
+		pr_info("%s: mdelay(100)\n", __func__);
+		return ERR_CONTINUE;
+//		err = get_card_status(card, &status, 0);
+//		if (!err)
+//			break;
 
 		/* Re-tune if needed */
 		mmc_retune_recheck(card->host);
@@ -1230,6 +1233,7 @@ out:
 		goto retry;
 	if (!err)
 		mmc_blk_reset_success(md, type);
+	mmc_put_card(card);
 	mmc_queue_req_free(mq, mqrq);
 	blk_end_request(req, err, blk_rq_bytes(req));
 
@@ -1298,6 +1302,7 @@ out_retry:
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
+	mmc_put_card(card);
 	mmc_queue_req_free(mq, mqrq);
 	blk_end_request(req, err, blk_rq_bytes(req));
 
@@ -1314,6 +1319,7 @@ static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req, struct
 	if (ret)
 		ret = -EIO;
 
+	mmc_put_card(card);
 	mmc_queue_req_free(mq, mqrq);
 	blk_end_request_all(req, ret);
 
@@ -1419,10 +1425,13 @@ static int mmc_blk_err_check(struct mmc_card *card,
 			gen_err = 1;
 		}
 
-		err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, false, req,
-					&gen_err);
-		if (err)
-			return MMC_BLK_CMD_ERR;
+		mdelay(100);
+		pr_info("%s: mdelay(100)\n", __func__);
+		return MMC_BLK_SUCCESS;
+//		err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, false, req,
+//					&gen_err);
+//		if (err)
+//			return MMC_BLK_CMD_ERR;
 	}
 
 	/* if general error occurs, retry the write operation. */
@@ -2035,12 +2044,13 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 			areq = &mqrq_cur->mmc_active;
 		} else
 			areq = NULL;
-		areq = mmc_start_req(card->host, areq, (int *) &status);
+		areq = mmc_start_req(card->host, areq, (int *) &status, mqrq);
 		if (!areq) {
 			pr_info("%s: exit (0) (!areq)\n", __func__);
 			return 0;
 		}
-
+		ret = 0; //
+#if 0
 		mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
 		brq = &mq_rq->brq;
 		req = mq_rq->req;
@@ -2139,7 +2149,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 					goto cmd_abort;
 				mmc_blk_packed_hdr_wrq_prep(mq_rq, card, mq);
 				mmc_start_req(card->host,
-					      &mq_rq->mmc_active, NULL);
+					      &mq_rq->mmc_active, NULL, mq_rq);
 			} else {
 
 				/*
@@ -2149,10 +2159,11 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 				mmc_blk_rw_rq_prep(mq_rq, card,
 						disable_multi, mq);
 				mmc_start_req(card->host,
-						&mq_rq->mmc_active, NULL);
+						&mq_rq->mmc_active, NULL, mq_rq);
 			}
 			mq_rq->brq.retune_retry_done = retune_retry_done;
 		}
+#endif
 	} while (ret);
 
 	pr_info("%s: exit (1==ok)\n", __func__);
@@ -2184,10 +2195,10 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 
 			mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
 			mmc_start_req(card->host,
-				      &mqrq_cur->mmc_active, NULL);
+				      &mqrq_cur->mmc_active, NULL, mqrq_cur);
 		}
 	}
-
+	BUG();
 	mmc_queue_req_free(mq, mq_rq);
 
 	pr_info("%s: exit (0)\n", __func__);
@@ -2201,17 +2212,18 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 	struct mmc_card *card = md->queue.card;
 	unsigned int cmd_flags = req ? req->cmd_flags : 0;
 
-	pr_info("%s: enter\n", __func__);
+	pr_info("%s: enter (mq=%p md=%p)\n", __func__, mq, md);
 
 	BUG_ON(!req);
 
 	/* claim host only for the first request */
 	mmc_get_card(card);
 
-	pr_info("%s: mmc_blk_part_switch\n", __func__);
+	pr_info("%s: mmc_blk_part_switch (mq=%p md=%p)\n", __func__, mq, md);
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
 		if (req) {
+			mmc_queue_req_free(req->q->queuedata, mqrq); //
 			blk_end_request_all(req, -EIO);
 		}
 		ret = 0;
@@ -2219,23 +2231,23 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 	}
 
 	if (cmd_flags & REQ_DISCARD) {
-		pr_info("%s: DISCARD rq\n", __func__);
+		pr_info("%s: DISCARD rq (mq=%p md=%p)\n", __func__, mq, md);
 		if (req->cmd_flags & REQ_SECURE)
 			ret = mmc_blk_issue_secdiscard_rq(mq, req, mqrq);
 		else
 			ret = mmc_blk_issue_discard_rq(mq, req, mqrq);
 	} else if (cmd_flags & REQ_FLUSH) {
-		pr_info("%s: FLUSH rq\n", __func__);
+		pr_info("%s: FLUSH rq (mq=%p md=%p)\n", __func__, mq, md);
 		ret = mmc_blk_issue_flush(mq, req, mqrq);
 	} else {
-		pr_info("%s: RW rq\n", __func__);
+		pr_info("%s: RW rq (mq=%p md=%p)\n", __func__, mq, md);
 		ret = mmc_blk_issue_rw_rq(mq, mqrq);
 	}
 
 out:
 	/* Release host when there are no more requests */
-	mmc_put_card(card);
-	pr_info("%s: exit\n", __func__);
+/////	mmc_put_card(card);
+	pr_info("%s: exit (mq=%p md=%p)\n", __func__, mq, md);
 	return ret;
 }
 
diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index 7dee9e5..ff375c1 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -2413,10 +2413,10 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
 	} while (repeat_cmd && R1_CURRENT_STATE(status) != R1_STATE_TRAN);
 
 	/* Wait for data request to complete */
-	if (use_areq)
-		mmc_start_req(host, NULL, &ret);
-	else
-		mmc_wait_for_req_done(test->card->host, mrq);
+//	if (use_areq)
+//		mmc_start_req(host, NULL, &ret);
+//	else
+//		mmc_wait_for_req_done(test->card->host, mrq);
 
 	/*
 	 * For cap_cmd_during_tfr request, upper layer must send stop if
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index e9c9bbf..6fd711d 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -52,14 +52,17 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 	struct mmc_queue_req *mqrq;
 	int i = ffz(mq->qslots);
 
-	pr_info("%s: enter (%d)\n", __func__, i);
+	pr_info("%s: enter (%d) (testtag=%d qdepth=%d 0.testtag=%d\n", __func__, i, mq->testtag, mq->qdepth, mq->mqrq[0].testtag);
 
-	WARN_ON(i >= mq->qdepth);
+	WARN_ON(mq->testtag == 0);
+//////	WARN_ON(i >= mq->qdepth);
 	if (i >= mq->qdepth)
 		return NULL;
+	WARN_ON(mq->qdepth == 0);
 
 ////	spin_lock_irq(req->q->queue_lock);
 	mqrq = &mq->mqrq[i];
+	WARN_ON(mqrq->testtag == 0);
 	WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
 		test_bit(mqrq->task_id, &mq->qslots));
 	mqrq->req = req;
@@ -109,6 +112,7 @@ static void mmc_request_fn(struct request_queue *q)
 	}
 repeat:
 	req = blk_fetch_request(q);
+	WARN_ON(req && req->cmd_type != REQ_TYPE_FS);
 	if (req && req->cmd_type == REQ_TYPE_FS) {
 		mqrq_cur = mmc_queue_req_find(mq, req);
 		if (!mqrq_cur) {
@@ -179,6 +183,8 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(struct mmc_queue *mq,
 			mqrq[i].task_id = i;
 	}
 
+	mqrq[0].testtag = 1;
+
 	return mqrq;
 }
 
@@ -285,6 +291,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	mq->mqrq = mmc_queue_alloc_mqrqs(mq, mq->qdepth);
 	if (!mq->mqrq)
 		goto blk_cleanup;
+	mq->testtag = 1;
 	mq->queue->queuedata = mq;
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index c52fa88..3adf1bc 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -43,6 +43,8 @@ struct mmc_queue_req {
 	enum mmc_packed_type	cmd_type;
 	struct mmc_packed	*packed;
 	int			task_id;
+
+	int			testtag;
 };
 
 struct mmc_queue {
@@ -57,6 +59,8 @@ struct mmc_queue {
 	int			qdepth;
 	int			qcnt;
 	unsigned long		qslots;
+
+	int			testtag;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 7496c22..22052f0 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -231,6 +231,9 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 		return;
 	}
 
+	if (mrq->cmd->retries == 3 && mrq->cmd->opcode == 5)
+		WARN_ON(1);
+
 	/*
 	 * For sdio rw commands we must wait for card busy otherwise some
 	 * sdio devices won't work properly.
@@ -408,6 +411,9 @@ out:
 }
 EXPORT_SYMBOL(mmc_start_bkops);
 
+static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
+			 int err);
+
 /*
  * mmc_wait_done() - done callback for request
  * @mrq: done request
@@ -416,7 +422,77 @@ EXPORT_SYMBOL(mmc_start_bkops);
  */
 static void mmc_wait_done(struct mmc_request *mrq)
 {
+	struct mmc_host *host = mrq->host; //
+	struct mmc_queue_req *mq_rq = mrq->mqrq;
+	struct mmc_async_req *areq = NULL;
+//	struct mmc_queue_req *mq_rq = container_of(next_req, struct mmc_queue_req,
+//						    mmc_active);
+//	struct mmc_queue_req *mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
+	struct mmc_command *cmd;
+	int err = 0, ret = 0;
+
+	pr_info("%s: enter\n", __func__);
+
+	cmd = mrq->cmd;
+	pr_info("%s: cmd->opcode=%d mq_rq=%p\n", __func__, cmd->opcode, mq_rq);
+
+	if (mq_rq)
+		areq = &mq_rq->mmc_active;
+
+	if (!cmd->error || !cmd->retries ||
+	    mmc_card_removed(host->card)) {
+		if (mq_rq &&
+		    (mq_rq->req->cmd_type == REQ_TYPE_FS) &&
+		    ((mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)) == 0)) {
+			err = areq->err_check(host->card, areq);
+			BUG_ON(err != MMC_BLK_SUCCESS);
+		}
+	}
+	else {
+//		WARN_ON(1);
+		mmc_retune_recheck(host);
+		pr_info("%s: req failed (CMD%u): %d, retrying...\n",
+			mmc_hostname(host),
+			cmd->opcode, cmd->error);
+		cmd->retries--;
+		cmd->error = 0;
+		__mmc_start_request(host, mrq);
+		goto out;
+	}
+
+	mmc_retune_release(host);
+
+//	host->areq->pre_req_done = false;
+	if (mq_rq &&
+	    (mq_rq->req->cmd_type == REQ_TYPE_FS) &&
+	    ((mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)) == 0)) {
+		mmc_post_req(host, mrq, 0);
+	}
+
 	complete(&mrq->completion);
+BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)));
+	if (mq_rq &&
+	    (mq_rq->req->cmd_type == REQ_TYPE_FS) &&
+	    ((mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)) == 0)) {
+		struct mmc_blk_request *brq;
+		struct request *req;
+		struct mmc_blk_data *md = mq_rq->req->q->queuedata;
+		int bytes;
+
+		brq = &mq_rq->brq;
+		req = mq_rq->req;
+//		type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+		mmc_queue_bounce_post(mq_rq);
+
+		bytes = brq->data.bytes_xfered;
+		mmc_put_card(host->card);
+		pr_info("%s: freeing mqrq\n", __func__); //
+		mmc_queue_req_free(req->q->queuedata, mq_rq); //
+		ret = blk_end_request(req, 0, bytes);
+
+	}
+out:
+	pr_info("%s: exit (err=%d, ret=%d)\n", __func__, err, ret);
 }
 
 static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
@@ -441,7 +517,7 @@ static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
  * If an ongoing transfer is already in progress, wait for the command line
  * to become available before sending another command.
  */
-static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
+static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq, struct mmc_queue_req *mqrq)
 {
 	int err;
 
@@ -452,6 +528,7 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 	init_completion(&mrq->completion);
 	mrq->done = mmc_wait_done;
 	mrq->host = host;
+	mrq->mqrq = mqrq;
 
 	init_completion(&mrq->cmd_completion);
 
@@ -466,7 +543,7 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 
 	return err;
 }
-
+#if 0
 /*
  * mmc_wait_for_data_req_done() - wait for request completed
  * @host: MMC host to prepare the command.
@@ -564,7 +641,7 @@ void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq)
 	pr_info("%s: exit\n", __func__);
 }
 EXPORT_SYMBOL(mmc_wait_for_req_done);
-
+#endif
 /**
  *	mmc_is_req_done - Determine if a 'cap_cmd_during_tfr' request is done
  *	@host: MMC host
@@ -634,7 +711,7 @@ static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
  *	is returned without waiting. NULL is not an error condition.
  */
 struct mmc_async_req *mmc_start_req(struct mmc_host *host,
-				    struct mmc_async_req *areq, int *error)
+				    struct mmc_async_req *areq, int *error, struct mmc_queue_req *mqrq)
 {
 	int err = 0;
 	int start_err = 0;
@@ -645,15 +722,17 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 	pr_info("%s: areq=%p host->areq=%p\n", __func__, areq, host->areq);
 
 	/* Prepare a new request */
-	if (areq && !areq->pre_req_done) {
-		areq->pre_req_done = true;
+//	if (areq && !areq->pre_req_done) {
+//		areq->pre_req_done = true;
 		mmc_pre_req(host, areq->mrq, !host->areq);
-	}
+//	}
 
 	if (areq) //
-		start_err = __mmc_start_req(host, areq->mrq); //
-
+		start_err = __mmc_start_req(host, areq->mrq, mqrq); //
+	data = areq; //
+#if 0
 	host->areq = areq; //
+
 	if (host->areq) {
 		err = mmc_wait_for_data_req_done(host, host->areq->mrq,	areq);
 		/*
@@ -687,6 +766,7 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 
 	if (error)
 		*error = err;
+#endif
 	pr_info("%s: exit (data=%p)\n", __func__, data);
 	return data;
 }
@@ -706,10 +786,19 @@ EXPORT_SYMBOL(mmc_start_req);
  */
 void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
 {
-	__mmc_start_req(host, mrq);
+	pr_info("%s: enter\n", __func__);
 
-	if (!mrq->cap_cmd_during_tfr)
-		mmc_wait_for_req_done(host, mrq);
+	__mmc_start_req(host, mrq, NULL);
+
+	if (!mrq->cap_cmd_during_tfr) {
+//		mmc_wait_for_req_done(host, mrq);
+//		BUG(); //
+		pr_info("%s: wait start\n", __func__);
+		wait_for_completion(&mrq->completion);
+		pr_info("%s: wait done\n", __func__);
+	}
+
+	pr_info("%s: exit\n", __func__);
 }
 EXPORT_SYMBOL(mmc_wait_for_req);
 
@@ -794,6 +883,8 @@ int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd, int retries
 {
 	struct mmc_request mrq = {NULL};
 
+	pr_info("%s: enter (cmd->opcode=%d retries=%d)\n", __func__, cmd->opcode, cmd->retries);
+
 	WARN_ON(!host->claimed);
 
 	memset(cmd->resp, 0, sizeof(cmd->resp));
@@ -801,8 +892,10 @@ int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd, int retries
 
 	mrq.cmd = cmd;
 	cmd->data = NULL;
+	pr_info("%s: cmd->opcode=%d retries=%d\n", __func__, cmd->opcode, cmd->retries);
 
 	mmc_wait_for_req(host, &mrq);
+	pr_info("%s: exit (cmd->opcode=%d retries=%d cmd->error=%d)\n", __func__, cmd->opcode, cmd->retries, cmd->error);
 
 	return cmd->error;
 }
diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index 7ec7d62..0cfa3ad 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -482,6 +482,9 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 	bool expired = false;
 	bool busy = false;
 
+	WARN_ON(1);
+	pr_info("%s: enter\n", __func__);
+
 	mmc_retune_hold(host);
 
 	/*
@@ -536,6 +539,7 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 	/* Must check status to be sure of no errors. */
 	timeout = jiffies + msecs_to_jiffies(timeout_ms) + 1;
 	do {
+		pr_info("%s: busy loop enter\n", __func__);
 		/*
 		 * Due to the possibility of being preempted after
 		 * sending the status command, check the expiration
@@ -543,6 +547,7 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 		 */
 		expired = time_after(jiffies, timeout);
 		if (send_status) {
+			pr_info("%s: send status\n", __func__);
 			err = __mmc_send_status(card, &status, ignore_crc);
 			if (err)
 				goto out;
@@ -550,6 +555,7 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 		if ((host->caps & MMC_CAP_WAIT_WHILE_BUSY) && use_r1b_resp)
 			break;
 		if (host->ops->card_busy) {
+			pr_info("%s: card busy\n", __func__);
 			if (!host->ops->card_busy(host))
 				break;
 			busy = true;
@@ -563,6 +569,7 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 		 * rely on waiting for the stated timeout to be sufficient.
 		 */
 		if (!send_status && !host->ops->card_busy) {
+			pr_info("%s: mmc delay\n", __func__);
 			mmc_delay(timeout_ms);
 			goto out;
 		}
@@ -575,12 +582,14 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 			err = -ETIMEDOUT;
 			goto out;
 		}
+		pr_info("%s: busy loop (busy=%d)\n", __func__, busy);
 	} while (R1_CURRENT_STATE(status) == R1_STATE_PRG || busy);
 
 	err = mmc_switch_status_error(host, status);
 out:
 	mmc_retune_release(host);
 
+	pr_info("%s: exit\n", __func__);
 	return err;
 }
 
diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
index c2a1286..c23b1b2 100644
--- a/drivers/mmc/host/dw_mmc.c
+++ b/drivers/mmc/host/dw_mmc.c
@@ -1229,6 +1229,7 @@ static void dw_mci_queue_request(struct dw_mci *host, struct dw_mci_slot *slot,
 	dev_vdbg(&slot->mmc->class_dev, "queue request: state=%d\n",
 		 host->state);
 
+	BUG_ON(slot->mrq);
 	slot->mrq = mrq;
 
 	if (host->state == STATE_WAITING_CMD11_DONE) {
@@ -1255,8 +1256,6 @@ static void dw_mci_request(struct mmc_host *mmc, struct mmc_request *mrq)
 	struct dw_mci_slot *slot = mmc_priv(mmc);
 	struct dw_mci *host = slot->host;
 
-	WARN_ON(slot->mrq);
-
 	/*
 	 * The check for card presence and queueing of the request must be
 	 * atomic, otherwise the card could be removed in between and the
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 4c6d131..2d0aec8 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -126,6 +126,7 @@ struct mmc_data {
 };
 
 struct mmc_host;
+struct mmc_queue_req;
 struct mmc_request {
 	struct mmc_command	*sbc;		/* SET_BLOCK_COUNT for multiblock */
 	struct mmc_command	*cmd;
@@ -139,6 +140,8 @@ struct mmc_request {
 
 	/* Allow other commands during this ongoing data transfer or busy wait */
 	bool			cap_cmd_during_tfr;
+
+	struct mmc_queue_req	*mqrq;
 };
 
 struct mmc_card;
@@ -147,7 +150,8 @@ struct mmc_async_req;
 extern int mmc_stop_bkops(struct mmc_card *);
 extern int mmc_read_bkops_status(struct mmc_card *);
 extern struct mmc_async_req *mmc_start_req(struct mmc_host *,
-					   struct mmc_async_req *, int *);
+					   struct mmc_async_req *, int *,
+					   struct mmc_queue_req *);
 extern int mmc_interrupt_hpi(struct mmc_card *);
 extern void mmc_wait_for_req(struct mmc_host *, struct mmc_request *);
 extern void mmc_wait_for_req_done(struct mmc_host *host,
-- 
1.9.1

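[ A note on the net effect of this patch: request post-processing
  moves out of the waiting thread and into the mmc_wait_done()
  completion callback, so for read/write requests the whole
  completion path now runs from the host driver's completion
  context.  Roughly, per the hunks above:

	host->ops->request(host, mrq)      /* issue; returns at once */
	  -> mrq->done(mrq) == mmc_wait_done()
	       -> areq->err_check()        /* REQ_TYPE_FS r/w only */
	       -> mmc_post_req()
	       -> complete(&mrq->completion)
	       -> mmc_put_card(), mmc_queue_req_free(),
	          blk_end_request() ]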

* [PATCH PoC 4/7] mmc-mq: implement checking for queue busy condition
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
                   ` (2 preceding siblings ...)
  2016-09-22 13:57 ` [PATCH PoC 3/7] mmc-mq: request completion fixes Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 5/7] mmc-mq: remove some debug printks Bartlomiej Zolnierkiewicz
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/queue.c | 28 +++++++++++++++++++++++-----
 drivers/mmc/card/queue.h |  2 ++
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 6fd711d..3ed4477 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -46,6 +46,21 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
+int mmc_queue_ready(struct request_queue *q, struct mmc_queue *mq)
+{
+	unsigned int busy;
+
+	busy = atomic_inc_return(&mq->device_busy) - 1;
+
+	if (busy >= mq->qdepth)
+		goto out_dec;
+
+	return 1;
+out_dec:
+	atomic_dec(&mq->device_busy);
+	return 0;
+}
+
 struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 					 struct request *req)
 {
@@ -81,13 +96,14 @@ void mmc_queue_req_free(struct mmc_queue *mq,
 	struct request *req;
 	pr_info("%s: enter\n", __func__);
 	req = mqrq->req;
-	spin_lock_irq(req->q->queue_lock);
+////	spin_lock_irq(req->q->queue_lock);
 	WARN_ON(!mqrq->req || mq->qcnt < 1 ||
 		!test_bit(mqrq->task_id, &mq->qslots));
 	mqrq->req = NULL;
 	mq->qcnt -= 1;
 	__clear_bit(mqrq->task_id, &mq->qslots);
-	spin_unlock_irq(req->q->queue_lock);
+////	spin_unlock_irq(req->q->queue_lock);
+	atomic_dec(&mq->device_busy);
 	pr_info("%s: exit\n", __func__);
 }
 
@@ -114,9 +130,9 @@ repeat:
 	req = blk_fetch_request(q);
 	WARN_ON(req && req->cmd_type != REQ_TYPE_FS);
 	if (req && req->cmd_type == REQ_TYPE_FS) {
-		mqrq_cur = mmc_queue_req_find(mq, req);
-		if (!mqrq_cur) {
-			pr_info("%s: command already queued (%d)\n", __func__, mq->qcnt);
+		if (mmc_queue_ready(q, mq)) {
+		} else {
+			pr_info("%s: command already queued\n", __func__);
 //			WARN_ON(1);
 //			spin_unlock_irq(q->queue_lock);
 			blk_requeue_request(mq->queue, req);
@@ -129,6 +145,8 @@ repeat:
 		return;
 	}
 	spin_unlock_irq(q->queue_lock);
+	mqrq_cur = mmc_queue_req_find(mq, req);
+	BUG_ON(!mqrq_cur);
 	mq->issue_fn(mq, req, mqrq_cur);
 	spin_lock_irq(q->queue_lock);
 	goto repeat;
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 3adf1bc..20399e4 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -61,6 +61,8 @@ struct mmc_queue {
 	unsigned long		qslots;
 
 	int			testtag;
+
+	atomic_t		device_busy;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
-- 
1.9.1

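[ A note on the net effect of this patch: dispatch is now gated
  on an atomic in-flight counter instead of on
  mmc_queue_req_find() failing, mirroring the device_busy
  accounting the SCSI midlayer uses in scsi_dev_queue_ready().
  The pattern from the hunks above:

	busy = atomic_inc_return(&mq->device_busy) - 1;
	if (busy >= mq->qdepth) {
		atomic_dec(&mq->device_busy);
		/* queue not ready: requeue the request and stop */
	}

  with mmc_queue_req_free() doing the matching atomic_dec() once
  the request has completed. ]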

* [PATCH PoC 5/7] mmc-mq: remove some debug printks
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
                   ` (3 preceding siblings ...)
  2016-09-22 13:57 ` [PATCH PoC 4/7] mmc-mq: implement checking for queue busy condition Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 6/7] mmc-mq: initial blk-mq support Bartlomiej Zolnierkiewicz
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/block.c | 16 ++++++++--------
 drivers/mmc/card/queue.c | 12 ++++++------
 drivers/mmc/core/core.c  | 45 +++++++++++++++++++++++----------------------
 3 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 3c2bdc2..ef230e8 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -2008,7 +2008,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 	const u8 packed_nr = 2;
 	u8 reqs = 0;
 
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 	mqrq_cur = mqrq;
 	rqc = mqrq_cur->req;
 
@@ -2166,7 +2166,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 #endif
 	} while (ret);
 
-	pr_info("%s: exit (1==ok)\n", __func__);
+//	pr_info("%s: exit (1==ok)\n", __func__);
 	return 1;
 
  cmd_abort:
@@ -2212,14 +2212,14 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 	struct mmc_card *card = md->queue.card;
 	unsigned int cmd_flags = req ? req->cmd_flags : 0;
 
-	pr_info("%s: enter (mq=%p md=%p)\n", __func__, mq, md);
+//	pr_info("%s: enter (mq=%p md=%p)\n", __func__, mq, md);
 
 	BUG_ON(!req);
 
 	/* claim host only for the first request */
 	mmc_get_card(card);
 
-	pr_info("%s: mmc_blk_part_switch (mq=%p md=%p)\n", __func__, mq, md);
+//	pr_info("%s: mmc_blk_part_switch (mq=%p md=%p)\n", __func__, mq, md);
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
 		if (req) {
@@ -2231,23 +2231,23 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 	}
 
 	if (cmd_flags & REQ_DISCARD) {
-		pr_info("%s: DISCARD rq (mq=%p md=%p)\n", __func__, mq, md);
+//		pr_info("%s: DISCARD rq (mq=%p md=%p)\n", __func__, mq, md);
 		if (req->cmd_flags & REQ_SECURE)
 			ret = mmc_blk_issue_secdiscard_rq(mq, req, mqrq);
 		else
 			ret = mmc_blk_issue_discard_rq(mq, req, mqrq);
 	} else if (cmd_flags & REQ_FLUSH) {
-		pr_info("%s: FLUSH rq (mq=%p md=%p)\n", __func__, mq, md);
+//		pr_info("%s: FLUSH rq (mq=%p md=%p)\n", __func__, mq, md);
 		ret = mmc_blk_issue_flush(mq, req, mqrq);
 	} else {
-		pr_info("%s: RW rq (mq=%p md=%p)\n", __func__, mq, md);
+//		pr_info("%s: RW rq (mq=%p md=%p)\n", __func__, mq, md);
 		ret = mmc_blk_issue_rw_rq(mq, mqrq);
 	}
 
 out:
 	/* Release host when there are no more requests */
 /////	mmc_put_card(card);
-	pr_info("%s: exit (mq=%p md=%p)\n", __func__, mq, md);
+//	pr_info("%s: exit (mq=%p md=%p)\n", __func__, mq, md);
 	return ret;
 }
 
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 3ed4477..d4f4859 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -67,7 +67,7 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 	struct mmc_queue_req *mqrq;
 	int i = ffz(mq->qslots);
 
-	pr_info("%s: enter (%d) (testtag=%d qdepth=%d 0.testtag=%d\n", __func__, i, mq->testtag, mq->qdepth, mq->mqrq[0].testtag);
+//	pr_info("%s: enter (%d) (testtag=%d qdepth=%d 0.testtag=%d\n", __func__, i, mq->testtag, mq->qdepth, mq->mqrq[0].testtag);
 
 	WARN_ON(mq->testtag == 0);
 //////	WARN_ON(i >= mq->qdepth);
@@ -85,7 +85,7 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 	__set_bit(mqrq->task_id, &mq->qslots);
 ////	spin_unlock_irq(req->q->queue_lock);
 
-	pr_info("%s: exit\n", __func__);
+//	pr_info("%s: exit\n", __func__);
 
 	return mqrq;
 }
@@ -94,7 +94,7 @@ void mmc_queue_req_free(struct mmc_queue *mq,
 			struct mmc_queue_req *mqrq)
 {
 	struct request *req;
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 	req = mqrq->req;
 ////	spin_lock_irq(req->q->queue_lock);
 	WARN_ON(!mqrq->req || mq->qcnt < 1 ||
@@ -104,7 +104,7 @@ void mmc_queue_req_free(struct mmc_queue *mq,
 	__clear_bit(mqrq->task_id, &mq->qslots);
 ////	spin_unlock_irq(req->q->queue_lock);
 	atomic_dec(&mq->device_busy);
-	pr_info("%s: exit\n", __func__);
+//	pr_info("%s: exit\n", __func__);
 }
 
 /*
@@ -132,7 +132,7 @@ repeat:
 	if (req && req->cmd_type == REQ_TYPE_FS) {
 		if (mmc_queue_ready(q, mq)) {
 		} else {
-			pr_info("%s: command already queued\n", __func__);
+//			pr_info("%s: command already queued\n", __func__);
 //			WARN_ON(1);
 //			spin_unlock_irq(q->queue_lock);
 			blk_requeue_request(mq->queue, req);
@@ -141,7 +141,7 @@ repeat:
 		}
 	}
 	if (!req) {
-		pr_info("%s: no request\n", __func__);
+//		pr_info("%s: no request\n", __func__);
 		return;
 	}
 	spin_unlock_irq(q->queue_lock);
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 22052f0..549e65e 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -221,7 +221,7 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 {
 	int err;
 
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 
 	/* Assumes host controller has been runtime resumed by mmc_claim_host */
 	err = mmc_retune(host);
@@ -264,7 +264,7 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 
 	host->ops->request(host, mrq);
 
-	pr_info("%s: exit\n", __func__);
+//	pr_info("%s: exit\n", __func__);
 }
 
 static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
@@ -273,7 +273,7 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	unsigned int i, sz;
 	struct scatterlist *sg;
 #endif
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 	mmc_retune_hold(host);
 
 	if (mmc_card_removed(host->card))
@@ -337,7 +337,7 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 	led_trigger_event(host->led, LED_FULL);
 	__mmc_start_request(host, mrq);
 
-	pr_info("%s: exit\n", __func__);
+//	pr_info("%s: exit\n", __func__);
 	return 0;
 }
 
@@ -431,10 +431,10 @@ static void mmc_wait_done(struct mmc_request *mrq)
 	struct mmc_command *cmd;
 	int err = 0, ret = 0;
 
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 
 	cmd = mrq->cmd;
-	pr_info("%s: cmd->opcode=%d mq_rq=%p\n", __func__, cmd->opcode, mq_rq);
+//	pr_info("%s: cmd->opcode=%d mq_rq=%p\n", __func__, cmd->opcode, mq_rq);
 
 	if (mq_rq)
 		areq = &mq_rq->mmc_active;
@@ -457,7 +457,8 @@ static void mmc_wait_done(struct mmc_request *mrq)
 		cmd->retries--;
 		cmd->error = 0;
 		__mmc_start_request(host, mrq);
-		goto out;
+//		goto out;
+		return;
 	}
 
 	mmc_retune_release(host);
@@ -486,13 +487,13 @@ BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags
 
 		bytes = brq->data.bytes_xfered;
 		mmc_put_card(host->card);
-		pr_info("%s: freeing mqrq\n", __func__); //
+//		pr_info("%s: freeing mqrq\n", __func__); //
 		mmc_queue_req_free(req->q->queuedata, mq_rq); //
 		ret = blk_end_request(req, 0, bytes);
 
 	}
-out:
-	pr_info("%s: exit (err=%d, ret=%d)\n", __func__, err, ret);
+//out:
+//	pr_info("%s: exit (err=%d, ret=%d)\n", __func__, err, ret);
 }
 
 static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
@@ -521,7 +522,7 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq, struc
 {
 	int err;
 
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 
 	mmc_wait_ongoing_tfr_cmd(host);
 
@@ -539,7 +540,7 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq, struc
 		mmc_wait_done(mrq);
 	}
 
-	pr_info("%s: exit\n", __func__);
+//	pr_info("%s: exit\n", __func__);
 
 	return err;
 }
@@ -717,9 +718,9 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 	int start_err = 0;
 	struct mmc_async_req *data = host->areq;
 
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 
-	pr_info("%s: areq=%p host->areq=%p\n", __func__, areq, host->areq);
+//	pr_info("%s: areq=%p host->areq=%p\n", __func__, areq, host->areq);
 
 	/* Prepare a new request */
 //	if (areq && !areq->pre_req_done) {
@@ -767,7 +768,7 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
 	if (error)
 		*error = err;
 #endif
-	pr_info("%s: exit (data=%p)\n", __func__, data);
+//	pr_info("%s: exit (data=%p)\n", __func__, data);
 	return data;
 }
 EXPORT_SYMBOL(mmc_start_req);
@@ -786,19 +787,19 @@ EXPORT_SYMBOL(mmc_start_req);
  */
 void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
 {
-	pr_info("%s: enter\n", __func__);
+//	pr_info("%s: enter\n", __func__);
 
 	__mmc_start_req(host, mrq, NULL);
 
 	if (!mrq->cap_cmd_during_tfr) {
 //		mmc_wait_for_req_done(host, mrq);
 //		BUG(); //
-		pr_info("%s: wait start\n", __func__);
+//		pr_info("%s: wait start\n", __func__);
 		wait_for_completion(&mrq->completion);
-		pr_info("%s: wait done\n", __func__);
+//		pr_info("%s: wait done\n", __func__);
 	}
 
-	pr_info("%s: exit\n", __func__);
+//	pr_info("%s: exit\n", __func__);
 }
 EXPORT_SYMBOL(mmc_wait_for_req);
 
@@ -883,7 +884,7 @@ int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd, int retries
 {
 	struct mmc_request mrq = {NULL};
 
-	pr_info("%s: enter (cmd->opcode=%d retries=%d)\n", __func__, cmd->opcode, cmd->retries);
+//	pr_info("%s: enter (cmd->opcode=%d retries=%d)\n", __func__, cmd->opcode, cmd->retries);
 
 	WARN_ON(!host->claimed);
 
@@ -892,10 +893,10 @@ int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd, int retries
 
 	mrq.cmd = cmd;
 	cmd->data = NULL;
-	pr_info("%s: cmd->opcode=%d retries=%d\n", __func__, cmd->opcode, cmd->retries);
+//	pr_info("%s: cmd->opcode=%d retries=%d\n", __func__, cmd->opcode, cmd->retries);
 
 	mmc_wait_for_req(host, &mrq);
-	pr_info("%s: exit (cmd->opcode=%d retries=%d cmd->error=%d)\n", __func__, cmd->opcode, cmd->retries, cmd->error);
+//	pr_info("%s: exit (cmd->opcode=%d retries=%d cmd->error=%d)\n", __func__, cmd->opcode, cmd->retries, cmd->error);
 
 	return cmd->error;
 }
-- 
1.9.1


* [PATCH PoC 6/7] mmc-mq: initial blk-mq support
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
                   ` (4 preceding siblings ...)
  2016-09-22 13:57 ` [PATCH PoC 5/7] mmc-mq: remove some debug printks Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-22 13:57 ` [PATCH PoC 7/7] mmc-mq: async request support for blk-mq mode Bartlomiej Zolnierkiewicz
  2016-09-30  0:50 ` [PATCH PoC 0/7] mmc: switch to blk-mq Linus Walleij
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/block.c | 15 +++++----
 drivers/mmc/card/queue.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++--
 drivers/mmc/card/queue.h |  3 ++
 drivers/mmc/core/core.c  |  7 ++--
 4 files changed, 98 insertions(+), 12 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index ef230e8..9968623 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -27,6 +27,7 @@
 #include <linux/errno.h>
 #include <linux/hdreg.h>
 #include <linux/kdev_t.h>
+#include <linux/blk-mq.h>
 #include <linux/blkdev.h>
 #include <linux/mutex.h>
 #include <linux/scatterlist.h>
@@ -1235,7 +1236,7 @@ out:
 		mmc_blk_reset_success(md, type);
 	mmc_put_card(card);
 	mmc_queue_req_free(mq, mqrq);
-	blk_end_request(req, err, blk_rq_bytes(req));
+	blk_mq_end_request(req, err);
 
 	return err ? 0 : 1;
 }
@@ -1304,7 +1305,7 @@ out_retry:
 out:
 	mmc_put_card(card);
 	mmc_queue_req_free(mq, mqrq);
-	blk_end_request(req, err, blk_rq_bytes(req));
+	blk_mq_end_request(req, err);
 
 	return err ? 0 : 1;
 }
@@ -1321,16 +1322,14 @@ static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req, struct
 
 	mmc_put_card(card);
 	mmc_queue_req_free(mq, mqrq);
-	blk_end_request_all(req, ret);
+	blk_mq_end_request(req, ret);
 
 	return ret ? 0 : 1;
 }
 
 static void mmc_blk_requeue(struct request_queue *q, struct request *req)
 {
-	spin_lock_irq(q->queue_lock);
-	blk_requeue_request(q, req);
-	spin_unlock_irq(q->queue_lock);
+	blk_mq_requeue_request(req);
 }
 
 /*
@@ -2219,12 +2218,14 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 	/* claim host only for the first request */
 	mmc_get_card(card);
 
+	blk_mq_start_request(req);
+
 //	pr_info("%s: mmc_blk_part_switch (mq=%p md=%p)\n", __func__, mq, md);
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
 		if (req) {
 			mmc_queue_req_free(req->q->queuedata, mqrq); //
-			blk_end_request_all(req, -EIO);
+			blk_mq_end_request(req, -EIO);
 		}
 		ret = 0;
 		goto out;
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index d4f4859..038c01e 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -11,6 +11,7 @@
  */
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/blk-mq.h>
 #include <linux/blkdev.h>
 #include <linux/freezer.h>
 #include <linux/kthread.h>
@@ -280,6 +281,59 @@ static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
 		mmc_queue_req_free_bufs(&mq->mqrq[i]);
 }
 
+static int mmc_init_request(void *data, struct request *rq,
+		unsigned int hctx_idx, unsigned int request_idx,
+		unsigned int numa_node)
+{
+//	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+
+//	cmd->sense_buffer = kzalloc_node(SCSI_SENSE_BUFFERSIZE, GFP_KERNEL,
+//			numa_node);
+//	if (!cmd->sense_buffer)
+//		return -ENOMEM;
+	return 0;
+}
+
+static void mmc_exit_request(void *data, struct request *rq,
+		unsigned int hctx_idx, unsigned int request_idx)
+{
+//	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+
+//	kfree(cmd->sense_buffer);
+}
+
+static int mmc_queue_rq(struct blk_mq_hw_ctx *hctx,
+			 const struct blk_mq_queue_data *bd)
+{
+	struct request *req = bd->rq;
+	struct request_queue *q = req->q;
+	struct mmc_queue *mq = q->queuedata;
+	struct mmc_queue_req *mqrq_cur;
+//	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
+	int ret;
+
+	WARN_ON(req && req->cmd_type != REQ_TYPE_FS);
+
+	if (!mmc_queue_ready(q, mq))
+		return BLK_MQ_RQ_QUEUE_BUSY;
+
+	mqrq_cur = mmc_queue_req_find(mq, req);
+	BUG_ON(!mqrq_cur);
+	mq->issue_fn(mq, req, mqrq_cur);
+
+	return BLK_MQ_RQ_QUEUE_OK;
+}
+
+static struct blk_mq_ops mmc_mq_ops = {
+	.map_queue	= blk_mq_map_queue,
+	.queue_rq	= mmc_queue_rq,
+//	.complete	= scsi_softirq_done,
+//	.timeout	= scsi_timeout,
+	.init_request	= mmc_init_request,
+	.exit_request	= mmc_exit_request,
+};
+
+
 /**
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
@@ -293,6 +347,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 		   spinlock_t *lock, const char *subname)
 {
 	struct mmc_host *host = card->host;
+ 	struct request_queue *q;
 	u64 limit = BLK_BOUNCE_HIGH;
 	bool bounce = false;
 	int ret = -ENOMEM;
@@ -301,9 +356,28 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	mq->card = card;
-	mq->queue = blk_init_queue(mmc_request_fn, lock);
-	if (!mq->queue)
-		return -ENOMEM;
+//	mq->queue = blk_init_queue(mmc_request_fn, lock);
+//	if (!mq->queue)
+//		return -ENOMEM;
+	memset(&mq->tag_set, 0, sizeof(mq->tag_set));
+	mq->tag_set.ops = &mmc_mq_ops;
+	mq->tag_set.queue_depth = 1;
+	mq->tag_set.numa_node = NUMA_NO_NODE;
+	mq->tag_set.flags =
+		BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	mq->tag_set.nr_hw_queues = 1;
+	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
+
+	ret = blk_mq_alloc_tag_set(&mq->tag_set);
+	if (ret)
+		goto out;
+
+	q = blk_mq_init_queue(&mq->tag_set);
+	if (IS_ERR(q)) {
+		ret = PTR_ERR(q);
+		goto cleanup_tag_set;
+	}
+	mq->queue = q;
 
 	mq->qdepth = 1;
 	mq->mqrq = mmc_queue_alloc_mqrqs(mq, mq->qdepth);
@@ -366,6 +440,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	mq->mqrq = NULL;
 blk_cleanup:
 	blk_cleanup_queue(mq->queue);
+cleanup_tag_set:
+	blk_mq_free_tag_set(&mq->tag_set);
+out:
 	return ret;
 }
 
@@ -387,6 +464,8 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	kfree(mq->mqrq);
 	mq->mqrq = NULL;
 
+	blk_mq_free_tag_set(&mq->tag_set);
+
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 20399e4..b67ac83 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -63,6 +63,9 @@ struct mmc_queue {
 	int			testtag;
 
 	atomic_t		device_busy;
+
+	/* Block layer tags. */
+	struct blk_mq_tag_set	tag_set;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 549e65e..64687f1 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -30,6 +30,7 @@
 #include <linux/slab.h>
 #include <linux/of.h>
 #include <linux/kernel.h>
+#include <linux/blk-mq.h>
 
 #include <linux/mmc/card.h>
 #include <linux/mmc/host.h>
@@ -489,8 +490,10 @@ BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags
 		mmc_put_card(host->card);
 //		pr_info("%s: freeing mqrq\n", __func__); //
 		mmc_queue_req_free(req->q->queuedata, mq_rq); //
-		ret = blk_end_request(req, 0, bytes);
-
+//		ret = blk_end_request(req, 0, bytes);
+		ret = blk_update_request(req, 0, bytes);
+		if (!ret)
+			__blk_mq_end_request(req, 0);
 	}
 //out:
 //	pr_info("%s: exit (err=%d, ret=%d)\n", __func__, err, ret);
-- 
1.9.1


* [PATCH PoC 7/7] mmc-mq: async request support for blk-mq mode
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
                   ` (5 preceding siblings ...)
  2016-09-22 13:57 ` [PATCH PoC 6/7] mmc-mq: initial blk-mq support Bartlomiej Zolnierkiewicz
@ 2016-09-22 13:57 ` Bartlomiej Zolnierkiewicz
  2016-09-30  0:50 ` [PATCH PoC 0/7] mmc: switch to blk-mq Linus Walleij
  7 siblings, 0 replies; 9+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2016-09-22 13:57 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, b.zolnierkie

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
---
 drivers/mmc/card/block.c | 51 +++++++++--------------------------
 drivers/mmc/card/queue.c | 49 ++++++++++++++++++++++++++++++---
 drivers/mmc/card/queue.h | 36 +++++++++++++++++++++++++
 drivers/mmc/core/core.c  | 70 +++++++++++++++++++++++++++++++++++++-----------
 4 files changed, 148 insertions(+), 58 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 9968623..8d73828 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -89,39 +89,6 @@ static int max_devices;
 static DEFINE_IDA(mmc_blk_ida);
 static DEFINE_SPINLOCK(mmc_blk_lock);
 
-/*
- * There is one mmc_blk_data per slot.
- */
-struct mmc_blk_data {
-	spinlock_t	lock;
-	struct gendisk	*disk;
-	struct mmc_queue queue;
-	struct list_head part;
-
-	unsigned int	flags;
-#define MMC_BLK_CMD23	(1 << 0)	/* Can do SET_BLOCK_COUNT for multiblock */
-#define MMC_BLK_REL_WR	(1 << 1)	/* MMC Reliable write support */
-#define MMC_BLK_PACKED_CMD	(1 << 2)	/* MMC packed command support */
-
-	unsigned int	usage;
-	unsigned int	read_only;
-	unsigned int	part_type;
-	unsigned int	reset_done;
-#define MMC_BLK_READ		BIT(0)
-#define MMC_BLK_WRITE		BIT(1)
-#define MMC_BLK_DISCARD		BIT(2)
-#define MMC_BLK_SECDISCARD	BIT(3)
-
-	/*
-	 * Only set in main mmc_blk_data associated
-	 * with mmc_card with dev_set_drvdata, and keeps
-	 * track of the current selected device partition.
-	 */
-	unsigned int	part_curr;
-	struct device_attribute force_ro;
-	struct device_attribute power_ro_lock;
-	int	area_type;
-};
 
 static DEFINE_MUTEX(open_lock);
 
@@ -1316,7 +1283,7 @@ static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req, struct
 	struct mmc_card *card = md->queue.card;
 	int ret = 0;
 
-	ret = mmc_flush_cache(card);
+////	ret = mmc_flush_cache(card);
 	if (ret)
 		ret = -EIO;
 
@@ -1528,7 +1495,7 @@ static int mmc_blk_packed_err_check(struct mmc_card *card,
 	return check;
 }
 
-static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 			       struct mmc_card *card,
 			       int disable_multi,
 			       struct mmc_queue *mq)
@@ -2204,7 +2171,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
 	return 0;
 }
 
-static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mmc_queue_req *mqrq)
+int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mmc_queue_req *mqrq)
 {
 	int ret;
 	struct mmc_blk_data *md = mq->data;
@@ -2216,7 +2183,9 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 	BUG_ON(!req);
 
 	/* claim host only for the first request */
-	mmc_get_card(card);
+////	pr_info("%s: enter mq->qcnt=%d\n", __func__, mq->qcnt);
+	if (mq->qcnt == 1)
+		mmc_get_card(card);
 
 	blk_mq_start_request(req);
 
@@ -2248,7 +2217,7 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mm
 out:
 	/* Release host when there are no more requests */
 /////	mmc_put_card(card);
-//	pr_info("%s: exit (mq=%p md=%p)\n", __func__, mq, md);
+////	pr_info("%s: exit (mq=%p md=%p)\n", __func__, mq, md);
 	return ret;
 }
 
@@ -2624,6 +2593,8 @@ static const struct mmc_fixup blk_fixups[] =
 	END_FIXUP
 };
 
+//static int probe_done = 0;
+
 static int mmc_blk_probe(struct mmc_card *card)
 {
 	struct mmc_blk_data *md, *part_md;
@@ -2635,6 +2606,10 @@ static int mmc_blk_probe(struct mmc_card *card)
 	if (!(card->csd.cmdclass & CCC_BLOCK_READ))
 		return -ENODEV;
 
+//	if (probe_done)
+//		return -ENODEV;
+//	probe_done = 1;
+
 	mmc_fixup_device(card, blk_fixups);
 
 	md = mmc_blk_alloc(card);
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 038c01e..372ec0c2 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -78,7 +78,7 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
 
 ////	spin_lock_irq(req->q->queue_lock);
 	mqrq = &mq->mqrq[i];
-	WARN_ON(mqrq->testtag == 0);
+//	WARN_ON(mqrq->testtag == 0);
 	WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
 		test_bit(mqrq->task_id, &mq->qslots));
 	mqrq->req = req;
@@ -302,6 +302,16 @@ static void mmc_exit_request(void *data, struct request *rq,
 //	kfree(cmd->sense_buffer);
 }
 
+extern void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+			       struct mmc_card *card,
+			       int disable_multi,
+			       struct mmc_queue *mq);
+
+extern void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq,
+		 bool is_first_req);
+
+static struct mmc_queue *probe_mq = NULL;
+
 static int mmc_queue_rq(struct blk_mq_hw_ctx *hctx,
 			 const struct blk_mq_queue_data *bd)
 {
@@ -311,15 +321,40 @@ static int mmc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct mmc_queue_req *mqrq_cur;
 //	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
 	int ret;
+	unsigned long flags = 0;
 
 	WARN_ON(req && req->cmd_type != REQ_TYPE_FS);
 
+	if (!probe_mq)
+		probe_mq = mq;
+
+	if (probe_mq && probe_mq != mq) {
+		return BLK_MQ_RQ_QUEUE_ERROR;
+	}
+
 	if (!mmc_queue_ready(q, mq))
 		return BLK_MQ_RQ_QUEUE_BUSY;
 
+	spin_lock_irqsave(&mq->async_lock, flags);
+////	pr_info("%s: enter mq->qcnt=%d\n", __func__, mq->qcnt);
 	mqrq_cur = mmc_queue_req_find(mq, req);
 	BUG_ON(!mqrq_cur);
-	mq->issue_fn(mq, req, mqrq_cur);
+	if (mq->qcnt == 2) {
+		if ((mqrq_cur->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)) == 0) {
+			struct mmc_blk_data *md = mq->data;
+			struct mmc_card *card = md->queue.card;
+			struct mmc_host *host = card->host;
+			struct mmc_async_req *areq;
+
+			mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+			areq = &mqrq_cur->mmc_active;
+			mmc_pre_req(host, areq->mrq, 1);
+		}
+	}
+	if (mq->qcnt == 1)
+		mq->issue_fn(mq, req, mqrq_cur);
+////	pr_info("%s: exit mq->qcnt=%d\n", __func__, mq->qcnt);
+	spin_unlock_irqrestore(&mq->async_lock, flags);
 
 	return BLK_MQ_RQ_QUEUE_OK;
 }
@@ -333,6 +368,7 @@ static struct blk_mq_ops mmc_mq_ops = {
 	.exit_request	= mmc_exit_request,
 };
 
+//static int q_probe = 0;
 
 /**
  * mmc_init_queue - initialise a queue structure.
@@ -352,6 +388,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	bool bounce = false;
 	int ret = -ENOMEM;
 
+//	if (q_probe)
+//		return -ENOMEM;
+//	q_probe = 1;
+
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
@@ -361,7 +401,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 //		return -ENOMEM;
 	memset(&mq->tag_set, 0, sizeof(mq->tag_set));
 	mq->tag_set.ops = &mmc_mq_ops;
-	mq->tag_set.queue_depth = 1;
+	mq->tag_set.queue_depth = 2;
 	mq->tag_set.numa_node = NUMA_NO_NODE;
 	mq->tag_set.flags =
 		BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
@@ -379,12 +419,13 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	}
 	mq->queue = q;
 
-	mq->qdepth = 1;
+	mq->qdepth = 2;
 	mq->mqrq = mmc_queue_alloc_mqrqs(mq, mq->qdepth);
 	if (!mq->mqrq)
 		goto blk_cleanup;
 	mq->testtag = 1;
 	mq->queue->queuedata = mq;
+	spin_lock_init(&mq->async_lock);
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index b67ac83..99dacb7 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -66,6 +66,42 @@ struct mmc_queue {
 
 	/* Block layer tags. */
 	struct blk_mq_tag_set	tag_set;
+
+	spinlock_t		async_lock;
+};
+
+/*
+ * There is one mmc_blk_data per slot.
+ */
+struct mmc_blk_data {
+	spinlock_t	lock;
+	struct gendisk	*disk;
+	struct mmc_queue queue;
+	struct list_head part;
+
+	unsigned int	flags;
+#define MMC_BLK_CMD23	(1 << 0)	/* Can do SET_BLOCK_COUNT for multiblock */
+#define MMC_BLK_REL_WR	(1 << 1)	/* MMC Reliable write support */
+#define MMC_BLK_PACKED_CMD	(1 << 2)	/* MMC packed command support */
+
+	unsigned int	usage;
+	unsigned int	read_only;
+	unsigned int	part_type;
+	unsigned int	reset_done;
+#define MMC_BLK_READ		BIT(0)
+#define MMC_BLK_WRITE		BIT(1)
+#define MMC_BLK_DISCARD		BIT(2)
+#define MMC_BLK_SECDISCARD	BIT(3)
+
+	/*
+	 * Only set in main mmc_blk_data associated
+	 * with mmc_card with dev_set_drvdata, and keeps
+	 * track of the current selected device partition.
+	 */
+	unsigned int	part_curr;
+	struct device_attribute force_ro;
+	struct device_attribute power_ro_lock;
+	int	area_type;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 64687f1..ebb6ed5 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -415,6 +415,9 @@ EXPORT_SYMBOL(mmc_start_bkops);
 static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
 			 int err);
 
+int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req, struct mmc_queue_req *mqrq);
+int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq, struct mmc_queue_req *mqrq);
+
 /*
  * mmc_wait_done() - done callback for request
  * @mrq: done request
@@ -431,8 +434,18 @@ static void mmc_wait_done(struct mmc_request *mrq)
 //	struct mmc_queue_req *mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
 	struct mmc_command *cmd;
 	int err = 0, ret = 0;
+	unsigned long flags = 0;
+	struct mmc_queue *mq = NULL;
 
-//	pr_info("%s: enter\n", __func__);
+	if (mq_rq)
+		mq = mq_rq->req->q->queuedata;
+
+////	pr_info("%s: enter\n", __func__);
+
+	if (mq) {
+		spin_lock_irqsave(&mq->async_lock, flags);
+////		pr_info("%s: enter mq\n", __func__);
+	}
 
 	cmd = mrq->cmd;
 //	pr_info("%s: cmd->opcode=%d mq_rq=%p\n", __func__, cmd->opcode, mq_rq);
@@ -459,11 +472,28 @@ static void mmc_wait_done(struct mmc_request *mrq)
 		cmd->error = 0;
 		__mmc_start_request(host, mrq);
 //		goto out;
+		if (mq)
+			spin_unlock_irqrestore(&mq->async_lock, flags);
 		return;
 	}
 
 	mmc_retune_release(host);
 
+	if (mq && mq->qcnt == 2) {
+		struct mmc_queue_req *mq_rq2 = &mq->mqrq[!mq_rq->task_id];
+
+		if (mq_rq2 &&
+		    (mq_rq2->req->cmd_type == REQ_TYPE_FS) &&
+			(mq_rq2->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH))) {
+			mmc_blk_issue_rq(mq, mq_rq2->req, mq_rq2);
+		} else {
+			struct mmc_async_req *areq;
+
+			areq = &mq_rq2->mmc_active;
+			__mmc_start_req(host, areq->mrq, mq_rq2);
+		}
+	}
+
 //	host->areq->pre_req_done = false;
 	if (mq_rq &&
 	    (mq_rq->req->cmd_type == REQ_TYPE_FS) &&
@@ -472,7 +502,7 @@ static void mmc_wait_done(struct mmc_request *mrq)
 	}
 
 	complete(&mrq->completion);
-BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)));
+//BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)));
 	if (mq_rq &&
 	    (mq_rq->req->cmd_type == REQ_TYPE_FS) &&
 	    ((mq_rq->req->cmd_flags & (REQ_DISCARD | REQ_FLUSH)) == 0)) {
@@ -487,7 +517,9 @@ BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags
 		mmc_queue_bounce_post(mq_rq);
 
 		bytes = brq->data.bytes_xfered;
-		mmc_put_card(host->card);
+////		pr_info("%s: enter mq->qcnt=%d\n", __func__, mq->qcnt);
+		if (mq->qcnt == 1)
+			mmc_put_card(host->card);
 //		pr_info("%s: freeing mqrq\n", __func__); //
 		mmc_queue_req_free(req->q->queuedata, mq_rq); //
 //		ret = blk_end_request(req, 0, bytes);
@@ -496,7 +528,9 @@ BUG_ON(mq_rq && (mq_rq->req->cmd_type == REQ_TYPE_FS) && (mq_rq->req->cmd_flags
 			__blk_mq_end_request(req, 0);
 	}
 //out:
-//	pr_info("%s: exit (err=%d, ret=%d)\n", __func__, err, ret);
+////	pr_info("%s: exit (err=%d, ret=%d)\n", __func__, err, ret);
+	if (mq)
+		spin_unlock_irqrestore(&mq->async_lock, flags);
 }
 
 static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
@@ -521,7 +555,7 @@ static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
  * If an ongoing transfer is already in progress, wait for the command line
  * to become available before sending another command.
  */
-static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq, struct mmc_queue_req *mqrq)
+int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq, struct mmc_queue_req *mqrq)
 {
 	int err;
 
@@ -675,7 +709,7 @@ EXPORT_SYMBOL(mmc_is_req_done);
  *	host prepare for the new request. Preparation of a request may be
  *	performed while another request is running on the host.
  */
-static void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq,
+void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq,
 		 bool is_first_req)
 {
 	if (host->ops->pre_req)
@@ -797,9 +831,10 @@ void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
 	if (!mrq->cap_cmd_during_tfr) {
 //		mmc_wait_for_req_done(host, mrq);
 //		BUG(); //
-//		pr_info("%s: wait start\n", __func__);
-		wait_for_completion(&mrq->completion);
-//		pr_info("%s: wait done\n", __func__);
+////		pr_info("%s: wait start\n", __func__);
+		mdelay(500);
+		//wait_for_completion(&mrq->completion);
+////		pr_info("%s: wait done\n", __func__);
 	}
 
 //	pr_info("%s: exit\n", __func__);
@@ -1097,9 +1132,9 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
 {
 	DECLARE_WAITQUEUE(wait, current);
 	unsigned long flags;
-	int stop;
+	int stop = 0; //
 	bool pm = false;
-
+#if 0
 	might_sleep();
 
 	add_wait_queue(&host->wq, &wait);
@@ -1114,17 +1149,20 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
 		spin_lock_irqsave(&host->lock, flags);
 	}
 	set_current_state(TASK_RUNNING);
+#endif
 	if (!stop) {
 		host->claimed = 1;
 		host->claimer = current;
 		host->claim_cnt += 1;
 		if (host->claim_cnt == 1)
 			pm = true;
+	}
+#if 0
 	} else
 		wake_up(&host->wq);
 	spin_unlock_irqrestore(&host->lock, flags);
 	remove_wait_queue(&host->wq, &wait);
-
+#endif
 	if (pm)
 		pm_runtime_get_sync(mmc_dev(host));
 
@@ -1145,15 +1183,15 @@ void mmc_release_host(struct mmc_host *host)
 
 	WARN_ON(!host->claimed);
 
-	spin_lock_irqsave(&host->lock, flags);
+//	spin_lock_irqsave(&host->lock, flags);
 	if (--host->claim_cnt) {
 		/* Release for nested claim */
-		spin_unlock_irqrestore(&host->lock, flags);
+//		spin_unlock_irqrestore(&host->lock, flags);
 	} else {
 		host->claimed = 0;
 		host->claimer = NULL;
-		spin_unlock_irqrestore(&host->lock, flags);
-		wake_up(&host->wq);
+//		spin_unlock_irqrestore(&host->lock, flags);
+//		wake_up(&host->wq);
 		pm_runtime_mark_last_busy(mmc_dev(host));
 		pm_runtime_put_autosuspend(mmc_dev(host));
 	}
-- 
1.9.1


* Re: [PATCH PoC 0/7] mmc: switch to blk-mq
  2016-09-22 13:57 [PATCH PoC 0/7] mmc: switch to blk-mq Bartlomiej Zolnierkiewicz
                   ` (6 preceding siblings ...)
  2016-09-22 13:57 ` [PATCH PoC 7/7] mmc-mq: async request support for blk-mq mode Bartlomiej Zolnierkiewicz
@ 2016-09-30  0:50 ` Linus Walleij
  7 siblings, 0 replies; 9+ messages in thread
From: Linus Walleij @ 2016-09-30  0:50 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz
  Cc: Ulf Hansson, Greg KH, Paolo Valente, Jens Axboe, Hannes Reinecke,
	Tejun Heo, Omar Sandoval, Christoph Hellwig, linux-mmc,
	linux-kernel, Arnd Bergmann

On Thu, Sep 22, 2016 at 6:57 AM, Bartlomiej Zolnierkiewicz
<b.zolnierkie@samsung.com> wrote:

> Since Linus Walleij is also working on that and I won't
> probably have time to touch this code till the end of
> upcoming month, here it is (basically a code dump of my
> proof-of-concept work).  I hope that it would be useful
> to somebody.
>
> It is extremely ugly & full of bogus debug code but boots
> fine on my Odroid-XU3 and benchmarks can be run.

Haha, it is still good discussion material.

FWIW your patchset is way more advanced than whatever I
cooked up, and the approach taken (first rip out async requests,
then add an mq callback block, then add async requests back
once there is a function to monitor whether the queue is busy)
is way better.

I sat down with Ulf Hansson and Arnd Bergmann to discuss the
material and issues we face if/when migrating the MMC/SD code
to blk-mq.

Just for context to everyone: MMC/SD has an asynchronous
request handling scheme with a pre_req() hook that calls all the
way into the driver to do some DMA mapping (flushing) of SG
lists with dma_map_sg() before the hardware starts processing
the actual request. There is a post_req() callback as well,
performing dma_unmap_sg().

This is mostly a non-issue on coherent memory architectures
like x86, but it gives a nice performance boost on ARM (etc.)
systems. In theory the callback could be used for other stuff,
but all current drivers ultimately call
dma_map_sg()/dma_unmap_sg().
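
To make this concrete, here is roughly what that hook pair boils
down to in a host driver. This is a minimal sketch, not code from
any particular driver: the "foo" names are invented, but the
mmc_host_ops hooks, the struct mmc_data fields and the DMA calls
are the existing interfaces:

#include <linux/dma-mapping.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

static enum dma_data_direction foo_dma_dir(struct mmc_data *data)
{
	return (data->flags & MMC_DATA_WRITE) ?
		DMA_TO_DEVICE : DMA_FROM_DEVICE;
}

/* Runs while the previous request may still be executing: maps
 * (and thereby cache-flushes) the SG list ahead of time. */
static void foo_pre_req(struct mmc_host *host, struct mmc_request *mrq,
			bool is_first_req)
{
	struct mmc_data *data = mrq->data;

	if (data)
		data->host_cookie = dma_map_sg(mmc_dev(host), data->sg,
					       data->sg_len,
					       foo_dma_dir(data));
}

/* Runs after completion: undoes the mapping from foo_pre_req(). */
static void foo_post_req(struct mmc_host *host, struct mmc_request *mrq,
			 int err)
{
	struct mmc_data *data = mrq->data;

	if (data && data->host_cookie) {
		dma_unmap_sg(mmc_dev(host), data->sg, data->sg_len,
			     foo_dma_dir(data));
		data->host_cookie = 0;
	}
}

static const struct mmc_host_ops foo_ops = {
	/* .request, .set_ios and friends omitted */
	.pre_req	= foo_pre_req,
	.post_req	= foo_post_req,
};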

The interesting solution to achieve asynchronous requests,
a.k.a. double-buffering, a.k.a. request pipelining, is basically
this from the last patch:

-       mq->qdepth = 1;
+       mq->qdepth = 2;

So we claim that the hardware queue has a depth of two
requests, but well... that is not really true. If we start confusing
concepts like this to get parallelism, what shall we set this
to when we exploit command queueing and actually have a
queue depth of, say, 64? That will result in a pile of hacks.

The proper solution would be to augment struct blk_mq_ops
vtable with a .pre_queue_rq() and .post_complete_rq() or
something.
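
Something along these lines; entirely hypothetical, mind you, the
two callbacks below do not exist in blk-mq today and the names
are just made up to match the text above:

struct blk_mq_ops {
	/* ... existing callbacks such as .queue_rq ... */

	/*
	 * Would be called once when the block layer hands the
	 * request to the driver, potentially long before
	 * ->queue_rq() starts the hardware; a driver could
	 * dma_map_sg() here.
	 */
	void (*pre_queue_rq)(struct blk_mq_hw_ctx *hctx,
			     struct request *rq);

	/*
	 * Would be called once after the request has fully
	 * completed; undoes ->pre_queue_rq(), e.g. dma_unmap_sg().
	 */
	void (*post_complete_rq)(struct request *rq, int error);
};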

The way I read the code, the init_request() and exit_request()
callbacks cannot be used, as they only deal with allocating the
struct, and this seems to happen before the request is actually
filled in with the data (correct me if I don't understand this
right!); this seems to be confirmed by the presence of a
.reinit_request() callback. So we can't map/unmap the requests
in these callbacks.

We noted that this DMA map/unmap optimization is also
applicable to USB mass storage, so we would get an
optimization from the MQ block layer that we can reuse in
more than MMC/SD.

After this we will still run into the same issue that you see
with this patchset: performance regressions because of the
absence of an elevator/scheduler algorithm in blk-mq. So we
cannot really apply the patch set before, or at the same time
as, fixing that.

Apart from that, we saw some really arcane things in the
MMC/SD core, mmc_claim_host() being the most obvious
example: as far as we can tell it is some kind of
reimplementation of mutex_trylock(). Some serious cleanup may
be needed here; see the sketch at the end of this mail.

It's nice that your first patch rips out the quirky kthread that
polls the block queue for new requests and sends them down
to the mmc core, including picking out a few NULL requests
and flushing its async work queue with that.
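
For illustration, this is roughly what claim/release could collapse
to if the wait-queue dance were replaced by standard primitives.
Purely a sketch: the claim_mutex field is invented, and it ignores
the nested-claim (claim_cnt) and abort cases the real code handles:

/* hypothetical: assumes a struct mutex claim_mutex in mmc_host */
static void mmc_claim_host_simple(struct mmc_host *host)
{
	mutex_lock(&host->claim_mutex);
	pm_runtime_get_sync(mmc_dev(host));
}

static void mmc_release_host_simple(struct mmc_host *host)
{
	pm_runtime_mark_last_busy(mmc_dev(host));
	pm_runtime_put_autosuspend(mmc_dev(host));
	mutex_unlock(&host->claim_mutex);
}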

Yours,
Linus Walleij
