* [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing
@ 2016-11-25 10:06 Adrian Hunter
2016-11-25 10:06 ` [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up Adrian Hunter
` (24 more replies)
0 siblings, 25 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:06 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Hi
Here is an updated version of the Software Command Queuing patches,
re-based on next, with a couple of minor changes - refer to the changes listed under V7 below.
It would be good to move at least a few of these patches forward: patches
1-5, for example, could be considered tidy-ups.
Performance results (not updated since V5):
Results can vary from run to run, but here are some results showing 1, 2 or 4
processes with 4k and 32k record sizes. In each row, the first figure is
throughput without Software Command Queuing, the second is with it, and the
last is the percentage change. They show up to 40% improvement in read
performance when there are multiple processes.
iozone -s 8192k -r 4k -i 0 -i 1 -i 2 -i 8 -I -t 1 -F /mnt/mmc/iozone1.tmp
Children see throughput for 1 initial writers = 27909.87 kB/sec 24204.14 kB/sec -13.28 %
Children see throughput for 1 rewriters = 28839.28 kB/sec 25531.92 kB/sec -11.47 %
Children see throughput for 1 readers = 25889.65 kB/sec 24883.23 kB/sec -3.89 %
Children see throughput for 1 re-readers = 25558.23 kB/sec 24679.89 kB/sec -3.44 %
Children see throughput for 1 random readers = 25571.48 kB/sec 24689.52 kB/sec -3.45 %
Children see throughput for 1 mixed workload = 25758.59 kB/sec 24487.52 kB/sec -4.93 %
Children see throughput for 1 random writers = 24787.51 kB/sec 19368.99 kB/sec -21.86 %
iozone -s 8192k -r 32k -i 0 -i 1 -i 2 -i 8 -I -t 1 -F /mnt/mmc/iozone1.tmp
Children see throughput for 1 initial writers = 91344.61 kB/sec 102008.56 kB/sec 11.67 %
Children see throughput for 1 rewriters = 87932.36 kB/sec 96630.44 kB/sec 9.89 %
Children see throughput for 1 readers = 134879.82 kB/sec 110292.79 kB/sec -18.23 %
Children see throughput for 1 re-readers = 147632.13 kB/sec 109053.33 kB/sec -26.13 %
Children see throughput for 1 random readers = 93547.37 kB/sec 112225.50 kB/sec 19.97 %
Children see throughput for 1 mixed workload = 93560.04 kB/sec 110515.21 kB/sec 18.12 %
Children see throughput for 1 random writers = 92841.84 kB/sec 81153.81 kB/sec -12.59 %
iozone -s 8192k -r 4k -i 0 -i 1 -i 2 -i 8 -I -t 2 -F /mnt/mmc/iozone1.tmp /mnt/mmc/iozone2.tmp
Children see throughput for 2 initial writers = 31145.43 kB/sec 33771.25 kB/sec 8.43 %
Children see throughput for 2 rewriters = 30592.57 kB/sec 35916.46 kB/sec 17.40 %
Children see throughput for 2 readers = 31669.83 kB/sec 37460.13 kB/sec 18.28 %
Children see throughput for 2 re-readers = 32079.94 kB/sec 37373.33 kB/sec 16.50 %
Children see throughput for 2 random readers = 27731.19 kB/sec 37601.65 kB/sec 35.59 %
Children see throughput for 2 mixed workload = 13927.50 kB/sec 14617.06 kB/sec 4.95 %
Children see throughput for 2 random writers = 31250.00 kB/sec 33106.72 kB/sec 5.94 %
iozone -s 8192k -r 32k -i 0 -i 1 -i 2 -i 8 -I -t 2 -F /mnt/mmc/iozone1.tmp /mnt/mmc/iozone2.tmp
Children see throughput for 2 initial writers = 123255.84 kB/sec 131252.22 kB/sec 6.49 %
Children see throughput for 2 rewriters = 115234.91 kB/sec 107225.74 kB/sec -6.95 %
Children see throughput for 2 readers = 128921.86 kB/sec 148562.71 kB/sec 15.23 %
Children see throughput for 2 re-readers = 127815.24 kB/sec 149304.32 kB/sec 16.81 %
Children see throughput for 2 random readers = 125600.46 kB/sec 148406.56 kB/sec 18.16 %
Children see throughput for 2 mixed workload = 44006.94 kB/sec 50937.36 kB/sec 15.75 %
Children see throughput for 2 random writers = 120623.95 kB/sec 103969.05 kB/sec -13.81 %
iozone -s 8192k -r 4k -i 0 -i 1 -i 2 -i 8 -I -t 4 -F /mnt/mmc/iozone1.tmp /mnt/mmc/iozone2.tmp /mnt/mmc/iozone3.tmp /mnt/mmc/iozone4.tmp
Children see throughput for 4 initial writers = 24100.96 kB/sec 33336.58 kB/sec 38.32 %
Children see throughput for 4 rewriters = 31650.20 kB/sec 33091.53 kB/sec 4.55 %
Children see throughput for 4 readers = 33276.92 kB/sec 41799.89 kB/sec 25.61 %
Children see throughput for 4 re-readers = 31786.96 kB/sec 41501.74 kB/sec 30.56 %
Children see throughput for 4 random readers = 31991.65 kB/sec 40973.93 kB/sec 28.08 %
Children see throughput for 4 mixed workload = 15804.80 kB/sec 13581.32 kB/sec -14.07 %
Children see throughput for 4 random writers = 31231.42 kB/sec 34537.03 kB/sec 10.58 %
iozone -s 8192k -r 32k -i 0 -i 1 -i 2 -i 8 -I -t 4 -F /mnt/mmc/iozone1.tmp /mnt/mmc/iozone2.tmp /mnt/mmc/iozone3.tmp /mnt/mmc/iozone4.tmp
Children see throughput for 4 initial writers = 116567.42 kB/sec 119280.35 kB/sec 2.33 %
Children see throughput for 4 rewriters = 115010.96 kB/sec 120864.34 kB/sec 5.09 %
Children see throughput for 4 readers = 130700.29 kB/sec 177834.21 kB/sec 36.06 %
Children see throughput for 4 re-readers = 125392.58 kB/sec 175975.28 kB/sec 40.34 %
Children see throughput for 4 random readers = 132194.57 kB/sec 176630.46 kB/sec 33.61 %
Children see throughput for 4 mixed workload = 56464.98 kB/sec 54140.61 kB/sec -4.12 %
Children see throughput for 4 random writers = 109128.36 kB/sec 85359.80 kB/sec -21.78 %
The current block driver supports 2 requests on the go at a time. Patches
1 - 8 make preparations for an arbitrary sized queue. Patches 9 - 12
introduce Command Queue definitions and helpers. Patches 13 - 19
complete the job of making the block driver use a queue. Patches 20 - 23
finally add Software Command Queuing, and 24 - 25 enable it for Intel eMMC
controllers. Most of the Software Command Queuing functionality is added
in patch 22.
As noted below, the patches can also be found here:
http://git.infradead.org/users/ahunter/linux-sdhci.git/shortlog/refs/heads/swcmdq
Changes in V7:
Re-based on next.
mmc: mmc: Add Command Queue definitions
Remove cmdq_en flag and add Linus Walleij's Reviewed-by.
mmc: mmc: Add functions to enable / disable the Command Queue
Add cmdq_en flag.
Changes in V6:
mmc: core: Do not prepare a new request twice
Ensure struct mmc_async_req is always initialized to zero
Changes in V5:
Patches 1-5 dropped because they have been applied.
Re-based on next.
Fixed use of blk_end_request_cur() when it should have been
blk_end_request_all() to error out requests during error recovery.
Fixed unpaired retune_hold / retune_release in the error recovery path.
Changes in V4:
Re-based on next + v4.8-rc2 + "block: Fix secure erase" patch
Changes in V3:
Patches 1-25 dropped because they have been applied.
Re-based on next.
mmc: queue: Allocate queue of size qdepth
Free queue during cleanup
mmc: mmc: Add Command Queue definitions
Add cmdq_en to mmc-dev-attrs.txt documentation
mmc: queue: Share mmc request array between partitions
New patch
Changes in V2:
Added 5 patches already sent here:
http://marc.info/?l=linux-mmc&m=146712062816835
Added 3 more new patches:
mmc: sdhci-pci: Do not runtime suspend at the end of sdhci_pci_probe()
mmc: sdhci: Avoid STOP cmd triggering warning in sdhci_send_command()
mmc: sdhci: sdhci_execute_tuning() must delete timer
Carried forward the V2 fix to:
mmc: mmc_test: Disable Command Queue while mmc_test is used
Also reset the cmd circuit for data timeout if it is processing the data
cmd, in patch:
mmc: sdhci: Do not reset cmd or data circuits that are in use
There wasn't much comment on the RFC, so there have been few changes.
Venu Byravarasu commented that it may be more efficient to use Software
Command Queuing only when there is more than one request queued. It isn't
obvious how well that would work in practice, but it could be added later
if shown to be beneficial.
Original Cover Letter:
Chuanxiao Dong sent some patches last year relating to eMMC 5.1 Software
Command Queuing. He did not follow up, but I have contacted him and he says
it is OK if I take over upstreaming the patches.
eMMC Command Queuing is a feature added in version 5.1. The card maintains
a queue of up to 32 data transfers. Commands CMD44/CMD45 are sent to queue
up transfers in advance, and then one of the transfers is selected to
"execute" by CMD46/CMD47, at which point the data transfer actually begins.
The advantage of command queuing is that the card can prepare for transfers
in advance. That makes a big difference in the case of random reads because
the card can start reading into its cache in advance.
A v5.1 host controller can manage the command queue itself, but it is also
possible for software to manage the queue using a non-v5.1 host controller
- that is what Software Command Queuing is.
Refer to the JEDEC (http://www.jedec.org/) eMMC v5.1 Specification for more
information about Command Queuing.
While these patches are heavily based on Dong's patches, there are some
changes:
SDHCI has been amended to support commands during transfer. That is a
generic change added in patches 1 - 5. [Those patches have now been applied]
In principle, that would also support SDIO's CMD52 during data transfer.
The original approach added multiple commands into the same request for
sending CMD44, CMD45 and CMD13. That is not strictly necessary and has
been omitted for now.
The original approach also called blk_end_request() from the mrq->done()
function, which meant the upper layers learnt of completed requests
slightly earlier. That is not strictly related to Software Command Queuing
and is something that could potentially be done for all data requests.
That has been omitted for now.
The current block driver supports 2 requests on the go at a time. Patches
1 - 8 make preparations for an arbitrary sized queue. Patches 9 - 12
introduce Command Queue definitions and helpers. Patches 13 - 19
complete the job of making the block driver use a queue. Patches 20 - 23
finally add Software Command Queuing, and 24 - 25 enable it for Intel eMMC
controllers. Most of the Software Command Queuing functionality is added
in patch 22.
The patches can also be found here:
http://git.infradead.org/users/ahunter/linux-sdhci.git/shortlog/refs/heads/swcmdq
The patches have only had basic testing so far. Ad-hoc testing shows a
degradation in sequential read performance of about 10%, but an increase of
about 90% in throughput for a mixed workload of multiple processes. The
reduction in sequential performance is due to the need to read the Queue
Status register between transfers.
These patches should not conflict with Hardware Command Queuing, which
handles the queue in a completely different way and thus does not need
to share code with Software Command Queuing. The exceptions are the
Command Queue definitions and the queue allocation, which should be
reusable.
Adrian Hunter (25):
mmc: queue: Fix queue thread wake-up
mmc: queue: Factor out mmc_queue_alloc_bounce_bufs()
mmc: queue: Factor out mmc_queue_alloc_bounce_sgs()
mmc: queue: Factor out mmc_queue_alloc_sgs()
mmc: queue: Factor out mmc_queue_reqs_free_bufs()
mmc: queue: Introduce queue depth
mmc: queue: Use queue depth to allocate and free
mmc: queue: Allocate queue of size qdepth
mmc: mmc: Add Command Queue definitions
mmc: mmc: Add functions to enable / disable the Command Queue
mmc: mmc_test: Disable Command Queue while mmc_test is used
mmc: block: Disable Command Queue while RPMB is used
mmc: core: Do not prepare a new request twice
mmc: core: Export mmc_retune_hold() and mmc_retune_release()
mmc: block: Factor out mmc_blk_requeue()
mmc: block: Fix 4K native sector check
mmc: block: Use local var for mqrq_cur
mmc: block: Pass mqrq to mmc_blk_prep_packed_list()
mmc: block: Introduce queue semantics
mmc: queue: Share mmc request array between partitions
mmc: queue: Add a function to control wake-up on new requests
mmc: block: Add Software Command Queuing
mmc: mmc: Enable Software Command Queuing
mmc: sdhci-pci: Enable Software Command Queuing for some Intel controllers
mmc: sdhci-acpi: Enable Software Command Queuing for some Intel controllers
Documentation/mmc/mmc-dev-attrs.txt | 1 +
drivers/mmc/card/block.c | 747 +++++++++++++++++++++++++++++++++---
drivers/mmc/card/mmc_test.c | 21 +-
drivers/mmc/card/queue.c | 323 ++++++++++------
drivers/mmc/card/queue.h | 27 +-
drivers/mmc/core/core.c | 18 +-
drivers/mmc/core/host.c | 2 +
drivers/mmc/core/host.h | 2 -
drivers/mmc/core/mmc.c | 43 ++-
drivers/mmc/core/mmc_ops.c | 27 ++
drivers/mmc/host/sdhci-acpi.c | 2 +-
drivers/mmc/host/sdhci-pci-core.c | 2 +-
include/linux/mmc/card.h | 8 +
include/linux/mmc/core.h | 6 +
include/linux/mmc/host.h | 4 +-
include/linux/mmc/mmc.h | 17 +
16 files changed, 1050 insertions(+), 200 deletions(-)
Regards
Adrian
^ permalink raw reply [flat|nested] 59+ messages in thread
* [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
@ 2016-11-25 10:06 ` Adrian Hunter
2016-11-25 14:37 ` Linus Walleij
2016-11-28 3:32 ` Ritesh Harjani
2016-11-25 10:06 ` [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs() Adrian Hunter
` (23 subsequent siblings)
24 siblings, 2 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:06 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
The only time the driver sleeps expecting to be woken upon the arrival of
a new request is when the dispatch queue is empty. The only time that it
is known whether the dispatch queue is empty is after NULL is returned
from blk_fetch_request() while under the queue lock.
Recognizing those facts, simplify the synchronization between the queue
thread and the request function. A couple of flags tell the request
function what to do, and the queue lock and barriers associated with
wake-ups ensure synchronization.
The result is simpler and allows the removal of the context_info lock.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 7 -------
drivers/mmc/card/queue.c | 35 +++++++++++++++++++++--------------
drivers/mmc/card/queue.h | 1 +
drivers/mmc/core/core.c | 6 ------
include/linux/mmc/host.h | 2 --
5 files changed, 22 insertions(+), 29 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 6618126fcb9f..f8e51640596e 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -2193,8 +2193,6 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
int ret;
struct mmc_blk_data *md = mq->blkdata;
struct mmc_card *card = md->queue.card;
- struct mmc_host *host = card->host;
- unsigned long flags;
bool req_is_special = mmc_req_is_special(req);
if (req && !mq->mqrq_prev->req)
@@ -2227,11 +2225,6 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
mmc_blk_issue_rw_rq(mq, NULL);
ret = mmc_blk_issue_flush(mq, req);
} else {
- if (!req && host->areq) {
- spin_lock_irqsave(&host->context_info.lock, flags);
- host->context_info.is_waiting_last_req = true;
- spin_unlock_irqrestore(&host->context_info.lock, flags);
- }
ret = mmc_blk_issue_rw_rq(mq, req);
}
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 3f6a2463ab30..c4ac4b8a1a98 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -53,6 +53,7 @@ static int mmc_queue_thread(void *d)
{
struct mmc_queue *mq = d;
struct request_queue *q = mq->queue;
+ struct mmc_context_info *cntx = &mq->card->host->context_info;
current->flags |= PF_MEMALLOC;
@@ -63,6 +64,19 @@ static int mmc_queue_thread(void *d)
spin_lock_irq(q->queue_lock);
set_current_state(TASK_INTERRUPTIBLE);
req = blk_fetch_request(q);
+ mq->asleep = false;
+ cntx->is_waiting_last_req = false;
+ cntx->is_new_req = false;
+ if (!req) {
+ /*
+ * Dispatch queue is empty so set flags for
+ * mmc_request_fn() to wake us up.
+ */
+ if (mq->mqrq_prev->req)
+ cntx->is_waiting_last_req = true;
+ else
+ mq->asleep = true;
+ }
mq->mqrq_cur->req = req;
spin_unlock_irq(q->queue_lock);
@@ -115,7 +129,6 @@ static void mmc_request_fn(struct request_queue *q)
{
struct mmc_queue *mq = q->queuedata;
struct request *req;
- unsigned long flags;
struct mmc_context_info *cntx;
if (!mq) {
@@ -127,19 +140,13 @@ static void mmc_request_fn(struct request_queue *q)
}
cntx = &mq->card->host->context_info;
- if (!mq->mqrq_cur->req && mq->mqrq_prev->req) {
- /*
- * New MMC request arrived when MMC thread may be
- * blocked on the previous request to be complete
- * with no current request fetched
- */
- spin_lock_irqsave(&cntx->lock, flags);
- if (cntx->is_waiting_last_req) {
- cntx->is_new_req = true;
- wake_up_interruptible(&cntx->wait);
- }
- spin_unlock_irqrestore(&cntx->lock, flags);
- } else if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
+
+ if (cntx->is_waiting_last_req) {
+ cntx->is_new_req = true;
+ wake_up_interruptible(&cntx->wait);
+ }
+
+ if (mq->asleep)
wake_up_process(mq->thread);
}
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 334c9306070f..0e8133c626c9 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -58,6 +58,7 @@ struct mmc_queue {
unsigned int flags;
#define MMC_QUEUE_SUSPENDED (1 << 0)
#define MMC_QUEUE_NEW_REQUEST (1 << 1)
+ bool asleep;
struct mmc_blk_data *blkdata;
struct request_queue *queue;
struct mmc_queue_req mqrq[2];
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index f39397f7c8dc..dc1f27ee50b8 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -504,18 +504,14 @@ static enum mmc_blk_status mmc_wait_for_data_req_done(struct mmc_host *host,
struct mmc_command *cmd;
struct mmc_context_info *context_info = &host->context_info;
enum mmc_blk_status status;
- unsigned long flags;
while (1) {
wait_event_interruptible(context_info->wait,
(context_info->is_done_rcv ||
context_info->is_new_req));
- spin_lock_irqsave(&context_info->lock, flags);
context_info->is_waiting_last_req = false;
- spin_unlock_irqrestore(&context_info->lock, flags);
if (context_info->is_done_rcv) {
context_info->is_done_rcv = false;
- context_info->is_new_req = false;
cmd = mrq->cmd;
if (!cmd->error || !cmd->retries ||
@@ -534,7 +530,6 @@ static enum mmc_blk_status mmc_wait_for_data_req_done(struct mmc_host *host,
continue; /* wait for done/new event again */
}
} else if (context_info->is_new_req) {
- context_info->is_new_req = false;
if (!next_req)
return MMC_BLK_NEW_REQUEST;
}
@@ -3016,7 +3011,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
*/
void mmc_init_context_info(struct mmc_host *host)
{
- spin_lock_init(&host->context_info.lock);
host->context_info.is_new_req = false;
host->context_info.is_done_rcv = false;
host->context_info.is_waiting_last_req = false;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 2a6418d0c343..bcf6d252ec67 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -197,14 +197,12 @@ struct mmc_slot {
* @is_new_req wake up reason was new request
* @is_waiting_last_req mmc context waiting for single running request
* @wait wait queue
- * @lock lock to protect data fields
*/
struct mmc_context_info {
bool is_done_rcv;
bool is_new_req;
bool is_waiting_last_req;
wait_queue_head_t wait;
- spinlock_t lock;
};
struct regulator;
--
1.9.1
* [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
2016-11-25 10:06 ` [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up Adrian Hunter
@ 2016-11-25 10:06 ` Adrian Hunter
2016-11-25 14:38 ` Linus Walleij
2016-11-28 3:36 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs() Adrian Hunter
` (22 subsequent siblings)
24 siblings, 2 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:06 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
In preparation for supporting a queue of requests, factor out
mmc_queue_alloc_bounce_bufs().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/queue.c | 45 +++++++++++++++++++++++++++------------------
1 file changed, 27 insertions(+), 18 deletions(-)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index c4ac4b8a1a98..ea8b01f76d55 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -186,6 +186,31 @@ static void mmc_queue_setup_discard(struct request_queue *q,
queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
}
+static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
+ unsigned int bouncesz)
+{
+ struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+ struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
+
+ mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+ if (!mqrq_cur->bounce_buf) {
+ pr_warn("%s: unable to allocate bounce cur buffer\n",
+ mmc_card_name(mq->card));
+ return false;
+ }
+
+ mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+ if (!mqrq_prev->bounce_buf) {
+ pr_warn("%s: unable to allocate bounce prev buffer\n",
+ mmc_card_name(mq->card));
+ kfree(mqrq_cur->bounce_buf);
+ mqrq_cur->bounce_buf = NULL;
+ return false;
+ }
+
+ return true;
+}
+
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
@@ -235,24 +260,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (bouncesz > (host->max_blk_count * 512))
bouncesz = host->max_blk_count * 512;
- if (bouncesz > 512) {
- mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
- if (!mqrq_cur->bounce_buf) {
- pr_warn("%s: unable to allocate bounce cur buffer\n",
- mmc_card_name(card));
- } else {
- mqrq_prev->bounce_buf =
- kmalloc(bouncesz, GFP_KERNEL);
- if (!mqrq_prev->bounce_buf) {
- pr_warn("%s: unable to allocate bounce prev buffer\n",
- mmc_card_name(card));
- kfree(mqrq_cur->bounce_buf);
- mqrq_cur->bounce_buf = NULL;
- }
- }
- }
-
- if (mqrq_cur->bounce_buf && mqrq_prev->bounce_buf) {
+ if (bouncesz > 512 &&
+ mmc_queue_alloc_bounce_bufs(mq, bouncesz)) {
blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
blk_queue_max_segments(mq->queue, bouncesz / 512);
--
1.9.1
* [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
2016-11-25 10:06 ` [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up Adrian Hunter
2016-11-25 10:06 ` [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:39 ` Linus Walleij
2016-11-28 3:48 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs() Adrian Hunter
` (21 subsequent siblings)
24 siblings, 2 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
In preparation for supporting a queue of requests, factor out
mmc_queue_alloc_bounce_sgs().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/queue.c | 44 ++++++++++++++++++++++++++++----------------
1 file changed, 28 insertions(+), 16 deletions(-)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index ea8b01f76d55..3756303b4bbc 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -211,6 +211,30 @@ static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
return true;
}
+static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
+ unsigned int bouncesz)
+{
+ struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+ struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
+ int ret;
+
+ mqrq_cur->sg = mmc_alloc_sg(1, &ret);
+ if (ret)
+ return ret;
+
+ mqrq_cur->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
+ if (ret)
+ return ret;
+
+ mqrq_prev->sg = mmc_alloc_sg(1, &ret);
+ if (ret)
+ return ret;
+
+ mqrq_prev->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
+
+ return ret;
+}
+
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
@@ -225,6 +249,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
{
struct mmc_host *host = card->host;
u64 limit = BLK_BOUNCE_HIGH;
+ bool bounce = false;
int ret;
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
@@ -267,28 +292,15 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
blk_queue_max_segments(mq->queue, bouncesz / 512);
blk_queue_max_segment_size(mq->queue, bouncesz);
- mqrq_cur->sg = mmc_alloc_sg(1, &ret);
- if (ret)
- goto cleanup_queue;
-
- mqrq_cur->bounce_sg =
- mmc_alloc_sg(bouncesz / 512, &ret);
- if (ret)
- goto cleanup_queue;
-
- mqrq_prev->sg = mmc_alloc_sg(1, &ret);
- if (ret)
- goto cleanup_queue;
-
- mqrq_prev->bounce_sg =
- mmc_alloc_sg(bouncesz / 512, &ret);
+ ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
if (ret)
goto cleanup_queue;
+ bounce = true;
}
}
#endif
- if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
+ if (!bounce) {
blk_queue_bounce_limit(mq->queue, limit);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
--
1.9.1
* [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (2 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:41 ` Linus Walleij
2016-11-28 3:49 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs() Adrian Hunter
` (20 subsequent siblings)
24 siblings, 2 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
In preparation for supporting a queue of requests, factor out
mmc_queue_alloc_sgs().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/queue.c | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 3756303b4bbc..66091dc6ab34 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -235,6 +235,21 @@ static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
return ret;
}
+static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
+{
+ struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+ struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
+ int ret;
+
+ mqrq_cur->sg = mmc_alloc_sg(max_segs, &ret);
+ if (ret)
+ return ret;
+
+ mqrq_prev->sg = mmc_alloc_sg(max_segs, &ret);
+
+ return ret;
+}
+
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
@@ -307,12 +322,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
- mqrq_cur->sg = mmc_alloc_sg(host->max_segs, &ret);
- if (ret)
- goto cleanup_queue;
-
-
- mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret);
+ ret = mmc_queue_alloc_sgs(mq, host->max_segs);
if (ret)
goto cleanup_queue;
}
--
1.9.1
* [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (3 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:42 ` Linus Walleij
2016-11-28 3:50 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 06/25] mmc: queue: Introduce queue depth Adrian Hunter
` (19 subsequent siblings)
24 siblings, 2 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
In preparation for supporting a queue of requests, factor out
mmc_queue_reqs_free_bufs().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/queue.c | 65 +++++++++++++++++++-----------------------------
1 file changed, 26 insertions(+), 39 deletions(-)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 66091dc6ab34..cbe92c9cfda1 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -250,6 +250,27 @@ static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
return ret;
}
+static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
+{
+ struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+ struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
+
+ kfree(mqrq_cur->bounce_sg);
+ mqrq_cur->bounce_sg = NULL;
+ kfree(mqrq_prev->bounce_sg);
+ mqrq_prev->bounce_sg = NULL;
+
+ kfree(mqrq_cur->sg);
+ mqrq_cur->sg = NULL;
+ kfree(mqrq_cur->bounce_buf);
+ mqrq_cur->bounce_buf = NULL;
+
+ kfree(mqrq_prev->sg);
+ mqrq_prev->sg = NULL;
+ kfree(mqrq_prev->bounce_buf);
+ mqrq_prev->bounce_buf = NULL;
+}
+
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
@@ -266,8 +287,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
u64 limit = BLK_BOUNCE_HIGH;
bool bounce = false;
int ret;
- struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
- struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
@@ -277,8 +296,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (!mq->queue)
return -ENOMEM;
- mq->mqrq_cur = mqrq_cur;
- mq->mqrq_prev = mqrq_prev;
+ mq->mqrq_cur = &mq->mqrq[0];
+ mq->mqrq_prev = &mq->mqrq[1];
mq->queue->queuedata = mq;
blk_queue_prep_rq(mq->queue, mmc_prep_request);
@@ -334,27 +353,13 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (IS_ERR(mq->thread)) {
ret = PTR_ERR(mq->thread);
- goto free_bounce_sg;
+ goto cleanup_queue;
}
return 0;
- free_bounce_sg:
- kfree(mqrq_cur->bounce_sg);
- mqrq_cur->bounce_sg = NULL;
- kfree(mqrq_prev->bounce_sg);
- mqrq_prev->bounce_sg = NULL;
cleanup_queue:
- kfree(mqrq_cur->sg);
- mqrq_cur->sg = NULL;
- kfree(mqrq_cur->bounce_buf);
- mqrq_cur->bounce_buf = NULL;
-
- kfree(mqrq_prev->sg);
- mqrq_prev->sg = NULL;
- kfree(mqrq_prev->bounce_buf);
- mqrq_prev->bounce_buf = NULL;
-
+ mmc_queue_reqs_free_bufs(mq);
blk_cleanup_queue(mq->queue);
return ret;
}
@@ -363,8 +368,6 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
- struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
- struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
/* Make sure the queue isn't suspended, as that will deadlock */
mmc_queue_resume(mq);
@@ -378,23 +381,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
- kfree(mqrq_cur->bounce_sg);
- mqrq_cur->bounce_sg = NULL;
-
- kfree(mqrq_cur->sg);
- mqrq_cur->sg = NULL;
-
- kfree(mqrq_cur->bounce_buf);
- mqrq_cur->bounce_buf = NULL;
-
- kfree(mqrq_prev->bounce_sg);
- mqrq_prev->bounce_sg = NULL;
-
- kfree(mqrq_prev->sg);
- mqrq_prev->sg = NULL;
-
- kfree(mqrq_prev->bounce_buf);
- mqrq_prev->bounce_buf = NULL;
+ mmc_queue_reqs_free_bufs(mq);
mq->card = NULL;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 06/25] mmc: queue: Introduce queue depth
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (4 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:43 ` Linus Walleij
2016-11-28 4:19 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 07/25] mmc: queue: Use queue depth to allocate and free Adrian Hunter
` (18 subsequent siblings)
24 siblings, 2 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Add an mmc_queue member to record the size of the queue, which currently
supports 2 requests on the go at a time.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 3 +++
drivers/mmc/card/queue.c | 1 +
drivers/mmc/card/queue.h | 1 +
3 files changed, 5 insertions(+)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index f8e51640596e..47835b78872f 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1439,6 +1439,9 @@ static int mmc_packed_init(struct mmc_queue *mq, struct mmc_card *card)
struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
int ret = 0;
+ /* Queue depth is only ever 2 with packed commands */
+ if (mq->qdepth != 2)
+ return -EINVAL;
mqrq_cur->packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
if (!mqrq_cur->packed) {
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index cbe92c9cfda1..60fa095adb14 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -296,6 +296,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (!mq->queue)
return -ENOMEM;
+ mq->qdepth = 2;
mq->mqrq_cur = &mq->mqrq[0];
mq->mqrq_prev = &mq->mqrq[1];
mq->queue->queuedata = mq;
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 0e8133c626c9..8a0a45e5650d 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -64,6 +64,7 @@ struct mmc_queue {
struct mmc_queue_req mqrq[2];
struct mmc_queue_req *mqrq_cur;
struct mmc_queue_req *mqrq_prev;
+ int qdepth;
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
--
1.9.1
* [PATCH V7 07/25] mmc: queue: Use queue depth to allocate and free
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (5 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 06/25] mmc: queue: Introduce queue depth Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:21 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 08/25] mmc: queue: Allocate queue of size qdepth Adrian Hunter
` (17 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Instead of allocating resources for 2 slots in the queue, allow for an
arbitrary number.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/queue.c | 103 +++++++++++++++++++++--------------------------
1 file changed, 46 insertions(+), 57 deletions(-)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 60fa095adb14..1ea007f51ec9 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -189,86 +189,75 @@ static void mmc_queue_setup_discard(struct request_queue *q,
static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
unsigned int bouncesz)
{
- struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
- struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
-
- mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
- if (!mqrq_cur->bounce_buf) {
- pr_warn("%s: unable to allocate bounce cur buffer\n",
- mmc_card_name(mq->card));
- return false;
- }
+ int i;
- mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
- if (!mqrq_prev->bounce_buf) {
- pr_warn("%s: unable to allocate bounce prev buffer\n",
- mmc_card_name(mq->card));
- kfree(mqrq_cur->bounce_buf);
- mqrq_cur->bounce_buf = NULL;
- return false;
+ for (i = 0; i < mq->qdepth; i++) {
+ mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+ if (!mq->mqrq[i].bounce_buf)
+ goto out_err;
}
return true;
+
+out_err:
+ while (--i >= 0) {
+ kfree(mq->mqrq[i].bounce_buf);
+ mq->mqrq[i].bounce_buf = NULL;
+ }
+ pr_warn("%s: unable to allocate bounce buffers\n",
+ mmc_card_name(mq->card));
+ return false;
}
static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
unsigned int bouncesz)
{
- struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
- struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
- int ret;
-
- mqrq_cur->sg = mmc_alloc_sg(1, &ret);
- if (ret)
- return ret;
+ int i, ret;
- mqrq_cur->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
- if (ret)
- return ret;
-
- mqrq_prev->sg = mmc_alloc_sg(1, &ret);
- if (ret)
- return ret;
+ for (i = 0; i < mq->qdepth; i++) {
+ mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
+ if (ret)
+ return ret;
- mqrq_prev->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
+ mq->mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
+ if (ret)
+ return ret;
+ }
- return ret;
+ return 0;
}
static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
{
- struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
- struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
- int ret;
+ int i, ret;
- mqrq_cur->sg = mmc_alloc_sg(max_segs, &ret);
- if (ret)
- return ret;
+ for (i = 0; i < mq->qdepth; i++) {
+ mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
+ if (ret)
+ return ret;
+ }
- mqrq_prev->sg = mmc_alloc_sg(max_segs, &ret);
+ return 0;
+}
- return ret;
+static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
+{
+ kfree(mqrq->bounce_sg);
+ mqrq->bounce_sg = NULL;
+
+ kfree(mqrq->sg);
+ mqrq->sg = NULL;
+
+ kfree(mqrq->bounce_buf);
+ mqrq->bounce_buf = NULL;
}
static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
{
- struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
- struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
-
- kfree(mqrq_cur->bounce_sg);
- mqrq_cur->bounce_sg = NULL;
- kfree(mqrq_prev->bounce_sg);
- mqrq_prev->bounce_sg = NULL;
-
- kfree(mqrq_cur->sg);
- mqrq_cur->sg = NULL;
- kfree(mqrq_cur->bounce_buf);
- mqrq_cur->bounce_buf = NULL;
-
- kfree(mqrq_prev->sg);
- mqrq_prev->sg = NULL;
- kfree(mqrq_prev->bounce_buf);
- mqrq_prev->bounce_buf = NULL;
+ int i;
+
+ for (i = 0; i < mq->qdepth; i++)
+ mmc_queue_req_free_bufs(&mq->mqrq[i]);
}
/**
--
1.9.1
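The loop-and-unwind allocation pattern this patch introduces can be modelled outside the kernel. This is a minimal sketch, assuming plain malloc()/free() in place of kmalloc()/kfree(); the struct and function names are illustrative, not the driver's:

```c
#include <stdlib.h>
#include <stdbool.h>

/* One entry per queue slot; only the field relevant here is modelled */
struct slot {
	void *bounce_buf;
};

/* Allocate one buffer per slot; on failure, unwind only the slots that
 * were successfully allocated, mirroring mmc_queue_alloc_bounce_bufs() */
static bool alloc_bounce_bufs(struct slot *slots, int qdepth, size_t bouncesz)
{
	int i;

	for (i = 0; i < qdepth; i++) {
		slots[i].bounce_buf = malloc(bouncesz);
		if (!slots[i].bounce_buf)
			goto out_err;
	}
	return true;

out_err:
	while (--i >= 0) {
		free(slots[i].bounce_buf);
		slots[i].bounce_buf = NULL;
	}
	return false;
}

/* Free every slot's buffer, mirroring mmc_queue_reqs_free_bufs() */
static void free_bounce_bufs(struct slot *slots, int qdepth)
{
	int i;

	for (i = 0; i < qdepth; i++) {
		free(slots[i].bounce_buf);
		slots[i].bounce_buf = NULL;
	}
}
```

The `while (--i >= 0)` unwind frees exactly the slots allocated so far, which is what lets a single helper serve any queue depth instead of hand-written cur/prev pairs.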
* [PATCH V7 08/25] mmc: queue: Allocate queue of size qdepth
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (6 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 07/25] mmc: queue: Use queue depth to allocate and free Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:22 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions Adrian Hunter
` (16 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Now that the queue resources are allocated according to the size of the
queue, it is possible to make the queue an arbitrary size.
A side-effect is that deallocation of 'packed' resources must be done
before deallocation of the queue.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 2 +-
drivers/mmc/card/queue.c | 11 ++++++++++-
drivers/mmc/card/queue.h | 2 +-
3 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 47835b78872f..f22df69823cc 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -2467,9 +2467,9 @@ static void mmc_blk_remove_req(struct mmc_blk_data *md)
* from being accepted.
*/
card = md->queue.card;
- mmc_cleanup_queue(&md->queue);
if (md->flags & MMC_BLK_PACKED_CMD)
mmc_packed_clean(&md->queue);
+ mmc_cleanup_queue(&md->queue);
if (md->disk->flags & GENHD_FL_UP) {
device_remove_file(disk_to_dev(md->disk), &md->force_ro);
if ((md->area_type & MMC_BLK_DATA_AREA_BOOT) &&
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 1ea007f51ec9..50d7bf074887 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -275,7 +275,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
struct mmc_host *host = card->host;
u64 limit = BLK_BOUNCE_HIGH;
bool bounce = false;
- int ret;
+ int ret = -ENOMEM;
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
@@ -286,6 +286,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
return -ENOMEM;
mq->qdepth = 2;
+ mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
+ GFP_KERNEL);
+ if (!mq->mqrq)
+ goto blk_cleanup;
mq->mqrq_cur = &mq->mqrq[0];
mq->mqrq_prev = &mq->mqrq[1];
mq->queue->queuedata = mq;
@@ -350,6 +354,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
cleanup_queue:
mmc_queue_reqs_free_bufs(mq);
+ kfree(mq->mqrq);
+ mq->mqrq = NULL;
+blk_cleanup:
blk_cleanup_queue(mq->queue);
return ret;
}
@@ -372,6 +379,8 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
spin_unlock_irqrestore(q->queue_lock, flags);
mmc_queue_reqs_free_bufs(mq);
+ kfree(mq->mqrq);
+ mq->mqrq = NULL;
mq->card = NULL;
}
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 8a0a45e5650d..f17f5e505059 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -61,7 +61,7 @@ struct mmc_queue {
bool asleep;
struct mmc_blk_data *blkdata;
struct request_queue *queue;
- struct mmc_queue_req mqrq[2];
+ struct mmc_queue_req *mqrq;
struct mmc_queue_req *mqrq_cur;
struct mmc_queue_req *mqrq_prev;
int qdepth;
--
1.9.1
* [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (7 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 08/25] mmc: queue: Allocate queue of size qdepth Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:29 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
` (15 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Add definitions relating to Command Queuing.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
---
drivers/mmc/core/mmc.c | 17 +++++++++++++++++
include/linux/mmc/card.h | 2 ++
include/linux/mmc/mmc.h | 17 +++++++++++++++++
3 files changed, 36 insertions(+)
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 3268fcd3378d..6e9830997eef 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -618,6 +618,23 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
(ext_csd[EXT_CSD_SUPPORTED_MODE] & 0x1) &&
!(ext_csd[EXT_CSD_FW_CONFIG] & 0x1);
}
+
+ /* eMMC v5.1 or later */
+ if (card->ext_csd.rev >= 8) {
+ card->ext_csd.cmdq_support = ext_csd[EXT_CSD_CMDQ_SUPPORT] &
+ EXT_CSD_CMDQ_SUPPORTED;
+ card->ext_csd.cmdq_depth = (ext_csd[EXT_CSD_CMDQ_DEPTH] &
+ EXT_CSD_CMDQ_DEPTH_MASK) + 1;
+ if (card->ext_csd.cmdq_depth <= 2) {
+ card->ext_csd.cmdq_support = false;
+ card->ext_csd.cmdq_depth = 0;
+ }
+ if (card->ext_csd.cmdq_support) {
+ pr_debug("%s: Command Queue supported depth %u\n",
+ mmc_hostname(card->host),
+ card->ext_csd.cmdq_depth);
+ }
+ }
out:
return err;
}
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index e49a3ff9d0e0..95d69d498296 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -89,6 +89,8 @@ struct mmc_ext_csd {
unsigned int boot_ro_lock; /* ro lock support */
bool boot_ro_lockable;
bool ffu_capable; /* Firmware upgrade support */
+ bool cmdq_support; /* Command Queue supported */
+ unsigned int cmdq_depth; /* Command Queue depth */
#define MMC_FIRMWARE_LEN 8
u8 fwrev[MMC_FIRMWARE_LEN]; /* FW version */
u8 raw_exception_status; /* 54 */
diff --git a/include/linux/mmc/mmc.h b/include/linux/mmc/mmc.h
index c376209c70ef..672730acc705 100644
--- a/include/linux/mmc/mmc.h
+++ b/include/linux/mmc/mmc.h
@@ -84,6 +84,13 @@
#define MMC_APP_CMD 55 /* ac [31:16] RCA R1 */
#define MMC_GEN_CMD 56 /* adtc [0] RD/WR R1 */
+ /* class 11 */
+#define MMC_QUE_TASK_PARAMS 44 /* ac [20:16] task id R1 */
+#define MMC_QUE_TASK_ADDR 45 /* ac [31:0] data addr R1 */
+#define MMC_EXECUTE_READ_TASK 46 /* adtc [20:16] task id R1 */
+#define MMC_EXECUTE_WRITE_TASK 47 /* adtc [20:16] task id R1 */
+#define MMC_CMDQ_TASK_MGMT 48 /* ac [20:16] task id R1b */
+
static inline bool mmc_op_multi(u32 opcode)
{
return opcode == MMC_WRITE_MULTIPLE_BLOCK ||
@@ -272,6 +279,7 @@ struct _mmc_csd {
* EXT_CSD fields
*/
+#define EXT_CSD_CMDQ_MODE_EN 15 /* R/W */
#define EXT_CSD_FLUSH_CACHE 32 /* W */
#define EXT_CSD_CACHE_CTRL 33 /* R/W */
#define EXT_CSD_POWER_OFF_NOTIFICATION 34 /* R/W */
@@ -331,6 +339,8 @@ struct _mmc_csd {
#define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */
#define EXT_CSD_PWR_CL_DDR_200_360 253 /* RO */
#define EXT_CSD_FIRMWARE_VERSION 254 /* RO, 8 bytes */
+#define EXT_CSD_CMDQ_DEPTH 307 /* RO */
+#define EXT_CSD_CMDQ_SUPPORT 308 /* RO */
#define EXT_CSD_SUPPORTED_MODE 493 /* RO */
#define EXT_CSD_TAG_UNIT_SIZE 498 /* RO */
#define EXT_CSD_DATA_TAG_SUPPORT 499 /* RO */
@@ -438,6 +448,13 @@ struct _mmc_csd {
#define EXT_CSD_MANUAL_BKOPS_MASK 0x01
/*
+ * Command Queue
+ */
+#define EXT_CSD_CMDQ_MODE_ENABLED BIT(0)
+#define EXT_CSD_CMDQ_DEPTH_MASK GENMASK(4, 0)
+#define EXT_CSD_CMDQ_SUPPORTED BIT(0)
+
+/*
* MMC_SWITCH access modes
*/
--
1.9.1
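The decode logic this patch adds to mmc_decode_ext_csd() can be sketched on its own: the depth is the 5-bit field plus 1, and a depth of 2 or less is treated as unsupported, since software queuing gains nothing over the 2 slots the legacy path already has. A minimal user-space model, with the macro values copied from the patch and the struct names illustrative:

```c
#include <stdbool.h>

#define EXT_CSD_CMDQ_DEPTH	307		/* RO */
#define EXT_CSD_CMDQ_SUPPORT	308		/* RO */
#define EXT_CSD_CMDQ_SUPPORTED	0x01		/* BIT(0) */
#define EXT_CSD_CMDQ_DEPTH_MASK	0x1f		/* GENMASK(4, 0) */

struct cmdq_caps {
	bool cmdq_support;
	unsigned int cmdq_depth;
};

/* Decode the Command Queue capability bytes, as the patch does for
 * eMMC v5.1 or later (ext_csd revision 8) */
static struct cmdq_caps decode_cmdq_caps(const unsigned char *ext_csd, int rev)
{
	struct cmdq_caps caps = { false, 0 };

	if (rev < 8)
		return caps;

	caps.cmdq_support = ext_csd[EXT_CSD_CMDQ_SUPPORT] &
			    EXT_CSD_CMDQ_SUPPORTED;
	caps.cmdq_depth = (ext_csd[EXT_CSD_CMDQ_DEPTH] &
			   EXT_CSD_CMDQ_DEPTH_MASK) + 1;
	/* A queue of 2 or fewer tasks is no better than the legacy path */
	if (caps.cmdq_depth <= 2) {
		caps.cmdq_support = false;
		caps.cmdq_depth = 0;
	}
	return caps;
}
```

Note that the encoded field holds depth minus 1, so a register value of 31 means 32 queued tasks, the maximum the 5-bit mask allows.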
* [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (8 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:36 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 11/25] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
` (14 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Add helper functions to enable or disable the Command Queue.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Documentation/mmc/mmc-dev-attrs.txt | 1 +
drivers/mmc/core/mmc.c | 2 ++
drivers/mmc/core/mmc_ops.c | 27 +++++++++++++++++++++++++++
include/linux/mmc/card.h | 1 +
include/linux/mmc/core.h | 2 ++
5 files changed, 33 insertions(+)
diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
index 404a0e9e92b0..dcd1252877fb 100644
--- a/Documentation/mmc/mmc-dev-attrs.txt
+++ b/Documentation/mmc/mmc-dev-attrs.txt
@@ -30,6 +30,7 @@ All attributes are read-only.
rel_sectors Reliable write sector count
ocr Operation Conditions Register
dsr Driver Stage Register
+ cmdq_en Command Queue enabled: 1 => enabled, 0 => not enabled
Note on Erase Size and Preferred Erase Size:
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 6e9830997eef..d6a30bbd399d 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -770,6 +770,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
+MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
static ssize_t mmc_fwrev_show(struct device *dev,
struct device_attribute *attr,
@@ -823,6 +824,7 @@ static ssize_t mmc_dsr_show(struct device *dev,
&dev_attr_rel_sectors.attr,
&dev_attr_ocr.attr,
&dev_attr_dsr.attr,
+ &dev_attr_cmdq_en.attr,
NULL,
};
ATTRIBUTE_GROUPS(mmc_std);
diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index 9b2617cfff67..92a1de9b4981 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -824,3 +824,30 @@ int mmc_can_ext_csd(struct mmc_card *card)
{
return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
}
+
+int mmc_cmdq_switch(struct mmc_card *card, int enable)
+{
+ int err;
+
+ if (!card->ext_csd.cmdq_support)
+ return -EOPNOTSUPP;
+
+ err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
+ enable, card->ext_csd.generic_cmd6_time);
+ if (!err)
+ card->ext_csd.cmdq_en = enable;
+
+ return err;
+}
+
+int mmc_cmdq_enable(struct mmc_card *card)
+{
+ return mmc_cmdq_switch(card, EXT_CSD_CMDQ_MODE_ENABLED);
+}
+EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
+
+int mmc_cmdq_disable(struct mmc_card *card)
+{
+ return mmc_cmdq_switch(card, 0);
+}
+EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 95d69d498296..2d9c24f4e88e 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -89,6 +89,7 @@ struct mmc_ext_csd {
unsigned int boot_ro_lock; /* ro lock support */
bool boot_ro_lockable;
bool ffu_capable; /* Firmware upgrade support */
+ bool cmdq_en; /* Command Queue enabled */
bool cmdq_support; /* Command Queue supported */
unsigned int cmdq_depth; /* Command Queue depth */
#define MMC_FIRMWARE_LEN 8
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 0ce928b3ce90..d045b06fc7ea 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -177,6 +177,8 @@ extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
extern int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
extern int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
+extern int mmc_cmdq_enable(struct mmc_card *card);
+extern int mmc_cmdq_disable(struct mmc_card *card);
#define MMC_ERASE_ARG 0x00000000
#define MMC_SECURE_ERASE_ARG 0x80000000
--
1.9.1
* [PATCH V7 11/25] mmc: mmc_test: Disable Command Queue while mmc_test is used
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (9 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:40 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 12/25] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
` (13 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Normal read and write commands may not be used while the command queue is
enabled. Disable the Command Queue when mmc_test is probed and re-enable it
when it is removed.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/mmc_test.c | 13 +++++++++++++
drivers/mmc/core/mmc.c | 7 +++++++
include/linux/mmc/card.h | 1 +
3 files changed, 21 insertions(+)
diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index 5ba6d77b9723..b42c23665104 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -3264,6 +3264,14 @@ static int mmc_test_probe(struct mmc_card *card)
if (ret)
return ret;
+ if (card->ext_csd.cmdq_en) {
+ mmc_claim_host(card->host);
+ ret = mmc_cmdq_disable(card);
+ mmc_release_host(card->host);
+ if (ret)
+ return ret;
+ }
+
dev_info(&card->dev, "Card claimed for testing.\n");
return 0;
@@ -3271,6 +3279,11 @@ static int mmc_test_probe(struct mmc_card *card)
static void mmc_test_remove(struct mmc_card *card)
{
+ if (card->reenable_cmdq) {
+ mmc_claim_host(card->host);
+ mmc_cmdq_enable(card);
+ mmc_release_host(card->host);
+ }
mmc_test_free_result(card);
mmc_test_free_dbgfs_file(card);
}
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index d6a30bbd399d..e310a60ef859 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1755,6 +1755,13 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
}
/*
+ * In some cases (e.g. RPMB or mmc_test), the Command Queue must be
+ * disabled for a time, so a flag is needed to indicate to re-enable the
+ * Command Queue.
+ */
+ card->reenable_cmdq = card->ext_csd.cmdq_en;
+
+ /*
* The mandatory minimum values are defined for packed command.
* read: 5, write: 3
*/
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 2d9c24f4e88e..5ac2243bc5a9 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -273,6 +273,7 @@ struct mmc_card {
#define MMC_QUIRK_TRIM_BROKEN (1<<12) /* Skip trim */
#define MMC_QUIRK_BROKEN_HPI (1<<13) /* Disable broken HPI support */
+ bool reenable_cmdq; /* Re-enable Command Queue */
unsigned int erase_size; /* erase size in sectors */
unsigned int erase_shift; /* if erase unit is power 2 */
--
1.9.1
* [PATCH V7 12/25] mmc: block: Disable Command Queue while RPMB is used
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (10 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 11/25] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:46 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 13/25] mmc: core: Do not prepare a new request twice Adrian Hunter
` (12 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
RPMB does not allow Command Queue commands. Disable and re-enable the
Command Queue when switching.
Note that the driver only switches partitions when the queue is empty.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 46 ++++++++++++++++++++++++++++++++++++++--------
1 file changed, 38 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index f22df69823cc..157d1b3d58d6 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -746,10 +746,41 @@ static int mmc_blk_compat_ioctl(struct block_device *bdev, fmode_t mode,
#endif
};
+static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ unsigned int part_type)
+{
+ int ret;
+
+ if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+ if (card->ext_csd.cmdq_en) {
+ ret = mmc_cmdq_disable(card);
+ if (ret)
+ return ret;
+ }
+ mmc_retune_pause(card->host);
+ }
+
+ return 0;
+}
+
+static int mmc_blk_part_switch_post(struct mmc_card *card,
+ unsigned int part_type)
+{
+ int ret = 0;
+
+ if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+ mmc_retune_unpause(card->host);
+ if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+ ret = mmc_cmdq_enable(card);
+ }
+
+ return ret;
+}
+
static inline int mmc_blk_part_switch(struct mmc_card *card,
struct mmc_blk_data *md)
{
- int ret;
+ int ret = 0;
struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
if (main_md->part_curr == md->part_type)
@@ -758,8 +789,9 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
if (mmc_card_mmc(card)) {
u8 part_config = card->ext_csd.part_config;
- if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
- mmc_retune_pause(card->host);
+ ret = mmc_blk_part_switch_pre(card, md->part_type);
+ if (ret)
+ return ret;
part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
part_config |= md->part_type;
@@ -768,19 +800,17 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
EXT_CSD_PART_CONFIG, part_config,
card->ext_csd.part_time);
if (ret) {
- if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
- mmc_retune_unpause(card->host);
+ mmc_blk_part_switch_post(card, md->part_type);
return ret;
}
card->ext_csd.part_config = part_config;
- if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
- mmc_retune_unpause(card->host);
+ ret = mmc_blk_part_switch_post(card, main_md->part_curr);
}
main_md->part_curr = md->part_type;
- return 0;
+ return ret;
}
static u32 mmc_sd_num_wr_blocks(struct mmc_card *card)
--
1.9.1
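The pre/post hook pairing this patch factors out can be modelled as a small state machine: disable the Command Queue before entering RPMB, and re-enable it on leaving only if it was enabled at card init. A toy sketch, with all names and the partition constant illustrative and the mmc_switch/retune plumbing elided:

```c
#include <stdbool.h>

#define PART_RPMB 3	/* stands in for EXT_CSD_PART_CONFIG_ACC_RPMB */

struct card_state {
	bool cmdq_en;		/* current Command Queue state */
	bool reenable_cmdq;	/* was enabled at card init */
};

/* Before switching: RPMB does not allow Command Queue commands */
static void part_switch_pre(struct card_state *card, unsigned int part_type)
{
	if (part_type == PART_RPMB && card->cmdq_en)
		card->cmdq_en = false;	/* mmc_cmdq_disable() in the driver */
}

/* After switching away: restore the queue only if init enabled it */
static void part_switch_post(struct card_state *card, unsigned int part_type)
{
	if (part_type == PART_RPMB &&
	    card->reenable_cmdq && !card->cmdq_en)
		card->cmdq_en = true;	/* mmc_cmdq_enable() in the driver */
}
```

In the real code the post hook runs with the partition being left (main_md->part_curr), so the enable fires when returning from RPMB to the main area, and the `reenable_cmdq` flag keeps a card that never had the queue enabled from being switched on by mistake.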
* [PATCH V7 13/25] mmc: core: Do not prepare a new request twice
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (11 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 12/25] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:48 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 14/25] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
` (11 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
mmc_start_req() assumes it is never called with the new request already
prepared. That is true if the queue consists of only 2 requests, but it is
not true for a longer queue. For example, mmc_start_req() has a current and
a previous request, but it still exits to queue a new request if the queue
size is greater than 2. In that case, when mmc_start_req() is called again,
the current request will already have been prepared. Fix by flagging
whether the request has been prepared.
That also means ensuring that struct mmc_async_req is always initialized
to zero, which wasn't the case in mmc_test.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/mmc_test.c | 8 ++++----
drivers/mmc/core/core.c | 12 +++++++++---
include/linux/mmc/host.h | 1 +
3 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index b42c23665104..81a71d28239a 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -826,7 +826,10 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
struct mmc_command stop2;
struct mmc_data data2;
- struct mmc_test_async_req test_areq[2];
+ struct mmc_test_async_req test_areq[2] = {
+ { .test = test },
+ { .test = test },
+ };
struct mmc_async_req *done_areq;
struct mmc_async_req *cur_areq = &test_areq[0].areq;
struct mmc_async_req *other_areq = &test_areq[1].areq;
@@ -834,9 +837,6 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
int i;
int ret = RESULT_OK;
- test_areq[0].test = test;
- test_areq[1].test = test;
-
mmc_test_nonblock_reset(&mrq1, &cmd1, &stop1, &data1);
mmc_test_nonblock_reset(&mrq2, &cmd2, &stop2, &data2);
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index dc1f27ee50b8..28e1495ac903 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -658,8 +658,10 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
struct mmc_async_req *data = host->areq;
/* Prepare a new request */
- if (areq)
+ if (areq && !areq->pre_req_done) {
+ areq->pre_req_done = true;
mmc_pre_req(host, areq->mrq);
+ }
if (host->areq) {
status = mmc_wait_for_data_req_done(host, host->areq->mrq, areq);
@@ -695,12 +697,16 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
if (status == MMC_BLK_SUCCESS && areq)
start_err = __mmc_start_data_req(host, areq->mrq);
- if (host->areq)
+ if (host->areq) {
+ host->areq->pre_req_done = false;
mmc_post_req(host, host->areq->mrq, 0);
+ }
/* Cancel a prepared request if it was not started. */
- if ((status != MMC_BLK_SUCCESS || start_err) && areq)
+ if ((status != MMC_BLK_SUCCESS || start_err) && areq) {
+ areq->pre_req_done = false;
mmc_post_req(host, areq->mrq, -EINVAL);
+ }
if (status != MMC_BLK_SUCCESS)
host->areq = NULL;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index bcf6d252ec67..fa44aa93505a 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -173,6 +173,7 @@ struct mmc_async_req {
* Returns 0 if success otherwise non zero.
*/
enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
+ bool pre_req_done;
};
/**
--
1.9.1
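The `pre_req_done` flag above makes the prepare step idempotent, which can be shown with a tiny model. This is a sketch only: the counter stands in for the real DMA-mapping work done by mmc_pre_req(), and the names are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

struct async_req {
	bool pre_req_done;	/* the flag the patch adds */
	int prep_count;		/* counts calls to the "prepare" work */
};

/* Model of the start path: prepare the request only once, however many
 * times the start path is entered with it */
static void start_req(struct async_req *areq)
{
	if (areq && !areq->pre_req_done) {
		areq->pre_req_done = true;
		areq->prep_count++;	/* mmc_pre_req() in the real code */
	}
}

/* Model of the completion/cancel paths: clear the flag before
 * mmc_post_req() so the request can be prepared again on reuse */
static void finish_req(struct async_req *areq)
{
	areq->pre_req_done = false;
}
```

Without the flag, a queue deeper than 2 would re-enter the prepare step for an already-prepared request; with it, repeated entries are harmless and the flag is reset only on the post-request paths.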
* [PATCH V7 14/25] mmc: core: Export mmc_retune_hold() and mmc_retune_release()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (12 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 13/25] mmc: core: Do not prepare a new request twice Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:49 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 15/25] mmc: block: Factor out mmc_blk_requeue() Adrian Hunter
` (10 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Re-tuning can only be done when the Command Queue is empty, which means
holding and releasing re-tuning from the block driver, so export those
functions.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/core/host.c | 2 ++
drivers/mmc/core/host.h | 2 --
include/linux/mmc/core.h | 3 +++
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 98f25ffb4258..878405e28bdb 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -112,6 +112,7 @@ void mmc_retune_hold(struct mmc_host *host)
host->retune_now = 1;
host->hold_retune += 1;
}
+EXPORT_SYMBOL(mmc_retune_hold);
void mmc_retune_release(struct mmc_host *host)
{
@@ -120,6 +121,7 @@ void mmc_retune_release(struct mmc_host *host)
else
WARN_ON(1);
}
+EXPORT_SYMBOL(mmc_retune_release);
int mmc_retune(struct mmc_host *host)
{
diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
index 992bf5397633..0787b3002481 100644
--- a/drivers/mmc/core/host.h
+++ b/drivers/mmc/core/host.h
@@ -17,8 +17,6 @@
void mmc_retune_enable(struct mmc_host *host);
void mmc_retune_disable(struct mmc_host *host);
-void mmc_retune_hold(struct mmc_host *host);
-void mmc_retune_release(struct mmc_host *host);
int mmc_retune(struct mmc_host *host);
#endif
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index d045b06fc7ea..d8f46e1ae7f2 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -220,6 +220,9 @@ extern int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount,
extern int mmc_detect_card_removed(struct mmc_host *host);
+extern void mmc_retune_hold(struct mmc_host *host);
+extern void mmc_retune_release(struct mmc_host *host);
+
/**
* mmc_claim_host - exclusively claim a host
* @host: mmc host to claim
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 15/25] mmc: block: Factor out mmc_blk_requeue()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (13 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 14/25] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-28 4:51 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 16/25] mmc: block: Fix 4K native sector check Adrian Hunter
` (9 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
The same code is used in a couple of places.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 157d1b3d58d6..35e8d01b3013 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1319,6 +1319,13 @@ static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
return ret ? 0 : 1;
}
+static void mmc_blk_requeue(struct request_queue *q, struct request *req)
+{
+ spin_lock_irq(q->queue_lock);
+ blk_requeue_request(q, req);
+ spin_unlock_irq(q->queue_lock);
+}
+
/*
* Reformat current write as a reliable write, supporting
* both legacy and the enhanced reliable write MMC cards.
@@ -1835,11 +1842,8 @@ static u8 mmc_blk_prep_packed_list(struct mmc_queue *mq, struct request *req)
reqs++;
} while (1);
- if (put_back) {
- spin_lock_irq(q->queue_lock);
- blk_requeue_request(q, next);
- spin_unlock_irq(q->queue_lock);
- }
+ if (put_back)
+ mmc_blk_requeue(q, next);
if (reqs > 0) {
list_add(&req->queuelist, &mqrq->packed->list);
@@ -2018,9 +2022,7 @@ static void mmc_blk_revert_packed_req(struct mmc_queue *mq,
prq = list_entry_rq(packed->list.prev);
if (prq->queuelist.prev != &packed->list) {
list_del_init(&prq->queuelist);
- spin_lock_irq(q->queue_lock);
- blk_requeue_request(mq->queue, prq);
- spin_unlock_irq(q->queue_lock);
+ mmc_blk_requeue(q, prq);
} else {
list_del_init(&prq->queuelist);
}
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 16/25] mmc: block: Fix 4K native sector check
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (14 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 15/25] mmc: block: Factor out mmc_blk_requeue() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:51 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 17/25] mmc: block: Use local var for mqrq_cur Adrian Hunter
` (8 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
The 4K native sector check does not allow for the 'do' loop nor the
variables used after the 'cmd_abort' label.
'brq' and 'req' are reassigned in the 'do' loop, so the check must not
assume what their values are. After the 'cmd_abort' label, 'mq_rq' and
'req' are used, and 'rqc' must be NULL, otherwise it will be started again.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 35e8d01b3013..f9d2bcf998fb 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -2035,11 +2035,11 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
{
struct mmc_blk_data *md = mq->blkdata;
struct mmc_card *card = md->queue.card;
- struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
+ struct mmc_blk_request *brq;
int ret = 1, disable_multi = 0, retry = 0, type, retune_retry_done = 0;
enum mmc_blk_status status;
struct mmc_queue_req *mq_rq;
- struct request *req = rqc;
+ struct request *req;
struct mmc_async_req *areq;
const u8 packed_nr = 2;
u8 reqs = 0;
@@ -2059,8 +2059,10 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
if (mmc_large_sector(card) &&
!IS_ALIGNED(blk_rq_sectors(rqc), 8)) {
pr_err("%s: Transfer size is not 4KB sector size aligned\n",
- req->rq_disk->disk_name);
+ rqc->rq_disk->disk_name);
mq_rq = mq->mqrq_cur;
+ req = rqc;
+ rqc = NULL;
goto cmd_abort;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 17/25] mmc: block: Use local var for mqrq_cur
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (15 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 16/25] mmc: block: Fix 4K native sector check Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:52 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 18/25] mmc: block: Pass mqrq to mmc_blk_prep_packed_list() Adrian Hunter
` (7 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
assigning it to a local variable.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index f9d2bcf998fb..c1745f2270ba 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -2038,6 +2038,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
struct mmc_blk_request *brq;
int ret = 1, disable_multi = 0, retry = 0, type, retune_retry_done = 0;
enum mmc_blk_status status;
+ struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
struct mmc_queue_req *mq_rq;
struct request *req;
struct mmc_async_req *areq;
@@ -2060,18 +2061,18 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
!IS_ALIGNED(blk_rq_sectors(rqc), 8)) {
pr_err("%s: Transfer size is not 4KB sector size aligned\n",
rqc->rq_disk->disk_name);
- mq_rq = mq->mqrq_cur;
+ mq_rq = mqrq_cur;
req = rqc;
rqc = NULL;
goto cmd_abort;
}
if (reqs >= packed_nr)
- mmc_blk_packed_hdr_wrq_prep(mq->mqrq_cur,
+ mmc_blk_packed_hdr_wrq_prep(mqrq_cur,
card, mq);
else
- mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
- areq = &mq->mqrq_cur->mmc_active;
+ mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+ areq = &mqrq_cur->mmc_active;
} else
areq = NULL;
areq = mmc_start_req(card->host, areq, &status);
@@ -2213,12 +2214,12 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
/*
* If current request is packed, it needs to put back.
*/
- if (mmc_packed_cmd(mq->mqrq_cur->cmd_type))
- mmc_blk_revert_packed_req(mq, mq->mqrq_cur);
+ if (mmc_packed_cmd(mqrq_cur->cmd_type))
+ mmc_blk_revert_packed_req(mq, mqrq_cur);
- mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
+ mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
mmc_start_req(card->host,
- &mq->mqrq_cur->mmc_active, NULL);
+ &mqrq_cur->mmc_active, NULL);
}
}
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 18/25] mmc: block: Pass mqrq to mmc_blk_prep_packed_list()
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (16 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 17/25] mmc: block: Use local var for mqrq_cur Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 14:53 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 19/25] mmc: block: Introduce queue semantics Adrian Hunter
` (6 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
passing mqrq to mmc_blk_prep_packed_list().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index c1745f2270ba..ebc5d2ff8f32 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1741,13 +1741,14 @@ static inline u8 mmc_calc_packed_hdr_segs(struct request_queue *q,
return nr_segs;
}
-static u8 mmc_blk_prep_packed_list(struct mmc_queue *mq, struct request *req)
+static u8 mmc_blk_prep_packed_list(struct mmc_queue *mq,
+ struct mmc_queue_req *mqrq)
{
struct request_queue *q = mq->queue;
struct mmc_card *card = mq->card;
+ struct request *req = mqrq->req;
struct request *cur = req, *next = NULL;
struct mmc_blk_data *md = mq->blkdata;
- struct mmc_queue_req *mqrq = mq->mqrq_cur;
bool en_rel_wr = card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN;
unsigned int req_sectors = 0, phys_segments = 0;
unsigned int max_blk_count, max_phys_segs;
@@ -2048,8 +2049,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
if (!rqc && !mq->mqrq_prev->req)
return 0;
- if (rqc)
- reqs = mmc_blk_prep_packed_list(mq, rqc);
+ if (mqrq_cur)
+ reqs = mmc_blk_prep_packed_list(mq, mqrq_cur);
do {
if (rqc) {
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 19/25] mmc: block: Introduce queue semantics
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (17 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 18/25] mmc: block: Pass mqrq to mmc_blk_prep_packed_list() Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions Adrian Hunter
` (5 subsequent siblings)
24 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Change from viewing the requests in progress as 'current' and 'previous',
to viewing them as a queue. The current request is allocated to the first
free slot. The presence of incomplete requests is determined from the
count (mq->qcnt) of entries in the queue. Non-read-write requests (i.e.
discards and flushes) are not added to the queue at all and require no
special handling. Also no special handling is needed for the
MMC_BLK_NEW_REQUEST case.
As well as allowing an arbitrarily sized queue, the queue thread function
is significantly simpler.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 42 ++++++++++++++------------
drivers/mmc/card/queue.c | 76 ++++++++++++++++++++++++++++++------------------
drivers/mmc/card/queue.h | 10 +++++--
3 files changed, 78 insertions(+), 50 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index ebc5d2ff8f32..54a018d38cc9 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -2039,14 +2039,23 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
struct mmc_blk_request *brq;
int ret = 1, disable_multi = 0, retry = 0, type, retune_retry_done = 0;
enum mmc_blk_status status;
- struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+ struct mmc_queue_req *mqrq_cur = NULL;
struct mmc_queue_req *mq_rq;
struct request *req;
struct mmc_async_req *areq;
const u8 packed_nr = 2;
u8 reqs = 0;
- if (!rqc && !mq->mqrq_prev->req)
+ if (rqc) {
+ mqrq_cur = mmc_queue_req_find(mq, rqc);
+ if (!mqrq_cur) {
+ WARN_ON(1);
+ mmc_blk_requeue(mq->queue, rqc);
+ rqc = NULL;
+ }
+ }
+
+ if (!mq->qcnt)
return 0;
if (mqrq_cur)
@@ -2077,11 +2086,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
} else
areq = NULL;
areq = mmc_start_req(card->host, areq, &status);
- if (!areq) {
- if (status == MMC_BLK_NEW_REQUEST)
- mq->flags |= MMC_QUEUE_NEW_REQUEST;
+ if (!areq)
return 0;
- }
mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
brq = &mq_rq->brq;
@@ -2193,6 +2199,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
}
} while (ret);
+ mmc_queue_req_free(mq, mq_rq);
+
return 1;
cmd_abort:
@@ -2211,6 +2219,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
if (mmc_card_removed(card)) {
rqc->cmd_flags |= REQ_QUIET;
blk_end_request_all(rqc, -EIO);
+ mmc_queue_req_free(mq, mqrq_cur);
} else {
/*
* If current request is packed, it needs to put back.
@@ -2224,6 +2233,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
}
}
+ mmc_queue_req_free(mq, mq_rq);
+
return 0;
}
@@ -2232,9 +2243,8 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
int ret;
struct mmc_blk_data *md = mq->blkdata;
struct mmc_card *card = md->queue.card;
- bool req_is_special = mmc_req_is_special(req);
- if (req && !mq->mqrq_prev->req)
+ if (req && !mq->qcnt)
/* claim host only for the first request */
mmc_get_card(card);
@@ -2247,20 +2257,19 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
goto out;
}
- mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
if (req && req_op(req) == REQ_OP_DISCARD) {
/* complete ongoing async transfer before issuing discard */
- if (card->host->areq)
+ if (mq->qcnt)
mmc_blk_issue_rw_rq(mq, NULL);
ret = mmc_blk_issue_discard_rq(mq, req);
} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
/* complete ongoing async transfer before issuing secure erase*/
- if (card->host->areq)
+ if (mq->qcnt)
mmc_blk_issue_rw_rq(mq, NULL);
ret = mmc_blk_issue_secdiscard_rq(mq, req);
} else if (req && req_op(req) == REQ_OP_FLUSH) {
/* complete ongoing async transfer before issuing flush */
- if (card->host->areq)
+ if (mq->qcnt)
mmc_blk_issue_rw_rq(mq, NULL);
ret = mmc_blk_issue_flush(mq, req);
} else {
@@ -2268,13 +2277,8 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
}
out:
- if ((!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST)) || req_is_special)
- /*
- * Release host when there are no more requests
- * and after special request(discard, flush) is done.
- * In case sepecial request, there is no reentry to
- * the 'mmc_blk_issue_rq' with 'mqrq_prev->req'.
- */
+ /* Release host when there are no more requests */
+ if (!mq->qcnt)
mmc_put_card(card);
return ret;
}
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 50d7bf074887..2570b813c25a 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -49,6 +49,35 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
return BLKPREP_OK;
}
+struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
+ struct request *req)
+{
+ struct mmc_queue_req *mqrq;
+ int i = ffz(mq->qslots);
+
+ if (i >= mq->qdepth)
+ return NULL;
+
+ mqrq = &mq->mqrq[i];
+ WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
+ test_bit(mqrq->task_id, &mq->qslots));
+ mqrq->req = req;
+ mq->qcnt += 1;
+ __set_bit(mqrq->task_id, &mq->qslots);
+
+ return mqrq;
+}
+
+void mmc_queue_req_free(struct mmc_queue *mq,
+ struct mmc_queue_req *mqrq)
+{
+ WARN_ON(!mqrq->req || mq->qcnt < 1 ||
+ !test_bit(mqrq->task_id, &mq->qslots));
+ mqrq->req = NULL;
+ mq->qcnt -= 1;
+ __clear_bit(mqrq->task_id, &mq->qslots);
+}
+
static int mmc_queue_thread(void *d)
{
struct mmc_queue *mq = d;
@@ -59,7 +88,7 @@ static int mmc_queue_thread(void *d)
down(&mq->thread_sem);
do {
- struct request *req = NULL;
+ struct request *req;
spin_lock_irq(q->queue_lock);
set_current_state(TASK_INTERRUPTIBLE);
@@ -72,38 +101,17 @@ static int mmc_queue_thread(void *d)
* Dispatch queue is empty so set flags for
* mmc_request_fn() to wake us up.
*/
- if (mq->mqrq_prev->req)
+ if (mq->qcnt)
cntx->is_waiting_last_req = true;
else
mq->asleep = true;
}
- mq->mqrq_cur->req = req;
spin_unlock_irq(q->queue_lock);
- if (req || mq->mqrq_prev->req) {
- bool req_is_special = mmc_req_is_special(req);
-
+ if (req || mq->qcnt) {
set_current_state(TASK_RUNNING);
mmc_blk_issue_rq(mq, req);
cond_resched();
- if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
- mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
- continue; /* fetch again */
- }
-
- /*
- * Current request becomes previous request
- * and vice versa.
- * In case of special requests, current request
- * has been finished. Do not assign it to previous
- * request.
- */
- if (req_is_special)
- mq->mqrq_cur->req = NULL;
-
- mq->mqrq_prev->brq.mrq.data = NULL;
- mq->mqrq_prev->req = NULL;
- swap(mq->mqrq_prev, mq->mqrq_cur);
} else {
if (kthread_should_stop()) {
set_current_state(TASK_RUNNING);
@@ -186,6 +194,21 @@ static void mmc_queue_setup_discard(struct request_queue *q,
queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
}
+static struct mmc_queue_req *mmc_queue_alloc_mqrqs(struct mmc_queue *mq,
+ int qdepth)
+{
+ struct mmc_queue_req *mqrq;
+ int i;
+
+ mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
+ if (mqrq) {
+ for (i = 0; i < mq->qdepth; i++)
+ mqrq[i].task_id = i;
+ }
+
+ return mqrq;
+}
+
static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
unsigned int bouncesz)
{
@@ -286,12 +309,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
return -ENOMEM;
mq->qdepth = 2;
- mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
- GFP_KERNEL);
+ mq->mqrq = mmc_queue_alloc_mqrqs(mq, mq->qdepth);
if (!mq->mqrq)
goto blk_cleanup;
- mq->mqrq_cur = &mq->mqrq[0];
- mq->mqrq_prev = &mq->mqrq[1];
mq->queue->queuedata = mq;
blk_queue_prep_rq(mq->queue, mmc_prep_request);
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index f17f5e505059..f7bc0ca0b27f 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -49,6 +49,7 @@ struct mmc_queue_req {
struct mmc_async_req mmc_active;
enum mmc_packed_type cmd_type;
struct mmc_packed *packed;
+ int task_id;
};
struct mmc_queue {
@@ -57,14 +58,13 @@ struct mmc_queue {
struct semaphore thread_sem;
unsigned int flags;
#define MMC_QUEUE_SUSPENDED (1 << 0)
-#define MMC_QUEUE_NEW_REQUEST (1 << 1)
bool asleep;
struct mmc_blk_data *blkdata;
struct request_queue *queue;
struct mmc_queue_req *mqrq;
- struct mmc_queue_req *mqrq_cur;
- struct mmc_queue_req *mqrq_prev;
int qdepth;
+ int qcnt;
+ unsigned long qslots;
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
@@ -80,4 +80,8 @@ extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
extern int mmc_access_rpmb(struct mmc_queue *);
+extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
+ struct request *);
+extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
+
#endif
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (18 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 19/25] mmc: block: Introduce queue semantics Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 15:01 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 21/25] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
` (4 subsequent siblings)
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
eMMC can have multiple internal partitions that are represented as separate
disks / queues. However, the card has only one command queue, which must be
empty when switching partitions. Consequently, the array of mmc requests
that are queued can be shared between partitions, saving memory.
Keep a pointer to the mmc request queue on the card, and use that instead
of allocating a new one for each partition. Use a reference count to keep
track of when to free it.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 6 ++++++
drivers/mmc/card/queue.c | 37 +++++++++++++++++++++++++++++++------
include/linux/mmc/card.h | 4 ++++
3 files changed, 41 insertions(+), 6 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 54a018d38cc9..bbc4a34dcee5 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1480,6 +1480,9 @@ static int mmc_packed_init(struct mmc_queue *mq, struct mmc_card *card)
if (mq->qdepth != 2)
return -EINVAL;
+ if (mqrq_cur->packed)
+ goto out;
+
mqrq_cur->packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
if (!mqrq_cur->packed) {
pr_warn("%s: unable to allocate packed cmd for mqrq_cur\n",
@@ -1510,6 +1513,9 @@ static void mmc_packed_clean(struct mmc_queue *mq)
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
+ if (mq->card->mqrq_ref_cnt > 1)
+ return;
+
kfree(mqrq_cur->packed);
mqrq_cur->packed = NULL;
kfree(mqrq_prev->packed);
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 2570b813c25a..7c329141f4ad 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -200,10 +200,17 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(struct mmc_queue *mq,
struct mmc_queue_req *mqrq;
int i;
+ if (mq->card->mqrq) {
+ mq->card->mqrq_ref_cnt += 1;
+ return mq->card->mqrq;
+ }
+
mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
if (mqrq) {
for (i = 0; i < mq->qdepth; i++)
mqrq[i].task_id = i;
+ mq->card->mqrq = mqrq;
+ mq->card->mqrq_ref_cnt = 1;
}
return mqrq;
@@ -214,6 +221,9 @@ static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
{
int i;
+ if (mq->card->mqrq_ref_cnt > 1)
+ return !!mq->mqrq[0].bounce_buf;
+
for (i = 0; i < mq->qdepth; i++) {
mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
if (!mq->mqrq[i].bounce_buf)
@@ -237,6 +247,9 @@ static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
{
int i, ret;
+ if (mq->card->mqrq_ref_cnt > 1)
+ return 0;
+
for (i = 0; i < mq->qdepth; i++) {
mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
if (ret)
@@ -254,6 +267,9 @@ static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
{
int i, ret;
+ if (mq->card->mqrq_ref_cnt > 1)
+ return 0;
+
for (i = 0; i < mq->qdepth; i++) {
mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
if (ret)
@@ -283,6 +299,19 @@ static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
mmc_queue_req_free_bufs(&mq->mqrq[i]);
}
+static void mmc_queue_free_mqrqs(struct mmc_queue *mq)
+{
+ if (!mq->mqrq)
+ return;
+
+ if (!--mq->card->mqrq_ref_cnt) {
+ mmc_queue_reqs_free_bufs(mq);
+ kfree(mq->card->mqrq);
+ mq->card->mqrq = NULL;
+ }
+ mq->mqrq = NULL;
+}
+
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
@@ -373,9 +402,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
return 0;
cleanup_queue:
- mmc_queue_reqs_free_bufs(mq);
- kfree(mq->mqrq);
- mq->mqrq = NULL;
+ mmc_queue_free_mqrqs(mq);
blk_cleanup:
blk_cleanup_queue(mq->queue);
return ret;
@@ -398,9 +425,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
- mmc_queue_reqs_free_bufs(mq);
- kfree(mq->mqrq);
- mq->mqrq = NULL;
+ mmc_queue_free_mqrqs(mq);
mq->card = NULL;
}
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index 5ac2243bc5a9..c0f36b9e832e 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -207,6 +207,7 @@ struct sdio_cis {
struct mmc_ios;
struct sdio_func;
struct sdio_func_tuple;
+struct mmc_queue_req;
#define SDIO_MAX_FUNCS 7
@@ -308,6 +309,9 @@ struct mmc_card {
struct dentry *debugfs_root;
struct mmc_part part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
unsigned int nr_parts;
+
+ struct mmc_queue_req *mqrq; /* Shared queue structure */
+ int mqrq_ref_cnt; /* Shared queue ref. count */
};
/*
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 21/25] mmc: queue: Add a function to control wake-up on new requests
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (19 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 22/25] mmc: block: Add Software Command Queuing Adrian Hunter
` (3 subsequent siblings)
24 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Add a function to control wake-up on new requests. This will enable
Software Command Queuing to choose whether to queue new requests
immediately or to wait for the current task to complete.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/queue.c | 16 ++++++++++++++++
drivers/mmc/card/queue.h | 2 ++
2 files changed, 18 insertions(+)
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 7c329141f4ad..4847748fac2b 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -97,6 +97,7 @@ static int mmc_queue_thread(void *d)
cntx->is_waiting_last_req = false;
cntx->is_new_req = false;
if (!req) {
+ mq->is_new_req = false;
/*
* Dispatch queue is empty so set flags for
* mmc_request_fn() to wake us up.
@@ -147,6 +148,8 @@ static void mmc_request_fn(struct request_queue *q)
return;
}
+ mq->is_new_req = true;
+
cntx = &mq->card->host->context_info;
if (cntx->is_waiting_last_req) {
@@ -158,6 +161,19 @@ static void mmc_request_fn(struct request_queue *q)
wake_up_process(mq->thread);
}
+void mmc_queue_set_wake(struct mmc_queue *mq, bool wake_me)
+{
+ struct mmc_context_info *cntx = &mq->card->host->context_info;
+ struct request_queue *q = mq->queue;
+
+ if (cntx->is_waiting_last_req != wake_me) {
+ spin_lock_irq(q->queue_lock);
+ cntx->is_waiting_last_req = wake_me;
+ cntx->is_new_req = wake_me && mq->is_new_req;
+ spin_unlock_irq(q->queue_lock);
+ }
+}
+
static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
{
struct scatterlist *sg;
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index f7bc0ca0b27f..1bc3f71b9008 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -59,6 +59,7 @@ struct mmc_queue {
unsigned int flags;
#define MMC_QUEUE_SUSPENDED (1 << 0)
bool asleep;
+ bool is_new_req;
struct mmc_blk_data *blkdata;
struct request_queue *queue;
struct mmc_queue_req *mqrq;
@@ -83,5 +84,6 @@ extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
struct request *);
extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
+extern void mmc_queue_set_wake(struct mmc_queue *, bool);
#endif
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 22/25] mmc: block: Add Software Command Queuing
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (20 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 21/25] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 23/25] mmc: mmc: Enable " Adrian Hunter
` (2 subsequent siblings)
24 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
eMMC Command Queuing is a feature added in version 5.1. The card maintains
a queue of up to 32 data transfers. Commands CMD44/CMD45 are sent to queue
up transfers in advance, and then one of the transfers is selected to
"execute" by CMD46/CMD47, at which point data transfer actually begins.
The advantage of command queuing is that the card can prepare for transfers
in advance. That makes a big difference in the case of random reads because
the card can start reading into its cache in advance.
A v5.1 host controller can manage the command queue itself, but it is also
possible for software to manage the queue using a non-v5.1 host controller
- that is what Software Command Queuing is.
Refer to the JEDEC (http://www.jedec.org/) eMMC v5.1 Specification for more
information about Command Queuing.
Two important aspects of Command Queuing that affect the implementation
are:
- only read/write requests are queued
- the queue must be empty to send other commands, including re-tuning
To support Software Command Queuing, a separate function is provided to
issue read/write requests (i.e. mmc_swcmdq_issue_rw_rq()) and the
mmc_blk_request structure is amended to cater for the additional commands
CMD44 and CMD45. There is a separate function (mmc_swcmdq_prep()) to
prepare the needed commands, but transfers are started by mmc_start_req()
as normal.
mmc_swcmdq_issue_rw_rq() enqueues the new request and then executes tasks
until the queue is empty or mmc_swcmdq_execute() asks for a new request.
This puts mmc_swcmdq_execute() in control of the decision whether to queue
more requests or wait for the active one.
Recovery is invoked if anything goes wrong. Recovery has 2 options:
1. Discard the queue and re-queue all requests. If that fails, fall back
to option 2.
2. Reset and re-queue all requests. If that fails, error out all the
requests.
In either case, re-tuning will be done if needed after the queue becomes
empty because re-tuning is released at that point.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/card/block.c | 591 ++++++++++++++++++++++++++++++++++++++++++++++-
drivers/mmc/card/queue.c | 6 +-
drivers/mmc/card/queue.h | 11 +-
include/linux/mmc/core.h | 1 +
4 files changed, 606 insertions(+), 3 deletions(-)
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index bbc4a34dcee5..0e573f60e3bb 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -112,6 +112,7 @@ struct mmc_blk_data {
#define MMC_BLK_WRITE BIT(1)
#define MMC_BLK_DISCARD BIT(2)
#define MMC_BLK_SECDISCARD BIT(3)
+#define MMC_BLK_SWCMDQ BIT(4)
/*
* Only set in main mmc_blk_data associated
@@ -2038,7 +2039,584 @@ static void mmc_blk_revert_packed_req(struct mmc_queue *mq,
mmc_blk_clear_packed(mq_rq);
}
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
+static enum mmc_blk_status mmc_swcmdq_err_check(struct mmc_card *card,
+ struct mmc_async_req *areq)
+{
+ struct mmc_queue_req *mqrq = container_of(areq, struct mmc_queue_req,
+ mmc_active);
+ struct mmc_blk_request *brq = &mqrq->brq;
+ struct request *req = mqrq->req;
+ int err;
+
+ err = brq->data.error;
+ /* In the case of data errors, send stop */
+ if (err)
+ mmc_wait_for_cmd(card->host, &brq->stop, 0);
+ else
+ err = brq->cmd.error;
+
+ /* In the case of CRC errors when re-tuning is needed, retry */
+ if (err == -EILSEQ && card->host->need_retune)
+ return MMC_BLK_RETRY;
+
+ /* For other errors abort */
+ if (err)
+ return MMC_BLK_ABORT;
+
+ if (blk_rq_bytes(req) != brq->data.bytes_xfered)
+ return MMC_BLK_PARTIAL;
+
+ return MMC_BLK_SUCCESS;
+}
+
+static void mmc_swcmdq_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
+{
+ struct mmc_blk_data *md = mq->blkdata;
+ struct mmc_card *card = md->queue.card;
+ struct mmc_blk_request *brq = &mqrq->brq;
+ struct request *req = mqrq->req;
+ bool do_data_tag;
+
+ /*
+ * Reliable writes are used to implement Forced Unit Access and
+ * are supported only on MMCs.
+ */
+ bool do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+ (rq_data_dir(req) == WRITE) &&
+ (md->flags & MMC_BLK_REL_WR);
+
+ memset(brq, 0, sizeof(struct mmc_blk_request));
+ brq->mrq.cmd = &brq->cmd;
+ brq->mrq.data = &brq->data;
+ brq->mrq.cap_cmd_during_tfr = true;
+
+ if (rq_data_dir(req) == READ) {
+ brq->cmd.opcode = MMC_EXECUTE_READ_TASK;
+ brq->data.flags = MMC_DATA_READ;
+ brq->stop.flags = MMC_RSP_R1 | MMC_CMD_AC;
+ } else {
+ brq->cmd.opcode = MMC_EXECUTE_WRITE_TASK;
+ brq->data.flags = MMC_DATA_WRITE;
+ brq->stop.flags = MMC_RSP_R1B | MMC_CMD_AC;
+ }
+ brq->cmd.arg = mqrq->task_id << 16;
+ brq->cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
+
+ brq->data.blksz = 512;
+ brq->data.blocks = blk_rq_sectors(req);
+
+ brq->stop.opcode = MMC_STOP_TRANSMISSION;
+ brq->stop.arg = 0;
+
+ /*
+ * The block layer doesn't support all sector count
+ * restrictions, so we need to be prepared for too big
+ * requests.
+ */
+ if (brq->data.blocks > card->host->max_blk_count)
+ brq->data.blocks = card->host->max_blk_count;
+
+ if (do_rel_wr)
+ mmc_apply_rel_rw(brq, card, req);
+
+ /*
+ * Data tag is used only during writing meta data to speed
+ * up write and any subsequent read of this meta data
+ */
+ do_data_tag = (card->ext_csd.data_tag_unit_size) &&
+ (req->cmd_flags & REQ_META) &&
+ (rq_data_dir(req) == WRITE) &&
+ ((brq->data.blocks * brq->data.blksz) >=
+ card->ext_csd.data_tag_unit_size);
+
+ brq->cmd44.opcode = MMC_QUE_TASK_PARAMS;
+ brq->cmd44.arg = brq->data.blocks |
+ (do_rel_wr ? (1 << 31) : 0) |
+ ((rq_data_dir(req) == READ) ? (1 << 30) : 0) |
+ (do_data_tag ? (1 << 29) : 0) |
+ mqrq->task_id << 16;
+ brq->cmd44.flags = MMC_RSP_R1 | MMC_CMD_AC;
+
+ brq->cmd45.opcode = MMC_QUE_TASK_ADDR;
+ brq->cmd45.arg = blk_rq_pos(req);
+
+ mmc_set_data_timeout(&brq->data, card);
+
+ brq->data.sg = mqrq->sg;
+ brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
+
+ /*
+ * Adjust the sg list so it is the same size as the
+ * request.
+ */
+ if (brq->data.blocks != blk_rq_sectors(req)) {
+ int i, data_size = brq->data.blocks << 9;
+ struct scatterlist *sg;
+
+ for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
+ data_size -= sg->length;
+ if (data_size <= 0) {
+ sg->length += data_size;
+ i++;
+ break;
+ }
+ }
+ brq->data.sg_len = i;
+ }
+
+ mqrq->mmc_active.mrq = &brq->mrq;
+ mqrq->mmc_active.err_check = mmc_swcmdq_err_check;
+
+ mmc_queue_bounce_pre(mqrq);
+}
+
+static int mmc_swcmdq_blk_err(struct mmc_card *card, int err)
+{
+ /* Re-try after CRC errors when re-tuning is needed */
+ if (err == -EILSEQ && card->host->need_retune)
+ return MMC_BLK_RETRY;
+
+ if (err)
+ return MMC_BLK_ABORT;
+
+ return 0;
+}
+
+#define SWCMDQ_ENQUEUE_ERR ( \
+ R1_OUT_OF_RANGE | \
+ R1_ADDRESS_ERROR | \
+ R1_BLOCK_LEN_ERROR | \
+ R1_WP_VIOLATION | \
+ R1_CC_ERROR | \
+ R1_ERROR | \
+ R1_COM_CRC_ERROR | \
+ R1_ILLEGAL_COMMAND)
+
+static int __mmc_swcmdq_enqueue(struct mmc_queue *mq,
+ struct mmc_queue_req *mqrq)
+{
+ struct mmc_card *card = mq->card;
+ int err;
+
+ mmc_swcmdq_prep(mq, mqrq);
+
+ err = mmc_wait_for_cmd(card->host, &mqrq->brq.cmd44, 0);
+ if (err)
+ goto out;
+
+ err = mmc_wait_for_cmd(card->host, &mqrq->brq.cmd45, 0);
+ if (err)
+ goto out;
+
+ /*
+ * Don't assume the task is queued if there are any error bits set in
+ * the response.
+ */
+ if (mqrq->brq.cmd45.resp[0] & SWCMDQ_ENQUEUE_ERR)
+ return MMC_BLK_ABORT;
+out:
+ return mmc_swcmdq_blk_err(card, err);
+}
+
+static int mmc_swcmdq_enqueue(struct mmc_queue *mq, struct request *req)
+{
+ struct mmc_queue_req *mqrq;
+
+ mqrq = mmc_queue_req_find(mq, req);
+ if (!mqrq) {
+ WARN_ON(1);
+ mmc_blk_requeue(mq->queue, req);
+ return 0;
+ }
+
+ /* Need to hold re-tuning so long as the queue is not empty */
+ if (mq->qcnt == 1)
+ mmc_retune_hold(mq->card->host);
+
+ return __mmc_swcmdq_enqueue(mq, mqrq);
+}
+
+static struct mmc_async_req *mmc_swcmdq_next(struct mmc_queue *mq)
+{
+ int i = __ffs(mq->qsr);
+
+ __clear_bit(i, &mq->qsr);
+
+ if (i >= mq->qdepth)
+ return NULL;
+
+ return &mq->mqrq[i].mmc_active;
+}
+
+static int mmc_get_qsr(struct mmc_card *card, u32 *qsr)
+{
+ struct mmc_command cmd = {0};
+ int err, retries = 3;
+
+ cmd.opcode = MMC_SEND_STATUS;
+ cmd.arg = card->rca << 16 | 1 << 15;
+ cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
+ err = mmc_wait_for_cmd(card->host, &cmd, retries);
+ if (err)
+ return err;
+
+ *qsr = cmd.resp[0];
+
+ return 0;
+}
+
+static int mmc_await_qsr(struct mmc_card *card, u32 *qsr)
+{
+ unsigned long timeout;
+ u32 status = 0;
+ int err;
+
+ timeout = jiffies + msecs_to_jiffies(10 * 1000);
+ while (!status) {
+ err = mmc_get_qsr(card, &status);
+ if (err)
+ return err;
+ if (time_after(jiffies, timeout)) {
+ pr_err("%s: Card stuck with queued tasks\n",
+ mmc_hostname(card->host));
+ return -ETIMEDOUT;
+ }
+ }
+
+ *qsr = status;
+
+ return 0;
+}
+
+static int mmc_swcmdq_await_qsr(struct mmc_queue *mq, struct mmc_card *card,
+ bool wait)
+{
+ struct mmc_queue_req *mqrq;
+ u32 qsr;
+ int err;
+
+ if (wait)
+ err = mmc_await_qsr(card, &qsr);
+ else
+ err = mmc_get_qsr(card, &qsr);
+ if (err)
+ goto out_err;
+
+ mq->qsr = qsr;
+
+ if (card->host->areq) {
+ /*
+ * The active request remains in the QSR until completed. Remove
+ * it so that mq->qsr only contains ones that are ready but not
+ * executed.
+ */
+ mqrq = container_of(card->host->areq, struct mmc_queue_req,
+ mmc_active);
+ __clear_bit(mqrq->task_id, &mq->qsr);
+ }
+
+ if (mq->qsr)
+ mq->qsr_err = false;
+out_err:
+ if (err) {
+ /* Don't repeatedly retry if no progress is made */
+ if (mq->qsr_err)
+ return MMC_BLK_ABORT;
+ mq->qsr_err = true;
+ }
+
+ return mmc_swcmdq_blk_err(card, err);
+}
+
+static int mmc_swcmdq_execute(struct mmc_queue *mq, bool flush, bool requeuing,
+ bool new_req)
+{
+ struct mmc_card *card = mq->card;
+ struct mmc_async_req *next = NULL, *prev;
+ struct mmc_blk_request *brq;
+ struct mmc_queue_req *mqrq;
+ enum mmc_blk_status status;
+ struct request *req;
+ int active = card->host->areq ? 1 : 0;
+ int ret;
+
+ if (mq->prepared_areq) {
+ /*
+ * A request that has been prepared before (i.e. passed to
+ * mmc_start_req()) but not started because another new request
+ * turned up.
+ */
+ next = mq->prepared_areq;
+ } else if (requeuing) {
+ /* Just finish the active request */
+ next = NULL;
+ } else if (mq->qsr) {
+ /* Get the next task from the Queue Status Register */
+ next = mmc_swcmdq_next(mq);
+ } else if (mq->qcnt > active) {
+ /*
+ * There are queued tasks so read the Queue Status Register to
+ * see if any are ready. Wait for a ready task only if there is
+ * no active request and no new request.
+ */
+ ret = mmc_swcmdq_await_qsr(mq, card, !active && !new_req);
+ if (ret)
+ return ret;
+ if (mq->qsr)
+ next = mmc_swcmdq_next(mq);
+ }
+
+ if (next) {
+ /*
+ * Don't wake for a new request when waiting for the active
+ * request if there is another request ready to start.
+ */
+ if (active)
+ mmc_queue_set_wake(mq, false);
+ } else {
+ if (!active)
+ return 0;
+ /*
+ * Don't wake for a new request when flushing or the queue is
+ * full.
+ */
+ if (flush || mq->qcnt == mq->qdepth)
+ mmc_queue_set_wake(mq, false);
+ else
+ mmc_queue_set_wake(mq, true);
+ }
+
+ prev = mmc_start_req(card->host, next, &status);
+
+ if (status == MMC_BLK_NEW_REQUEST) {
+ mq->prepared_areq = next;
+ return status;
+ }
+
+ mq->prepared_areq = NULL;
+
+ if (!prev)
+ return 0;
+
+ mqrq = container_of(prev, struct mmc_queue_req, mmc_active);
+ brq = &mqrq->brq;
+ req = mqrq->req;
+
+ mmc_queue_bounce_post(mqrq);
+
+ switch (status) {
+ case MMC_BLK_SUCCESS:
+ case MMC_BLK_PARTIAL:
+ case MMC_BLK_SUCCESS_ERR:
+ mmc_blk_reset_success(mq->blkdata, MMC_BLK_SWCMDQ);
+ ret = blk_end_request(req, 0, brq->data.bytes_xfered);
+ if (ret) {
+ if (!requeuing)
+ return __mmc_swcmdq_enqueue(mq, mqrq);
+ return 0;
+ }
+ break;
+ case MMC_BLK_NEW_REQUEST:
+ return status;
+ default:
+ if (mqrq->retry_cnt++) {
+ blk_end_request_all(req, -EIO);
+ break;
+ }
+ return status;
+ }
+
+ mmc_queue_req_free(mq, mqrq);
+
+ /* Release re-tuning when queue is empty */
+ if (!mq->qcnt)
+ mmc_retune_release(card->host);
+
+ return 0;
+}
+
+static enum mmc_blk_status mmc_swcmdq_requeue_check(struct mmc_card *card,
+ struct mmc_async_req *areq)
+{
+ enum mmc_blk_status ret = mmc_swcmdq_err_check(card, areq);
+
+ /*
+ * In the case of success, prevent mmc_start_req() from starting
+ * another request by returning MMC_BLK_SUCCESS_ERR.
+ */
+ return ret == MMC_BLK_SUCCESS ? MMC_BLK_SUCCESS_ERR : ret;
+}
+
+static int mmc_swcmdq_await_active(struct mmc_queue *mq)
+{
+ struct mmc_async_req *areq = mq->card->host->areq;
+ int err;
+
+ if (!areq)
+ return 0;
+
+ areq->err_check = mmc_swcmdq_requeue_check;
+
+ err = mmc_swcmdq_execute(mq, true, true, false);
+
+ /* The request will be requeued anyway, so ignore 'retry' */
+ if (err == MMC_BLK_RETRY)
+ err = 0;
+
+ return err;
+}
+
+static int mmc_swcmdq_discard_queue(struct mmc_queue *mq)
+{
+ struct mmc_command cmd = {0};
+
+ if (!mq->qcnt)
+ return 0;
+
+ mq->qsr = 0;
+
+ cmd.opcode = MMC_CMDQ_TASK_MGMT;
+ cmd.arg = 1; /* Discard entire queue */
+ cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
+ /* This is for recovery and the response is not needed, so ignore CRC */
+ cmd.flags &= ~MMC_RSP_CRC;
+
+ return mmc_wait_for_cmd(mq->card->host, &cmd, 0);
+}
+
+static int __mmc_swcmdq_requeue(struct mmc_queue *mq)
+{
+ unsigned long i, qslots = mq->qslots;
+ int err;
+
+ if (qslots) {
+ /* Cause re-tuning before next command, if needed */
+ mmc_retune_release(mq->card->host);
+ mmc_retune_hold(mq->card->host);
+ }
+
+ while (qslots) {
+ i = __ffs(qslots);
+ err = __mmc_swcmdq_enqueue(mq, &mq->mqrq[i]);
+ if (err)
+ return err;
+ __clear_bit(i, &qslots);
+ }
+
+ return 0;
+}
+
+static void __mmc_swcmdq_error_out(struct mmc_queue *mq)
+{
+ unsigned long i, qslots = mq->qslots;
+ struct request *req;
+
+ if (qslots)
+ mmc_retune_release(mq->card->host);
+
+ while (qslots) {
+ i = __ffs(qslots);
+ req = mq->mqrq[i].req;
+ blk_end_request_all(req, -EIO);
+ mq->mqrq[i].req = NULL;
+ __clear_bit(i, &qslots);
+ }
+
+ mq->qslots = 0;
+ mq->qcnt = 0;
+}
+
+static int mmc_swcmdq_requeue(struct mmc_queue *mq)
+{
+ int err;
+
+ /* Wait for active request */
+ err = mmc_swcmdq_await_active(mq);
+ if (err)
+ return err;
+
+ err = mmc_swcmdq_discard_queue(mq);
+ if (err)
+ return err;
+
+ return __mmc_swcmdq_requeue(mq);
+}
+
+static void mmc_swcmdq_reset(struct mmc_queue *mq)
+{
+ /* Wait for active request ignoring errors */
+ mmc_swcmdq_await_active(mq);
+
+ /* Ensure the queue is discarded */
+ mmc_swcmdq_discard_queue(mq);
+
+ /* Reset and requeue else error out all requests */
+ if (mmc_blk_reset(mq->blkdata, mq->card->host, MMC_BLK_SWCMDQ) ||
+ __mmc_swcmdq_requeue(mq))
+ __mmc_swcmdq_error_out(mq);
+}
+
+/*
+ * Recovery has 2 options:
+ * 1. Discard the queue and re-queue all requests. If that fails, fall back to
+ * option 2.
+ * 2. Reset and re-queue all requests. If that fails, error out all the
+ * requests.
+ * In either case, re-tuning will be done if needed after the queue becomes
+ * empty because re-tuning is released at that point.
+ */
+static void mmc_swcmdq_recovery(struct mmc_queue *mq, int err)
+{
+ switch (err) {
+ case MMC_BLK_RETRY:
+ err = mmc_swcmdq_requeue(mq);
+ if (!err)
+ break;
+ /* Fall through */
+ default:
+ mmc_swcmdq_reset(mq);
+ }
+}
+
+static int mmc_swcmdq_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+{
+ struct mmc_context_info *cntx = &mq->card->host->context_info;
+ bool flush = !req && !cntx->is_waiting_last_req;
+ int err;
+
+ /* Enqueue new requests */
+ if (req) {
+ err = mmc_swcmdq_enqueue(mq, req);
+ if (err)
+ mmc_swcmdq_recovery(mq, err);
+ }
+
+ /*
+ * Keep executing queued requests until the queue is empty or
+ * mmc_swcmdq_execute() asks for new requests by returning
+ * MMC_BLK_NEW_REQUEST.
+ */
+ while (mq->qcnt) {
+ /*
+ * Re-tuning can only be done when the queue is empty. Recovery
+ * for MMC_BLK_RETRY will discard the queue and re-queue all
+ * requests. At the point the queue is empty, re-tuning is
+ * released and will be done automatically before the next
+ * mmc_request.
+ */
+ if (mq->card->host->need_retune)
+ mmc_swcmdq_recovery(mq, MMC_BLK_RETRY);
+ err = mmc_swcmdq_execute(mq, flush, false, !!req);
+ if (err == MMC_BLK_NEW_REQUEST)
+ return 0;
+ if (err)
+ mmc_swcmdq_recovery(mq, err);
+ }
+
+ return 0;
+}
+
+static int __mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
{
struct mmc_blk_data *md = mq->blkdata;
struct mmc_card *card = md->queue.card;
@@ -2244,6 +2822,17 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
return 0;
}
+static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+{
+ struct mmc_blk_data *md = mq->blkdata;
+ struct mmc_card *card = md->queue.card;
+
+ if (card->ext_csd.cmdq_en)
+ return mmc_swcmdq_issue_rw_rq(mq, req);
+ else
+ return __mmc_blk_issue_rw_rq(mq, req);
+}
+
int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
{
int ret;
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 4847748fac2b..b292109c82ea 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -64,6 +64,7 @@ struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
mqrq->req = req;
mq->qcnt += 1;
__set_bit(mqrq->task_id, &mq->qslots);
+ mqrq->retry_cnt = 0;
return mqrq;
}
@@ -353,7 +354,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (!mq->queue)
return -ENOMEM;
- mq->qdepth = 2;
+ if (card->ext_csd.cmdq_en)
+ mq->qdepth = card->ext_csd.cmdq_depth;
+ else
+ mq->qdepth = 2;
mq->mqrq = mmc_queue_alloc_mqrqs(mq, mq->qdepth);
if (!mq->mqrq)
goto blk_cleanup;
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index 1bc3f71b9008..a2005153b315 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -15,9 +15,13 @@ static inline bool mmc_req_is_special(struct request *req)
struct mmc_blk_request {
struct mmc_request mrq;
- struct mmc_command sbc;
+ union {
+ struct mmc_command sbc;
+ struct mmc_command cmd44;
+ };
struct mmc_command cmd;
struct mmc_command stop;
+ struct mmc_command cmd45;
struct mmc_data data;
int retune_retry_done;
};
@@ -50,6 +54,7 @@ struct mmc_queue_req {
enum mmc_packed_type cmd_type;
struct mmc_packed *packed;
int task_id;
+ unsigned int retry_cnt;
};
struct mmc_queue {
@@ -66,6 +71,10 @@ struct mmc_queue {
int qdepth;
int qcnt;
unsigned long qslots;
+ /* Following are defined for Software Command Queuing */
+ unsigned long qsr;
+ struct mmc_async_req *prepared_areq;
+ bool qsr_err;
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index d8f46e1ae7f2..03a013c83e31 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -25,6 +25,7 @@ enum mmc_blk_status {
MMC_BLK_ECC_ERR,
MMC_BLK_NOMEDIUM,
MMC_BLK_NEW_REQUEST,
+ MMC_BLK_SUCCESS_ERR, /* Success but prevent starting another request */
};
struct mmc_command {
--
1.9.1
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH V7 23/25] mmc: mmc: Enable Software Command Queuing
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (21 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 22/25] mmc: block: Add Software Command Queuing Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 24/25] mmc: sdhci-pci: Enable Software Command Queuing for some Intel controllers Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 25/25] mmc: sdhci-acpi: " Adrian Hunter
24 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Enable the Command Queue if the host controller supports Software Command
Queuing. It is not compatible with Packed Commands, so do not enable that
at the same time.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/core/mmc.c | 17 ++++++++++++++++-
include/linux/mmc/host.h | 1 +
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index e310a60ef859..6a31a7a614be 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1754,6 +1754,20 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
}
}
+ /* Enable Command Queue if supported */
+ card->ext_csd.cmdq_en = false;
+ if (card->ext_csd.cmdq_support && host->caps & MMC_CAP_SWCMDQ) {
+ err = mmc_cmdq_enable(card);
+ if (err && err != -EBADMSG)
+ goto free_card;
+ if (err) {
+ pr_warn("%s: Enabling CMDQ failed\n",
+ mmc_hostname(card->host));
+ card->ext_csd.cmdq_support = false;
+ card->ext_csd.cmdq_depth = 0;
+ err = 0;
+ }
+ }
/*
* In some cases (e.g. RPMB or mmc_test), the Command Queue must be
* disabled for a time, so a flag is needed to indicate to re-enable the
@@ -1767,7 +1781,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
*/
if (card->ext_csd.max_packed_writes >= 3 &&
card->ext_csd.max_packed_reads >= 5 &&
- host->caps2 & MMC_CAP2_PACKED_CMD) {
+ host->caps2 & MMC_CAP2_PACKED_CMD &&
+ !card->ext_csd.cmdq_en) {
err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_EXP_EVENTS_CTRL,
EXT_CSD_PACKED_EVENT_EN,
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index fa44aa93505a..ea514470c0c1 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -279,6 +279,7 @@ struct mmc_host {
#define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */
#define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */
#define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */
+#define MMC_CAP_SWCMDQ (1 << 28) /* Software Command Queue */
#define MMC_CAP_CMD_DURING_TFR (1 << 29) /* Commands during data transfer */
#define MMC_CAP_CMD23 (1 << 30) /* CMD23 supported. */
#define MMC_CAP_HW_RESET (1 << 31) /* Hardware reset */
--
1.9.1
* [PATCH V7 24/25] mmc: sdhci-pci: Enable Software Command Queuing for some Intel controllers
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (22 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 23/25] mmc: mmc: Enable " Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 25/25] mmc: sdhci-acpi: " Adrian Hunter
24 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Set MMC_CAP_SWCMDQ for Intel BYT and related eMMC host controllers.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/host/sdhci-pci-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 2d20fb60ce83..6c643b09f358 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -362,7 +362,7 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
{
slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
- MMC_CAP_CMD_DURING_TFR |
+ MMC_CAP_CMD_DURING_TFR | MMC_CAP_SWCMDQ |
MMC_CAP_WAIT_WHILE_BUSY;
slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ;
slot->hw_reset = sdhci_pci_int_hw_reset;
--
1.9.1
* [PATCH V7 25/25] mmc: sdhci-acpi: Enable Software Command Queuing for some Intel controllers
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
` (23 preceding siblings ...)
2016-11-25 10:07 ` [PATCH V7 24/25] mmc: sdhci-pci: Enable Software Command Queuing for some Intel controllers Adrian Hunter
@ 2016-11-25 10:07 ` Adrian Hunter
2016-11-25 15:15 ` Linus Walleij
24 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 10:07 UTC (permalink / raw)
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu, Linus Walleij
Set MMC_CAP_SWCMDQ for Intel BYT and related eMMC host controllers.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/mmc/host/sdhci-acpi.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
index 81d4dc034793..95fc4de05c54 100644
--- a/drivers/mmc/host/sdhci-acpi.c
+++ b/drivers/mmc/host/sdhci-acpi.c
@@ -274,7 +274,7 @@ static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
.chip = &sdhci_acpi_chip_int,
.caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
- MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
+ MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | MMC_CAP_SWCMDQ |
MMC_CAP_CMD_DURING_TFR | MMC_CAP_WAIT_WHILE_BUSY,
.caps2 = MMC_CAP2_HC_ERASE_SZ,
.flags = SDHCI_ACPI_RUNTIME_PM,
--
1.9.1
* Re: [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up
2016-11-25 10:06 ` [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up Adrian Hunter
@ 2016-11-25 14:37 ` Linus Walleij
2016-11-28 3:32 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:37 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:06 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> The only time the driver sleeps expecting to be woken upon the arrival of
> a new request, is when the dispatch queue is empty. The only time that it
> is known whether the dispatch queue is empty is after NULL is returned
> from blk_fetch_request() while under the queue lock.
>
> Recognizing those facts, simplify the synchronization between the queue
> thread and the request function. A couple of flags tell the request
> function what to do, and the queue lock and barriers associated with
> wake-ups ensure synchronization.
>
> The result is simpler and allows the removal of the context_info lock.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Very nice patch!
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs()
2016-11-25 10:06 ` [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs() Adrian Hunter
@ 2016-11-25 14:38 ` Linus Walleij
2016-11-28 3:36 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:38 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:06 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_alloc_bounce_bufs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Nice refactoring.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs()
2016-11-25 10:07 ` [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs() Adrian Hunter
@ 2016-11-25 14:39 ` Linus Walleij
2016-11-28 3:48 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:39 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_alloc_bounce_sgs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Nice refactoring.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs()
2016-11-25 10:07 ` [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs() Adrian Hunter
@ 2016-11-25 14:41 ` Linus Walleij
2016-11-28 3:49 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:41 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_alloc_sgs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Nice refactoring.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs()
2016-11-25 10:07 ` [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs() Adrian Hunter
@ 2016-11-25 14:42 ` Linus Walleij
2016-11-28 3:50 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:42 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_reqs_free_bufs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Nice refactoring.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH V7 06/25] mmc: queue: Introduce queue depth
2016-11-25 10:07 ` [PATCH V7 06/25] mmc: queue: Introduce queue depth Adrian Hunter
@ 2016-11-25 14:43 ` Linus Walleij
2016-11-25 17:20 ` Adrian Hunter
2016-11-28 4:19 ` Ritesh Harjani
1 sibling, 1 reply; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:43 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Add a mmc_queue member to record the size of the queue, which currently
> supports 2 requests on-the-go at a time.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
(...)
> + /* Queue depth is only ever 2 with packed commands */
That's a weird comment.
It doesn't have anything to do with packed commands, does it?
In that case it was just deleted.
Are you referring to the asynchronous issuing pipeline thing
(cur & next)?
Yours,
Linus Walleij
* Re: [PATCH V7 16/25] mmc: block: Fix 4K native sector check
2016-11-25 10:07 ` [PATCH V7 16/25] mmc: block: Fix 4K native sector check Adrian Hunter
@ 2016-11-25 14:51 ` Linus Walleij
0 siblings, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:51 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> The 4K native sector check does not allow for the 'do' loop nor the
> variables used after the 'cmd_abort' label.
>
> 'brq' and 'req' get reassigned in the 'do' loop, so the check must not
> assume what their values are. After the 'cmd_abort' label, 'mq_rq' and
> 'req' are used, but 'rqc' must be NULL otherwise it can be started again.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Looks correct, and is clearly a sign of what we all know:
mmc_blk_issue_rw_rq() is hopelessly convoluted and needs to
be refactored into something we can read.
Shouldn't this patch just be moved to the front of the patch queue
and merged as a fix? AFAICT it's a plain bug.
Yours,
Linus Walleij
* Re: [PATCH V7 17/25] mmc: block: Use local var for mqrq_cur
2016-11-25 10:07 ` [PATCH V7 17/25] mmc: block: Use local var for mqrq_cur Adrian Hunter
@ 2016-11-25 14:52 ` Linus Walleij
0 siblings, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:52 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
> assigning it to a local variable.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH V7 18/25] mmc: block: Pass mqrq to mmc_blk_prep_packed_list()
2016-11-25 10:07 ` [PATCH V7 18/25] mmc: block: Pass mqrq to mmc_blk_prep_packed_list() Adrian Hunter
@ 2016-11-25 14:53 ` Linus Walleij
0 siblings, 0 replies; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 14:53 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
> passing mqrq to mmc_blk_prep_packed_list().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
This code path is deleted upstream.
Yours,
Linus Walleij
* Re: [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions
2016-11-25 10:07 ` [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions Adrian Hunter
@ 2016-11-25 15:01 ` Linus Walleij
2016-11-29 10:14 ` Adrian Hunter
0 siblings, 1 reply; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 15:01 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> eMMC can have multiple internal partitions that are represented as separate
> disks / queues. However the card has only 1 command queue which must be
> empty when switching partitions. Consequently the array of mmc requests
> that are queued can be shared between partitions saving memory.
>
> Keep a pointer to the mmc request queue on the card, and use that instead
> of allocating a new one for each partition. Use a reference count to keep
> track of when to free it.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
This is a good refactoring no matter how we proceed with command
queueing. Some comments.
> @@ -1480,6 +1480,9 @@ static int mmc_packed_init(struct mmc_queue *mq, struct mmc_card *card)
> if (mq->qdepth != 2)
> return -EINVAL;
>
> + if (mqrq_cur->packed)
> + goto out;
Well, packed command support is gone, so this goes away.
> +++ b/drivers/mmc/card/queue.c
> @@ -200,10 +200,17 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(struct mmc_queue *mq,
> struct mmc_queue_req *mqrq;
> int i;
>
> + if (mq->card->mqrq) {
> + mq->card->mqrq_ref_cnt += 1;
> + return mq->card->mqrq;
> + }
> +
> mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
> if (mqrq) {
> for (i = 0; i < mq->qdepth; i++)
> mqrq[i].task_id = i;
> + mq->card->mqrq = mqrq;
> + mq->card->mqrq_ref_cnt = 1;
> }
OK
> + if (mq->card->mqrq_ref_cnt > 1)
> + return !!mq->mqrq[0].bounce_buf;
Hm, that seems inseparable from the other changes.
The decrease of the refcount seems correct.
> + struct mmc_queue_req *mqrq; /* Shared queue structure */
> + int mqrq_ref_cnt; /* Shared queue ref. count */
I'm not smart enough to see if we're always increasing/decreasing
this under a lock or otherwise exclusive context, or if it would be
better to use an atomic type for counting, like kref does?
Well, maybe the whole thing could use kref, I dunno.
I guess it should be an unsigned int at least.
Yours,
Linus Walleij
* Re: [PATCH V7 25/25] mmc: sdhci-acpi: Enable Software Command Queuing for some Intel controllers
2016-11-25 10:07 ` [PATCH V7 25/25] mmc: sdhci-acpi: " Adrian Hunter
@ 2016-11-25 15:15 ` Linus Walleij
2016-11-28 13:55 ` Adrian Hunter
0 siblings, 1 reply; 59+ messages in thread
From: Linus Walleij @ 2016-11-25 15:15 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> Set MMC_CAP_SWCMDQ for Intel BYT and related eMMC host controllers.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
> drivers/mmc/host/sdhci-acpi.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
> index 81d4dc034793..95fc4de05c54 100644
> --- a/drivers/mmc/host/sdhci-acpi.c
> +++ b/drivers/mmc/host/sdhci-acpi.c
> @@ -274,7 +274,7 @@ static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
> static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
> .chip = &sdhci_acpi_chip_int,
> .caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
> - MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
> + MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | MMC_CAP_SWCMDQ |
Actually I don't see why SOFTWARE command queueing would need a cap flag
in the host at all?
Isn't the whole point that, if it is available, we don't need any special
hardware support to use it with any host?
So why not just enable it whenever the card supports it; why flag
it in the host at all?
Yours,
Linus Walleij
* Re: [PATCH V7 06/25] mmc: queue: Introduce queue depth
2016-11-25 14:43 ` Linus Walleij
@ 2016-11-25 17:20 ` Adrian Hunter
0 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-25 17:20 UTC (permalink / raw)
To: Linus Walleij
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On 25/11/2016 4:43 p.m., Linus Walleij wrote:
> On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>
>> Add a mmc_queue member to record the size of the queue, which currently
>> supports 2 requests on-the-go at a time.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> (...)
>> + /* Queue depth is only ever 2 with packed commands */
>
> That's a weird comment.
> It doesn't have anything with packed commands to do does it?
No, it doesn't. Packed commands and command queueing are mutually exclusive,
but the packed commands implementation expects there to be 2
mmc_queue_req's. Adding the check was just so the code protects itself
from an oops.
> In that case it was just deleted.
Yes, I will re-base.
>
> Are you referring to the asynchronous issuing pipeline thing
> (cur & next)?
Yes.
* Re: [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up
2016-11-25 10:06 ` [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up Adrian Hunter
2016-11-25 14:37 ` Linus Walleij
@ 2016-11-28 3:32 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 3:32 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:36 PM, Adrian Hunter wrote:
> The only time the driver sleeps expecting to be woken upon the arrival of
> a new request, is when the dispatch queue is empty. The only time that it
> is known whether the dispatch queue is empty is after NULL is returned
> from blk_fetch_request() while under the queue lock.
>
> Recognizing those facts, simplify the synchronization between the queue
> thread and the request function. A couple of flags tell the request
> function what to do, and the queue lock and barriers associated with
> wake-ups ensure synchronization.
>
> The result is simpler and allows the removal of the context_info lock.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
> ---
> drivers/mmc/card/block.c | 7 -------
> drivers/mmc/card/queue.c | 35 +++++++++++++++++++++--------------
> drivers/mmc/card/queue.h | 1 +
> drivers/mmc/core/core.c | 6 ------
> include/linux/mmc/host.h | 2 --
> 5 files changed, 22 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
> index 6618126fcb9f..f8e51640596e 100644
> --- a/drivers/mmc/card/block.c
> +++ b/drivers/mmc/card/block.c
> @@ -2193,8 +2193,6 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
> int ret;
> struct mmc_blk_data *md = mq->blkdata;
> struct mmc_card *card = md->queue.card;
> - struct mmc_host *host = card->host;
> - unsigned long flags;
> bool req_is_special = mmc_req_is_special(req);
>
> if (req && !mq->mqrq_prev->req)
> @@ -2227,11 +2225,6 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
> mmc_blk_issue_rw_rq(mq, NULL);
> ret = mmc_blk_issue_flush(mq, req);
> } else {
> - if (!req && host->areq) {
> - spin_lock_irqsave(&host->context_info.lock, flags);
> - host->context_info.is_waiting_last_req = true;
> - spin_unlock_irqrestore(&host->context_info.lock, flags);
> - }
> ret = mmc_blk_issue_rw_rq(mq, req);
> }
>
> diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
> index 3f6a2463ab30..c4ac4b8a1a98 100644
> --- a/drivers/mmc/card/queue.c
> +++ b/drivers/mmc/card/queue.c
> @@ -53,6 +53,7 @@ static int mmc_queue_thread(void *d)
> {
> struct mmc_queue *mq = d;
> struct request_queue *q = mq->queue;
> + struct mmc_context_info *cntx = &mq->card->host->context_info;
>
> current->flags |= PF_MEMALLOC;
>
> @@ -63,6 +64,19 @@ static int mmc_queue_thread(void *d)
> spin_lock_irq(q->queue_lock);
> set_current_state(TASK_INTERRUPTIBLE);
> req = blk_fetch_request(q);
> + mq->asleep = false;
> + cntx->is_waiting_last_req = false;
> + cntx->is_new_req = false;
> + if (!req) {
> + /*
> + * Dispatch queue is empty so set flags for
> + * mmc_request_fn() to wake us up.
> + */
> + if (mq->mqrq_prev->req)
> + cntx->is_waiting_last_req = true;
> + else
> + mq->asleep = true;
> + }
> mq->mqrq_cur->req = req;
> spin_unlock_irq(q->queue_lock);
>
> @@ -115,7 +129,6 @@ static void mmc_request_fn(struct request_queue *q)
> {
> struct mmc_queue *mq = q->queuedata;
> struct request *req;
> - unsigned long flags;
> struct mmc_context_info *cntx;
>
> if (!mq) {
> @@ -127,19 +140,13 @@ static void mmc_request_fn(struct request_queue *q)
> }
>
> cntx = &mq->card->host->context_info;
> - if (!mq->mqrq_cur->req && mq->mqrq_prev->req) {
> - /*
> - * New MMC request arrived when MMC thread may be
> - * blocked on the previous request to be complete
> - * with no current request fetched
> - */
> - spin_lock_irqsave(&cntx->lock, flags);
> - if (cntx->is_waiting_last_req) {
> - cntx->is_new_req = true;
> - wake_up_interruptible(&cntx->wait);
> - }
> - spin_unlock_irqrestore(&cntx->lock, flags);
> - } else if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
> +
> + if (cntx->is_waiting_last_req) {
> + cntx->is_new_req = true;
> + wake_up_interruptible(&cntx->wait);
> + }
> +
> + if (mq->asleep)
> wake_up_process(mq->thread);
> }
>
> diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
> index 334c9306070f..0e8133c626c9 100644
> --- a/drivers/mmc/card/queue.h
> +++ b/drivers/mmc/card/queue.h
> @@ -58,6 +58,7 @@ struct mmc_queue {
> unsigned int flags;
> #define MMC_QUEUE_SUSPENDED (1 << 0)
> #define MMC_QUEUE_NEW_REQUEST (1 << 1)
> + bool asleep;
> struct mmc_blk_data *blkdata;
> struct request_queue *queue;
> struct mmc_queue_req mqrq[2];
> diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
> index f39397f7c8dc..dc1f27ee50b8 100644
> --- a/drivers/mmc/core/core.c
> +++ b/drivers/mmc/core/core.c
> @@ -504,18 +504,14 @@ static enum mmc_blk_status mmc_wait_for_data_req_done(struct mmc_host *host,
> struct mmc_command *cmd;
> struct mmc_context_info *context_info = &host->context_info;
> enum mmc_blk_status status;
> - unsigned long flags;
>
> while (1) {
> wait_event_interruptible(context_info->wait,
> (context_info->is_done_rcv ||
> context_info->is_new_req));
> - spin_lock_irqsave(&context_info->lock, flags);
> context_info->is_waiting_last_req = false;
> - spin_unlock_irqrestore(&context_info->lock, flags);
> if (context_info->is_done_rcv) {
> context_info->is_done_rcv = false;
> - context_info->is_new_req = false;
> cmd = mrq->cmd;
>
> if (!cmd->error || !cmd->retries ||
> @@ -534,7 +530,6 @@ static enum mmc_blk_status mmc_wait_for_data_req_done(struct mmc_host *host,
> continue; /* wait for done/new event again */
> }
> } else if (context_info->is_new_req) {
> - context_info->is_new_req = false;
> if (!next_req)
> return MMC_BLK_NEW_REQUEST;
> }
> @@ -3016,7 +3011,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
> */
> void mmc_init_context_info(struct mmc_host *host)
> {
> - spin_lock_init(&host->context_info.lock);
> host->context_info.is_new_req = false;
> host->context_info.is_done_rcv = false;
> host->context_info.is_waiting_last_req = false;
> diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
> index 2a6418d0c343..bcf6d252ec67 100644
> --- a/include/linux/mmc/host.h
> +++ b/include/linux/mmc/host.h
> @@ -197,14 +197,12 @@ struct mmc_slot {
> * @is_new_req wake up reason was new request
> * @is_waiting_last_req mmc context waiting for single running request
> * @wait wait queue
> - * @lock lock to protect data fields
> */
> struct mmc_context_info {
> bool is_done_rcv;
> bool is_new_req;
> bool is_waiting_last_req;
> wait_queue_head_t wait;
> - spinlock_t lock;
> };
>
> struct regulator;
>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs()
2016-11-25 10:06 ` [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs() Adrian Hunter
2016-11-25 14:38 ` Linus Walleij
@ 2016-11-28 3:36 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 3:36 UTC (permalink / raw)
To: Adrian Hunter, Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij, Harjani, Ritesh
On 11/25/2016 3:36 PM, Adrian Hunter wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_alloc_bounce_bufs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs()
2016-11-25 10:07 ` [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs() Adrian Hunter
2016-11-25 14:39 ` Linus Walleij
@ 2016-11-28 3:48 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 3:48 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_alloc_bounce_sgs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
> ---
> drivers/mmc/card/queue.c | 44 ++++++++++++++++++++++++++++----------------
> 1 file changed, 28 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
> index ea8b01f76d55..3756303b4bbc 100644
> --- a/drivers/mmc/card/queue.c
> +++ b/drivers/mmc/card/queue.c
> @@ -211,6 +211,30 @@ static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
> return true;
> }
>
> +static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
> + unsigned int bouncesz)
> +{
> + struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
> + struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
> + int ret;
> +
> + mqrq_cur->sg = mmc_alloc_sg(1, &ret);
> + if (ret)
> + return ret;
> +
> + mqrq_cur->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
> + if (ret)
> + return ret;
> +
> + mqrq_prev->sg = mmc_alloc_sg(1, &ret);
> + if (ret)
> + return ret;
> +
> + mqrq_prev->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
> +
> + return ret;
> +}
> +
> /**
> * mmc_init_queue - initialise a queue structure.
> * @mq: mmc queue
> @@ -225,6 +249,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
> {
> struct mmc_host *host = card->host;
> u64 limit = BLK_BOUNCE_HIGH;
> + bool bounce = false;
> int ret;
> struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
> struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
> @@ -267,28 +292,15 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
> blk_queue_max_segments(mq->queue, bouncesz / 512);
> blk_queue_max_segment_size(mq->queue, bouncesz);
>
> - mqrq_cur->sg = mmc_alloc_sg(1, &ret);
> - if (ret)
> - goto cleanup_queue;
> -
> - mqrq_cur->bounce_sg =
> - mmc_alloc_sg(bouncesz / 512, &ret);
> - if (ret)
> - goto cleanup_queue;
> -
> - mqrq_prev->sg = mmc_alloc_sg(1, &ret);
> - if (ret)
> - goto cleanup_queue;
> -
> - mqrq_prev->bounce_sg =
> - mmc_alloc_sg(bouncesz / 512, &ret);
> + ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
> if (ret)
> goto cleanup_queue;
> + bounce = true;
> }
> }
> #endif
>
> - if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
> + if (!bounce) {
> blk_queue_bounce_limit(mq->queue, limit);
> blk_queue_max_hw_sectors(mq->queue,
> min(host->max_blk_count, host->max_req_size / 512));
>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs()
2016-11-25 10:07 ` [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs() Adrian Hunter
2016-11-25 14:41 ` Linus Walleij
@ 2016-11-28 3:49 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 3:49 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_alloc_sgs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs()
2016-11-25 10:07 ` [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs() Adrian Hunter
2016-11-25 14:42 ` Linus Walleij
@ 2016-11-28 3:50 ` Ritesh Harjani
1 sibling, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 3:50 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> In preparation for supporting a queue of requests, factor out
> mmc_queue_reqs_free_bufs().
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 06/25] mmc: queue: Introduce queue depth
2016-11-25 10:07 ` [PATCH V7 06/25] mmc: queue: Introduce queue depth Adrian Hunter
2016-11-25 14:43 ` Linus Walleij
@ 2016-11-28 4:19 ` Ritesh Harjani
2016-11-28 12:45 ` Adrian Hunter
1 sibling, 1 reply; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:19 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Add a mmc_queue member to record the size of the queue, which currently
> supports 2 requests on-the-go at a time.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
> drivers/mmc/card/block.c | 3 +++
> drivers/mmc/card/queue.c | 1 +
> drivers/mmc/card/queue.h | 1 +
> 3 files changed, 5 insertions(+)
>
> diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
> index f8e51640596e..47835b78872f 100644
> --- a/drivers/mmc/card/block.c
> +++ b/drivers/mmc/card/block.c
> @@ -1439,6 +1439,9 @@ static int mmc_packed_init(struct mmc_queue *mq, struct mmc_card *card)
> struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
> int ret = 0;
>
> + /* Queue depth is only ever 2 with packed commands */
> + if (mq->qdepth != 2)
> + return -EINVAL;
I think you are saying here that, with SWCMDQ, packed commands won't be
used. Instead of qdepth, do you think we should check cmdq_en?
Also, maybe we shouldn't even call mmc_packed_init() if cmdq_en is true?
>
> mqrq_cur->packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
> if (!mqrq_cur->packed) {
> diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
> index cbe92c9cfda1..60fa095adb14 100644
> --- a/drivers/mmc/card/queue.c
> +++ b/drivers/mmc/card/queue.c
> @@ -296,6 +296,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
> if (!mq->queue)
> return -ENOMEM;
>
> + mq->qdepth = 2;
> mq->mqrq_cur = &mq->mqrq[0];
> mq->mqrq_prev = &mq->mqrq[1];
> mq->queue->queuedata = mq;
> diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
> index 0e8133c626c9..8a0a45e5650d 100644
> --- a/drivers/mmc/card/queue.h
> +++ b/drivers/mmc/card/queue.h
> @@ -64,6 +64,7 @@ struct mmc_queue {
> struct mmc_queue_req mqrq[2];
> struct mmc_queue_req *mqrq_cur;
> struct mmc_queue_req *mqrq_prev;
> + int qdepth;
> };
>
> extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 07/25] mmc: queue: Use queue depth to allocate and free
2016-11-25 10:07 ` [PATCH V7 07/25] mmc: queue: Use queue depth to allocate and free Adrian Hunter
@ 2016-11-28 4:21 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:21 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Instead of allocating resources for 2 slots in the queue, allow for an
> arbitrary number.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
> ---
> drivers/mmc/card/queue.c | 103 +++++++++++++++++++++--------------------------
> 1 file changed, 46 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
> index 60fa095adb14..1ea007f51ec9 100644
> --- a/drivers/mmc/card/queue.c
> +++ b/drivers/mmc/card/queue.c
> @@ -189,86 +189,75 @@ static void mmc_queue_setup_discard(struct request_queue *q,
> static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
> unsigned int bouncesz)
> {
> - struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
> - struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
> -
> - mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
> - if (!mqrq_cur->bounce_buf) {
> - pr_warn("%s: unable to allocate bounce cur buffer\n",
> - mmc_card_name(mq->card));
> - return false;
> - }
> + int i;
>
> - mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
> - if (!mqrq_prev->bounce_buf) {
> - pr_warn("%s: unable to allocate bounce prev buffer\n",
> - mmc_card_name(mq->card));
> - kfree(mqrq_cur->bounce_buf);
> - mqrq_cur->bounce_buf = NULL;
> - return false;
> + for (i = 0; i < mq->qdepth; i++) {
> + mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
> + if (!mq->mqrq[i].bounce_buf)
> + goto out_err;
> }
>
> return true;
> +
> +out_err:
> + while (--i >= 0) {
> + kfree(mq->mqrq[i].bounce_buf);
> + mq->mqrq[i].bounce_buf = NULL;
> + }
> + pr_warn("%s: unable to allocate bounce buffers\n",
> + mmc_card_name(mq->card));
> + return false;
> }
>
> static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
> unsigned int bouncesz)
> {
> - struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
> - struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
> - int ret;
> -
> - mqrq_cur->sg = mmc_alloc_sg(1, &ret);
> - if (ret)
> - return ret;
> + int i, ret;
>
> - mqrq_cur->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
> - if (ret)
> - return ret;
> -
> - mqrq_prev->sg = mmc_alloc_sg(1, &ret);
> - if (ret)
> - return ret;
> + for (i = 0; i < mq->qdepth; i++) {
> + mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
> + if (ret)
> + return ret;
>
> - mqrq_prev->bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
> + mq->mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
> + if (ret)
> + return ret;
> + }
>
> - return ret;
> + return 0;
> }
>
> static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
> {
> - struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
> - struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
> - int ret;
> + int i, ret;
>
> - mqrq_cur->sg = mmc_alloc_sg(max_segs, &ret);
> - if (ret)
> - return ret;
> + for (i = 0; i < mq->qdepth; i++) {
> + mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
> + if (ret)
> + return ret;
> + }
>
> - mqrq_prev->sg = mmc_alloc_sg(max_segs, &ret);
> + return 0;
> +}
>
> - return ret;
> +static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
> +{
> + kfree(mqrq->bounce_sg);
> + mqrq->bounce_sg = NULL;
> +
> + kfree(mqrq->sg);
> + mqrq->sg = NULL;
> +
> + kfree(mqrq->bounce_buf);
> + mqrq->bounce_buf = NULL;
> }
>
> static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
> {
> - struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
> - struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
> -
> - kfree(mqrq_cur->bounce_sg);
> - mqrq_cur->bounce_sg = NULL;
> - kfree(mqrq_prev->bounce_sg);
> - mqrq_prev->bounce_sg = NULL;
> -
> - kfree(mqrq_cur->sg);
> - mqrq_cur->sg = NULL;
> - kfree(mqrq_cur->bounce_buf);
> - mqrq_cur->bounce_buf = NULL;
> -
> - kfree(mqrq_prev->sg);
> - mqrq_prev->sg = NULL;
> - kfree(mqrq_prev->bounce_buf);
> - mqrq_prev->bounce_buf = NULL;
> + int i;
> +
> + for (i = 0; i < mq->qdepth; i++)
> + mmc_queue_req_free_bufs(&mq->mqrq[i]);
> }
>
> /**
>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 08/25] mmc: queue: Allocate queue of size qdepth
2016-11-25 10:07 ` [PATCH V7 08/25] mmc: queue: Allocate queue of size qdepth Adrian Hunter
@ 2016-11-28 4:22 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:22 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Now that the queue resources are allocated according to the size of the
> queue, it is possible to allocate the queue to be an arbitrary size.
>
> A side-effect is that deallocation of 'packed' resources must be done
> before deallocation of the queue.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions
2016-11-25 10:07 ` [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions Adrian Hunter
@ 2016-11-28 4:29 ` Ritesh Harjani
2016-11-28 13:08 ` Adrian Hunter
0 siblings, 1 reply; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:29 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Add definitions relating to Command Queuing.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
> ---
> drivers/mmc/core/mmc.c | 17 +++++++++++++++++
> include/linux/mmc/card.h | 2 ++
> include/linux/mmc/mmc.h | 17 +++++++++++++++++
> 3 files changed, 36 insertions(+)
>
> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
> index 3268fcd3378d..6e9830997eef 100644
> --- a/drivers/mmc/core/mmc.c
> +++ b/drivers/mmc/core/mmc.c
> @@ -618,6 +618,23 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
> (ext_csd[EXT_CSD_SUPPORTED_MODE] & 0x1) &&
> !(ext_csd[EXT_CSD_FW_CONFIG] & 0x1);
> }
> +
> + /* eMMC v5.1 or later */
> + if (card->ext_csd.rev >= 8) {
> + card->ext_csd.cmdq_support = ext_csd[EXT_CSD_CMDQ_SUPPORT] &
> + EXT_CSD_CMDQ_SUPPORTED;
> + card->ext_csd.cmdq_depth = (ext_csd[EXT_CSD_CMDQ_DEPTH] &
> + EXT_CSD_CMDQ_DEPTH_MASK) + 1;
> + if (card->ext_csd.cmdq_depth <= 2) {
> + card->ext_csd.cmdq_support = false;
> + card->ext_csd.cmdq_depth = 0;
> + }
Could you please explain why, for cmdq_depth <= 2, we are disabling
cmdq_support? Maybe we can add a comment there.
> + if (card->ext_csd.cmdq_support) {
> + pr_debug("%s: Command Queue supported depth %u\n",
> + mmc_hostname(card->host),
> + card->ext_csd.cmdq_depth);
> + }
> + }
> out:
> return err;
> }
> diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
> index e49a3ff9d0e0..95d69d498296 100644
> --- a/include/linux/mmc/card.h
> +++ b/include/linux/mmc/card.h
> @@ -89,6 +89,8 @@ struct mmc_ext_csd {
> unsigned int boot_ro_lock; /* ro lock support */
> bool boot_ro_lockable;
> bool ffu_capable; /* Firmware upgrade support */
> + bool cmdq_support; /* Command Queue supported */
> + unsigned int cmdq_depth; /* Command Queue depth */
> #define MMC_FIRMWARE_LEN 8
> u8 fwrev[MMC_FIRMWARE_LEN]; /* FW version */
> u8 raw_exception_status; /* 54 */
> diff --git a/include/linux/mmc/mmc.h b/include/linux/mmc/mmc.h
> index c376209c70ef..672730acc705 100644
> --- a/include/linux/mmc/mmc.h
> +++ b/include/linux/mmc/mmc.h
> @@ -84,6 +84,13 @@
> #define MMC_APP_CMD 55 /* ac [31:16] RCA R1 */
> #define MMC_GEN_CMD 56 /* adtc [0] RD/WR R1 */
>
> + /* class 11 */
> +#define MMC_QUE_TASK_PARAMS 44 /* ac [20:16] task id R1 */
> +#define MMC_QUE_TASK_ADDR 45 /* ac [31:0] data addr R1 */
> +#define MMC_EXECUTE_READ_TASK 46 /* adtc [20:16] task id R1 */
> +#define MMC_EXECUTE_WRITE_TASK 47 /* adtc [20:16] task id R1 */
> +#define MMC_CMDQ_TASK_MGMT 48 /* ac [20:16] task id R1b */
> +
> static inline bool mmc_op_multi(u32 opcode)
> {
> return opcode == MMC_WRITE_MULTIPLE_BLOCK ||
> @@ -272,6 +279,7 @@ struct _mmc_csd {
> * EXT_CSD fields
> */
>
> +#define EXT_CSD_CMDQ_MODE_EN 15 /* R/W */
> #define EXT_CSD_FLUSH_CACHE 32 /* W */
> #define EXT_CSD_CACHE_CTRL 33 /* R/W */
> #define EXT_CSD_POWER_OFF_NOTIFICATION 34 /* R/W */
> @@ -331,6 +339,8 @@ struct _mmc_csd {
> #define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */
> #define EXT_CSD_PWR_CL_DDR_200_360 253 /* RO */
> #define EXT_CSD_FIRMWARE_VERSION 254 /* RO, 8 bytes */
> +#define EXT_CSD_CMDQ_DEPTH 307 /* RO */
> +#define EXT_CSD_CMDQ_SUPPORT 308 /* RO */
> #define EXT_CSD_SUPPORTED_MODE 493 /* RO */
> #define EXT_CSD_TAG_UNIT_SIZE 498 /* RO */
> #define EXT_CSD_DATA_TAG_SUPPORT 499 /* RO */
> @@ -438,6 +448,13 @@ struct _mmc_csd {
> #define EXT_CSD_MANUAL_BKOPS_MASK 0x01
>
> /*
> + * Command Queue
> + */
> +#define EXT_CSD_CMDQ_MODE_ENABLED BIT(0)
Is there a need for both EXT_CSD_CMDQ_MODE_ENABLED and
EXT_CSD_CMDQ_SUPPORTED? Aren't they doing the same thing?
> +#define EXT_CSD_CMDQ_DEPTH_MASK GENMASK(4, 0)
> +#define EXT_CSD_CMDQ_SUPPORTED BIT(0)
Ditto
> +
> +/*
> * MMC_SWITCH access modes
> */
>
>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue
2016-11-25 10:07 ` [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
@ 2016-11-28 4:36 ` Ritesh Harjani
2016-11-28 13:23 ` Adrian Hunter
0 siblings, 1 reply; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:36 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Add helper functions to enable or disable the Command Queue.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
> Documentation/mmc/mmc-dev-attrs.txt | 1 +
> drivers/mmc/core/mmc.c | 2 ++
> drivers/mmc/core/mmc_ops.c | 27 +++++++++++++++++++++++++++
> include/linux/mmc/card.h | 1 +
> include/linux/mmc/core.h | 2 ++
> 5 files changed, 33 insertions(+)
>
> diff --git a/Documentation/mmc/mmc-dev-attrs.txt b/Documentation/mmc/mmc-dev-attrs.txt
> index 404a0e9e92b0..dcd1252877fb 100644
> --- a/Documentation/mmc/mmc-dev-attrs.txt
> +++ b/Documentation/mmc/mmc-dev-attrs.txt
> @@ -30,6 +30,7 @@ All attributes are read-only.
> rel_sectors Reliable write sector count
> ocr Operation Conditions Register
> dsr Driver Stage Register
> + cmdq_en Command Queue enabled: 1 => enabled, 0 => not enabled
>
> Note on Erase Size and Preferred Erase Size:
>
> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
> index 6e9830997eef..d6a30bbd399d 100644
> --- a/drivers/mmc/core/mmc.c
> +++ b/drivers/mmc/core/mmc.c
> @@ -770,6 +770,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card, unsigned bus_width)
> MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
> MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
> MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
> +MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
>
> static ssize_t mmc_fwrev_show(struct device *dev,
> struct device_attribute *attr,
> @@ -823,6 +824,7 @@ static ssize_t mmc_dsr_show(struct device *dev,
> &dev_attr_rel_sectors.attr,
> &dev_attr_ocr.attr,
> &dev_attr_dsr.attr,
> + &dev_attr_cmdq_en.attr,
> NULL,
> };
> ATTRIBUTE_GROUPS(mmc_std);
> diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
> index 9b2617cfff67..92a1de9b4981 100644
> --- a/drivers/mmc/core/mmc_ops.c
> +++ b/drivers/mmc/core/mmc_ops.c
> @@ -824,3 +824,30 @@ int mmc_can_ext_csd(struct mmc_card *card)
> {
> return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
> }
> +
> +int mmc_cmdq_switch(struct mmc_card *card, int enable)
> +{
> + int err;
> +
> + if (!card->ext_csd.cmdq_support)
> + return -EOPNOTSUPP;
> +
> + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
> + enable, card->ext_csd.generic_cmd6_time);
> + if (!err)
> + card->ext_csd.cmdq_en = enable;
> +
> + return err;
> +}
> +
> +int mmc_cmdq_enable(struct mmc_card *card)
> +{
> + return mmc_cmdq_switch(card, EXT_CSD_CMDQ_MODE_ENABLED);
EXT_CSD_CMDQ_MODE_ENABLED is defined as BIT(0), but it is used here as
the value 1. EXT_CSD_CMDQ_MODE_ENABLED seems redundant anyway.
Do you think we can remove it and pass 1 directly?
> +}
> +EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
> +
> +int mmc_cmdq_disable(struct mmc_card *card)
> +{
> + return mmc_cmdq_switch(card, 0);
> +}
> +EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
> diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
> index 95d69d498296..2d9c24f4e88e 100644
> --- a/include/linux/mmc/card.h
> +++ b/include/linux/mmc/card.h
> @@ -89,6 +89,7 @@ struct mmc_ext_csd {
> unsigned int boot_ro_lock; /* ro lock support */
> bool boot_ro_lockable;
> bool ffu_capable; /* Firmware upgrade support */
> + bool cmdq_en; /* Command Queue enabled */
> bool cmdq_support; /* Command Queue supported */
> unsigned int cmdq_depth; /* Command Queue depth */
> #define MMC_FIRMWARE_LEN 8
> diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
> index 0ce928b3ce90..d045b06fc7ea 100644
> --- a/include/linux/mmc/core.h
> +++ b/include/linux/mmc/core.h
> @@ -177,6 +177,8 @@ extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
> extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
> extern int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
> extern int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
> +extern int mmc_cmdq_enable(struct mmc_card *card);
> +extern int mmc_cmdq_disable(struct mmc_card *card);
>
> #define MMC_ERASE_ARG 0x00000000
> #define MMC_SECURE_ERASE_ARG 0x80000000
>
* Re: [PATCH V7 11/25] mmc: mmc_test: Disable Command Queue while mmc_test is used
2016-11-25 10:07 ` [PATCH V7 11/25] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
@ 2016-11-28 4:40 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:40 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Normal read and write commands may not be used while the command queue is
> enabled. Disable the Command Queue when mmc_test is probed and re-enable it
> when it is removed.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Maybe we can later add SW CMDQ test cases to the mmc_test framework.
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
* Re: [PATCH V7 12/25] mmc: block: Disable Command Queue while RPMB is used
2016-11-25 10:07 ` [PATCH V7 12/25] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
@ 2016-11-28 4:46 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:46 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> RPMB does not allow Command Queue commands. Disable and re-enable the
> Command Queue when switching.
>
> Note that the driver only switches partitions when the queue is empty.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Minor comment.
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
> ---
> drivers/mmc/card/block.c | 46 ++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 38 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
> index f22df69823cc..157d1b3d58d6 100644
> --- a/drivers/mmc/card/block.c
> +++ b/drivers/mmc/card/block.c
> @@ -746,10 +746,41 @@ static int mmc_blk_compat_ioctl(struct block_device *bdev, fmode_t mode,
> #endif
> };
>
> +static int mmc_blk_part_switch_pre(struct mmc_card *card,
> + unsigned int part_type)
> +{
> + int ret;
int ret = 0;
> +
> + if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
> + if (card->ext_csd.cmdq_en) {
> + ret = mmc_cmdq_disable(card);
> + if (ret)
> + return ret;
> + }
> + mmc_retune_pause(card->host);
> + }
> +
> + return 0;
Return ret here instead.
> +}
> +
> +static int mmc_blk_part_switch_post(struct mmc_card *card,
> + unsigned int part_type)
> +{
> + int ret = 0;
> +
> + if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
> + mmc_retune_unpause(card->host);
> + if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
> + ret = mmc_cmdq_enable(card);
> + }
> +
> + return ret;
> +}
> +
> static inline int mmc_blk_part_switch(struct mmc_card *card,
> struct mmc_blk_data *md)
> {
> - int ret;
> + int ret = 0;
> struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
>
> if (main_md->part_curr == md->part_type)
> @@ -758,8 +789,9 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
> if (mmc_card_mmc(card)) {
> u8 part_config = card->ext_csd.part_config;
>
> - if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
> - mmc_retune_pause(card->host);
> + ret = mmc_blk_part_switch_pre(card, md->part_type);
> + if (ret)
> + return ret;
>
> part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
> part_config |= md->part_type;
> @@ -768,19 +800,17 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
> EXT_CSD_PART_CONFIG, part_config,
> card->ext_csd.part_time);
> if (ret) {
> - if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
> - mmc_retune_unpause(card->host);
> + mmc_blk_part_switch_post(card, md->part_type);
> return ret;
> }
>
> card->ext_csd.part_config = part_config;
>
> - if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
> - mmc_retune_unpause(card->host);
> + ret = mmc_blk_part_switch_post(card, main_md->part_curr);
> }
>
> main_md->part_curr = md->part_type;
> - return 0;
> + return ret;
> }
>
> static u32 mmc_sd_num_wr_blocks(struct mmc_card *card)
>
* Re: [PATCH V7 13/25] mmc: core: Do not prepare a new request twice
2016-11-25 10:07 ` [PATCH V7 13/25] mmc: core: Do not prepare a new request twice Adrian Hunter
@ 2016-11-28 4:48 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:48 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> mmc_start_req() assumes it is never called with the new request already
> prepared. That is true if the queue consists of only 2 requests, but is not
> true for a longer queue. e.g. mmc_start_req() has a current and previous
> request but still exits to queue a new request if the queue size is
> greater than 2. In that case, when mmc_start_req() is called again, the
> current request will have been prepared already. Fix by flagging if the
> request has been prepared.
>
> That also means ensuring that struct mmc_async_req is always initialized
> to zero, which wasn't the case in mmc_test.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
* Re: [PATCH V7 14/25] mmc: core: Export mmc_retune_hold() and mmc_retune_release()
2016-11-25 10:07 ` [PATCH V7 14/25] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
@ 2016-11-28 4:49 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:49 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> Re-tuning can only be done when the Command Queue is empty, which means
> holding and releasing re-tuning from the block driver, so export those
> functions.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
* Re: [PATCH V7 15/25] mmc: block: Factor out mmc_blk_requeue()
2016-11-25 10:07 ` [PATCH V7 15/25] mmc: block: Factor out mmc_blk_requeue() Adrian Hunter
@ 2016-11-28 4:51 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 4:51 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/25/2016 3:37 PM, Adrian Hunter wrote:
> The same code is used in a couple of places.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Looks good!
Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org>
Regards
Ritesh
* Re: [PATCH V7 06/25] mmc: queue: Introduce queue depth
2016-11-28 4:19 ` Ritesh Harjani
@ 2016-11-28 12:45 ` Adrian Hunter
0 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-28 12:45 UTC (permalink / raw)
To: Ritesh Harjani
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 28/11/16 06:19, Ritesh Harjani wrote:
>
>
> On 11/25/2016 3:37 PM, Adrian Hunter wrote:
>> Add a mmc_queue member to record the size of the queue, which currently
>> supports 2 requests on-the-go at a time.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> ---
>> drivers/mmc/card/block.c | 3 +++
>> drivers/mmc/card/queue.c | 1 +
>> drivers/mmc/card/queue.h | 1 +
>> 3 files changed, 5 insertions(+)
>>
>> diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
>> index f8e51640596e..47835b78872f 100644
>> --- a/drivers/mmc/card/block.c
>> +++ b/drivers/mmc/card/block.c
>> @@ -1439,6 +1439,9 @@ static int mmc_packed_init(struct mmc_queue *mq,
>> struct mmc_card *card)
>> struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
>> int ret = 0;
>>
>> + /* Queue depth is only ever 2 with packed commands */
>> + if (mq->qdepth != 2)
>> + return -EINVAL;
> I think you are saying here that with SWCMDQ, packed commands won't be
> used. Instead of qdepth, do you think we should check cmdq_en?
> Also, maybe we shouldn't even call mmc_packed_init() if cmdq_en is true?
mmc_packed_init() has gone away but the intention was to make the code
protect itself because the implementation supports only 2 requests. So,
something like this would be more explicit:
/* Packed commands implementation depends on qdepth being 2 */
if (mq->qdepth != 2) {
WARN_ON(1);
return -EINVAL;
}
>
>
>>
>> mqrq_cur->packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
>> if (!mqrq_cur->packed) {
>> diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
>> index cbe92c9cfda1..60fa095adb14 100644
>> --- a/drivers/mmc/card/queue.c
>> +++ b/drivers/mmc/card/queue.c
>> @@ -296,6 +296,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct
>> mmc_card *card,
>> if (!mq->queue)
>> return -ENOMEM;
>>
>> + mq->qdepth = 2;
>> mq->mqrq_cur = &mq->mqrq[0];
>> mq->mqrq_prev = &mq->mqrq[1];
>> mq->queue->queuedata = mq;
>> diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
>> index 0e8133c626c9..8a0a45e5650d 100644
>> --- a/drivers/mmc/card/queue.h
>> +++ b/drivers/mmc/card/queue.h
>> @@ -64,6 +64,7 @@ struct mmc_queue {
>> struct mmc_queue_req mqrq[2];
>> struct mmc_queue_req *mqrq_cur;
>> struct mmc_queue_req *mqrq_prev;
>> + int qdepth;
>> };
>>
>> extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *,
>> spinlock_t *,
>>
>
* Re: [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions
2016-11-28 4:29 ` Ritesh Harjani
@ 2016-11-28 13:08 ` Adrian Hunter
0 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-28 13:08 UTC (permalink / raw)
To: Ritesh Harjani
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 28/11/16 06:29, Ritesh Harjani wrote:
>
>
> On 11/25/2016 3:37 PM, Adrian Hunter wrote:
>> Add definitions relating to Command Queuing.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
>> ---
>> drivers/mmc/core/mmc.c | 17 +++++++++++++++++
>> include/linux/mmc/card.h | 2 ++
>> include/linux/mmc/mmc.h | 17 +++++++++++++++++
>> 3 files changed, 36 insertions(+)
>>
>> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
>> index 3268fcd3378d..6e9830997eef 100644
>> --- a/drivers/mmc/core/mmc.c
>> +++ b/drivers/mmc/core/mmc.c
>> @@ -618,6 +618,23 @@ static int mmc_decode_ext_csd(struct mmc_card *card,
>> u8 *ext_csd)
>> (ext_csd[EXT_CSD_SUPPORTED_MODE] & 0x1) &&
>> !(ext_csd[EXT_CSD_FW_CONFIG] & 0x1);
>> }
>> +
>> + /* eMMC v5.1 or later */
>> + if (card->ext_csd.rev >= 8) {
>> + card->ext_csd.cmdq_support = ext_csd[EXT_CSD_CMDQ_SUPPORT] &
>> + EXT_CSD_CMDQ_SUPPORTED;
>> + card->ext_csd.cmdq_depth = (ext_csd[EXT_CSD_CMDQ_DEPTH] &
>> + EXT_CSD_CMDQ_DEPTH_MASK) + 1;
>> + if (card->ext_csd.cmdq_depth <= 2) {
>> + card->ext_csd.cmdq_support = false;
>> + card->ext_csd.cmdq_depth = 0;
>> + }
> Could you please explain why we disable cmdq_support when cmdq_depth <= 2?
> Maybe we can add a comment there.
It was in the original code. I presumed it was because such a small queue
did not give a performance advantage. Certainly a qdepth of 1 is not a
queue at all. However, all the eMMC devices I have come in contact with
have a queue depth of at least 16, so it is unlikely to have any effect in
practice. I can add a comment.
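[Editor's note: the depth decode discussed above can be exercised in a
standalone user-space sketch. BIT()/GENMASK() and the EXT_CSD constants are
re-defined locally here purely for illustration; this is not the kernel code
itself.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Local stand-ins for the kernel's BIT()/GENMASK() helpers */
#define BIT(n)          (1U << (n))
#define GENMASK(h, l)   (((~0U) >> (31 - (h))) & ~((1U << (l)) - 1))

#define EXT_CSD_CMDQ_DEPTH       307
#define EXT_CSD_CMDQ_SUPPORT     308
#define EXT_CSD_CMDQ_DEPTH_MASK  GENMASK(4, 0)
#define EXT_CSD_CMDQ_SUPPORTED   BIT(0)

/*
 * Decode the queue depth as in mmc_decode_ext_csd(): the 5-bit field
 * holds (depth - 1), and depths of 2 or less are treated as no queue.
 */
static unsigned int cmdq_depth(const uint8_t *ext_csd, bool *support)
{
	unsigned int depth;

	*support = ext_csd[EXT_CSD_CMDQ_SUPPORT] & EXT_CSD_CMDQ_SUPPORTED;
	depth = (ext_csd[EXT_CSD_CMDQ_DEPTH] & EXT_CSD_CMDQ_DEPTH_MASK) + 1;
	if (depth <= 2) {
		*support = false;
		depth = 0;
	}
	return depth;
}
```

A raw field value of N encodes a depth of N + 1, so a raw value of 1
(depth 2) falls below the threshold and support is cleared.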
>
>
>> + if (card->ext_csd.cmdq_support) {
>> + pr_debug("%s: Command Queue supported depth %u\n",
>> + mmc_hostname(card->host),
>> + card->ext_csd.cmdq_depth);
>> + }
>> + }
>> out:
>> return err;
>> }
>> diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
>> index e49a3ff9d0e0..95d69d498296 100644
>> --- a/include/linux/mmc/card.h
>> +++ b/include/linux/mmc/card.h
>> @@ -89,6 +89,8 @@ struct mmc_ext_csd {
>> unsigned int boot_ro_lock; /* ro lock support */
>> bool boot_ro_lockable;
>> bool ffu_capable; /* Firmware upgrade support */
>> + bool cmdq_support; /* Command Queue supported */
>> + unsigned int cmdq_depth; /* Command Queue depth */
>> #define MMC_FIRMWARE_LEN 8
>> u8 fwrev[MMC_FIRMWARE_LEN]; /* FW version */
>> u8 raw_exception_status; /* 54 */
>> diff --git a/include/linux/mmc/mmc.h b/include/linux/mmc/mmc.h
>> index c376209c70ef..672730acc705 100644
>> --- a/include/linux/mmc/mmc.h
>> +++ b/include/linux/mmc/mmc.h
>> @@ -84,6 +84,13 @@
>> #define MMC_APP_CMD 55 /* ac [31:16] RCA R1 */
>> #define MMC_GEN_CMD 56 /* adtc [0] RD/WR R1 */
>>
>> + /* class 11 */
>> +#define MMC_QUE_TASK_PARAMS 44 /* ac [20:16] task id R1 */
>> +#define MMC_QUE_TASK_ADDR 45 /* ac [31:0] data addr R1 */
>> +#define MMC_EXECUTE_READ_TASK 46 /* adtc [20:16] task id R1 */
>> +#define MMC_EXECUTE_WRITE_TASK 47 /* adtc [20:16] task id R1 */
>> +#define MMC_CMDQ_TASK_MGMT 48 /* ac [20:16] task id R1b */
>> +
>> static inline bool mmc_op_multi(u32 opcode)
>> {
>> return opcode == MMC_WRITE_MULTIPLE_BLOCK ||
>> @@ -272,6 +279,7 @@ struct _mmc_csd {
>> * EXT_CSD fields
>> */
>>
>> +#define EXT_CSD_CMDQ_MODE_EN 15 /* R/W */
>> #define EXT_CSD_FLUSH_CACHE 32 /* W */
>> #define EXT_CSD_CACHE_CTRL 33 /* R/W */
>> #define EXT_CSD_POWER_OFF_NOTIFICATION 34 /* R/W */
>> @@ -331,6 +339,8 @@ struct _mmc_csd {
>> #define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */
>> #define EXT_CSD_PWR_CL_DDR_200_360 253 /* RO */
>> #define EXT_CSD_FIRMWARE_VERSION 254 /* RO, 8 bytes */
>> +#define EXT_CSD_CMDQ_DEPTH 307 /* RO */
>> +#define EXT_CSD_CMDQ_SUPPORT 308 /* RO */
>> #define EXT_CSD_SUPPORTED_MODE 493 /* RO */
>> #define EXT_CSD_TAG_UNIT_SIZE 498 /* RO */
>> #define EXT_CSD_DATA_TAG_SUPPORT 499 /* RO */
>> @@ -438,6 +448,13 @@ struct _mmc_csd {
>> #define EXT_CSD_MANUAL_BKOPS_MASK 0x01
>>
>> /*
>> + * Command Queue
>> + */
>> +#define EXT_CSD_CMDQ_MODE_ENABLED BIT(0)
>
> Is there a need for both EXT_CSD_CMDQ_MODE_ENABLED and
> EXT_CSD_CMDQ_SUPPORTED? Aren't they doing the same thing?
They are bits in different ext_csd bytes. EXT_CSD_CMDQ_MODE_ENABLED is in
the CMDQ_MODE_EN byte (15), and EXT_CSD_CMDQ_SUPPORTED is in CMDQ_SUPPORT
(308). AFAIK it is a coincidence that they are the same bit number.
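[Editor's note: the point that the two defines select the same bit position
in two different EXT_CSD bytes can be illustrated with this minimal
standalone sketch. The constants are copied for illustration; byte 308 is
read-only card data while byte 15 is what the host writes via mmc_switch().]

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1U << (n))

/* Two different byte indices in the EXT_CSD register file ... */
#define EXT_CSD_CMDQ_MODE_EN       15   /* R/W, written via CMD6 */
#define EXT_CSD_CMDQ_SUPPORT       308  /* RO, card capability   */
/* ... that each happen to use bit 0 */
#define EXT_CSD_CMDQ_MODE_ENABLED  BIT(0) /* bit within byte 15  */
#define EXT_CSD_CMDQ_SUPPORTED     BIT(0) /* bit within byte 308 */

static int card_cmdq_supported(const uint8_t *ext_csd)
{
	return !!(ext_csd[EXT_CSD_CMDQ_SUPPORT] & EXT_CSD_CMDQ_SUPPORTED);
}

static int card_cmdq_enabled(const uint8_t *ext_csd)
{
	return !!(ext_csd[EXT_CSD_CMDQ_MODE_EN] & EXT_CSD_CMDQ_MODE_ENABLED);
}
```

A card can advertise support in byte 308 while byte 15 still reads 0 until
the host switches the queue on, so the two defines are not interchangeable.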
>
>
>> +#define EXT_CSD_CMDQ_DEPTH_MASK GENMASK(4, 0)
>> +#define EXT_CSD_CMDQ_SUPPORTED BIT(0)
> Ditto
>
>> +
>> +/*
>> * MMC_SWITCH access modes
>> */
>>
>>
>
* Re: [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue
2016-11-28 4:36 ` Ritesh Harjani
@ 2016-11-28 13:23 ` Adrian Hunter
2016-11-28 14:00 ` Ritesh Harjani
0 siblings, 1 reply; 59+ messages in thread
From: Adrian Hunter @ 2016-11-28 13:23 UTC (permalink / raw)
To: Ritesh Harjani
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 28/11/16 06:36, Ritesh Harjani wrote:
>
>
> On 11/25/2016 3:37 PM, Adrian Hunter wrote:
>> Add helper functions to enable or disable the Command Queue.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> ---
>> Documentation/mmc/mmc-dev-attrs.txt | 1 +
>> drivers/mmc/core/mmc.c | 2 ++
>> drivers/mmc/core/mmc_ops.c | 27 +++++++++++++++++++++++++++
>> include/linux/mmc/card.h | 1 +
>> include/linux/mmc/core.h | 2 ++
>> 5 files changed, 33 insertions(+)
>>
>> diff --git a/Documentation/mmc/mmc-dev-attrs.txt
>> b/Documentation/mmc/mmc-dev-attrs.txt
>> index 404a0e9e92b0..dcd1252877fb 100644
>> --- a/Documentation/mmc/mmc-dev-attrs.txt
>> +++ b/Documentation/mmc/mmc-dev-attrs.txt
>> @@ -30,6 +30,7 @@ All attributes are read-only.
>> rel_sectors Reliable write sector count
>> ocr Operation Conditions Register
>> dsr Driver Stage Register
>> + cmdq_en Command Queue enabled: 1 => enabled, 0 => not enabled
>>
>> Note on Erase Size and Preferred Erase Size:
>>
>> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
>> index 6e9830997eef..d6a30bbd399d 100644
>> --- a/drivers/mmc/core/mmc.c
>> +++ b/drivers/mmc/core/mmc.c
>> @@ -770,6 +770,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card,
>> unsigned bus_width)
>> MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
>> MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
>> MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
>> +MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
>>
>> static ssize_t mmc_fwrev_show(struct device *dev,
>> struct device_attribute *attr,
>> @@ -823,6 +824,7 @@ static ssize_t mmc_dsr_show(struct device *dev,
>> &dev_attr_rel_sectors.attr,
>> &dev_attr_ocr.attr,
>> &dev_attr_dsr.attr,
>> + &dev_attr_cmdq_en.attr,
>> NULL,
>> };
>> ATTRIBUTE_GROUPS(mmc_std);
>> diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
>> index 9b2617cfff67..92a1de9b4981 100644
>> --- a/drivers/mmc/core/mmc_ops.c
>> +++ b/drivers/mmc/core/mmc_ops.c
>> @@ -824,3 +824,30 @@ int mmc_can_ext_csd(struct mmc_card *card)
>> {
>> return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
>> }
>> +
>> +int mmc_cmdq_switch(struct mmc_card *card, int enable)
>> +{
>> + int err;
>> +
>> + if (!card->ext_csd.cmdq_support)
>> + return -EOPNOTSUPP;
>> +
>> + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
>> + enable, card->ext_csd.generic_cmd6_time);
>> + if (!err)
>> + card->ext_csd.cmdq_en = enable;
>> +
>> + return err;
>> +}
>> +
>> +int mmc_cmdq_enable(struct mmc_card *card)
>> +{
>> + return mmc_cmdq_switch(card, EXT_CSD_CMDQ_MODE_ENABLED);
>
> EXT_CSD_CMDQ_MODE_ENABLED is defined as BIT(0), but it is used here as
> the value 1. EXT_CSD_CMDQ_MODE_ENABLED seems redundant anyway.
> Do you think we can remove it and pass 1 directly?
How about:
static int mmc_cmdq_switch(struct mmc_card *card, bool enable)
{
u8 val = enable ? EXT_CSD_CMDQ_MODE_ENABLED : 0;
int err;
if (!card->ext_csd.cmdq_support)
return -EOPNOTSUPP;
err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
val, card->ext_csd.generic_cmd6_time);
if (!err)
card->ext_csd.cmdq_en = enable;
return err;
}
int mmc_cmdq_enable(struct mmc_card *card)
{
return mmc_cmdq_switch(card, true);
}
EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
int mmc_cmdq_disable(struct mmc_card *card)
{
return mmc_cmdq_switch(card, false);
}
EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
>
>> +}
>> +EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
>> +
>> +int mmc_cmdq_disable(struct mmc_card *card)
>> +{
>> + return mmc_cmdq_switch(card, 0);
>> +}
>> +EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
>> diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
>> index 95d69d498296..2d9c24f4e88e 100644
>> --- a/include/linux/mmc/card.h
>> +++ b/include/linux/mmc/card.h
>> @@ -89,6 +89,7 @@ struct mmc_ext_csd {
>> unsigned int boot_ro_lock; /* ro lock support */
>> bool boot_ro_lockable;
>> bool ffu_capable; /* Firmware upgrade support */
>> + bool cmdq_en; /* Command Queue enabled */
>> bool cmdq_support; /* Command Queue supported */
>> unsigned int cmdq_depth; /* Command Queue depth */
>> #define MMC_FIRMWARE_LEN 8
>> diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
>> index 0ce928b3ce90..d045b06fc7ea 100644
>> --- a/include/linux/mmc/core.h
>> +++ b/include/linux/mmc/core.h
>> @@ -177,6 +177,8 @@ extern int mmc_wait_for_app_cmd(struct mmc_host *,
>> struct mmc_card *,
>> extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
>> extern int mmc_send_tuning(struct mmc_host *host, u32 opcode, int
>> *cmd_error);
>> extern int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
>> +extern int mmc_cmdq_enable(struct mmc_card *card);
>> +extern int mmc_cmdq_disable(struct mmc_card *card);
>>
>> #define MMC_ERASE_ARG 0x00000000
>> #define MMC_SECURE_ERASE_ARG 0x80000000
>>
>
* Re: [PATCH V7 25/25] mmc: sdhci-acpi: Enable Software Command Queuing for some Intel controllers
2016-11-25 15:15 ` Linus Walleij
@ 2016-11-28 13:55 ` Adrian Hunter
0 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-28 13:55 UTC (permalink / raw)
To: Linus Walleij
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On 25/11/16 17:15, Linus Walleij wrote:
> On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>
>> Set MMC_CAP_SWCMDQ for Intel BYT and related eMMC host controllers.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> ---
>> drivers/mmc/host/sdhci-acpi.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
>> index 81d4dc034793..95fc4de05c54 100644
>> --- a/drivers/mmc/host/sdhci-acpi.c
>> +++ b/drivers/mmc/host/sdhci-acpi.c
>> @@ -274,7 +274,7 @@ static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
>> static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
>> .chip = &sdhci_acpi_chip_int,
>> .caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
>> - MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
>> + MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | MMC_CAP_SWCMDQ |
>
> Actually I don't see why SOFTWARE command queueing would need a cap flag
> in the host at all?
>
> Isn't the whole point with it that if it is available, we don't need any special
> hardware support to use it with any host?
>
> So why not just enable it if the card supports it in that case, why flag
> it in the host at all?
It is a good question. I was trying to remember why I did it that way, but
nothing came to mind.
Now it is dependent on MMC_CAP_CMD_DURING_TFR, which host controllers may
not support. An example is SDHCI host controllers that have
SDHCI_QUIRK_RESET_AFTER_REQUEST, since the reset will interfere with
ongoing transfers.
So I will drop MMC_CAP_SWCMDQ and just check MMC_CAP_CMD_DURING_TFR.
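[Editor's note: a minimal sketch of the gating Adrian describes. The
structures below are hypothetical stand-ins for the kernel's struct
mmc_host / struct mmc_card, and the capability bit value is illustrative
only; the point is that software command queuing is gated on card support
plus MMC_CAP_CMD_DURING_TFR, with no separate MMC_CAP_SWCMDQ flag.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified stand-ins for illustration only */
struct mock_host { unsigned int caps; };
struct mock_card { bool cmdq_support; struct mock_host *host; };

#define MMC_CAP_CMD_DURING_TFR (1U << 29) /* bit value illustrative only */

/*
 * Software command queuing needs no special queuing hardware, but it does
 * require the host to issue a command while a data transfer is ongoing,
 * so gate it on that one capability plus card support.
 */
static bool use_swcmdq(const struct mock_card *card)
{
	return card->cmdq_support &&
	       (card->host->caps & MMC_CAP_CMD_DURING_TFR);
}
```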
* Re: [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue
2016-11-28 13:23 ` Adrian Hunter
@ 2016-11-28 14:00 ` Ritesh Harjani
0 siblings, 0 replies; 59+ messages in thread
From: Ritesh Harjani @ 2016-11-28 14:00 UTC (permalink / raw)
To: Adrian Hunter
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Venu Byravarasu, Linus Walleij
On 11/28/2016 6:53 PM, Adrian Hunter wrote:
> On 28/11/16 06:36, Ritesh Harjani wrote:
>>
>>
>> On 11/25/2016 3:37 PM, Adrian Hunter wrote:
>>> Add helper functions to enable or disable the Command Queue.
>>>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>> ---
>>> Documentation/mmc/mmc-dev-attrs.txt | 1 +
>>> drivers/mmc/core/mmc.c | 2 ++
>>> drivers/mmc/core/mmc_ops.c | 27 +++++++++++++++++++++++++++
>>> include/linux/mmc/card.h | 1 +
>>> include/linux/mmc/core.h | 2 ++
>>> 5 files changed, 33 insertions(+)
>>>
>>> diff --git a/Documentation/mmc/mmc-dev-attrs.txt
>>> b/Documentation/mmc/mmc-dev-attrs.txt
>>> index 404a0e9e92b0..dcd1252877fb 100644
>>> --- a/Documentation/mmc/mmc-dev-attrs.txt
>>> +++ b/Documentation/mmc/mmc-dev-attrs.txt
>>> @@ -30,6 +30,7 @@ All attributes are read-only.
>>> rel_sectors Reliable write sector count
>>> ocr Operation Conditions Register
>>> dsr Driver Stage Register
>>> + cmdq_en Command Queue enabled: 1 => enabled, 0 => not enabled
>>>
>>> Note on Erase Size and Preferred Erase Size:
>>>
>>> diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
>>> index 6e9830997eef..d6a30bbd399d 100644
>>> --- a/drivers/mmc/core/mmc.c
>>> +++ b/drivers/mmc/core/mmc.c
>>> @@ -770,6 +770,7 @@ static int mmc_compare_ext_csds(struct mmc_card *card,
>>> unsigned bus_width)
>>> MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
>>> MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
>>> MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
>>> +MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
>>>
>>> static ssize_t mmc_fwrev_show(struct device *dev,
>>> struct device_attribute *attr,
>>> @@ -823,6 +824,7 @@ static ssize_t mmc_dsr_show(struct device *dev,
>>> &dev_attr_rel_sectors.attr,
>>> &dev_attr_ocr.attr,
>>> &dev_attr_dsr.attr,
>>> + &dev_attr_cmdq_en.attr,
>>> NULL,
>>> };
>>> ATTRIBUTE_GROUPS(mmc_std);
>>> diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
>>> index 9b2617cfff67..92a1de9b4981 100644
>>> --- a/drivers/mmc/core/mmc_ops.c
>>> +++ b/drivers/mmc/core/mmc_ops.c
>>> @@ -824,3 +824,30 @@ int mmc_can_ext_csd(struct mmc_card *card)
>>> {
>>> return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
>>> }
>>> +
>>> +int mmc_cmdq_switch(struct mmc_card *card, int enable)
>>> +{
>>> + int err;
>>> +
>>> + if (!card->ext_csd.cmdq_support)
>>> + return -EOPNOTSUPP;
>>> +
>>> + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
>>> + enable, card->ext_csd.generic_cmd6_time);
>>> + if (!err)
>>> + card->ext_csd.cmdq_en = enable;
>>> +
>>> + return err;
>>> +}
>>> +
>>> +int mmc_cmdq_enable(struct mmc_card *card)
>>> +{
>>> + return mmc_cmdq_switch(card, EXT_CSD_CMDQ_MODE_ENABLED);
>>
>> EXT_CSD_CMDQ_MODE_ENABLED is defined as BIT(0), but it is used here as
>> the value 1. EXT_CSD_CMDQ_MODE_ENABLED seems redundant anyway.
>> Do you think we can remove it and pass 1 directly?
>
> How about:
>
> static int mmc_cmdq_switch(struct mmc_card *card, bool enable)
> {
> u8 val = enable ? EXT_CSD_CMDQ_MODE_ENABLED : 0;
> int err;
>
> if (!card->ext_csd.cmdq_support)
> return -EOPNOTSUPP;
>
> err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
> val, card->ext_csd.generic_cmd6_time);
> if (!err)
> card->ext_csd.cmdq_en = enable;
>
> return err;
> }
>
> int mmc_cmdq_enable(struct mmc_card *card)
> {
> return mmc_cmdq_switch(card, true);
> }
> EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
>
> int mmc_cmdq_disable(struct mmc_card *card)
> {
> return mmc_cmdq_switch(card, false);
> }
> EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
>
Yes, this looks good.
>>
>>> +}
>>> +EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
>>> +
>>> +int mmc_cmdq_disable(struct mmc_card *card)
>>> +{
>>> + return mmc_cmdq_switch(card, 0);
>>> +}
>>> +EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
>>> diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
>>> index 95d69d498296..2d9c24f4e88e 100644
>>> --- a/include/linux/mmc/card.h
>>> +++ b/include/linux/mmc/card.h
>>> @@ -89,6 +89,7 @@ struct mmc_ext_csd {
>>> unsigned int boot_ro_lock; /* ro lock support */
>>> bool boot_ro_lockable;
>>> bool ffu_capable; /* Firmware upgrade support */
>>> + bool cmdq_en; /* Command Queue enabled */
>>> bool cmdq_support; /* Command Queue supported */
>>> unsigned int cmdq_depth; /* Command Queue depth */
>>> #define MMC_FIRMWARE_LEN 8
>>> diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
>>> index 0ce928b3ce90..d045b06fc7ea 100644
>>> --- a/include/linux/mmc/core.h
>>> +++ b/include/linux/mmc/core.h
>>> @@ -177,6 +177,8 @@ extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
>>> extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
>>> extern int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
>>> extern int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
>>> +extern int mmc_cmdq_enable(struct mmc_card *card);
>>> +extern int mmc_cmdq_disable(struct mmc_card *card);
>>>
>>> #define MMC_ERASE_ARG 0x00000000
>>> #define MMC_SECURE_ERASE_ARG 0x80000000
>>>
>>
>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions
2016-11-25 15:01 ` Linus Walleij
@ 2016-11-29 10:14 ` Adrian Hunter
0 siblings, 0 replies; 59+ messages in thread
From: Adrian Hunter @ 2016-11-29 10:14 UTC (permalink / raw)
To: Linus Walleij
Cc: Ulf Hansson, linux-mmc, Alex Lemberg, Mateusz Nowak,
Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
Zhangfei Gao, Dorfman Konstantin, David Griego, Sahitya Tummala,
Harjani Ritesh, Venu Byravarasu
On 25/11/16 17:01, Linus Walleij wrote:
> On Fri, Nov 25, 2016 at 11:07 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>
>> eMMC can have multiple internal partitions that are represented as separate
>> disks / queues. However, the card has only one command queue, which must be
>> empty when switching partitions. Consequently, the array of mmc requests
>> that are queued can be shared between partitions, saving memory.
>>
>> Keep a pointer to the mmc request queue on the card, and use that instead
>> of allocating a new one for each partition. Use a reference count to keep
>> track of when to free it.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>
> This is a good refactoring no matter how we proceed with command
> queuing. Some comments.
>
>> @@ -1480,6 +1480,9 @@ static int mmc_packed_init(struct mmc_queue *mq, struct mmc_card *card)
>> if (mq->qdepth != 2)
>> return -EINVAL;
>>
>> + if (mqrq_cur->packed)
>> + goto out;
>
> Well packed command is gone so this goes away.
>
>> +++ b/drivers/mmc/card/queue.c
>> @@ -200,10 +200,17 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(struct mmc_queue *mq,
>> struct mmc_queue_req *mqrq;
>> int i;
>>
>> + if (mq->card->mqrq) {
>> + mq->card->mqrq_ref_cnt += 1;
>> + return mq->card->mqrq;
>> + }
>> +
>> mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
>> if (mqrq) {
>> for (i = 0; i < mq->qdepth; i++)
>> mqrq[i].task_id = i;
>> + mq->card->mqrq = mqrq;
>> + mq->card->mqrq_ref_cnt = 1;
>> }
>
> OK
>
>> + if (mq->card->mqrq_ref_cnt > 1)
>> + return !!mq->mqrq[0].bounce_buf;
>
> Hm that seems inseparable from the other changes.
>
> Decrease of refcount seems correct.
>
>> + struct mmc_queue_req *mqrq; /* Shared queue structure */
>> + int mqrq_ref_cnt; /* Shared queue ref. count */
>
> I'm not smart enough to see if we're always increasing/decreasing
> this under a lock or otherwise exclusive context, or if it would be
> better to use an atomic type for counting, like kref does?
Queues are allocated and deallocated via mmc_blk_probe() and mmc_blk_remove(),
so no other synchronization is needed for the reference count. I have
updated the commit message and put a comment in the code to reflect that.
>
> Well maybe the whole thing could use kref I dunno.
>
> I guess it should be an unsigned int at least.
Ok
Thread overview: 59+ messages in thread
2016-11-25 10:06 [PATCH V7 00/25] mmc: mmc: Add Software Command Queuing Adrian Hunter
2016-11-25 10:06 ` [PATCH V7 01/25] mmc: queue: Fix queue thread wake-up Adrian Hunter
2016-11-25 14:37 ` Linus Walleij
2016-11-28 3:32 ` Ritesh Harjani
2016-11-25 10:06 ` [PATCH V7 02/25] mmc: queue: Factor out mmc_queue_alloc_bounce_bufs() Adrian Hunter
2016-11-25 14:38 ` Linus Walleij
2016-11-28 3:36 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 03/25] mmc: queue: Factor out mmc_queue_alloc_bounce_sgs() Adrian Hunter
2016-11-25 14:39 ` Linus Walleij
2016-11-28 3:48 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 04/25] mmc: queue: Factor out mmc_queue_alloc_sgs() Adrian Hunter
2016-11-25 14:41 ` Linus Walleij
2016-11-28 3:49 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 05/25] mmc: queue: Factor out mmc_queue_reqs_free_bufs() Adrian Hunter
2016-11-25 14:42 ` Linus Walleij
2016-11-28 3:50 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 06/25] mmc: queue: Introduce queue depth Adrian Hunter
2016-11-25 14:43 ` Linus Walleij
2016-11-25 17:20 ` Adrian Hunter
2016-11-28 4:19 ` Ritesh Harjani
2016-11-28 12:45 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 07/25] mmc: queue: Use queue depth to allocate and free Adrian Hunter
2016-11-28 4:21 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 08/25] mmc: queue: Allocate queue of size qdepth Adrian Hunter
2016-11-28 4:22 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 09/25] mmc: mmc: Add Command Queue definitions Adrian Hunter
2016-11-28 4:29 ` Ritesh Harjani
2016-11-28 13:08 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 10/25] mmc: mmc: Add functions to enable / disable the Command Queue Adrian Hunter
2016-11-28 4:36 ` Ritesh Harjani
2016-11-28 13:23 ` Adrian Hunter
2016-11-28 14:00 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 11/25] mmc: mmc_test: Disable Command Queue while mmc_test is used Adrian Hunter
2016-11-28 4:40 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 12/25] mmc: block: Disable Command Queue while RPMB " Adrian Hunter
2016-11-28 4:46 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 13/25] mmc: core: Do not prepare a new request twice Adrian Hunter
2016-11-28 4:48 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 14/25] mmc: core: Export mmc_retune_hold() and mmc_retune_release() Adrian Hunter
2016-11-28 4:49 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 15/25] mmc: block: Factor out mmc_blk_requeue() Adrian Hunter
2016-11-28 4:51 ` Ritesh Harjani
2016-11-25 10:07 ` [PATCH V7 16/25] mmc: block: Fix 4K native sector check Adrian Hunter
2016-11-25 14:51 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 17/25] mmc: block: Use local var for mqrq_cur Adrian Hunter
2016-11-25 14:52 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 18/25] mmc: block: Pass mqrq to mmc_blk_prep_packed_list() Adrian Hunter
2016-11-25 14:53 ` Linus Walleij
2016-11-25 10:07 ` [PATCH V7 19/25] mmc: block: Introduce queue semantics Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 20/25] mmc: queue: Share mmc request array between partitions Adrian Hunter
2016-11-25 15:01 ` Linus Walleij
2016-11-29 10:14 ` Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 21/25] mmc: queue: Add a function to control wake-up on new requests Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 22/25] mmc: block: Add Software Command Queuing Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 23/25] mmc: mmc: Enable " Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 24/25] mmc: sdhci-pci: Enable Software Command Queuing for some Intel controllers Adrian Hunter
2016-11-25 10:07 ` [PATCH V7 25/25] mmc: sdhci-acpi: " Adrian Hunter
2016-11-25 15:15 ` Linus Walleij
2016-11-28 13:55 ` Adrian Hunter