From: Christoph Hellwig <hch@lst.de>
To: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Russell King <linux@armlinux.org.uk>,
linux-mmc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] mmc: let the dma map ops handle bouncing
Date: Tue, 25 Jun 2019 11:20:41 +0200 [thread overview]
Message-ID: <20190625092042.19320-2-hch@lst.de> (raw)
In-Reply-To: <20190625092042.19320-1-hch@lst.de>
Just like we do for all other block drivers. Especially as the limit
imposed at the moment might be way too pessimistic for IOMMUs.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/mmc/core/queue.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 3557d5c51141..e327f80ebe70 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -350,18 +350,15 @@ static const struct blk_mq_ops mmc_mq_ops = {
static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
{
struct mmc_host *host = card->host;
- u64 limit = BLK_BOUNCE_HIGH;
unsigned block_size = 512;
- if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
- limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
-
blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
if (mmc_can_erase(card))
mmc_queue_setup_discard(mq->queue, card);
- blk_queue_bounce_limit(mq->queue, limit);
+ if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
+ blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
--
2.20.1
Thread overview: 28+ messages
2019-06-25  9:20 get rid of dma_max_pfn Christoph Hellwig
2019-06-25  9:20 ` [PATCH 1/2] mmc: let the dma map ops handle bouncing Christoph Hellwig [this message]
2019-07-08 11:55   ` Ulf Hansson
2019-06-25  9:20 ` [PATCH 2/2] dma-mapping: remove dma_max_pfn Christoph Hellwig
2019-06-25 11:45   ` Marc Gonzalez
2019-06-28  6:00     ` Christoph Hellwig
2019-07-08 11:55   ` Ulf Hansson
-- strict thread matches above, loose matches on Subject: below --
2019-04-11  7:09 get rid of dma_max_pfn Christoph Hellwig
2019-04-11  7:09 ` [PATCH 1/2] mmc: let the dma map ops handle bouncing Christoph Hellwig
2019-04-11  9:00   ` Ulf Hansson
2019-04-11 14:34     ` Christoph Hellwig