From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Ming Lei <ming.lei@redhat.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Dave Chinner <dchinner@redhat.com>,
Linux FS Devel <linux-fsdevel@vger.kernel.org>,
"Darrick J . Wong" <darrick.wong@oracle.com>,
xfs@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
Bart Van Assche <bvanassche@acm.org>,
Matthew Wilcox <willy@infradead.org>
Subject: [PATCH 3/5] block: make dma_alignment as stacked limit
Date: Thu, 18 Oct 2018 21:18:15 +0800 [thread overview]
Message-ID: <20181018131817.11813-4-ming.lei@redhat.com> (raw)
In-Reply-To: <20181018131817.11813-1-ming.lei@redhat.com>
This patch converts .dma_alignment into a stacked limit, so that a stacking
driver inherits the DMA alignment of its underlying queues and can allocate
IO buffers aligned to the queue's DMA alignment.
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: xfs@vger.kernel.org
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/blk-settings.c | 89 +++++++++++++++++++++++++++++-----------------------
1 file changed, 50 insertions(+), 39 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index cf9cd241dc16..aef4510a99b6 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -525,6 +525,54 @@ void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
EXPORT_SYMBOL(blk_queue_stack_limits);
/**
+ * blk_queue_dma_alignment - set dma length and memory alignment
+ * @q: the request queue for the device
+ * @mask: alignment mask
+ *
+ * description:
+ * set required memory and length alignment for direct dma transactions.
+ * this is used when building direct io requests for the queue.
+ *
+ **/
+void blk_queue_dma_alignment(struct request_queue *q, int mask)
+{
+ q->limits.dma_alignment = mask;
+}
+EXPORT_SYMBOL(blk_queue_dma_alignment);
+
+static int __blk_queue_update_dma_alignment(struct queue_limits *t, int mask)
+{
+ BUG_ON(mask >= PAGE_SIZE);
+
+ if (mask > t->dma_alignment)
+ return mask;
+ else
+ return t->dma_alignment;
+}
+
+/**
+ * blk_queue_update_dma_alignment - update dma length and memory alignment
+ * @q: the request queue for the device
+ * @mask: alignment mask
+ *
+ * description:
+ * update required memory and length alignment for direct dma transactions.
+ * If the requested alignment is larger than the current alignment, then
+ * the current queue alignment is updated to the new value, otherwise it
+ * is left alone. The design of this is to allow multiple objects
+ * (driver, device, transport etc) to set their respective
+ * alignments without having them interfere.
+ *
+ **/
+void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
+{
+ q->limits.dma_alignment =
+ __blk_queue_update_dma_alignment(&q->limits, mask);
+}
+EXPORT_SYMBOL(blk_queue_update_dma_alignment);
+
+
+/**
* blk_stack_limits - adjust queue_limits for stacked devices
* @t: the stacking driver limits (top device)
* @b: the underlying queue limits (bottom, component device)
@@ -563,6 +611,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
b->seg_boundary_mask);
t->virt_boundary_mask = min_not_zero(t->virt_boundary_mask,
b->virt_boundary_mask);
+ t->dma_alignment = __blk_queue_update_dma_alignment(t,
+ b->dma_alignment);
t->max_segments = min_not_zero(t->max_segments, b->max_segments);
t->max_discard_segments = min_not_zero(t->max_discard_segments,
@@ -818,45 +868,6 @@ void blk_queue_virt_boundary(struct request_queue *q, unsigned long mask)
}
EXPORT_SYMBOL(blk_queue_virt_boundary);
-/**
- * blk_queue_dma_alignment - set dma length and memory alignment
- * @q: the request queue for the device
- * @mask: alignment mask
- *
- * description:
- * set required memory and length alignment for direct dma transactions.
- * this is used when building direct io requests for the queue.
- *
- **/
-void blk_queue_dma_alignment(struct request_queue *q, int mask)
-{
- q->limits.dma_alignment = mask;
-}
-EXPORT_SYMBOL(blk_queue_dma_alignment);
-
-/**
- * blk_queue_update_dma_alignment - update dma length and memory alignment
- * @q: the request queue for the device
- * @mask: alignment mask
- *
- * description:
- * update required memory and length alignment for direct dma transactions.
- * If the requested alignment is larger than the current alignment, then
- * the current queue alignment is updated to the new value, otherwise it
- * is left alone. The design of this is to allow multiple objects
- * (driver, device, transport etc) to set their respective
- * alignments without having them interfere.
- *
- **/
-void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
-{
- BUG_ON(mask > PAGE_SIZE);
-
- if (mask > q->limits.dma_alignment)
- q->limits.dma_alignment = mask;
-}
-EXPORT_SYMBOL(blk_queue_update_dma_alignment);
-
void blk_queue_flush_queueable(struct request_queue *q, bool queueable)
{
if (queueable)
--
2.9.5
Thread overview: 30+ messages
2018-10-18 13:18 [PATCH 0/5] block: introduce helpers for allocating io buffer from slab Ming Lei
2018-10-18 13:18 ` [PATCH 1/5] block: warn on un-aligned DMA IO buffer Ming Lei
2018-10-18 14:27 ` Jens Axboe
2018-10-18 14:43 ` Christoph Hellwig
2018-10-18 14:46 ` Jens Axboe
2018-10-19 1:28 ` Ming Lei
2018-10-19 1:33 ` Jens Axboe
2018-10-19 1:39 ` Ming Lei
2018-10-19 1:52 ` Jens Axboe
2018-10-19 2:06 ` Ming Lei
2018-10-19 2:10 ` Jens Axboe
2018-10-18 14:28 ` Christoph Hellwig
2018-10-18 13:18 ` [PATCH 2/5] block: move .dma_alignment into q->limits Ming Lei
2018-10-18 14:29 ` Christoph Hellwig
2018-10-18 20:36 ` Bart Van Assche
2018-10-18 13:18 ` Ming Lei [this message]
2018-10-18 14:31 ` [PATCH 3/5] block: make dma_alignment as stacked limit Christoph Hellwig
2018-10-18 13:18 ` [PATCH 4/5] block: introduce helpers for allocating IO buffers from slab Ming Lei
2018-10-18 14:42 ` Christoph Hellwig
2018-10-18 15:11 ` Matthew Wilcox
2018-10-18 15:22 ` Christoph Hellwig
2018-10-19 2:53 ` Ming Lei
2018-10-19 4:06 ` Jens Axboe
2018-10-19 5:43 ` Dave Chinner
2018-10-18 13:18 ` [PATCH 5/5] xfs: use block layer helpers to allocate io buffer " Ming Lei
2018-10-18 14:03 ` [PATCH 0/5] block: introduce helpers for allocating " Matthew Wilcox
2018-10-18 14:05 ` Christoph Hellwig
2018-10-18 15:06 ` Matthew Wilcox
2018-10-18 15:21 ` Christoph Hellwig
2018-10-18 15:50 ` Bart Van Assche