From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@fb.com>,
Keith Busch <keith.busch@intel.com>,
Sagi Grimberg <sagi@grimberg.me>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: Re: [PATCH 12/15] nvme-pci: remove the inline scatterlist optimization
Date: Mon, 25 Mar 2019 05:30:38 +0000 [thread overview]
Message-ID: <SN6PR04MB4527EDD559B724430E1CA017865E0@SN6PR04MB4527.namprd04.prod.outlook.com> (raw)
In-Reply-To: <20190321231037.25104-13-hch@lst.de>
On 3/21/19 4:12 PM, Christoph Hellwig wrote:
> We'll have a better way to optimize for small I/O that doesn't
> require it soon, so remove the existing inline_sg case to make that
> optimization easier to implement.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvme/host/pci.c | 38 ++++++--------------------------------
> 1 file changed, 6 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index cf29d079ad5b..c6047935e825 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -223,7 +223,6 @@ struct nvme_iod {
> dma_addr_t first_dma;
> dma_addr_t meta_dma;
> struct scatterlist *sg;
> - struct scatterlist inline_sg[0];
> };
>
> /*
> @@ -370,12 +369,6 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
> return true;
> }
>
> -/*
> - * Max size of iod being embedded in the request payload
> - */
> -#define NVME_INT_PAGES 2
> -#define NVME_INT_BYTES(dev) (NVME_INT_PAGES * (dev)->ctrl.page_size)
> -
> /*
> * Will slightly overestimate the number of pages needed. This is OK
> * as it only leads to a small amount of wasted memory for the lifetime of
> @@ -410,15 +403,6 @@ static unsigned int nvme_pci_iod_alloc_size(struct nvme_dev *dev,
> return alloc_size + sizeof(struct scatterlist) * nseg;
> }
>
> -static unsigned int nvme_pci_cmd_size(struct nvme_dev *dev, bool use_sgl)
> -{
> - unsigned int alloc_size = nvme_pci_iod_alloc_size(dev,
> - NVME_INT_BYTES(dev), NVME_INT_PAGES,
> - use_sgl);
> -
> - return sizeof(struct nvme_iod) + alloc_size;
> -}
> -
> static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
> unsigned int hctx_idx)
> {
> @@ -621,8 +605,7 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
> dma_addr = next_dma_addr;
> }
>
> - if (iod->sg != iod->inline_sg)
> - mempool_free(iod->sg, dev->iod_mempool);
> + mempool_free(iod->sg, dev->iod_mempool);
> }
>
> static void nvme_print_sgl(struct scatterlist *sgl, int nents)
> @@ -822,14 +805,9 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
> blk_status_t ret = BLK_STS_IOERR;
> int nr_mapped;
>
> - if (blk_rq_payload_bytes(req) > NVME_INT_BYTES(dev) ||
> - blk_rq_nr_phys_segments(req) > NVME_INT_PAGES) {
> - iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
> - if (!iod->sg)
> - return BLK_STS_RESOURCE;
> - } else {
> - iod->sg = iod->inline_sg;
> - }
> + iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
> + if (!iod->sg)
> + return BLK_STS_RESOURCE;
>
> iod->use_sgl = nvme_pci_use_sgls(dev, req);
>
> @@ -1619,7 +1597,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
> dev->admin_tagset.queue_depth = NVME_AQ_MQ_TAG_DEPTH;
> dev->admin_tagset.timeout = ADMIN_TIMEOUT;
> dev->admin_tagset.numa_node = dev_to_node(dev->dev);
> - dev->admin_tagset.cmd_size = nvme_pci_cmd_size(dev, false);
> + dev->admin_tagset.cmd_size = sizeof(struct nvme_iod);
> dev->admin_tagset.flags = BLK_MQ_F_NO_SCHED;
> dev->admin_tagset.driver_data = dev;
>
> @@ -2266,11 +2244,7 @@ static int nvme_dev_add(struct nvme_dev *dev)
> dev->tagset.numa_node = dev_to_node(dev->dev);
> dev->tagset.queue_depth =
> min_t(int, dev->q_depth, BLK_MQ_MAX_DEPTH) - 1;
> - dev->tagset.cmd_size = nvme_pci_cmd_size(dev, false);
> - if ((dev->ctrl.sgls & ((1 << 0) | (1 << 1))) && sgl_threshold) {
> - dev->tagset.cmd_size = max(dev->tagset.cmd_size,
> - nvme_pci_cmd_size(dev, true));
> - }
> + dev->tagset.cmd_size = sizeof(struct nvme_iod);
> dev->tagset.flags = BLK_MQ_F_SHOULD_MERGE;
> dev->tagset.driver_data = dev;
>
>
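For anyone skimming the diff: the removed code used the flexible-array-member trick, where a small scatterlist is embedded at the tail of the per-request iod so that small I/Os avoid a mempool allocation entirely. The sketch below is a minimal userspace illustration of that pattern (all struct and function names are hypothetical stand-ins, not the real kernel types); the patch deletes this two-path scheme in favor of always allocating from the mempool:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct scatterlist; illustration only. */
struct sg_stub { void *addr; unsigned int len; };

#define INLINE_SEGS 2	/* plays the role of NVME_INT_PAGES */

struct iod_stub {
	int nseg;
	struct sg_stub *sg;
	struct sg_stub inline_sg[];	/* flexible array member, as inline_sg[0] above */
};

/*
 * Old scheme: requests with few segments point iod->sg at the embedded
 * array (no allocation); larger requests fall back to the heap, standing
 * in for mempool_alloc().
 */
static struct sg_stub *iod_get_sg(struct iod_stub *iod, int nseg)
{
	iod->nseg = nseg;
	if (nseg <= INLINE_SEGS)
		iod->sg = iod->inline_sg;
	else
		iod->sg = malloc(nseg * sizeof(*iod->sg));
	return iod->sg;
}

/* Mirrors the "if (iod->sg != iod->inline_sg)" check nvme_unmap_data loses. */
static void iod_put_sg(struct iod_stub *iod)
{
	if (iod->sg != iod->inline_sg)
		free(iod->sg);
}
```

The cost of dropping this is one mempool allocation per request even for small I/O, which the later patches in this series offset with a cheaper single-segment fast path.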
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>