From: Ming Lei <ming.lei@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@fb.com>, Keith Busch <keith.busch@intel.com>,
	Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	Gopal Tiwari <gtiwari@redhat.com>,
	dmilburn@redhat.com
Subject: Re: [PATCH 10/15] nvme-pci: do not build a scatterlist to map metadata
Date: Wed, 28 Aug 2019 17:20:58 +0800
Message-ID: <20190828092057.GA15524@ming.t460p>
In-Reply-To: <20190321231037.25104-11-hch@lst.de>

On Thu, Mar 21, 2019 at 04:10:32PM -0700, Christoph Hellwig wrote:
> We always have exactly one segment, so we can simply call dma_map_bvec.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/nvme/host/pci.c | 23 ++++++++++-------------
>  1 file changed, 10 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index bc4ee869fe82..a7dad24e0406 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -221,7 +221,7 @@ struct nvme_iod {
>  	int npages;		/* In the PRP list. 0 means small pool in use */
>  	int nents;		/* Used in scatterlist */
>  	dma_addr_t first_dma;
> -	struct scatterlist meta_sg; /* metadata requires single contiguous buffer */
> +	dma_addr_t meta_dma;
>  	struct scatterlist *sg;
>  	struct scatterlist inline_sg[0];
>  };
> @@ -592,13 +592,16 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
>  	dma_addr_t dma_addr = iod->first_dma, next_dma_addr;
>  	int i;
>  
> +	if (blk_integrity_rq(req)) {
> +		dma_unmap_page(dev->dev, iod->meta_dma,
> +				rq_integrity_vec(req)->bv_len, dma_dir);
> +	}
> +
>  	if (iod->nents) {
>  		/* P2PDMA requests do not need to be unmapped */
>  		if (!is_pci_p2pdma_page(sg_page(iod->sg)))
>  			dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
>  
> -		if (blk_integrity_rq(req))
> -			dma_unmap_sg(dev->dev, &iod->meta_sg, 1, dma_dir);
>  	}
>  
>  	if (iod->npages == 0)
> @@ -861,17 +864,11 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
>  
>  	ret = BLK_STS_IOERR;
>  	if (blk_integrity_rq(req)) {
> -		if (blk_rq_count_integrity_sg(q, req->bio) != 1)
> -			goto out;
> -
> -		sg_init_table(&iod->meta_sg, 1);
> -		if (blk_rq_map_integrity_sg(q, req->bio, &iod->meta_sg) != 1)
> -			goto out;
> -
> -		if (!dma_map_sg(dev->dev, &iod->meta_sg, 1, dma_dir))
> +		iod->meta_dma = dma_map_bvec(dev->dev, rq_integrity_vec(req),
> +				dma_dir, 0);

Hi Christoph,

When a bio is big enough, the generated integrity data can cross more
than one page, even though the data is still in a single segment.
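
For example, assuming a 512-byte sector size with 8 bytes of T10 PI per
sector, a 512KB bio generates 8KB of integrity data, which spans two
pages. And if I remember patch 04/15 correctly, dma_map_bvec() is just
a thin wrapper around dma_map_page_attrs(), so it can only map the page
range described by a single bvec (a sketch from memory, not the
authoritative definition):

	#define dma_map_bvec(dev, bv, dir, attrs) \
		dma_map_page_attrs(dev, (bv)->bv_page, (bv)->bv_offset, \
				   (bv)->bv_len, (dir), (attrs))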

However, bio_integrity_prep() has not been converted to multi-page
bvecs, so each page of integrity data may consume one bvec. Is it then
possible for this patch to cause issues with NVMe's protection
information, since it assumes there is only one bvec for the integrity
data?
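
For the record, as I read patch 02/15, rq_integrity_vec() hands back
only the first integrity bvec, so anything described by a second bvec
would never be mapped or unmapped; roughly (again from memory):

	static inline struct bio_vec *rq_integrity_vec(struct request *rq)
	{
		if (WARN_ON_ONCE(queue_max_integrity_segments(rq->q) > 1))
			return NULL;
		return rq->bio->bi_integrity->bip_vec;
	}

And since bio_integrity_prep() adds the kmalloc'ed buffer to the bio
page by page via bio_integrity_add_page(), an 8KB integrity buffer
would occupy two bvecs even though it is physically contiguous.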

BTW, I have not seen any such report; this is just a theoretical concern.

thanks,
Ming


Thread overview: 46+ messages
2019-03-21 23:10 [RFC] optimize nvme single segment I/O Christoph Hellwig
2019-03-21 23:10 ` [PATCH 01/15] block: add a req_bvec helper Christoph Hellwig
2019-03-25  5:07   ` Chaitanya Kulkarni
2019-03-27 14:16     ` Christoph Hellwig
2019-03-21 23:10 ` [PATCH 02/15] block: add a rq_integrity_vec helper Christoph Hellwig
2019-03-25  5:10   ` Chaitanya Kulkarni
2019-03-27 14:19     ` Christoph Hellwig
2019-03-21 23:10 ` [PATCH 03/15] block: add a rq_dma_dir helper Christoph Hellwig
2019-03-22 13:06   ` Johannes Thumshirn
2019-03-27 14:20     ` Christoph Hellwig
2019-03-28 10:26       ` Johannes Thumshirn
2019-03-25  5:11   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 04/15] block: add dma_map_bvec helper Christoph Hellwig
2019-03-25  5:13   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 05/15] nvme-pci: remove the unused iod->length field Christoph Hellwig
2019-03-25  5:14   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 06/15] nvme-pci: remove nvme_init_iod Christoph Hellwig
2019-03-25  5:19   ` Chaitanya Kulkarni
2019-03-27 14:21     ` Christoph Hellwig
2019-03-21 23:10 ` [PATCH 07/15] nvme-pci: move the call to nvme_cleanup_cmd out of nvme_unmap_data Christoph Hellwig
2019-03-25  5:21   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 08/15] nvme-pci: merge nvme_free_iod into nvme_unmap_data Christoph Hellwig
2019-03-25  5:22   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 09/15] nvme-pci: only call nvme_unmap_data for requests transferring data Christoph Hellwig
2019-03-25  5:23   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 10/15] nvme-pci: do not build a scatterlist to map metadata Christoph Hellwig
2019-03-25  5:27   ` Chaitanya Kulkarni
2019-08-28  9:20   ` Ming Lei [this message]
2019-09-12  1:02     ` Ming Lei
2019-09-12  8:20       ` Christoph Hellwig
2019-03-21 23:10 ` [PATCH 11/15] nvme-pci: split metadata handling from nvme_map_data / nvme_unmap_data Christoph Hellwig
2019-03-25  5:29   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 12/15] nvme-pci: remove the inline scatterlist optimization Christoph Hellwig
2019-03-25  5:30   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 13/15] nvme-pci: optimize mapping of small single segment requests Christoph Hellwig
2019-03-25  5:36   ` Chaitanya Kulkarni
2019-03-21 23:10 ` [PATCH 14/15] nvme-pci: optimize mapping single segment requests using SGLs Christoph Hellwig
2019-03-25  5:39   ` Chaitanya Kulkarni
2019-04-30 14:17   ` Klaus Birkelund
2019-04-30 14:32     ` Christoph Hellwig
2019-03-21 23:10 ` [PATCH 15/15] nvme-pci: tidy up nvme_map_data Christoph Hellwig
2019-03-25  5:40   ` Chaitanya Kulkarni
2019-03-22 15:44 ` [RFC] optimize nvme single segment I/O Jens Axboe
2019-03-27 14:24   ` Christoph Hellwig
2019-03-22 17:37 ` Keith Busch
2019-03-22 18:55 ` Sagi Grimberg
