From: Beata Michalska <beata.michalska@linaro.org>
To: Klaus Birkelund <its@irrelevant.dk>
Cc: Kevin Wolf <kwolf@redhat.com>,
qemu-block@nongnu.org, qemu-devel@nongnu.org,
Max Reitz <mreitz@redhat.com>,
Paul Durrant <Paul.Durrant@citrix.com>,
Keith Busch <kbusch@kernel.org>
Subject: Re: [PATCH v2 15/20] nvme: add support for scatter gather lists
Date: Mon, 25 Nov 2019 14:10:37 +0000
Message-ID: <CADSWDzv3h+VcF9kVDRu2M=9E0Q51k9DRyxNhAxX0JwWNdKazMQ@mail.gmail.com>
In-Reply-To: <20191125062122.GA21341@apples.localdomain>
On Mon, 25 Nov 2019 at 06:21, Klaus Birkelund <its@irrelevant.dk> wrote:
>
> On Tue, Nov 12, 2019 at 03:25:18PM +0000, Beata Michalska wrote:
> > Hi Klaus,
> >
> > On Tue, 15 Oct 2019 at 11:57, Klaus Jensen <its@irrelevant.dk> wrote:
> > > +static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg,
> > > + NvmeSglDescriptor sgl, uint32_t len, NvmeRequest *req)
> > > +{
> > > + const int MAX_NSGLD = 256;
> > > +
> > > + NvmeSglDescriptor segment[MAX_NSGLD];
> > > + uint64_t nsgld;
> > > + uint16_t status;
> > > + bool sgl_in_cmb = false;
> > > + hwaddr addr = le64_to_cpu(sgl.addr);
> > > +
> > > + trace_nvme_map_sgl(req->cid, NVME_SGL_TYPE(sgl.type), req->nlb, len);
> > > +
> > > + pci_dma_sglist_init(qsg, &n->parent_obj, 1);
> > > +
> > > + /*
> > > + * If the entire transfer can be described with a single data block it can
> > > + * be mapped directly.
> > > + */
> > > + if (NVME_SGL_TYPE(sgl.type) == SGL_DESCR_TYPE_DATA_BLOCK) {
> > > + status = nvme_map_sgl_data(n, qsg, &sgl, 1, &len, req);
> > > + if (status) {
> > > + goto unmap;
> > > + }
> > > +
> > > + goto out;
> > > + }
> > > +
> > > + /*
> > > + * If the segment is located in the CMB, the submission queue of the
> > > + * request must also reside there.
> > > + */
> > > + if (nvme_addr_is_cmb(n, addr)) {
> > > + if (!nvme_addr_is_cmb(n, req->sq->dma_addr)) {
> > > + return NVME_INVALID_USE_OF_CMB | NVME_DNR;
> > > + }
> > > +
> > > + sgl_in_cmb = true;
> > > + }
> > > +
> > > + while (NVME_SGL_TYPE(sgl.type) == SGL_DESCR_TYPE_SEGMENT) {
> > > + bool addr_is_cmb;
> > > +
> > > + nsgld = le64_to_cpu(sgl.len) / sizeof(NvmeSglDescriptor);
> > > +
> > > + /* read the segment in chunks of 256 descriptors (4k) */
> > > + while (nsgld > MAX_NSGLD) {
> > > + nvme_addr_read(n, addr, segment, sizeof(segment));
> > Is there any chance this will go outside the CMB?
> >
>
> Yes, there certainly was a chance of that. This has been fixed in a
> general way for both nvme_map_sgl and nvme_map_sgl_data.
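> In essence, the fix validates the whole range before reading, not just
> the first byte. A minimal sketch of that check (the base/size constants
> here are hypothetical stand-ins, not the real controller state, which
> in QEMU comes from the CMB BAR configuration):
>
> ```c
> #include <assert.h>
> #include <stdbool.h>
> #include <stdint.h>
>
> /* Hypothetical CMB window for illustration only. */
> #define CMB_BASE 0x100000ULL
> #define CMB_SIZE 0x4000ULL
>
> /*
>  * Check that the entire [addr, addr + len) range lies inside the CMB,
>  * so that a multi-descriptor segment read cannot run past its end.
>  * Written to avoid overflow in addr + len.
>  */
> static bool addr_range_in_cmb(uint64_t addr, uint64_t len)
> {
>     return addr >= CMB_BASE && len <= CMB_SIZE &&
>            addr - CMB_BASE <= CMB_SIZE - len;
> }
> ```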
>
> > > +
> > > + status = nvme_map_sgl_data(n, qsg, segment, MAX_NSGLD, &len, req);
> > > + if (status) {
> > > + goto unmap;
> > > + }
> > > +
> > > + nsgld -= MAX_NSGLD;
> > > + addr += MAX_NSGLD * sizeof(NvmeSglDescriptor);
> > > + }
> > > +
> > > + nvme_addr_read(n, addr, segment, nsgld * sizeof(NvmeSglDescriptor));
> > > +
> > > + sgl = segment[nsgld - 1];
> > > + addr = le64_to_cpu(sgl.addr);
> > > +
> > > + /* an SGL is allowed to end with a Data Block in a regular Segment */
> > > + if (NVME_SGL_TYPE(sgl.type) == SGL_DESCR_TYPE_DATA_BLOCK) {
> > > + status = nvme_map_sgl_data(n, qsg, segment, nsgld, &len, req);
> > > + if (status) {
> > > + goto unmap;
> > > + }
> > > +
> > > + goto out;
> > > + }
> > > +
> > > + /* do not map last descriptor */
> > > + status = nvme_map_sgl_data(n, qsg, segment, nsgld - 1, &len, req);
> > > + if (status) {
> > > + goto unmap;
> > > + }
> > > +
> > > + /*
> > > + * If the next segment is in the CMB, make sure that the sgl was
> > > + * already located there.
> > > + */
> > > + addr_is_cmb = nvme_addr_is_cmb(n, addr);
> > > + if ((sgl_in_cmb && !addr_is_cmb) || (!sgl_in_cmb && addr_is_cmb)) {
> > > + status = NVME_INVALID_USE_OF_CMB | NVME_DNR;
> > > + goto unmap;
> > > + }
> > > + }
> > > +
> > > + /*
> > > + * If the segment did not end with a Data Block or a Segment descriptor, it
> > > + * must be a Last Segment descriptor.
> > > + */
> > > + if (NVME_SGL_TYPE(sgl.type) != SGL_DESCR_TYPE_LAST_SEGMENT) {
> > > + trace_nvme_err_invalid_sgl_descriptor(req->cid,
> > > + NVME_SGL_TYPE(sgl.type));
> > > + return NVME_SGL_DESCRIPTOR_TYPE_INVALID | NVME_DNR;
> > Shouldn't we handle the case here that requires calling unmap?
>
> Woops. Fixed.
>
> > > +static uint16_t nvme_dma_read_sgl(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
> > > + NvmeSglDescriptor sgl, NvmeCmd *cmd, NvmeRequest *req)
> > > +{
> > > + QEMUSGList qsg;
> > > + uint16_t err = NVME_SUCCESS;
> > > +
> > Very minor: Mixing convention: status vs error
> >
>
> Fixed by proxy in another refactor.
>
> > >
> > > +#define NVME_CMD_FLAGS_FUSE(flags) (flags & 0x3)
> > > +#define NVME_CMD_FLAGS_PSDT(flags) ((flags >> 6) & 0x3)
> > Minor: This one is slightly misleading - as per the naming and its
> > usage: the PSDT is a field name and as such does not imply using SGLs,
> > yet it is being used to verify whether a given command is actually
> > using SGLs.
> >
>
> Ah, is this because I do
>
> if (NVME_CMD_FLAGS_PSDT(cmd->flags)) {
>
> in the code? That is, it just checks for it not being zero? The value of
> the PRP or SGL for Data Transfer (PSDT) field *does* specify whether the
> command uses SGLs or not: 0b00 means PRPs, 0b01 means an SGL for the
> data transfer, and 0b10 means SGLs for both data and metadata. Would you
> prefer the condition was more explicit?
>
Yeah, it is just not obvious (at least to me) without referencing the spec
that a non-zero value implies SGL usage. I guess a comment would be helpful,
but that is not major.
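Something along these lines would make the intent self-documenting (a
sketch with hypothetical constant and helper names, not the names used
in the patch; the PSDT encodings are those quoted above):

```c
#include <assert.h>
#include <stdint.h>

/* PSDT field values (command dword 0, bits 7:6). */
enum {
    NVME_PSDT_PRP      = 0x0, /* 0b00: command uses PRPs */
    NVME_PSDT_SGL      = 0x1, /* 0b01: SGL for the data transfer */
    NVME_PSDT_SGL_META = 0x2, /* 0b10: SGLs for data and metadata */
};

static inline uint8_t nvme_cmd_psdt(uint8_t flags)
{
    return (flags >> 6) & 0x3;
}

/* Explicit form of the "command uses SGLs" check. */
static inline int nvme_cmd_uses_sgl(uint8_t flags)
{
    uint8_t psdt = nvme_cmd_psdt(flags);

    return psdt == NVME_PSDT_SGL || psdt == NVME_PSDT_SGL_META;
}
```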
Thanks,
BR
Beata
>
> Thanks!
> Klaus