From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Klaus Birkelund <klaus@birkelund.eu>
Cc: kwolf@redhat.com, qemu-block@nongnu.org,
	matt.fitzpatrick@oakgatetech.com, qemu-devel@nongnu.org,
	mreitz@redhat.com, keith.busch@intel.com
Subject: Re: [Qemu-devel] [PATCH 16/16] nvme: support multiple namespaces
Date: Mon, 4 Nov 2019 08:46:29 +0000	[thread overview]
Message-ID: <675ecf34-4874-7a10-998a-f85c4aeb9526@citrix.com> (raw)
In-Reply-To: <20190823081022.GA30440@apples.localdomain>

On 8/23/19 9:10 AM, Klaus Birkelund wrote:
> On Thu, Aug 22, 2019 at 02:18:05PM +0100, Ross Lagerwall wrote:
>> On 7/5/19 8:23 AM, Klaus Birkelund Jensen wrote:
>>
>> I tried this patch series by installing Windows with a single NVME
>> controller having two namespaces. QEMU crashed in get_feature /
>> NVME_VOLATILE_WRITE_CACHE because req->ns was NULL.
>>
> 
> Hi Ross,
> 
> Good catch!
> 
>> nvme_get_feature / nvme_set_feature look wrong to me since I can't see how
>> req->ns would have been set. Should they have similar code to nvme_io_cmd to
>> set req->ns from cmd->nsid?
> 
> Definitely. I will fix that for v2.
> 
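FWIW, I guess for v2 the feature handlers just need something along these
lines before touching req->ns (a rough, untested sketch; identifiers as used
in this series, with a local nsid declared next to dw10):

    case NVME_VOLATILE_WRITE_CACHE:
        /* resolve the namespace from the command, as nvme_io_cmd does */
        nsid = le32_to_cpu(cmd->nsid);
        if (unlikely(nsid == 0 || nsid > n->num_namespaces ||
                     !n->namespaces[nsid - 1])) {
            return NVME_INVALID_NSID | NVME_DNR;
        }
        req->ns = n->namespaces[nsid - 1];
        result = blk_enable_write_cache(req->ns->conf.blk);
        break;
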
>>
>> After working around this issue everything else seemed to be working well.
>> Thanks for your work on this patch series.
>>
> 
> And thank you for trying out my patches!
> 

One more thing... it doesn't handle inactive namespaces properly, so if you
have two namespaces with e.g. nsid=1 and nsid=3, QEMU ends up crashing in
certain situations (the missing nsid=2 slot is a NULL pointer that gets
dereferenced, e.g. on Identify Namespace or controller teardown). The patch
below adds support for inactive namespaces.
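
For reference, something along these lines should reproduce it (device and
property names as introduced by this series, from memory, so treat it as
illustrative):

    -drive file=ns1.img,if=none,id=nvm1 \
    -drive file=ns3.img,if=none,id=nvm3 \
    -device nvme,serial=deadbeef \
    -device nvme-ns,drive=nvm1,nsid=1 \
    -device nvme-ns,drive=nvm3,nsid=3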

Still hoping to see a v2 some day :-)

Thanks,
Ross

---------------------- 8-< ----------------------
nvme-ns: Allow inactive namespaces

The controller advertises the maximum valid namespace ID, but not every slot
up to that maximum necessarily has a namespace attached; the spec defines
these as inactive namespaces. Implement support for inactive namespaces
instead of crashing.

Changes are needed in a few places:
* When identify_ns is used with an inactive namespace, the controller should
  return all zeroes.
* Only active namespaces should be returned by identify_ns_list.
* When the controller is unplugged, only clean up active namespaces.
* Keep track of and advertise the maximum valid namespace number rather than
  the number of active namespaces.
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 1b8a09853d..29ea5c2023 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1302,6 +1302,7 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
 static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
 {
     NvmeNamespace *ns;
+    NvmeIdNs *id_ns, invalid_ns_id = {0};
     uint32_t nsid = le32_to_cpu(cmd->nsid);
 
     trace_nvme_identify_ns(nsid);
@@ -1312,9 +1313,13 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
     }
 
     ns = n->namespaces[nsid - 1];
+    if (ns) {
+        id_ns = &ns->id_ns;
+    } else {
+        id_ns = &invalid_ns_id;
+    }
 
-    return nvme_dma_read(n, (uint8_t *) &ns->id_ns, sizeof(ns->id_ns), cmd,
-        req);
+    return nvme_dma_read(n, (uint8_t *) id_ns, sizeof(*id_ns), cmd, req);
 }
 
 static uint16_t nvme_identify_ns_list(NvmeCtrl *n, NvmeCmd *cmd,
@@ -1330,7 +1335,7 @@ static uint16_t nvme_identify_ns_list(NvmeCtrl *n, NvmeCmd *cmd,
 
     list = g_malloc0(data_len);
     for (i = 0; i < n->num_namespaces; i++) {
-        if (i < min_nsid) {
+        if (i < min_nsid || !n->namespaces[i]) {
             continue;
         }
         list[j++] = cpu_to_le32(i + 1);
@@ -1861,7 +1866,9 @@ static void nvme_clear_ctrl(NvmeCtrl *n)
     int i;
 
     for (i = 0; i < n->num_namespaces; i++) {
-        blk_drain(n->namespaces[i]->conf.blk);
+        if (n->namespaces[i]) {
+            blk_drain(n->namespaces[i]->conf.blk);
+        }
     }
 
     for (i = 0; i < n->params.num_queues; i++) {
@@ -1886,7 +1893,9 @@ static void nvme_clear_ctrl(NvmeCtrl *n)
     }
 
     for (i = 0; i < n->num_namespaces; i++) {
-        blk_flush(n->namespaces[i]->conf.blk);
+        if (n->namespaces[i]) {
+            blk_flush(n->namespaces[i]->conf.blk);
+        }
     }
 
     n->bar.cc = 0;
@@ -2464,8 +2473,9 @@ int nvme_register_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
     trace_nvme_register_namespace(nsid);
 
     n->namespaces[nsid - 1] = ns;
-    n->num_namespaces++;
-    n->id_ctrl.nn++;
+    if (nsid > n->num_namespaces)
+        n->num_namespaces = nsid;
+    n->id_ctrl.nn = n->num_namespaces;
 
     return 0;
 }
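
With this applied, a quick guest-side check with nvme-cli should behave as
expected; something like (commands from memory, adjust the device node):

    # only the active NSIDs (1 and 3 in the example above) are listed
    nvme list-ns /dev/nvme0
    # identifying an inactive namespace returns an all-zero structure
    # instead of crashing QEMU
    nvme id-ns /dev/nvme0 -n 2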

