* [RFC PATCH 0/3] support command retry
@ 2021-02-10 19:52 Minwoo Im
  2021-02-10 19:52 ` [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place Minwoo Im
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Minwoo Im @ 2021-02-10 19:52 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: Kevin Wolf, Dr . David Alan Gilbert, Max Reitz, Klaus Jensen,
	Minwoo Im, Keith Busch

Hello,

This RFC series adds support for the command retry feature to the NVMe
device model.  The motivation is that, during kernel development and
testing, the retry path has not been exercisable with the QEMU NVMe
device.  If the retry behavior can be controlled from the device side,
kernel developers can test it much more easily.

We have been setting NVME_DNR in the CQ entry status field for all
error cases.  This series adds control over command retry via the newly
added 'cmd-retry-delay' parameter.  If it is set to a positive value,
Command Retry Delay Time 1 (CRDT1) in the Identify Controller data
structure is reported in units of 100 ms.  The host can then issue Set
Features with Host Behavior Support (FID 16h) to enable the Advanced
Command Retry Enable (ACRE) feature and retry commands after the
advertised delay.  If 'cmd-retry-delay' is 0, failed commands can be
retried immediately, without delay.
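
For reference, here is a minimal sketch of the device-side mapping (the
field, parameter and constant names follow the later patches in this
series):

  /* Report CRDT1 in 100 ms units when cmd-retry-delay is positive. */
  if (n->params.cmd_retry_delay > 0) {
      id->crdt1 = cpu_to_le16(DIV_ROUND_UP(n->params.cmd_retry_delay, 100));
  }

  /* On a retryable error with ACRE enabled, set CRD instead of DNR. */
  if (n->features.acre && nvme_should_retry(req)) {
      if (n->params.cmd_retry_delay > 0) {
          req->status |= NVME_CRD_CRDT1;    /* host delays for CRDT1 */
      }
  } else {
      req->status |= NVME_DNR;              /* do not retry */
  }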

For now, this series only considers the Command Interrupted status
code, which is the status most relevant to the ACRE feature.  The
nvme_should_retry() helper decides whether a failed command should be
retried.

However, the device model currently has no code path that returns the
Command Interrupted status code on its own.  Patch [3/3] therefore adds
an 'nvme_inject_state' HMP command that lets users inject a pre-defined
state into the controller device via the QEMU monitor.

Usage:

  # Configure the nvme0 device with a 1 second (1000 ms) command retry delay
  -device nvme,id=nvme0,cmd-retry-delay=1000,...

  (qemu) nvme_inject_state nvme0 cmd-interrupted
  -device nvme,id=nvme0: state cmd-interrupted injected
  (qemu)

  # From now on, the controller will complete every command with the
  # Command Interrupted status code, and the host will retry based on
  # the advertised delay.

Thanks,

Minwoo Im (3):
  hw/block/nvme: set NVME_DNR in a single place
  hw/block/nvme: support command retry delay
  hw/block/nvme: add nvme_inject_state HMP command

 hmp-commands.hx       |  13 ++
 hw/block/nvme.c       | 304 +++++++++++++++++++++++++++++-------------
 hw/block/nvme.h       |  10 ++
 include/block/nvme.h  |  13 +-
 include/monitor/hmp.h |   1 +
 5 files changed, 244 insertions(+), 97 deletions(-)

-- 
2.17.1



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place
  2021-02-10 19:52 [RFC PATCH 0/3] support command retry Minwoo Im
@ 2021-02-10 19:52 ` Minwoo Im
  2021-02-10 20:19   ` Klaus Jensen
  2021-02-10 19:52 ` [RFC PATCH 2/3] hw/block/nvme: support command retry delay Minwoo Im
  2021-02-10 19:52 ` [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command Minwoo Im
  2 siblings, 1 reply; 16+ messages in thread
From: Minwoo Im @ 2021-02-10 19:52 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: Kevin Wolf, Dr . David Alan Gilbert, Max Reitz, Klaus Jensen,
	Minwoo Im, Keith Busch

Set NVME_DNR in the CQ entry status field in a single place, right
before the request is enqueued for completion in
nvme_enqueue_req_completion().  We have been setting NVME_DNR at every
error return site.  This is a preparatory patch for the command retry
feature.

Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
 hw/block/nvme.c | 192 ++++++++++++++++++++++++------------------------
 1 file changed, 97 insertions(+), 95 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 93345bf3c1fc..816e0e8e5205 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -270,12 +270,12 @@ static int nvme_aor_check(NvmeNamespace *ns, uint32_t act, uint32_t opn)
     if (ns->params.max_active_zones != 0 &&
         ns->nr_active_zones + act > ns->params.max_active_zones) {
         trace_pci_nvme_err_insuff_active_res(ns->params.max_active_zones);
-        return NVME_ZONE_TOO_MANY_ACTIVE | NVME_DNR;
+        return NVME_ZONE_TOO_MANY_ACTIVE;
     }
     if (ns->params.max_open_zones != 0 &&
         ns->nr_open_zones + opn > ns->params.max_open_zones) {
         trace_pci_nvme_err_insuff_open_res(ns->params.max_open_zones);
-        return NVME_ZONE_TOO_MANY_OPEN | NVME_DNR;
+        return NVME_ZONE_TOO_MANY_OPEN;
     }
 
     return NVME_SUCCESS;
@@ -492,7 +492,7 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
 
     if (cmb || pmr) {
         if (qsg && qsg->sg) {
-            return NVME_INVALID_USE_OF_CMB | NVME_DNR;
+            return NVME_INVALID_USE_OF_CMB;
         }
 
         assert(iov);
@@ -509,7 +509,7 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
     }
 
     if (iov && iov->iov) {
-        return NVME_INVALID_USE_OF_CMB | NVME_DNR;
+        return NVME_INVALID_USE_OF_CMB;
     }
 
     assert(qsg);
@@ -568,7 +568,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2,
                 if (i == n->max_prp_ents - 1 && len > n->page_size) {
                     if (unlikely(prp_ent & (n->page_size - 1))) {
                         trace_pci_nvme_err_invalid_prplist_ent(prp_ent);
-                        return NVME_INVALID_PRP_OFFSET | NVME_DNR;
+                        return NVME_INVALID_PRP_OFFSET;
                     }
 
                     i = 0;
@@ -585,7 +585,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2,
 
                 if (unlikely(prp_ent & (n->page_size - 1))) {
                     trace_pci_nvme_err_invalid_prplist_ent(prp_ent);
-                    return NVME_INVALID_PRP_OFFSET | NVME_DNR;
+                    return NVME_INVALID_PRP_OFFSET;
                 }
 
                 trans_len = MIN(len, n->page_size);
@@ -600,7 +600,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2,
         } else {
             if (unlikely(prp2 & (n->page_size - 1))) {
                 trace_pci_nvme_err_invalid_prp2_align(prp2);
-                return NVME_INVALID_PRP_OFFSET | NVME_DNR;
+                return NVME_INVALID_PRP_OFFSET;
             }
             status = nvme_map_addr(n, qsg, iov, prp2, len);
             if (status) {
@@ -637,9 +637,9 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
             break;
         case NVME_SGL_DESCR_TYPE_SEGMENT:
         case NVME_SGL_DESCR_TYPE_LAST_SEGMENT:
-            return NVME_INVALID_NUM_SGL_DESCRS | NVME_DNR;
+            return NVME_INVALID_NUM_SGL_DESCRS;
         default:
-            return NVME_SGL_DESCR_TYPE_INVALID | NVME_DNR;
+            return NVME_SGL_DESCR_TYPE_INVALID;
         }
 
         dlen = le32_to_cpu(segment[i].len);
@@ -660,7 +660,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
             }
 
             trace_pci_nvme_err_invalid_sgl_excess_length(nvme_cid(req));
-            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+            return NVME_DATA_SGL_LEN_INVALID;
         }
 
         trans_len = MIN(*len, dlen);
@@ -672,7 +672,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
         addr = le64_to_cpu(segment[i].addr);
 
         if (UINT64_MAX - addr < dlen) {
-            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+            return NVME_DATA_SGL_LEN_INVALID;
         }
 
         status = nvme_map_addr(n, qsg, iov, addr, trans_len);
@@ -731,7 +731,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
         case NVME_SGL_DESCR_TYPE_LAST_SEGMENT:
             break;
         default:
-            return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
+            return NVME_INVALID_SGL_SEG_DESCR;
         }
 
         seg_len = le32_to_cpu(sgld->len);
@@ -739,11 +739,11 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
         /* check the length of the (Last) Segment descriptor */
         if ((!seg_len || seg_len & 0xf) &&
             (NVME_SGL_TYPE(sgld->type) != NVME_SGL_DESCR_TYPE_BIT_BUCKET)) {
-            return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
+            return NVME_INVALID_SGL_SEG_DESCR;
         }
 
         if (UINT64_MAX - addr < seg_len) {
-            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+            return NVME_DATA_SGL_LEN_INVALID;
         }
 
         nsgld = seg_len / sizeof(NvmeSglDescriptor);
@@ -798,7 +798,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
          * current segment must not be a Last Segment.
          */
         if (NVME_SGL_TYPE(sgld->type) == NVME_SGL_DESCR_TYPE_LAST_SEGMENT) {
-            status = NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
+            status = NVME_INVALID_SGL_SEG_DESCR;
             goto unmap;
         }
 
@@ -818,7 +818,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
 out:
     /* if there is any residual left in len, the SGL was too short */
     if (len) {
-        status = NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+        status = NVME_DATA_SGL_LEN_INVALID;
         goto unmap;
     }
 
@@ -850,7 +850,7 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, size_t len, NvmeRequest *req)
     case NVME_PSDT_SGL_MPTR_SGL:
         /* SGLs shall not be used for Admin commands in NVMe over PCIe */
         if (!req->sq->sqid) {
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
 
         return nvme_map_sgl(n, &req->qsg, &req->iov, req->cmd.dptr.sgl, len,
@@ -884,7 +884,7 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
 
         if (unlikely(residual)) {
             trace_pci_nvme_err_invalid_dma();
-            status = NVME_INVALID_FIELD | NVME_DNR;
+            status = NVME_INVALID_FIELD;
         }
     } else {
         size_t bytes;
@@ -897,7 +897,7 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
 
         if (unlikely(bytes != len)) {
             trace_pci_nvme_err_invalid_dma();
-            status = NVME_INVALID_FIELD | NVME_DNR;
+            status = NVME_INVALID_FIELD;
         }
     }
 
@@ -945,6 +945,11 @@ static void nvme_post_cqes(void *opaque)
 static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
 {
     assert(cq->cqid == req->sq->cqid);
+
+    if (req->status != NVME_SUCCESS) {
+        req->status |= NVME_DNR;
+    }
+
     trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid,
                                           req->status);
 
@@ -1069,7 +1074,7 @@ static inline uint16_t nvme_check_mdts(NvmeCtrl *n, size_t len)
     uint8_t mdts = n->params.mdts;
 
     if (mdts && len > n->page_size << mdts) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     return NVME_SUCCESS;
@@ -1081,7 +1086,7 @@ static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba,
     uint64_t nsze = le64_to_cpu(ns->id_ns.nsze);
 
     if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) {
-        return NVME_LBA_RANGE | NVME_DNR;
+        return NVME_LBA_RANGE;
     }
 
     return NVME_SUCCESS;
@@ -1791,11 +1796,11 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
 
     if (!(n->id_ctrl.ocfs & (1 << format))) {
         trace_pci_nvme_err_copy_invalid_format(format);
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (nr > ns->id_ns.msrc + 1) {
-        return NVME_CMD_SIZE_LIMIT | NVME_DNR;
+        return NVME_CMD_SIZE_LIMIT;
     }
 
     range = g_new(NvmeCopySourceRange, nr);
@@ -1811,7 +1816,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
         uint32_t _nlb = le16_to_cpu(range[i].nlb) + 1;
 
         if (_nlb > le16_to_cpu(ns->id_ns.mssrl)) {
-            return NVME_CMD_SIZE_LIMIT | NVME_DNR;
+            return NVME_CMD_SIZE_LIMIT;
         }
 
         status = nvme_check_bounds(ns, slba, _nlb);
@@ -1838,7 +1843,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
     }
 
     if (nlb > le32_to_cpu(ns->id_ns.mcl)) {
-        return NVME_CMD_SIZE_LIMIT | NVME_DNR;
+        return NVME_CMD_SIZE_LIMIT;
     }
 
     bounce = bouncep = g_malloc(nvme_l2b(ns, nlb));
@@ -2006,7 +2011,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
 
 invalid:
     block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_READ);
-    return status | NVME_DNR;
+    return status;
 }
 
 static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
@@ -2100,7 +2105,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
 
 invalid:
     block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_WRITE);
-    return status | NVME_DNR;
+    return status;
 }
 
 static inline uint16_t nvme_write(NvmeCtrl *n, NvmeRequest *req)
@@ -2126,14 +2131,14 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c,
 
     if (!ns->params.zoned) {
         trace_pci_nvme_err_invalid_opc(c->opcode);
-        return NVME_INVALID_OPCODE | NVME_DNR;
+        return NVME_INVALID_OPCODE;
     }
 
     *slba = ((uint64_t)dw11) << 32 | dw10;
     if (unlikely(*slba >= ns->id_ns.nsze)) {
         trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze);
         *slba = 0;
-        return NVME_LBA_RANGE | NVME_DNR;
+        return NVME_LBA_RANGE;
     }
 
     *zone_idx = nvme_zone_idx(ns, *slba);
@@ -2364,7 +2369,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req)
     zone = &ns->zone_array[zone_idx];
     if (slba != zone->d.zslba) {
         trace_pci_nvme_err_unaligned_zone_cmd(action, slba, zone->d.zslba);
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     switch (action) {
@@ -2421,7 +2426,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req)
     case NVME_ZONE_ACTION_SET_ZD_EXT:
         trace_pci_nvme_set_descriptor_extension(slba, zone_idx);
         if (all || !ns->params.zd_extension_size) {
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
         zd_ext = nvme_get_zd_extension(ns, zone_idx);
         status = nvme_dma(n, zd_ext, ns->params.zd_extension_size,
@@ -2447,9 +2452,6 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req)
         trace_pci_nvme_err_invalid_zone_state_transition(action, slba,
                                                          zone->d.za);
     }
-    if (status) {
-        status |= NVME_DNR;
-    }
 
     return status;
 }
@@ -2506,19 +2508,19 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req)
 
     zra = dw13 & 0xff;
     if (zra != NVME_ZONE_REPORT && zra != NVME_ZONE_REPORT_EXTENDED) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
     if (zra == NVME_ZONE_REPORT_EXTENDED && !ns->params.zd_extension_size) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     zrasf = (dw13 >> 8) & 0xff;
     if (zrasf > NVME_ZONE_REPORT_OFFLINE) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (data_size < sizeof(NvmeZoneReportHeader)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     status = nvme_check_mdts(n, data_size);
@@ -2596,17 +2598,17 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req)
                           req->cmd.opcode, nvme_io_opc_str(req->cmd.opcode));
 
     if (!nvme_nsid_valid(n, nsid)) {
-        return NVME_INVALID_NSID | NVME_DNR;
+        return NVME_INVALID_NSID;
     }
 
     req->ns = nvme_ns(n, nsid);
     if (unlikely(!req->ns)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (!(req->ns->iocs[req->cmd.opcode] & NVME_CMD_EFF_CSUPP)) {
         trace_pci_nvme_err_invalid_opc(req->cmd.opcode);
-        return NVME_INVALID_OPCODE | NVME_DNR;
+        return NVME_INVALID_OPCODE;
     }
 
     switch (req->cmd.opcode) {
@@ -2634,7 +2636,7 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req)
         assert(false);
     }
 
-    return NVME_INVALID_OPCODE | NVME_DNR;
+    return NVME_INVALID_OPCODE;
 }
 
 static void nvme_free_sq(NvmeSQueue *sq, NvmeCtrl *n)
@@ -2657,7 +2659,7 @@ static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req)
 
     if (unlikely(!qid || nvme_check_sqid(n, qid))) {
         trace_pci_nvme_err_invalid_del_sq(qid);
-        return NVME_INVALID_QID | NVME_DNR;
+        return NVME_INVALID_QID;
     }
 
     trace_pci_nvme_del_sq(qid);
@@ -2728,24 +2730,24 @@ static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req)
 
     if (unlikely(!cqid || nvme_check_cqid(n, cqid))) {
         trace_pci_nvme_err_invalid_create_sq_cqid(cqid);
-        return NVME_INVALID_CQID | NVME_DNR;
+        return NVME_INVALID_CQID;
     }
     if (unlikely(!sqid || sqid > n->params.max_ioqpairs ||
         n->sq[sqid] != NULL)) {
         trace_pci_nvme_err_invalid_create_sq_sqid(sqid);
-        return NVME_INVALID_QID | NVME_DNR;
+        return NVME_INVALID_QID;
     }
     if (unlikely(!qsize || qsize > NVME_CAP_MQES(n->bar.cap))) {
         trace_pci_nvme_err_invalid_create_sq_size(qsize);
-        return NVME_MAX_QSIZE_EXCEEDED | NVME_DNR;
+        return NVME_MAX_QSIZE_EXCEEDED;
     }
     if (unlikely(prp1 & (n->page_size - 1))) {
         trace_pci_nvme_err_invalid_create_sq_addr(prp1);
-        return NVME_INVALID_PRP_OFFSET | NVME_DNR;
+        return NVME_INVALID_PRP_OFFSET;
     }
     if (unlikely(!(NVME_SQ_FLAGS_PC(qflags)))) {
         trace_pci_nvme_err_invalid_create_sq_qflags(NVME_SQ_FLAGS_PC(qflags));
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
     sq = g_malloc0(sizeof(*sq));
     nvme_init_sq(sq, n, prp1, sqid, cqid, qsize + 1);
@@ -2780,13 +2782,13 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
     time_t current_ms;
 
     if (off >= sizeof(smart)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (nsid != 0xffffffff) {
         ns = nvme_ns(n, nsid);
         if (!ns) {
-            return NVME_INVALID_NSID | NVME_DNR;
+            return NVME_INVALID_NSID;
         }
         nvme_set_blk_stats(ns, &stats);
     } else {
@@ -2839,7 +2841,7 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off,
     };
 
     if (off >= sizeof(fw_log)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     strpadcpy((char *)&fw_log.frs1, sizeof(fw_log.frs1), "1.0", ' ');
@@ -2856,7 +2858,7 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
     NvmeErrorLog errlog;
 
     if (off >= sizeof(errlog)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (!rae) {
@@ -2879,7 +2881,7 @@ static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len,
 
     if (off >= sizeof(log)) {
         trace_pci_nvme_err_invalid_log_page_offset(off, sizeof(log));
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     switch (NVME_CC_CSS(n->bar.cc)) {
@@ -2937,7 +2939,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
     off = (lpou << 32ULL) | lpol;
 
     if (off & 0x3) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     trace_pci_nvme_get_log(nvme_cid(req), lid, lsp, rae, len, off);
@@ -2959,7 +2961,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
         return nvme_cmd_effects(n, csi, len, off, req);
     default:
         trace_pci_nvme_err_invalid_log_page(nvme_cid(req), lid);
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 }
 
@@ -2983,7 +2985,7 @@ static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeRequest *req)
 
     if (unlikely(!qid || nvme_check_cqid(n, qid))) {
         trace_pci_nvme_err_invalid_del_cq_cqid(qid);
-        return NVME_INVALID_CQID | NVME_DNR;
+        return NVME_INVALID_CQID;
     }
 
     cq = n->cq[qid];
@@ -3037,27 +3039,27 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
     if (unlikely(!cqid || cqid > n->params.max_ioqpairs ||
         n->cq[cqid] != NULL)) {
         trace_pci_nvme_err_invalid_create_cq_cqid(cqid);
-        return NVME_INVALID_QID | NVME_DNR;
+        return NVME_INVALID_QID;
     }
     if (unlikely(!qsize || qsize > NVME_CAP_MQES(n->bar.cap))) {
         trace_pci_nvme_err_invalid_create_cq_size(qsize);
-        return NVME_MAX_QSIZE_EXCEEDED | NVME_DNR;
+        return NVME_MAX_QSIZE_EXCEEDED;
     }
     if (unlikely(prp1 & (n->page_size - 1))) {
         trace_pci_nvme_err_invalid_create_cq_addr(prp1);
-        return NVME_INVALID_PRP_OFFSET | NVME_DNR;
+        return NVME_INVALID_PRP_OFFSET;
     }
     if (unlikely(!msix_enabled(&n->parent_obj) && vector)) {
         trace_pci_nvme_err_invalid_create_cq_vector(vector);
-        return NVME_INVALID_IRQ_VECTOR | NVME_DNR;
+        return NVME_INVALID_IRQ_VECTOR;
     }
     if (unlikely(vector >= n->params.msix_qsize)) {
         trace_pci_nvme_err_invalid_create_cq_vector(vector);
-        return NVME_INVALID_IRQ_VECTOR | NVME_DNR;
+        return NVME_INVALID_IRQ_VECTOR;
     }
     if (unlikely(!(NVME_CQ_FLAGS_PC(qflags)))) {
         trace_pci_nvme_err_invalid_create_cq_qflags(NVME_CQ_FLAGS_PC(qflags));
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     cq = g_malloc0(sizeof(*cq));
@@ -3115,7 +3117,7 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req)
                         DMA_DIRECTION_FROM_DEVICE, req);
     }
 
-    return NVME_INVALID_FIELD | NVME_DNR;
+    return NVME_INVALID_FIELD;
 }
 
 static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
@@ -3127,7 +3129,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
     trace_pci_nvme_identify_ns(nsid);
 
     if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
-        return NVME_INVALID_NSID | NVME_DNR;
+        return NVME_INVALID_NSID;
     }
 
     ns = nvme_ns(n, nsid);
@@ -3140,7 +3142,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
                         DMA_DIRECTION_FROM_DEVICE, req);
     }
 
-    return NVME_INVALID_CMD_SET | NVME_DNR;
+    return NVME_INVALID_CMD_SET;
 }
 
 static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req)
@@ -3152,7 +3154,7 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req)
     trace_pci_nvme_identify_ns_csi(nsid, c->csi);
 
     if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
-        return NVME_INVALID_NSID | NVME_DNR;
+        return NVME_INVALID_NSID;
     }
 
     ns = nvme_ns(n, nsid);
@@ -3167,7 +3169,7 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req)
                         DMA_DIRECTION_FROM_DEVICE, req);
     }
 
-    return NVME_INVALID_FIELD | NVME_DNR;
+    return NVME_INVALID_FIELD;
 }
 
 static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
@@ -3189,7 +3191,7 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
      * in the spec (NVM Express v1.3d, Section 5.15.4).
      */
     if (min_nsid >= NVME_NSID_BROADCAST - 1) {
-        return NVME_INVALID_NSID | NVME_DNR;
+        return NVME_INVALID_NSID;
     }
 
     for (i = 1; i <= n->num_namespaces; i++) {
@@ -3225,11 +3227,11 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req)
      * Same as in nvme_identify_nslist(), 0xffffffff/0xfffffffe are invalid.
      */
     if (min_nsid >= NVME_NSID_BROADCAST - 1) {
-        return NVME_INVALID_NSID | NVME_DNR;
+        return NVME_INVALID_NSID;
     }
 
     if (c->csi != NVME_CSI_NVM && c->csi != NVME_CSI_ZONED) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     for (i = 1; i <= n->num_namespaces; i++) {
@@ -3272,12 +3274,12 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
     trace_pci_nvme_identify_ns_descr_list(nsid);
 
     if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
-        return NVME_INVALID_NSID | NVME_DNR;
+        return NVME_INVALID_NSID;
     }
 
     ns = nvme_ns(n, nsid);
     if (unlikely(!ns)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     /*
@@ -3340,7 +3342,7 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
         return nvme_identify_cmd_set(n, req);
     default:
         trace_pci_nvme_err_invalid_identify_cns(le32_to_cpu(c->cns));
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 }
 
@@ -3350,7 +3352,7 @@ static uint16_t nvme_abort(NvmeCtrl *n, NvmeRequest *req)
 
     req->cqe.result = 1;
     if (nvme_check_sqid(n, sqid)) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     return NVME_SUCCESS;
@@ -3419,7 +3421,7 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
     trace_pci_nvme_getfeat(nvme_cid(req), nsid, fid, sel, dw11);
 
     if (!nvme_feature_support[fid]) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (nvme_feature_cap[fid] & NVME_FEAT_CAP_NS) {
@@ -3431,11 +3433,11 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
              * features we can always return Invalid Namespace or Format as we
              * should do for all other features.
              */
-            return NVME_INVALID_NSID | NVME_DNR;
+            return NVME_INVALID_NSID;
         }
 
         if (!nvme_ns(n, nsid)) {
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
     }
 
@@ -3472,15 +3474,15 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
             goto out;
         }
 
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     case NVME_ERROR_RECOVERY:
         if (!nvme_nsid_valid(n, nsid)) {
-            return NVME_INVALID_NSID | NVME_DNR;
+            return NVME_INVALID_NSID;
         }
 
         ns = nvme_ns(n, nsid);
         if (unlikely(!ns)) {
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
 
         result = ns->features.err_rec;
@@ -3530,7 +3532,7 @@ defaults:
     case NVME_INTERRUPT_VECTOR_CONF:
         iv = dw11 & 0xffff;
         if (iv >= n->params.max_ioqpairs + 1) {
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
 
         result = iv;
@@ -3582,34 +3584,34 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
     trace_pci_nvme_setfeat(nvme_cid(req), nsid, fid, save, dw11);
 
     if (save && !(nvme_feature_cap[fid] & NVME_FEAT_CAP_SAVE)) {
-        return NVME_FID_NOT_SAVEABLE | NVME_DNR;
+        return NVME_FID_NOT_SAVEABLE;
     }
 
     if (!nvme_feature_support[fid]) {
-        return NVME_INVALID_FIELD | NVME_DNR;
+        return NVME_INVALID_FIELD;
     }
 
     if (nvme_feature_cap[fid] & NVME_FEAT_CAP_NS) {
         if (nsid != NVME_NSID_BROADCAST) {
             if (!nvme_nsid_valid(n, nsid)) {
-                return NVME_INVALID_NSID | NVME_DNR;
+                return NVME_INVALID_NSID;
             }
 
             ns = nvme_ns(n, nsid);
             if (unlikely(!ns)) {
-                return NVME_INVALID_FIELD | NVME_DNR;
+                return NVME_INVALID_FIELD;
             }
         }
     } else if (nsid && nsid != NVME_NSID_BROADCAST) {
         if (!nvme_nsid_valid(n, nsid)) {
-            return NVME_INVALID_NSID | NVME_DNR;
+            return NVME_INVALID_NSID;
         }
 
-        return NVME_FEAT_NOT_NS_SPEC | NVME_DNR;
+        return NVME_FEAT_NOT_NS_SPEC;
     }
 
     if (!(nvme_feature_cap[fid] & NVME_FEAT_CAP_CHANGE)) {
-        return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR;
+        return NVME_FEAT_NOT_CHANGEABLE;
     }
 
     switch (fid) {
@@ -3626,7 +3628,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
             n->features.temp_thresh_low = NVME_TEMP_TMPTH(dw11);
             break;
         default:
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
 
         if ((n->temperature >= n->features.temp_thresh_hi) ||
@@ -3675,7 +3677,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
 
     case NVME_NUMBER_OF_QUEUES:
         if (n->qs_created) {
-            return NVME_CMD_SEQ_ERROR | NVME_DNR;
+            return NVME_CMD_SEQ_ERROR;
         }
 
         /*
@@ -3683,7 +3685,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
          * and NSQR.
          */
         if ((dw11 & 0xffff) == 0xffff || ((dw11 >> 16) & 0xffff) == 0xffff) {
-            return NVME_INVALID_FIELD | NVME_DNR;
+            return NVME_INVALID_FIELD;
         }
 
         trace_pci_nvme_setfeat_numq((dw11 & 0xFFFF) + 1,
@@ -3701,11 +3703,11 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
     case NVME_COMMAND_SET_PROFILE:
         if (dw11 & 0x1ff) {
             trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff);
-            return NVME_CMD_SET_CMB_REJECTED | NVME_DNR;
+            return NVME_CMD_SET_CMB_REJECTED;
         }
         break;
     default:
-        return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR;
+        return NVME_FEAT_NOT_CHANGEABLE;
     }
     return NVME_SUCCESS;
 }
@@ -3736,7 +3738,7 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
 
     if (!(nvme_cse_acs[req->cmd.opcode] & NVME_CMD_EFF_CSUPP)) {
         trace_pci_nvme_err_invalid_admin_opc(req->cmd.opcode);
-        return NVME_INVALID_OPCODE | NVME_DNR;
+        return NVME_INVALID_OPCODE;
     }
 
     switch (req->cmd.opcode) {
@@ -3764,7 +3766,7 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
         assert(false);
     }
 
-    return NVME_INVALID_OPCODE | NVME_DNR;
+    return NVME_INVALID_OPCODE;
 }
 
 static void nvme_process_sq(void *opaque)
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 2/3] hw/block/nvme: support command retry delay
  2021-02-10 19:52 [RFC PATCH 0/3] support command retry Minwoo Im
  2021-02-10 19:52 ` [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place Minwoo Im
@ 2021-02-10 19:52 ` Minwoo Im
  2021-02-10 20:37   ` Klaus Jensen
  2021-02-10 19:52 ` [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command Minwoo Im
  2 siblings, 1 reply; 16+ messages in thread
From: Minwoo Im @ 2021-02-10 19:52 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: Kevin Wolf, Dr . David Alan Gilbert, Max Reitz, Klaus Jensen,
	Minwoo Im, Keith Busch

Set CRDT1 (Command Retry Delay Time 1) in the Identify Controller data
structure, in units of 100 ms, based on the value of the newly added
'cmd-retry-delay' parameter.  For example, cmd-retry-delay=1000 sets
CRDT1 to 10.  For simplicity, this patch only considers CRDT1, not
CRDT2 and CRDT3.

This patch also introduces Set/Get Features handlers for the Host
Behavior Support feature (FID 16h).  Through this feature, the host
enables ACRE (Advanced Command Retry Enable) based on the CRDT fields
in the Identify Controller data structure.

If 'cmd-retry-delay' is not given, it defaults to -1: no CRDT is
reported and ACRE is not supported.  In that case we set NVME_DNR in
the error CQ entry just as before.  If a positive value is given, the
device supports ACRE.
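
For context, a rough sketch of the host-side flow this enables (not
part of the patch; nvme_set_features() below is a placeholder for
whatever mechanism the host driver uses to issue Set Features, and the
NvmeFeatureHostBehavior layout is the one added by this patch):

  /* If the controller reports a non-zero CRDT1, opt in to ACRE. */
  if (le16_to_cpu(id->crdt1)) {
      NvmeFeatureHostBehavior data = { .acre = 1 };

      /* Set Features, FID 16h (Host Behavior Support), 512-byte payload */
      nvme_set_features(ctrl, NVME_HOST_BEHAVIOR_SUPPORT, &data, sizeof(data));
  }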

Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
 hw/block/nvme.c      | 65 ++++++++++++++++++++++++++++++++++++++++++--
 hw/block/nvme.h      |  2 ++
 include/block/nvme.h | 13 ++++++++-
 3 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 816e0e8e5205..6d3c554a0e99 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -23,7 +23,7 @@
  *              max_ioqpairs=<N[optional]>, \
  *              aerl=<N[optional]>, aer_max_queued=<N[optional]>, \
  *              mdts=<N[optional]>,zoned.append_size_limit=<N[optional]>, \
- *              subsys=<subsys_id> \
+ *              subsys=<subsys_id>,cmd-retry-delay=<N[optional]> \
  *      -device nvme-ns,drive=<drive_id>,bus=<bus_name>,nsid=<nsid>,\
  *              zoned=<true|false[optional]>, \
  *              subsys=<subsys_id>
@@ -71,6 +71,14 @@
  *   data size being in effect. By setting this property to 0, users can make
  *   ZASL to be equal to MDTS. This property only affects zoned namespaces.
  *
+ * - `cmd-retry-delay`
+ *   Command Retry Delay value in milliseconds.  This value is reported as
+ *   CRDT1 (Command Retry Delay Time 1) in the Identify Controller data
+ *   structure, in units of 100 ms.  If this parameter is not given, the DNR
+ *   (Do Not Retry) bit is set in the Status field of error CQ entries.  If it
+ *   is given as 0, CRD (Command Retry Delay) is set to 0, i.e. retry without
+ *   delay.  Otherwise, CRD is set to 1 so the host delays for the CRDT1 value.
+ *
  * nvme namespace device parameters
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  * - `subsys`
@@ -154,6 +162,7 @@ static const bool nvme_feature_support[NVME_FID_MAX] = {
     [NVME_WRITE_ATOMICITY]          = true,
     [NVME_ASYNCHRONOUS_EVENT_CONF]  = true,
     [NVME_TIMESTAMP]                = true,
+    [NVME_HOST_BEHAVIOR_SUPPORT]    = true,
 };
 
 static const uint32_t nvme_feature_cap[NVME_FID_MAX] = {
@@ -163,6 +172,7 @@ static const uint32_t nvme_feature_cap[NVME_FID_MAX] = {
     [NVME_NUMBER_OF_QUEUES]         = NVME_FEAT_CAP_CHANGE,
     [NVME_ASYNCHRONOUS_EVENT_CONF]  = NVME_FEAT_CAP_CHANGE,
     [NVME_TIMESTAMP]                = NVME_FEAT_CAP_CHANGE,
+    [NVME_HOST_BEHAVIOR_SUPPORT]    = NVME_FEAT_CAP_CHANGE,
 };
 
 static const uint32_t nvme_cse_acs[256] = {
@@ -904,6 +914,16 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
     return status;
 }
 
+static inline bool nvme_should_retry(NvmeRequest *req)
+{
+    switch (req->status) {
+    case NVME_COMMAND_INTERRUPTED:
+        return true;
+    default:
+        return false;
+    }
+}
+
 static void nvme_post_cqes(void *opaque)
 {
     NvmeCQueue *cq = opaque;
@@ -947,7 +967,13 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
     assert(cq->cqid == req->sq->cqid);
 
     if (req->status != NVME_SUCCESS) {
-        req->status |= NVME_DNR;
+        if (cq->ctrl->features.acre && nvme_should_retry(req)) {
+            if (cq->ctrl->params.cmd_retry_delay > 0) {
+                req->status |= NVME_CRD_CRDT1;
+            }
+        } else {
+            req->status |= NVME_DNR;
+        }
     }
 
     trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid,
@@ -3401,6 +3427,16 @@ static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
                     DMA_DIRECTION_FROM_DEVICE, req);
 }
 
+static uint16_t nvme_get_feature_host_behavior(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeFeatureHostBehavior data = {};
+
+    data.acre = n->features.acre;
+
+    return nvme_dma(n, (uint8_t *)&data, sizeof(data),
+                    DMA_DIRECTION_FROM_DEVICE, req);
+}
+
 static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeCmd *cmd = &req->cmd;
@@ -3506,6 +3542,8 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
         goto out;
     case NVME_TIMESTAMP:
         return nvme_get_feature_timestamp(n, req);
+    case NVME_HOST_BEHAVIOR_SUPPORT:
+        return nvme_get_feature_host_behavior(n, req);
     default:
         break;
     }
@@ -3569,6 +3607,22 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_set_feature_host_behavior(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeFeatureHostBehavior data;
+    int ret;
+
+    ret = nvme_dma(n, (uint8_t *)&data, sizeof(data),
+                   DMA_DIRECTION_TO_DEVICE, req);
+    if (ret) {
+        return ret;
+    }
+
+    n->features.acre = data.acre;
+
+    return NVME_SUCCESS;
+}
+
 static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeNamespace *ns = NULL;
@@ -3700,6 +3754,8 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
         break;
     case NVME_TIMESTAMP:
         return nvme_set_feature_timestamp(n, req);
+    case NVME_HOST_BEHAVIOR_SUPPORT:
+        return nvme_set_feature_host_behavior(n, req);
     case NVME_COMMAND_SET_PROFILE:
         if (dw11 & 0x1ff) {
             trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff);
@@ -4699,6 +4755,10 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->oacs = cpu_to_le16(0);
     id->cntrltype = 0x1;
 
+    if (n->params.cmd_retry_delay > 0) {
+        id->crdt1 = cpu_to_le16(DIV_ROUND_UP(n->params.cmd_retry_delay, 100));
+    }
+
     /*
      * Because the controller always completes the Abort command immediately,
      * there can never be more than one concurrently executing Abort command,
@@ -4859,6 +4919,7 @@ static Property nvme_props[] = {
     DEFINE_PROP_BOOL("legacy-cmb", NvmeCtrl, params.legacy_cmb, false),
     DEFINE_PROP_SIZE32("zoned.append_size_limit", NvmeCtrl, params.zasl_bs,
                        NVME_DEFAULT_MAX_ZA_SIZE),
+    DEFINE_PROP_INT32("cmd-retry-delay", NvmeCtrl, params.cmd_retry_delay, -1),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index cb2b5175f1a1..37940b3ac2d2 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -22,6 +22,7 @@ typedef struct NvmeParams {
     bool     use_intel_id;
     uint32_t zasl_bs;
     bool     legacy_cmb;
+    int32_t  cmd_retry_delay;
 } NvmeParams;
 
 typedef struct NvmeAsyncEvent {
@@ -124,6 +125,7 @@ typedef struct NvmeFeatureVal {
         uint16_t temp_thresh_low;
     };
     uint32_t    async_config;
+    uint8_t     acre;
 } NvmeFeatureVal;
 
 typedef struct NvmeCtrl {
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 9f8eb3988c0e..e1e2259da933 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -813,6 +813,7 @@ enum NvmeStatusCodes {
     NVME_SGL_DESCR_TYPE_INVALID = 0x0011,
     NVME_INVALID_USE_OF_CMB     = 0x0012,
     NVME_INVALID_PRP_OFFSET     = 0x0013,
+    NVME_COMMAND_INTERRUPTED    = 0x0021,
     NVME_CMD_SET_CMB_REJECTED   = 0x002b,
     NVME_INVALID_CMD_SET        = 0x002c,
     NVME_LBA_RANGE              = 0x0080,
@@ -858,6 +859,7 @@ enum NvmeStatusCodes {
     NVME_DULB                   = 0x0287,
     NVME_MORE                   = 0x2000,
     NVME_DNR                    = 0x4000,
+    NVME_CRD_CRDT1              = 0x0800,
     NVME_NO_COMPLETE            = 0xffff,
 };
 
@@ -987,7 +989,10 @@ typedef struct QEMU_PACKED NvmeIdCtrl {
     uint8_t     rsvd100[11];
     uint8_t     cntrltype;
     uint8_t     fguid[16];
-    uint8_t     rsvd128[128];
+    uint16_t    crdt1;
+    uint16_t    crdt2;
+    uint16_t    crdt3;
+    uint8_t     rsvd134[122];
     uint16_t    oacs;
     uint8_t     acl;
     uint8_t     aerl;
@@ -1131,6 +1136,7 @@ enum NvmeFeatureIds {
     NVME_WRITE_ATOMICITY            = 0xa,
     NVME_ASYNCHRONOUS_EVENT_CONF    = 0xb,
     NVME_TIMESTAMP                  = 0xe,
+    NVME_HOST_BEHAVIOR_SUPPORT      = 0x16,
     NVME_COMMAND_SET_PROFILE        = 0x19,
     NVME_SOFTWARE_PROGRESS_MARKER   = 0x80,
     NVME_FID_MAX                    = 0x100,
@@ -1162,6 +1168,11 @@ typedef enum NvmeGetFeatureSelect {
 #define NVME_SETFEAT_SAVE(dw10) \
     ((dw10 >> NVME_SETFEAT_SAVE_SHIFT) & NVME_SETFEAT_SAVE_MASK)
 
+typedef struct QEMU_PACKED NvmeFeatureHostBehavior {
+    uint8_t     acre;
+    uint8_t     rsvd1[511];
+} NvmeFeatureHostBehavior;
+
 typedef struct QEMU_PACKED NvmeRangeType {
     uint8_t     type;
     uint8_t     attributes;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-10 19:52 [RFC PATCH 0/3] support command retry Minwoo Im
  2021-02-10 19:52 ` [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place Minwoo Im
  2021-02-10 19:52 ` [RFC PATCH 2/3] hw/block/nvme: support command retry delay Minwoo Im
@ 2021-02-10 19:52 ` Minwoo Im
  2021-02-10 20:33   ` Klaus Jensen
  2021-02-11  3:00   ` Keith Busch
  2 siblings, 2 replies; 16+ messages in thread
From: Minwoo Im @ 2021-02-10 19:52 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: Kevin Wolf, Dr . David Alan Gilbert, Max Reitz, Klaus Jensen,
	Minwoo Im, Keith Busch

The nvme_inject_state command puts the controller into a given state.
Through the Human Monitor Interface (HMP), users can select one of the
following states:

	normal:			Normal state (no injection)
	cmd-interrupted:	Commands will be interrupted internally

This patch is just a starting point for driving the QEMU NVMe device
model dynamically from the HMP.  If the "cmd-interrupted" state is
injected, the controller returns all CQ entries with the Command
Interrupted status code.

Usage:
	-device nvme,id=nvme0,....

	(qemu) nvme_inject_state nvme0 cmd-interrupted

	<All the commands will be interrupted internally>

	(qemu) nvme_inject_state nvme0 normal

This feature is needed to test the command retry handling in the Linux
kernel NVMe driver.

Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
 hmp-commands.hx       | 13 ++++++++++++
 hw/block/nvme.c       | 49 +++++++++++++++++++++++++++++++++++++++++++
 hw/block/nvme.h       |  8 +++++++
 include/monitor/hmp.h |  1 +
 4 files changed, 71 insertions(+)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index d4001f9c5dc6..ef288c567b46 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1307,6 +1307,19 @@ SRST
   Inject PCIe AER error
 ERST
 
+    {
+        .name       = "nvme_inject_state",
+        .args_type  = "id:s,state:s",
+        .params     = "id [normal|cmd-interrupted]",
+        .help       = "inject controller/namespace state",
+        .cmd        = hmp_nvme_inject_state,
+    },
+
+SRST
+``nvme_inject_state``
+  Inject NVMe controller/namespace state
+ERST
+
     {
         .name       = "netdev_add",
         .args_type  = "netdev:O",
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 6d3c554a0e99..42cf5bd113e6 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -123,6 +123,7 @@
 #include "sysemu/sysemu.h"
 #include "qapi/error.h"
 #include "qapi/visitor.h"
+#include "qapi/qmp/qdict.h"
 #include "sysemu/hostmem.h"
 #include "sysemu/block-backend.h"
 #include "exec/memory.h"
@@ -132,6 +133,7 @@
 #include "trace.h"
 #include "nvme.h"
 #include "nvme-ns.h"
+#include "monitor/monitor.h"
 
 #define NVME_MAX_IOQPAIRS 0xffff
 #define NVME_DB_SIZE  4
@@ -966,6 +968,14 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
 {
     assert(cq->cqid == req->sq->cqid);
 
+    /*
+     * Override request status field if controller state has been injected by
+     * the QMP.
+     */
+    if (cq->ctrl->state == NVME_STATE_CMD_INTERRUPTED) {
+        req->status = NVME_COMMAND_INTERRUPTED;
+    }
+
     if (req->status != NVME_SUCCESS) {
         if (cq->ctrl->features.acre && nvme_should_retry(req)) {
             if (cq->ctrl->params.cmd_retry_delay > 0) {
@@ -5025,4 +5035,43 @@ static void nvme_register_types(void)
     type_register_static(&nvme_bus_info);
 }
 
+static void nvme_inject_state(NvmeCtrl *n, NvmeState state)
+{
+    n->state = state;
+}
+
+static const char *nvme_states[] = {
+    [NVME_STATE_NORMAL]             = "normal",
+    [NVME_STATE_CMD_INTERRUPTED]    = "cmd-interrupted",
+};
+
+void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict)
+{
+    const char *id = qdict_get_str(qdict, "id");
+    const char *state = qdict_get_str(qdict, "state");
+    PCIDevice *dev;
+    NvmeCtrl *n;
+    int ret, i;
+
+    ret = pci_qdev_find_device(id, &dev);
+    if (ret < 0) {
+        monitor_printf(mon, "invalid device id %s\n", id);
+        return;
+    }
+
+    n = NVME(dev);
+
+    for (i = 0; i < ARRAY_SIZE(nvme_states); i++) {
+        if (!strcmp(nvme_states[i], state)) {
+            nvme_inject_state(n, i);
+            monitor_printf(mon,
+                           "-device nvme,id=%s: state %s injected\n",
+                           id, state);
+            return;
+        }
+    }
+
+    monitor_printf(mon, "invalid state %s\n", state);
+}
+
 type_init(nvme_register_types)
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 37940b3ac2d2..1af1e0380d9b 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -128,6 +128,11 @@ typedef struct NvmeFeatureVal {
     uint8_t     acre;
 } NvmeFeatureVal;
 
+typedef enum NvmeState {
+    NVME_STATE_NORMAL,
+    NVME_STATE_CMD_INTERRUPTED,
+} NvmeState;
+
 typedef struct NvmeCtrl {
     PCIDevice    parent_obj;
     MemoryRegion bar0;
@@ -185,6 +190,8 @@ typedef struct NvmeCtrl {
     NvmeCQueue      admin_cq;
     NvmeIdCtrl      id_ctrl;
     NvmeFeatureVal  features;
+
+    NvmeState       state;
 } NvmeCtrl;
 
 static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid)
@@ -212,4 +219,5 @@ static inline NvmeCtrl *nvme_ctrl(NvmeRequest *req)
 
 int nvme_register_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp);
 
+void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
 #endif /* HW_NVME_H */
diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
index ed2913fd18e8..668384ea2e34 100644
--- a/include/monitor/hmp.h
+++ b/include/monitor/hmp.h
@@ -79,6 +79,7 @@ void hmp_migrate(Monitor *mon, const QDict *qdict);
 void hmp_device_add(Monitor *mon, const QDict *qdict);
 void hmp_device_del(Monitor *mon, const QDict *qdict);
 void hmp_dump_guest_memory(Monitor *mon, const QDict *qdict);
+void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
 void hmp_netdev_add(Monitor *mon, const QDict *qdict);
 void hmp_netdev_del(Monitor *mon, const QDict *qdict);
 void hmp_getfd(Monitor *mon, const QDict *qdict);
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place
  2021-02-10 19:52 ` [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place Minwoo Im
@ 2021-02-10 20:19   ` Klaus Jensen
  2021-02-11  3:40     ` Minwoo Im
  0 siblings, 1 reply; 16+ messages in thread
From: Klaus Jensen @ 2021-02-10 20:19 UTC (permalink / raw)
  To: Minwoo Im
  Cc: Kevin Wolf, qemu-block, qemu-devel, Max Reitz, Keith Busch,
	Dr . David Alan Gilbert

[-- Attachment #1: Type: text/plain, Size: 597 bytes --]

On Feb 11 04:52, Minwoo Im wrote:
> @@ -945,6 +945,11 @@ static void nvme_post_cqes(void *opaque)
>  static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
>  {
>      assert(cq->cqid == req->sq->cqid);
> +
> +    if (req->status != NVME_SUCCESS) {
> +        req->status |= NVME_DNR;
> +    }

There are status codes where we do not set the DNR bit (e.g. Data
Transfer Error, and that might be the only one actually).

Maybe add a switch so that we do not explicitly set DNR for Data
Transfer Error (and any other errors we identify), and DNR is only kept
if it was set earlier in the stack.
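
Something along these lines, perhaps (illustrative only; the constant
for Data Transfer Error stands in for whatever the status code enum
actually calls it):

    if (req->status != NVME_SUCCESS) {
        switch (req->status) {
        case NVME_DATA_TRAS_ERROR:
            /* retryable; keep DNR only if a lower layer already set it */
            break;
        default:
            req->status |= NVME_DNR;
            break;
        }
    }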

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-10 19:52 ` [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command Minwoo Im
@ 2021-02-10 20:33   ` Klaus Jensen
  2021-02-11  3:23     ` Minwoo Im
  2021-02-11 10:02     ` Dr. David Alan Gilbert
  2021-02-11  3:00   ` Keith Busch
  1 sibling, 2 replies; 16+ messages in thread
From: Klaus Jensen @ 2021-02-10 20:33 UTC (permalink / raw)
  To: Minwoo Im
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Keith Busch, Max Reitz

[-- Attachment #1: Type: text/plain, Size: 6433 bytes --]

On Feb 11 04:52, Minwoo Im wrote:
> nvme_inject_state command is to give a controller state to be.
> Human Monitor Interface(HMP) supports users to make controller to a
> specified state of:
> 
> 	normal:			Normal state (no injection)
> 	cmd-interrupted:	Commands will be interrupted internally
> 
> This patch is just a start to give dynamic command from the HMP to the
> QEMU NVMe device model.  If "cmd-interrupted" state is given, then the
> controller will return all the CQ entries with Command Interrupts status
> code.
> 
> Usage:
> 	-device nvme,id=nvme0,....
> 
> 	(qemu) nvme_inject_state nvme0 cmd-interrupted
> 
> 	<All the commands will be interrupted internally>
> 
> 	(qemu) nvme_inject_state nvme0 normal
> 
> This feature is required to test Linux kernel NVMe driver for the
> command retry feature.
> 

This is super cool and commands like this feel much nicer than the
qom-set approach that the SMART critical warning feature took.

But... looking at the existing commands I don't think we can "bloat" it
up with a device specific command like this, but I don't know the policy
around this.

If an HMP command is out, then we should be able to make do with the
qom-set approach just fine though.

> Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
> ---
>  hmp-commands.hx       | 13 ++++++++++++
>  hw/block/nvme.c       | 49 +++++++++++++++++++++++++++++++++++++++++++
>  hw/block/nvme.h       |  8 +++++++
>  include/monitor/hmp.h |  1 +
>  4 files changed, 71 insertions(+)
> 
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index d4001f9c5dc6..ef288c567b46 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1307,6 +1307,19 @@ SRST
>    Inject PCIe AER error
>  ERST
>  
> +    {
> +        .name       = "nvme_inject_state",
> +        .args_type  = "id:s,state:s",
> +        .params     = "id [normal|cmd-interrupted]",
> +        .help       = "inject controller/namespace state",
> +        .cmd        = hmp_nvme_inject_state,
> +    },
> +
> +SRST
> +``nvme_inject_state``
> +  Inject NVMe controller/namespace state
> +ERST
> +
>      {
>          .name       = "netdev_add",
>          .args_type  = "netdev:O",
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 6d3c554a0e99..42cf5bd113e6 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -123,6 +123,7 @@
>  #include "sysemu/sysemu.h"
>  #include "qapi/error.h"
>  #include "qapi/visitor.h"
> +#include "qapi/qmp/qdict.h"
>  #include "sysemu/hostmem.h"
>  #include "sysemu/block-backend.h"
>  #include "exec/memory.h"
> @@ -132,6 +133,7 @@
>  #include "trace.h"
>  #include "nvme.h"
>  #include "nvme-ns.h"
> +#include "monitor/monitor.h"
>  
>  #define NVME_MAX_IOQPAIRS 0xffff
>  #define NVME_DB_SIZE  4
> @@ -966,6 +968,14 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
>  {
>      assert(cq->cqid == req->sq->cqid);
>  
> +    /*
> +     * Override request status field if controller state has been injected by
> +     * the QMP.
> +     */
> +    if (cq->ctrl->state == NVME_STATE_CMD_INTERRUPTED) {
> +        req->status = NVME_COMMAND_INTERRUPTED;
> +    }
> +
>      if (req->status != NVME_SUCCESS) {
>          if (cq->ctrl->features.acre && nvme_should_retry(req)) {
>              if (cq->ctrl->params.cmd_retry_delay > 0) {
> @@ -5025,4 +5035,43 @@ static void nvme_register_types(void)
>      type_register_static(&nvme_bus_info);
>  }
>  
> +static void nvme_inject_state(NvmeCtrl *n, NvmeState state)
> +{
> +    n->state = state;
> +}
> +
> +static const char *nvme_states[] = {
> +    [NVME_STATE_NORMAL]             = "normal",
> +    [NVME_STATE_CMD_INTERRUPTED]    = "cmd-interrupted",
> +};
> +
> +void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict)
> +{
> +    const char *id = qdict_get_str(qdict, "id");
> +    const char *state = qdict_get_str(qdict, "state");
> +    PCIDevice *dev;
> +    NvmeCtrl *n;
> +    int ret, i;
> +
> +    ret = pci_qdev_find_device(id, &dev);
> +    if (ret < 0) {
> +        monitor_printf(mon, "invalid device id %s\n", id);
> +        return;
> +    }
> +
> +    n = NVME(dev);
> +
> +    for (i = 0; i < ARRAY_SIZE(nvme_states); i++) {
> +        if (!strcmp(nvme_states[i], state)) {
> +            nvme_inject_state(n, i);
> +            monitor_printf(mon,
> +                           "-device nvme,id=%s: state %s injected\n",
> +                           id, state);
> +            return;
> +        }
> +    }
> +
> +    monitor_printf(mon, "invalid state %s\n", state);
> +}
> +
>  type_init(nvme_register_types)
> diff --git a/hw/block/nvme.h b/hw/block/nvme.h
> index 37940b3ac2d2..1af1e0380d9b 100644
> --- a/hw/block/nvme.h
> +++ b/hw/block/nvme.h
> @@ -128,6 +128,11 @@ typedef struct NvmeFeatureVal {
>      uint8_t     acre;
>  } NvmeFeatureVal;
>  
> +typedef enum NvmeState {
> +    NVME_STATE_NORMAL,
> +    NVME_STATE_CMD_INTERRUPTED,
> +} NvmeState;
> +
>  typedef struct NvmeCtrl {
>      PCIDevice    parent_obj;
>      MemoryRegion bar0;
> @@ -185,6 +190,8 @@ typedef struct NvmeCtrl {
>      NvmeCQueue      admin_cq;
>      NvmeIdCtrl      id_ctrl;
>      NvmeFeatureVal  features;
> +
> +    NvmeState       state;
>  } NvmeCtrl;
>  
>  static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid)
> @@ -212,4 +219,5 @@ static inline NvmeCtrl *nvme_ctrl(NvmeRequest *req)
>  
>  int nvme_register_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp);
>  
> +void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
>  #endif /* HW_NVME_H */
> diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
> index ed2913fd18e8..668384ea2e34 100644
> --- a/include/monitor/hmp.h
> +++ b/include/monitor/hmp.h
> @@ -79,6 +79,7 @@ void hmp_migrate(Monitor *mon, const QDict *qdict);
>  void hmp_device_add(Monitor *mon, const QDict *qdict);
>  void hmp_device_del(Monitor *mon, const QDict *qdict);
>  void hmp_dump_guest_memory(Monitor *mon, const QDict *qdict);
> +void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
>  void hmp_netdev_add(Monitor *mon, const QDict *qdict);
>  void hmp_netdev_del(Monitor *mon, const QDict *qdict);
>  void hmp_getfd(Monitor *mon, const QDict *qdict);
> -- 
> 2.17.1
> 

-- 
One of us - No more doubt, silence or taboo about mental illness.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 2/3] hw/block/nvme: support command retry delay
  2021-02-10 19:52 ` [RFC PATCH 2/3] hw/block/nvme: support command retry delay Minwoo Im
@ 2021-02-10 20:37   ` Klaus Jensen
  0 siblings, 0 replies; 16+ messages in thread
From: Klaus Jensen @ 2021-02-10 20:37 UTC (permalink / raw)
  To: Minwoo Im
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Keith Busch, Max Reitz

[-- Attachment #1: Type: text/plain, Size: 1062 bytes --]

On Feb 11 04:52, Minwoo Im wrote:
> Set CRDT1(Command Retry Delay Time 1) in the Identify controller data
> structure to milliseconds units of 100ms by the given value of
> 'cmd-retry-delay' parameter which is newly added.  If
> cmd-retry-delay=1000, it will be set CRDT1 to 10.  This patch only
> considers the CRDT1 without CRDT2 and 3 for the simplicity.
> 
> This patch also introduced set/get feature command handler for Host
> Behavior feature (16h).  In this feature, ACRE(Advanced Command Retry
> Enable) will be set by the host based on the Identify controller data
> structure, especially by CRDTs.
> 
> If 'cmd-retry-delay' is not given, the default value will be -1 which is
> CRDT will not be configured at all and ACRE will not be supported.  In
> this case, we just set NVME_DNR to the error CQ entry just like we used
> to.  If it's given to positive value, then ACRE will be supported by the
> device.
> 
> Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
> ---

LGTM.

Reviewed-by: Klaus Jensen <k.jensen@samsung.com>

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-10 19:52 ` [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command Minwoo Im
  2021-02-10 20:33   ` Klaus Jensen
@ 2021-02-11  3:00   ` Keith Busch
  2021-02-11  3:38     ` Minwoo Im
  1 sibling, 1 reply; 16+ messages in thread
From: Keith Busch @ 2021-02-11  3:00 UTC (permalink / raw)
  To: Minwoo Im
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Klaus Jensen, Max Reitz

On Thu, Feb 11, 2021 at 04:52:52AM +0900, Minwoo Im wrote:
> nvme_inject_state command is to give a controller state to be.
> Human Monitor Interface(HMP) supports users to make controller to a
> specified state of:
> 
> 	normal:			Normal state (no injection)
> 	cmd-interrupted:	Commands will be interrupted internally
> 
> This patch is just a start to give dynamic command from the HMP to the
> QEMU NVMe device model.  If "cmd-interrupted" state is given, then the
> controller will return all the CQ entries with Command Interrupts status
> code.
> 
> Usage:
> 	-device nvme,id=nvme0,....
> 
> 	(qemu) nvme_inject_state nvme0 cmd-interrupted
> 
> 	<All the commands will be interrupted internally>
> 
> 	(qemu) nvme_inject_state nvme0 normal
> 
> This feature is required to test Linux kernel NVMe driver for the
> command retry feature.

Once the user sets the injected state, all commands return that status
until the user injects the normal state, so the CRD time is meaningless
here. If we're really going this route, the state needs to return to
normal on its own.
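
A minimal sketch of that, assuming a hypothetical inject_timer field and
expiry callback, would be a one-shot timer armed at injection time:

    /* Sketch using QEMU's timer API ("qemu/timer.h"); inject_timer and
     * nvme_inject_state_expired are hypothetical names.
     */
    static void nvme_inject_state_expired(void *opaque)
    {
        NvmeCtrl *n = opaque;

        n->state = NVME_STATE_NORMAL;
    }

    /* at injection time: arm a one-shot timer, e.g. clear after 5 seconds */
    n->inject_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
                                   nvme_inject_state_expired, n);
    timer_mod(n->inject_timer,
              qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + 5000);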

But I would prefer to see advanced retry tied to real errors that can be
retried, like if we got an EBUSY or EAGAIN errno or something like that.

The interface you found to implement this is very interesting though.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-10 20:33   ` Klaus Jensen
@ 2021-02-11  3:23     ` Minwoo Im
  2021-02-11 10:02     ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 16+ messages in thread
From: Minwoo Im @ 2021-02-11  3:23 UTC (permalink / raw)
  To: Klaus Jensen
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Keith Busch, Max Reitz

On 21-02-10 21:33:50, Klaus Jensen wrote:
> On Feb 11 04:52, Minwoo Im wrote:
> > nvme_inject_state command is to give a controller state to be.
> > Human Monitor Interface(HMP) supports users to make controller to a
> > specified state of:
> > 
> > 	normal:			Normal state (no injection)
> > 	cmd-interrupted:	Commands will be interrupted internally
> > 
> > This patch is just a start to give dynamic command from the HMP to the
> > QEMU NVMe device model.  If "cmd-interrupted" state is given, then the
> > controller will return all the CQ entries with Command Interrupts status
> > code.
> > 
> > Usage:
> > 	-device nvme,id=nvme0,....
> > 
> > 	(qemu) nvme_inject_state nvme0 cmd-interrupted
> > 
> > 	<All the commands will be interrupted internally>
> > 
> > 	(qemu) nvme_inject_state nvme0 normal
> > 
> > This feature is required to test Linux kernel NVMe driver for the
> > command retry feature.
> > 
> 
> This is super cool and commands like this feel much nicer than the
> qom-set approach that the SMART critical warning feature took.

This interface makes it super easy to inject errors into the running
device with a function callback.

> But... looking at the existing commands I don't think we can "bloat" it
> up with a device specific command like this, but I don't know the policy
> around this.

Me neither, but I've seen the PCI AER error injection feature among
the existing commands, so I suggested this command to control the
NVMe device itself in a similar error-injection fashion.

> If an HMP command is out, then we should be able to make do with the
> qom-set approach just fine though.

Hope so.
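
For comparison, a minimal sketch of the qom-set route, assuming a string
property called "inject-state" (none of this is in the posted series):

    /* Sketch: expose the injected state as a writable QOM property. */
    static void nvme_set_inject_state(Object *obj, const char *value,
                                      Error **errp)
    {
        NvmeCtrl *n = NVME(obj);

        if (!strcmp(value, "normal")) {
            n->state = NVME_STATE_NORMAL;
        } else if (!strcmp(value, "cmd-interrupted")) {
            n->state = NVME_STATE_CMD_INTERRUPTED;
        } else {
            error_setg(errp, "invalid state '%s'", value);
        }
    }

    /* in the device's instance_init: */
    object_property_add_str(OBJECT(n), "inject-state", NULL,
                            nvme_set_inject_state);

which could then be driven from the monitor with:

    (qemu) qom-set /machine/peripheral/nvme0 inject-state cmd-interrupted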


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-11  3:00   ` Keith Busch
@ 2021-02-11  3:38     ` Minwoo Im
  2021-02-11  4:24       ` Keith Busch
  0 siblings, 1 reply; 16+ messages in thread
From: Minwoo Im @ 2021-02-11  3:38 UTC (permalink / raw)
  To: Keith Busch
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Klaus Jensen, Max Reitz

On 21-02-11 12:00:11, Keith Busch wrote:
> On Thu, Feb 11, 2021 at 04:52:52AM +0900, Minwoo Im wrote:
> > nvme_inject_state command is to give a controller state to be.
> > Human Monitor Interface(HMP) supports users to make controller to a
> > specified state of:
> > 
> > 	normal:			Normal state (no injection)
> > 	cmd-interrupted:	Commands will be interrupted internally
> > 
> > This patch is just a start to give dynamic command from the HMP to the
> > QEMU NVMe device model.  If "cmd-interrupted" state is given, then the
> > controller will return all the CQ entries with Command Interrupts status
> > code.
> > 
> > Usage:
> > 	-device nvme,id=nvme0,....
> > 
> > 	(qemu) nvme_inject_state nvme0 cmd-interrupted
> > 
> > 	<All the commands will be interrupted internally>
> > 
> > 	(qemu) nvme_inject_state nvme0 normal
> > 
> > This feature is required to test Linux kernel NVMe driver for the
> > command retry feature.
> 
> Once the user sets the injected state, all commands return that status
> until the user injects the normal state, so the CRD time is meaningless
> here. If we're really going this route, the state needs to return to
> normal on it's own.

That would also be fine with me.

> But I would prefer to see advanced retry tied to real errors that can be
> retried, like if we got an EBUSY or EAGAIN errno or something like that.

I have seen a thread [1] about ACRE.  Forgive me if I misunderstood that
thread or missed something after it.  It looks like the CRD field in the
CQE can be set for any NVMe error status, which means it *may* depend on
the device status.  And this patch just introduced an internal, temporary
error state of the controller by returning the Command Interrupted status.

I think, at this stage, we can go with some errors in the middle of the
AIO path (nvme_aio_err()) for advanced retry.  Shouldn't AIO errors be
retryable, and be expected to be retried?

> The interface you found to implement this is very interesting though.

Thanks, I just wanted to suggest a scheme for injecting something into a
running NVMe device model for various kinds of testing.

[1] https://www.spinics.net/lists/dm-devel/msg42165.html


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place
  2021-02-10 20:19   ` Klaus Jensen
@ 2021-02-11  3:40     ` Minwoo Im
  0 siblings, 0 replies; 16+ messages in thread
From: Minwoo Im @ 2021-02-11  3:40 UTC (permalink / raw)
  To: Klaus Jensen
  Cc: Kevin Wolf, qemu-block, qemu-devel, Max Reitz, Keith Busch,
	Dr . David Alan Gilbert

On 21-02-10 21:19:43, Klaus Jensen wrote:
> On Feb 11 04:52, Minwoo Im wrote:
> > @@ -945,6 +945,11 @@ static void nvme_post_cqes(void *opaque)
> >  static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
> >  {
> >      assert(cq->cqid == req->sq->cqid);
> > +
> > +    if (req->status != NVME_SUCCESS) {
> > +        req->status |= NVME_DNR;
> > +    }
> 
> There are status codes where we do not set the DNR bit (e.g. Data
> Transfer Error, and that might be the only one actually).

Ouch, I think I need to prepare some switch helper to figure out
which statuses need to be retried and which do not.

> Maybe a switch such that we do not explicitly set DNR for Data Transfer
> Error (and any other errors we identify), but only if we set it earlier
> in the stack.

Agreed.
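
Something along these lines, perhaps; the enum names may not match
include/block/nvme.h exactly, so treat it purely as a sketch:

    /* Sketch: only OR in DNR for statuses that a retry cannot help. */
    static bool nvme_status_is_retryable(uint16_t status)
    {
        switch (status & 0x7ff) {      /* SC + SCT, without CRD/More/DNR */
        case NVME_DATA_TRAS_ERROR:     /* Data Transfer Error */
        case NVME_COMMAND_INTERRUPTED: /* Command Interrupted */
            return true;
        default:
            return false;
        }
    }

    /* in nvme_enqueue_req_completion(): */
    if (req->status != NVME_SUCCESS &&
        !nvme_status_is_retryable(req->status)) {
        req->status |= NVME_DNR;
    }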


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-11  3:38     ` Minwoo Im
@ 2021-02-11  4:24       ` Keith Busch
  2021-02-11  4:40         ` Warner Losh
                           ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Keith Busch @ 2021-02-11  4:24 UTC (permalink / raw)
  To: Minwoo Im
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Klaus Jensen, Max Reitz

On Thu, Feb 11, 2021 at 12:38:48PM +0900, Minwoo Im wrote:
> On 21-02-11 12:00:11, Keith Busch wrote:
> > But I would prefer to see advanced retry tied to real errors that can be
> > retried, like if we got an EBUSY or EAGAIN errno or something like that.
> 
> I have seen a thread [1] about ACRE.  Forgive me If I misunderstood this
> thread or missed something after this thread.  It looks like CRD field in
> the CQE can be set for any NVMe error state which means it *may* depend on
> the device status.

Right! Setting CRD values is at the controller's discretion for any
error status as long as the host enables ACRE.
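
To make that concrete, and strictly per my reading of the spec rather
than of this series: the controller points the host at one of CRDT1-3
through the CRD field (bits 12:11 of the status field), and an
ACRE-enabled host converts that back into a delay.  A rough sketch:

    /* Controller side (cf. the completion-path hunk in patch 3/3): pick
     * CRDT1 by setting CRD -- bits 12:11 of the status field -- to 1.
     */
    if (cq->ctrl->features.acre && nvme_should_retry(req) &&
        cq->ctrl->params.cmd_retry_delay > 0) {
        req->status |= 1 << 11;                  /* CRD = 1 -> use CRDT1 */
    }

    /* Host side, roughly, once ACRE has been enabled (status is the CQE
     * status field with the phase bit already stripped):
     */
    uint16_t crd = (status >> 11) & 0x3;
    unsigned delay_ms = crd ? crdt[crd - 1] * 100 : 0;  /* 100 ms units */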

> And this patch just introduced a internal temporarily error state of
> the controller by returning Command Intrrupted status.

It's just purely synthetic, though. I was hoping something more natural
could trigger the status. That might not provide the deterministic
scenario you're looking for, though.

I'm not completely against using QEMU as a development/test vehicle for
corner cases like this, but we are introducing a whole lot of knobs
recently, and you practically need to be a QEMU developer to even find
them. We probably should step up the documentation in the wiki along
with these types of features.
 
> I think, in this stage, we can go with some errors in the middle of the
> AIO (nvme_aio_err()) for advanced retry.  Shouldn't AIO errors are
> retry-able and supposed to be retried ?

Sure, we can assume that receiving an error in the AIO callback means
the lower layers exhausted available recovery mechanisms.
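
A sketch of what that wiring could look like in nvme_aio_err(), assuming
EAGAIN/EBUSY are the ones worth retrying and with a made-up helper name:

    /* Sketch: transient host-side errors become a retryable status
     * (no DNR); everything else stays non-retryable.
     */
    static uint16_t nvme_errno_to_status(int err)
    {
        switch (err) {
        case EAGAIN:
        case EBUSY:
            return NVME_COMMAND_INTERRUPTED;     /* host may retry */
        default:
            return NVME_INTERNAL_DEV_ERROR | NVME_DNR;
        }
    }

The CRD handling from patch 2/3 would then decide whether that retry is
delayed or immediate.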


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-11  4:24       ` Keith Busch
@ 2021-02-11  4:40         ` Warner Losh
  2021-02-11  6:06         ` Minwoo Im
  2021-02-11  7:26         ` Klaus Jensen
  2 siblings, 0 replies; 16+ messages in thread
From: Warner Losh @ 2021-02-11  4:40 UTC (permalink / raw)
  To: Keith Busch
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Minwoo Im, Klaus Jensen, Max Reitz

[-- Attachment #1: Type: text/plain, Size: 1837 bytes --]

On Wed, Feb 10, 2021, 9:26 PM Keith Busch <kbusch@kernel.org> wrote:

> On Thu, Feb 11, 2021 at 12:38:48PM +0900, Minwoo Im wrote:
> > On 21-02-11 12:00:11, Keith Busch wrote:
> > > But I would prefer to see advanced retry tied to real errors that can be
> > > retried, like if we got an EBUSY or EAGAIN errno or something like that.
> >
> > I have seen a thread [1] about ACRE.  Forgive me If I misunderstood this
> > thread or missed something after this thread.  It looks like CRD field in
> > the CQE can be set for any NVMe error state which means it *may* depend on
> > the device status.
>
> Right! Setting CRD values is at the controller's discretion for any
> error status as long as the host enables ACRE.
>
> > And this patch just introduced a internal temporarily error state of
> > the controller by returning Command Intrrupted status.
>
> It's just purely synthetic, though. I was hoping something more natural
> could trigger the status. That might not provide the deterministic
> scenario you're looking for, though.
>
> I'm not completely against using QEMU as a development/test vehicle for
> corner cases like this, but we are introducing a whole lot of knobs
> recently, and you practically need to be a QEMU developer to even find
> them. We probably should step up the documentation in the wiki along
> with these types of features.
>

I'd love that too... I need to test FreeBSD's nvme driver for different
error conditions. I know qemu can help, but it's a bit obscure.

Warner

> > I think, in this stage, we can go with some errors in the middle of the
> > AIO (nvme_aio_err()) for advanced retry.  Shouldn't AIO errors are
> > retry-able and supposed to be retried ?
>
> Sure, we can assume that receiving an error in the AIO callback means
> the lower layers exhausted available recovery mechanisms.
>
>

[-- Attachment #2: Type: text/html, Size: 2577 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-11  4:24       ` Keith Busch
  2021-02-11  4:40         ` Warner Losh
@ 2021-02-11  6:06         ` Minwoo Im
  2021-02-11  7:26         ` Klaus Jensen
  2 siblings, 0 replies; 16+ messages in thread
From: Minwoo Im @ 2021-02-11  6:06 UTC (permalink / raw)
  To: Keith Busch
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Klaus Jensen, Max Reitz

On 21-02-11 13:24:22, Keith Busch wrote:
> On Thu, Feb 11, 2021 at 12:38:48PM +0900, Minwoo Im wrote:
> > On 21-02-11 12:00:11, Keith Busch wrote:
> > > But I would prefer to see advanced retry tied to real errors that can be
> > > retried, like if we got an EBUSY or EAGAIN errno or something like that.
> > 
> > I have seen a thread [1] about ACRE.  Forgive me If I misunderstood this
> > thread or missed something after this thread.  It looks like CRD field in
> > the CQE can be set for any NVMe error state which means it *may* depend on
> > the device status.
> 
> Right! Setting CRD values is at the controller's discretion for any
> error status as long as the host enables ACRE.
> 
> > And this patch just introduced a internal temporarily error state of
> > the controller by returning Command Intrrupted status.
> 
> It's just purely synthetic, though. I was hoping something more natural
> could trigger the status. That might not provide the deterministic
> scenario you're looking for, though.

That makes sense.  If some status can be triggered more naturally, that
would be much better.

> I'm not completely against using QEMU as a development/test vehicle for
> corner cases like this, but we are introducing a whole lot of knobs
> recently, and you practically need to be a QEMU developer to even find
> them. We probably should step up the documentation in the wiki along
> with these types of features.

Oh, that's really good advice, I really appreciate it.

> > I think, in this stage, we can go with some errors in the middle of the
> > AIO (nvme_aio_err()) for advanced retry.  Shouldn't AIO errors are
> > retry-able and supposed to be retried ?
> 
> Sure, we can assume that receiving an error in the AIO callback means
> the lower layers exhausted available recovery mechanisms.

Okay, please let me find a way to trigger this kind of error more
naturally.  I think this HMP command should be the last resort, if
there's really nothing else we can do.
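
One candidate for a natural trigger is the blkdebug block driver
underneath the namespace image: it fails the I/O at the block layer, so
the error surfaces in the AIO callback without any NVMe-specific knob.
Roughly like this (untested, from memory of the blkdebug documentation):

  # blkdebug.conf -- fail read AIO with EIO (errno 5)
  [inject-error]
  event = "read_aio"
  errno = "5"

  # attach the image through blkdebug and wire it to the nvme device
  -drive if=none,id=nvm,format=raw,file=blkdebug:blkdebug.conf:nvme.img
  -device nvme,id=nvme0,drive=nvm,cmd-retry-delay=1000,...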


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-11  4:24       ` Keith Busch
  2021-02-11  4:40         ` Warner Losh
  2021-02-11  6:06         ` Minwoo Im
@ 2021-02-11  7:26         ` Klaus Jensen
  2 siblings, 0 replies; 16+ messages in thread
From: Klaus Jensen @ 2021-02-11  7:26 UTC (permalink / raw)
  To: Keith Busch
  Cc: Kevin Wolf, qemu-block, Dr . David Alan Gilbert, qemu-devel,
	Minwoo Im, Max Reitz

[-- Attachment #1: Type: text/plain, Size: 1781 bytes --]

On Feb 11 13:24, Keith Busch wrote:
> On Thu, Feb 11, 2021 at 12:38:48PM +0900, Minwoo Im wrote:
> > On 21-02-11 12:00:11, Keith Busch wrote:
> > > But I would prefer to see advanced retry tied to real errors that can be
> > > retried, like if we got an EBUSY or EAGAIN errno or something like that.
> > 
> > I have seen a thread [1] about ACRE.  Forgive me If I misunderstood this
> > thread or missed something after this thread.  It looks like CRD field in
> > the CQE can be set for any NVMe error state which means it *may* depend on
> > the device status.
> 
> Right! Setting CRD values is at the controller's discretion for any
> error status as long as the host enables ACRE.
> 
> > And this patch just introduced a internal temporarily error state of
> > the controller by returning Command Intrrupted status.
> 
> It's just purely synthetic, though. I was hoping something more natural
> could trigger the status. That might not provide the deterministic
> scenario you're looking for, though.
> 
> I'm not completely against using QEMU as a development/test vehicle for
> corner cases like this, but we are introducing a whole lot of knobs
> recently, and you practically need to be a QEMU developer to even find
> them. We probably should step up the documentation in the wiki along
> with these types of features.
>  

Understood, I'll make docs/specs/nvme.txt and wiki documentation a
priority for 6.0.
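
Once that document exists, the entry for this series could be as small as
something like (wording invented here just to show the shape):

  cmd-retry-delay=<ms>
      Command retry delay in milliseconds, advertised to the host as
      CRDT1 in 100 ms units.  -1 (default) leaves CRDT1 at 0 and keeps
      setting DNR on errors as before; a positive value enables ACRE.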

> > I think, in this stage, we can go with some errors in the middle of the
> > AIO (nvme_aio_err()) for advanced retry.  Shouldn't AIO errors are
> > retry-able and supposed to be retried ?
> 
> Sure, we can assume that receiving an error in the AIO callback means
> the lower layers exhausted available recovery mechanisms.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
  2021-02-10 20:33   ` Klaus Jensen
  2021-02-11  3:23     ` Minwoo Im
@ 2021-02-11 10:02     ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 16+ messages in thread
From: Dr. David Alan Gilbert @ 2021-02-11 10:02 UTC (permalink / raw)
  To: Klaus Jensen
  Cc: Kevin Wolf, qemu-block, qemu-devel, Max Reitz, Minwoo Im, Keith Busch

* Klaus Jensen (its@irrelevant.dk) wrote:
> On Feb 11 04:52, Minwoo Im wrote:
> > nvme_inject_state command is to give a controller state to be.
> > Human Monitor Interface(HMP) supports users to make controller to a
> > specified state of:
> > 
> > 	normal:			Normal state (no injection)
> > 	cmd-interrupted:	Commands will be interrupted internally
> > 
> > This patch is just a start to give dynamic command from the HMP to the
> > QEMU NVMe device model.  If "cmd-interrupted" state is given, then the
> > controller will return all the CQ entries with Command Interrupts status
> > code.
> > 
> > Usage:
> > 	-device nvme,id=nvme0,....
> > 
> > 	(qemu) nvme_inject_state nvme0 cmd-interrupted
> > 
> > 	<All the commands will be interrupted internally>
> > 
> > 	(qemu) nvme_inject_state nvme0 normal
> > 
> > This feature is required to test Linux kernel NVMe driver for the
> > command retry feature.
> > 
> 
> This is super cool and commands like this feel much nicer than the
> qom-set approach that the SMART critical warning feature took.
> 
> But... looking at the existing commands I don't think we can "bloat" it
> up with a device specific command like this, but I don't know the policy
> around this.
> 
> If an HMP command is out, then we should be able to make do with the
> qom-set approach just fine though.

HMP is there to make life easy for humans debugging; if it makes sense from an
NVMe perspective for test/debug then I'm OK with it from the HMP side.
Note that if it were for more common use than debug/test then you'd want
to make it go via QMP and make sure it was a stable interface that was
going to live for a long time; but for test uses HMP is fine.
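
If it ever did need to graduate to a stable interface, the QMP side would
presumably start from a QAPI definition along these lines (purely
hypothetical, with an x- prefix to mark it unstable):

  ##
  # @x-nvme-inject-state:
  #
  # @id: id of the nvme device
  # @state: state to inject, "normal" or "cmd-interrupted"
  ##
  { 'command': 'x-nvme-inject-state',
    'data': { 'id': 'str', 'state': 'str' } }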

Dave

> > Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
> > ---
> >  hmp-commands.hx       | 13 ++++++++++++
> >  hw/block/nvme.c       | 49 +++++++++++++++++++++++++++++++++++++++++++
> >  hw/block/nvme.h       |  8 +++++++
> >  include/monitor/hmp.h |  1 +
> >  4 files changed, 71 insertions(+)
> > 
> > diff --git a/hmp-commands.hx b/hmp-commands.hx
> > index d4001f9c5dc6..ef288c567b46 100644
> > --- a/hmp-commands.hx
> > +++ b/hmp-commands.hx
> > @@ -1307,6 +1307,19 @@ SRST
> >    Inject PCIe AER error
> >  ERST
> >  
> > +    {
> > +        .name       = "nvme_inject_state",
> > +        .args_type  = "id:s,state:s",
> > +        .params     = "id [normal|cmd-interrupted]",
> > +        .help       = "inject controller/namespace state",
> > +        .cmd        = hmp_nvme_inject_state,
> > +    },
> > +
> > +SRST
> > +``nvme_inject_state``
> > +  Inject NVMe controller/namespace state
> > +ERST
> > +
> >      {
> >          .name       = "netdev_add",
> >          .args_type  = "netdev:O",
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 6d3c554a0e99..42cf5bd113e6 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -123,6 +123,7 @@
> >  #include "sysemu/sysemu.h"
> >  #include "qapi/error.h"
> >  #include "qapi/visitor.h"
> > +#include "qapi/qmp/qdict.h"
> >  #include "sysemu/hostmem.h"
> >  #include "sysemu/block-backend.h"
> >  #include "exec/memory.h"
> > @@ -132,6 +133,7 @@
> >  #include "trace.h"
> >  #include "nvme.h"
> >  #include "nvme-ns.h"
> > +#include "monitor/monitor.h"
> >  
> >  #define NVME_MAX_IOQPAIRS 0xffff
> >  #define NVME_DB_SIZE  4
> > @@ -966,6 +968,14 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
> >  {
> >      assert(cq->cqid == req->sq->cqid);
> >  
> > +    /*
> > +     * Override request status field if controller state has been injected by
> > +     * the QMP.
> > +     */
> > +    if (cq->ctrl->state == NVME_STATE_CMD_INTERRUPTED) {
> > +        req->status = NVME_COMMAND_INTERRUPTED;
> > +    }
> > +
> >      if (req->status != NVME_SUCCESS) {
> >          if (cq->ctrl->features.acre && nvme_should_retry(req)) {
> >              if (cq->ctrl->params.cmd_retry_delay > 0) {
> > @@ -5025,4 +5035,43 @@ static void nvme_register_types(void)
> >      type_register_static(&nvme_bus_info);
> >  }
> >  
> > +static void nvme_inject_state(NvmeCtrl *n, NvmeState state)
> > +{
> > +    n->state = state;
> > +}
> > +
> > +static const char *nvme_states[] = {
> > +    [NVME_STATE_NORMAL]             = "normal",
> > +    [NVME_STATE_CMD_INTERRUPTED]    = "cmd-interrupted",
> > +};
> > +
> > +void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict)
> > +{
> > +    const char *id = qdict_get_str(qdict, "id");
> > +    const char *state = qdict_get_str(qdict, "state");
> > +    PCIDevice *dev;
> > +    NvmeCtrl *n;
> > +    int ret, i;
> > +
> > +    ret = pci_qdev_find_device(id, &dev);
> > +    if (ret < 0) {
> > +        monitor_printf(mon, "invalid device id %s\n", id);
> > +        return;
> > +    }
> > +
> > +    n = NVME(dev);
> > +
> > +    for (i = 0; i < ARRAY_SIZE(nvme_states); i++) {
> > +        if (!strcmp(nvme_states[i], state)) {
> > +            nvme_inject_state(n, i);
> > +            monitor_printf(mon,
> > +                           "-device nvme,id=%s: state %s injected\n",
> > +                           id, state);
> > +            return;
> > +        }
> > +    }
> > +
> > +    monitor_printf(mon, "invalid state %s\n", state);
> > +}
> > +
> >  type_init(nvme_register_types)
> > diff --git a/hw/block/nvme.h b/hw/block/nvme.h
> > index 37940b3ac2d2..1af1e0380d9b 100644
> > --- a/hw/block/nvme.h
> > +++ b/hw/block/nvme.h
> > @@ -128,6 +128,11 @@ typedef struct NvmeFeatureVal {
> >      uint8_t     acre;
> >  } NvmeFeatureVal;
> >  
> > +typedef enum NvmeState {
> > +    NVME_STATE_NORMAL,
> > +    NVME_STATE_CMD_INTERRUPTED,
> > +} NvmeState;
> > +
> >  typedef struct NvmeCtrl {
> >      PCIDevice    parent_obj;
> >      MemoryRegion bar0;
> > @@ -185,6 +190,8 @@ typedef struct NvmeCtrl {
> >      NvmeCQueue      admin_cq;
> >      NvmeIdCtrl      id_ctrl;
> >      NvmeFeatureVal  features;
> > +
> > +    NvmeState       state;
> >  } NvmeCtrl;
> >  
> >  static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid)
> > @@ -212,4 +219,5 @@ static inline NvmeCtrl *nvme_ctrl(NvmeRequest *req)
> >  
> >  int nvme_register_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp);
> >  
> > +void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
> >  #endif /* HW_NVME_H */
> > diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
> > index ed2913fd18e8..668384ea2e34 100644
> > --- a/include/monitor/hmp.h
> > +++ b/include/monitor/hmp.h
> > @@ -79,6 +79,7 @@ void hmp_migrate(Monitor *mon, const QDict *qdict);
> >  void hmp_device_add(Monitor *mon, const QDict *qdict);
> >  void hmp_device_del(Monitor *mon, const QDict *qdict);
> >  void hmp_dump_guest_memory(Monitor *mon, const QDict *qdict);
> > +void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
> >  void hmp_netdev_add(Monitor *mon, const QDict *qdict);
> >  void hmp_netdev_del(Monitor *mon, const QDict *qdict);
> >  void hmp_getfd(Monitor *mon, const QDict *qdict);
> > -- 
> > 2.17.1
> > 
> 
> -- 
> One of us - No more doubt, silence or taboo about mental illness.


-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-02-11 10:04 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-10 19:52 [RFC PATCH 0/3] support command retry Minwoo Im
2021-02-10 19:52 ` [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place Minwoo Im
2021-02-10 20:19   ` Klaus Jensen
2021-02-11  3:40     ` Minwoo Im
2021-02-10 19:52 ` [RFC PATCH 2/3] hw/block/nvme: support command retry delay Minwoo Im
2021-02-10 20:37   ` Klaus Jensen
2021-02-10 19:52 ` [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command Minwoo Im
2021-02-10 20:33   ` Klaus Jensen
2021-02-11  3:23     ` Minwoo Im
2021-02-11 10:02     ` Dr. David Alan Gilbert
2021-02-11  3:00   ` Keith Busch
2021-02-11  3:38     ` Minwoo Im
2021-02-11  4:24       ` Keith Busch
2021-02-11  4:40         ` Warner Losh
2021-02-11  6:06         ` Minwoo Im
2021-02-11  7:26         ` Klaus Jensen
