linux-nvme.lists.infradead.org archive mirror
From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>, Keith Busch <kbusch@kernel.org>
Cc: "Ewan D. Milne" <emilne@redhat.com>,
	Daniel Wagner <dwagner@suse.de>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	Jens Axboe <axboe@fb.com>, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v2] nvme-tcp: Check if request has started before processing it
Date: Tue, 11 May 2021 11:16:10 -0700	[thread overview]
Message-ID: <1989b8fe-7ef2-2145-75c5-5e938f74014c@grimberg.me> (raw)
In-Reply-To: <8a396f94-ac33-6bea-8d70-ded0188eb98a@suse.de>



On 5/9/21 4:30 AM, Hannes Reinecke wrote:
> On 5/8/21 1:22 AM, Sagi Grimberg wrote:
>>
>>>>> Well, that would require a modification to the CQE specification, no?
>>>>> fmds was not amused when I proposed that :-(
>>>>
>>>> Why would that require a modification to the CQE? It's just using, say,
>>>> the 4 most significant bits of the command_id as a running sequence...
>>>
>>> I think Hannes was under the impression that the counter proposal wasn't
>>> part of the "command_id". The host can encode whatever it wants in that
>>> value, and the controller just has to return the same value.
>>
>> Yea, maybe something like this?
>> -- 
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index e6612971f4eb..7af48827ea56 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -1006,7 +1006,7 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
>>                 return BLK_STS_IOERR;
>>         }
>>
>> -       cmd->common.command_id = req->tag;
>> +       cmd->common.command_id = nvme_cid(req);
>>         trace_nvme_setup_cmd(req, cmd);
>>         return ret;
>> }
>> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
>> index 05f31a2c64bb..96abfb0e2ddd 100644
>> --- a/drivers/nvme/host/nvme.h
>> +++ b/drivers/nvme/host/nvme.h
>> @@ -158,6 +158,7 @@ enum nvme_quirks {
>> struct nvme_request {
>>         struct nvme_command     *cmd;
>>         union nvme_result       result;
>> +       u8                      genctr;
>>         u8                      retries;
>>         u8                      flags;
>>         u16                     status;
>> @@ -497,6 +498,48 @@ struct nvme_ctrl_ops {
>>         int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
>> };
>>
>> +/*
>> + * nvme command_id is constructed as such:
>> + * | xxxx | xxxxxxxxxxxx |
>> + *   gen    request tag
>> + */
>> +#define nvme_cid_install_genctr(gen)           (((gen) & 0xf) << 12)
>> +#define nvme_genctr_from_cid(cid)              (((cid) & 0xf000) >> 12)
>> +#define nvme_tag_from_cid(cid)                 ((cid) & 0xfff)
>> +
> 
> That is a good idea, but we should also make sure to limit the number of 
> commands a controller can request.

We already take the minimum of what the host asks for and what the
controller supports anyway.
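
To spell out the idea, here is a rough userspace mock-up of the encode/check
round-trip (illustration only; the completion-side check and any helper beyond
nvme_cid() are my own sketch, not part of the patch above):
--
#include <stdint.h>
#include <stdio.h>

/*
 * Proposed command_id layout:
 * | xxxx | xxxxxxxxxxxx |
 *   gen    request tag
 */
#define NVME_CID_GEN_SHIFT	12
#define NVME_CID_GEN_MASK	0xf
#define NVME_CID_TAG_MASK	0xfff

struct mock_request {
	uint16_t tag;		/* blk-mq tag, reused across submissions */
	uint8_t  genctr;	/* bumped on every reuse of this tag */
};

/* Submission side: build the wire command_id from tag + generation. */
static uint16_t nvme_cid(struct mock_request *rq)
{
	rq->genctr++;
	return ((rq->genctr & NVME_CID_GEN_MASK) << NVME_CID_GEN_SHIFT) |
	       (rq->tag & NVME_CID_TAG_MASK);
}

/* Completion side: the tag locates the request, the gen bits validate it. */
static int nvme_cid_is_current(const struct mock_request *rq, uint16_t cid)
{
	return ((cid >> NVME_CID_GEN_SHIFT) & NVME_CID_GEN_MASK) ==
	       (rq->genctr & NVME_CID_GEN_MASK);
}

int main(void)
{
	struct mock_request rq = { .tag = 42, .genctr = 0 };
	uint16_t stale_cid = nvme_cid(&rq);	/* first submission */

	nvme_cid(&rq);	/* tag reused for a second submission */

	/* A late or spurious completion carrying the old cid is rejected. */
	printf("stale completion accepted? %s\n",
	       nvme_cid_is_current(&rq, stale_cid) ? "yes" : "no");
	return 0;
}
--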

> As per the spec, each controller can support a full 32 bits' worth of 
> requests, and if we limit that arbitrarily from the stack we'll need to cap 
> the number of requests a controller or fabrics driver can request.

NVMF_MAX_QUEUE_SIZE is already 1024; you are right that we also need:
--
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 92e03f15c9f6..66a4a7f7c504 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -60,6 +60,7 @@ MODULE_PARM_DESC(sgl_threshold,
                 "Use SGLs when average request segment size is larger 
or equal to "
                 "this size. Use 0 to disable SGLs.");

+#define NVME_PCI_MAX_QUEUE_SIZE 4096
  static int io_queue_depth_set(const char *val, const struct kernel_param *kp);
  static const struct kernel_param_ops io_queue_depth_ops = {
         .set = io_queue_depth_set,
@@ -68,7 +69,7 @@ static const struct kernel_param_ops io_queue_depth_ops = {

  static unsigned int io_queue_depth = 1024;
  module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
-MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
+MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2 and <= 4096");

  static int io_queue_count_set(const char *val, const struct kernel_param *kp)
  {
@@ -164,6 +165,9 @@ static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
         if (ret != 0 || n < 2)
                 return -EINVAL;

+       if (n > NVME_PCI_MAX_QUEUE_SIZE)
+               return -EINVAL;
+
         return param_set_uint(val, kp);
  }

--
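
For what it's worth, the 4096 cap is exactly what the 12-bit tag field of the
command_id sketch can address; a standalone illustration (not kernel code, the
macro names are placeholders):
--
#include <assert.h>
#include <stdio.h>

#define NVME_PCI_MAX_QUEUE_SIZE	4096	/* proposed PCI cap */
#define NVME_CID_TAG_BITS	12	/* tag bits in the command_id layout */

/* Compile-time check that the cap fits in the tag field. */
static_assert(NVME_PCI_MAX_QUEUE_SIZE <= (1 << NVME_CID_TAG_BITS),
	      "io_queue_depth cap must fit in the command_id tag bits");

int main(void)
{
	printf("max tags addressable by %d tag bits: %d\n",
	       NVME_CID_TAG_BITS, 1 << NVME_CID_TAG_BITS);
	return 0;
}
--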



Thread overview: 21+ messages
2021-03-01 17:56 [PATCH v2] nvme-tcp: Check if request has started before processing it Daniel Wagner
2021-03-05 19:57 ` Sagi Grimberg
2021-03-11  9:43   ` Daniel Wagner
2021-03-15 17:16     ` Sagi Grimberg
2021-03-30 16:19       ` Ewan D. Milne
2021-03-30 17:34         ` Sagi Grimberg
2021-03-30 23:28           ` Keith Busch
2021-03-31  7:11             ` Hannes Reinecke
2021-03-31 21:01               ` Ewan D. Milne
2021-03-31 22:24                 ` Sagi Grimberg
2021-04-01  6:20                   ` Christoph Hellwig
2021-04-01  8:25                     ` Sagi Grimberg
2021-03-31 22:37             ` Sagi Grimberg
2021-05-06 15:36               ` Hannes Reinecke
2021-05-07 20:26                 ` Sagi Grimberg
2021-05-07 20:40                   ` Keith Busch
2021-05-07 23:22                     ` Sagi Grimberg
2021-05-08  0:03                       ` Keith Busch
2021-05-09 11:30                       ` Hannes Reinecke
2021-05-11 18:16                         ` Sagi Grimberg [this message]
2021-05-17 14:58                       ` Daniel Wagner
