From: Bart Van Assche <bvanassche@acm.org>
To: Sagi Grimberg <sagi@grimberg.me>, Keith Busch <kbusch@kernel.org>
Cc: linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
	Daniel Wagner <dwagner@suse.de>
Subject: Re: [PATCH 3/3] nvme: code command_id with a genctr for use-after-free validation
Date: Mon, 17 May 2021 15:06:58 -0700	[thread overview]
Message-ID: <decf0b41-fa05-410c-80ab-b7f8de5618a9@acm.org> (raw)
In-Reply-To: <6474b916-9911-ba34-bd35-e5b5c7d11ac5@grimberg.me>

On 5/17/21 2:50 PM, Sagi Grimberg wrote:
> 
>>> On Mon, May 17, 2021 at 12:09:46PM -0700, Bart Van Assche wrote:
>>>> Additionally, I do not agree with the statement "we never create such
>>>> long queues anyways". I have already done this myself.
>>>
>>> Why? That won't improve bandwidth, and will increase latency. We already
>>> have timeout problems with the current default 1k qdepth on some
>>> devices.
>>
>> For testing FPGA or ASIC solutions that support offloading NVMe, it
>> is more convenient to use a single queue pair with a high queue depth
>> than to create multiple queue pairs that each have a lower queue
>> depth.
> 
> And you actually see a benefit from using queues that are >= 4096 in
> depth? That is surprising to me...

Hi Sagi,

It seems there is a misunderstanding. I'm not aware of any use case
where very high queue depths provide a performance benefit. Such high
queue depths are needed to verify an NVMe controller implementation
that maintains state per NVMe command, and to verify that the
controller pauses fetching new commands once its internal command
buffer is full.

Bart.
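For readers following the thread: the encoding under discussion packs a
small generation counter into the 16-bit command_id alongside the blk-mq
tag; patch 2/3 capping the queue depth at 4095 suggests 12 bits for the
tag, leaving 4 bits for the counter. Below is a minimal sketch of that
idea under those assumptions; the names are illustrative and not
necessarily those used in the actual patch.

	#include <stdint.h>

	/* Hypothetical split: low 12 bits tag, top 4 bits genctr. */
	#define CID_TAG_BITS	12
	#define CID_TAG_MASK	((1u << CID_TAG_BITS) - 1)	/* 0x0fff */
	#define CID_GEN_MASK	0xfu				/* 4-bit genctr */

	/* Build a command_id from the request tag and the recorded genctr. */
	static inline uint16_t cid_encode(uint16_t tag, uint8_t genctr)
	{
		return ((genctr & CID_GEN_MASK) << CID_TAG_BITS) |
		       (tag & CID_TAG_MASK);
	}

	/* Recover the blk-mq tag used to look up the request on completion. */
	static inline uint16_t cid_to_tag(uint16_t cid)
	{
		return cid & CID_TAG_MASK;
	}

	/*
	 * A completion whose command_id carries a generation counter that
	 * does not match the one recorded for the request indicates a stale
	 * or corrupted command_id (a possible use-after-free) and can be
	 * rejected instead of touching the request.
	 */
	static inline int cid_gen_matches(uint16_t cid, uint8_t recorded_genctr)
	{
		return (cid >> CID_TAG_BITS) == (recorded_genctr & CID_GEN_MASK);
	}

With such a split the command_id space visible to the controller is
limited to 4096 tags per queue, which is where the tension with the very
deep test queues discussed above comes from.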



Thread overview: 16+ messages
2021-05-17 17:59 [PATCH 0/3] nvme: protect against possible request reference after completion Sagi Grimberg
2021-05-17 17:59 ` [PATCH 1/3] nvme-tcp: don't check blk_mq_tag_to_rq when receiving pdu data Sagi Grimberg
2021-05-18  6:59   ` Christoph Hellwig
2021-05-17 17:59 ` [PATCH 2/3] nvme-pci: limit maximum queue depth to 4095 Sagi Grimberg
2021-05-18  7:01   ` Christoph Hellwig
2021-05-17 17:59 ` [PATCH 3/3] nvme: code command_id with a genctr for use-after-free validation Sagi Grimberg
2021-05-17 19:04   ` Keith Busch
2021-05-17 20:23     ` Sagi Grimberg
2021-05-17 19:09   ` Bart Van Assche
2021-05-17 19:46     ` Keith Busch
2021-05-17 20:27       ` Sagi Grimberg
2021-05-17 20:28       ` Bart Van Assche
2021-05-17 21:50         ` Sagi Grimberg
2021-05-17 22:06           ` Bart Van Assche [this message]
2021-05-17 22:15             ` Sagi Grimberg
2021-05-17 18:47 ` [PATCH 0/3] nvme: protect against possible request reference after completion Keith Busch
