From: Keith Busch <kbusch@kernel.org>
To: Tokunori Ikegami <ikegami.t@gmail.com>
Cc: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [PATCH] block, nvme: Increase max segments parameter setting value
Date: Tue, 24 Mar 2020 09:02:37 +0900
Message-ID: <20200324000237.GB15091@redsun51.ssa.fujisawa.hgst.com>
In-Reply-To: <cff52955-e55c-068a-44a6-8ed4edc0696f@gmail.com>

On Tue, Mar 24, 2020 at 08:09:19AM +0900, Tokunori Ikegami wrote:
> Hi,
> > The change looks okay, but why do we need such a large data length ?
> >
> > Do you have a use-case or performance numbers ?
>
> We use the large data length to get log page by the NVMe admin command.
> In the past it was able to get with the same length but failed currently
> with it.
>
> So it seems that depended on the kernel version as caused by the version up.

We didn't have 32-bit max segments before, though. Why were 16 bits
enough in older kernels? In which kernel did this stop working?

> Also I have confirmed that currently failed with the length 0x10000000
> 256MB.

If you're hitting the max segment limit before any other limit, you
should be able to do larger transfers with more physically contiguous
memory. Huge pages can carry the same data length in fewer segments, if
you want to try that.

But wouldn't it be better if your application split the transfer into
smaller chunks across multiple commands? The NVMe Get Log Page command
supports offsets for exactly this reason.
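[Editor's note: the chunked approach suggested above can be sketched in
user space with the standard NVMe admin passthrough ioctl. This is a
hedged illustration, not part of the patch under discussion: the opcode
(0x02) and the NUMD/LPOL/LPOU dword layout come from the NVMe Get Log
Page definition, `NVME_IOCTL_ADMIN_CMD` and `struct nvme_admin_cmd` are
the kernel's passthrough interface from `<linux/nvme_ioctl.h>`, while
the 1 MiB chunk size, the broadcast NSID, and the caller-supplied log ID
are illustrative assumptions.]

```c
/* Sketch: fetch a large log page in chunks via Log Page Offset fields
 * (LPOL in CDW12, LPOU in CDW13) rather than one huge transfer. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

/* Pack Get Log Page command dwords: NUMD is a 0-based dword count split
 * across CDW10 (NUMDL) and CDW11 (NUMDU); the byte offset is split
 * across CDW12 (LPOL) and CDW13 (LPOU). */
static void fill_get_log_cdws(uint8_t lid, uint32_t xfer_bytes,
                              uint64_t offset,
                              uint32_t *cdw10, uint32_t *cdw11,
                              uint32_t *cdw12, uint32_t *cdw13)
{
    uint32_t numd = (xfer_bytes >> 2) - 1;   /* 0-based dword count */

    *cdw10 = (uint32_t)lid | ((numd & 0xffff) << 16);
    *cdw11 = numd >> 16;
    *cdw12 = (uint32_t)offset;               /* LPOL: offset bits 31:0 */
    *cdw13 = (uint32_t)(offset >> 32);       /* LPOU: offset bits 63:32 */
}

/* Fetch 'len' bytes of log page 'lid' in 1 MiB pieces; 'fd' is an open
 * /dev/nvmeX controller character device. */
static int get_log_chunked(int fd, uint8_t lid, void *buf, uint64_t len)
{
    const uint32_t chunk = 1u << 20;         /* 1 MiB per command */

    for (uint64_t off = 0; off < len; off += chunk) {
        uint32_t xfer = (len - off < chunk) ? (uint32_t)(len - off) : chunk;
        struct nvme_admin_cmd cmd;

        memset(&cmd, 0, sizeof(cmd));
        cmd.opcode   = 0x02;                 /* Get Log Page */
        cmd.nsid     = 0xffffffff;           /* controller-scoped log */
        cmd.addr     = (uint64_t)(uintptr_t)((char *)buf + off);
        cmd.data_len = xfer;
        fill_get_log_cdws(lid, xfer, off,
                          &cmd.cdw10, &cmd.cdw11, &cmd.cdw12, &cmd.cdw13);

        if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0)
            return -1;                       /* errno set by ioctl */
    }
    return 0;
}
```

Each command then carries at most 1 MiB of data, comfortably below any
max-segments or max-transfer limit, at the cost of a few extra admin
commands per log read.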
Thread overview (cross-posted duplicates deduplicated):
  2020-03-23 18:23 [PATCH] block, nvme: Increase max segments parameter setting value — Tokunori Ikegami
  2020-03-23 19:14 ` Chaitanya Kulkarni
  2020-03-23 23:09 ` Tokunori Ikegami
  2020-03-24  0:02 ` Keith Busch [this message]
  2020-03-24 16:51 ` Tokunori Ikegami
  2020-03-27 17:50 ` Tokunori Ikegami
  2020-03-27 18:18 ` Keith Busch
  2020-03-28  2:11 ` Ming Lei
  2020-03-28  3:13 ` Keith Busch
  2020-03-28  8:28 ` Ming Lei
  2020-03-28 12:57 ` Tokunori Ikegami
  2020-03-29  3:01 ` Ming Lei
  2020-03-30  9:15 ` Tokunori Ikegami
  2020-03-30 13:53 ` Keith Busch
  2020-03-31 15:24 ` Tokunori Ikegami
  2020-03-31 14:13 ` Joshi
  2020-03-31 15:37 ` Tokunori Ikegami
  2020-03-24  7:16 ` Hannes Reinecke
  2020-03-24 17:17 ` Tokunori Ikegami