From: Bjorn Helgaas <helgaas@kernel.org>
To: Radjendirane Codandaramane <radjendirane.codanda@microsemi.com>
Cc: Ron Yuan <ron.yuan@memblaze.com>,
	Sinan Kaya <okaya@codeaurora.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Bo Chen <bo.chen@memblaze.com>,
	William Huang <william.huang@memblaze.com>,
	Fengming Wu <fengming.wu@memblaze.com>,
	Jason Jiang <jason.jiang@microsemi.com>,
	Ramyakanth Edupuganti <Ramyakanth.Edupuganti@microsemi.com>,
	William Cheng <william.cheng@microsemi.com>,
	"Kim Helper (khelper)" <khelper@micron.com>,
	Linux PCI <linux-pci@vger.kernel.org>
Subject: Re: One Question About PCIe BUS Config Type with pcie_bus_safe or pcie_bus_perf On NVMe Device
Date: Wed, 24 Jan 2018 12:01:18 -0600
Message-ID: <20180124180118.GJ5317@bhelgaas-glaptop.roam.corp.google.com>
In-Reply-To: <b93a4fb1ec42480596b8d1a2b525e8f1@microsemi.com>

On Tue, Jan 23, 2018 at 11:50:30PM +0000, Radjendirane Codandaramane wrote:
> Hi Bjorne,

s/Bjorne/Bjorn/

Also, the convention on Linux mailing lists is to use interleaved
reply style [1] because it makes it easier to follow the conversation.

> Ceiling the MRRS to the MPS value in order to guarantee the
> interoperability in pcie_bus_perf mode does not make sense.

I disagree with this part.  The Linux pcie_bus_perf mode isn't my
favorite approach, but given the assumptions it makes, it does make
sense.  I laid out the explanation in [2]; if you disagree with
something there, please be specific about what's wrong.

> A device can make a memrd request according to the MRRS setting
> (which can be higher than its MPS), but the completer has to respect
> the MPS and send completions accordingly. As an example, system can
> configure MPS=128B and MRRS=4K, where an endpoint can a make 4K
> MemRd request, but the completer has to send completions as 128B
> TLPs, by respecting the MPS setting. MRRS does not force a device to
> use higher MPS value than it is configured to.

This part is certainly true.  I used a very similar example in [2]:

  Consider the case where a function has MPS=256 and MRRS=1024: if the
  function generates a 1024-byte read request, it should receive 4
  completions, each with a 256-byte data payload.
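
Just to spell out that arithmetic, here's a trivial sketch (plain C,
illustrative only -- not kernel code, and the numbers are just the ones
from the example above):

  #include <stdio.h>

  int main(void)
  {
          unsigned int mps = 256;        /* Max_Payload_Size (bytes) */
          unsigned int read_size = 1024; /* one maximal read, MRRS=1024 */

          /* the completer splits the data into MPS-sized payloads */
          unsigned int completions = (read_size + mps - 1) / mps;

          printf("%u-byte read, MPS=%u -> %u completions\n",
                 read_size, mps, completions);  /* prints 4 */
          return 0;
  }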

> Another factor that need to be considered for storage devices is
> that support of T10 Protection Information (DIF). For every 512B or
> 4KB, a 8B PI is computed and inserted or verified, which require the
> 512B of data to arrive in sequence. 

MPS and MRRS are parts of the PCIe TLP protocol.  I'm not aware of any
connection between the PCIe protocol and the T10 Protection
Information/DIF stuff.  I assume the T10/DIF stuff is content being
carried by PCIe, and PCIe doesn't care at all about the content it
carries.  Can you clarify what the connection is here?

> If the MRRS is < 512B, this might pose out of order completions to
> the storage device, if the EP has to submit multiple outstanding
> read requests in order to achieve higher performance. 

If I understand correctly, this is exactly the situation described in
the implementation note in PCIe r4.0, sec 2.4.1: completions for a
single memory read request are always returned in order (Table 2-39,
entry D5b), but completions for multiple read requests may be returned
in any order because completions may pass each other (entry D5a).  The
note doesn't mention this, but I think even the multiple outstanding
read requests themselves can pass each other (entry B3).

So I agree that if the function generates multiple outstanding read
requests, the completions for those requests may come back out of
order.  For example, if the function generates req-A and req-B,
completions for req-A will be in order and completions for req-B will
be in order, but it might receive cmpl-B1, cmpl-A1, cmpl-B2, cmpl-B3,
cmpl-A2, cmpl-A3, cmpl-A4, cmpl-B4.

This reordering may happen no matter what the MRRS setting is.  The
only way to avoid it is to avoid multiple outstanding read requests.

> This would be a challenge for the storage endpoints that process the
> T10 PI inline with the transfer, now they have to store and process
> the 512B sectors once they receive all the TLPs for that sector.

Are you saying these endpoints depend on MRRS >= 512 for correct
operation?

I don't think that would be legal, per sec 7.5.3.4, which says
"the Function must not generate Read Requests with a size exceeding
the set [MRRS] value."  That doesn't leave the function any wiggle
room -- if the OS programs MRRS=128, the function has to deal with it.
There's no provision for the function to say "I can only support
MRRS > X".

But you probably mean the endpoints will function correctly even with
MRRS=128, just with poor performance.  That's certainly a concern
for all devices; reducing MRRS increases overhead, so we should use
the largest possible MRRS, subject to the constraints of any MPS/MRRS
design (like pcie_bus_perf) and concerns about bandwidth allocation
(see the implementation note in sec 7.5.3.4).
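
For reference, within that constraint a driver can already query and
raise MRRS using the existing kernel helpers -- a rough sketch (the
function name and the 4096-byte target are just illustrative):

  #include <linux/pci.h>

  static void example_tune_readrq(struct pci_dev *pdev)
  {
          int mps  = pcie_get_mps(pdev);     /* Max_Payload_Size (bytes) */
          int mrrs = pcie_get_readrq(pdev);  /* Max_Read_Request_Size (bytes) */

          dev_info(&pdev->dev, "MPS=%d MRRS=%d\n", mps, mrrs);

          /*
           * Per sec 7.5.3.4 the function must never generate reads
           * larger than MRRS, but within that rule the OS may raise
           * MRRS to reduce per-request overhead.
           */
          if (mrrs < 4096)
                  pcie_set_readrq(pdev, 4096);
  }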

> So, it is better to decouple the MRRS and MPS in pcie_bus_perf mode.
> Like stated earlier in the thread, provide an option to configure
> MRRS separately in pcie_bus_perf mode.

If you read [2] again, it will be obvious why the Linux pcie_bus_perf
mode requires the use of MRRS in addition to MPS.  At the very end of
that email and in [3], I pointed out that the current Linux
implementation is unnecessarily conservative in some cases, and there
is room for fixing that, which should improve performance in some
cases.  But given the design of pcie_bus_perf, it's impossible to
completely decouple MRRS and MPS in that mode.
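
To make the coupling concrete, the perf-mode logic boils down to
something like this (a simplified sketch of what drivers/pci/probe.c
does, not a verbatim copy):

  #include <linux/pci.h>

  /*
   * Other devices on the fabric may be running with a larger MPS than
   * this one, so clamp its MRRS to its own MPS: then no read it issues
   * can ever elicit a completion payload bigger than it can accept.
   */
  static void pcie_bus_perf_sketch(struct pci_dev *dev)
  {
          int mps = pcie_get_mps(dev);

          if (pcie_get_readrq(dev) != mps)
                  pcie_set_readrq(dev, mps);  /* MRRS follows MPS */
  }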

[1] https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
[2] https://lkml.kernel.org/r/20180119205153.GB160618@bhelgaas-glaptop.roam.corp.google.com
[3] https://lkml.kernel.org/r/20180123174821.GF5317@bhelgaas-glaptop.roam.corp.google.com
