All of lore.kernel.org
* ECRC and Max Read Request Size
@ 2015-10-28 17:51 Sinan Kaya
  2015-11-06 17:22 ` Bjorn Helgaas
  0 siblings, 1 reply; 11+ messages in thread
From: Sinan Kaya @ 2015-10-28 17:51 UTC (permalink / raw)
  To: linux-pci

I'm seeing two problems with the current PCI framework when it comes to 
end-to-end CRC (ECRC) and Max Read Request Size.

The problem with ECRC is that, with the ECRC=on option, it blindly 
enables ECRC generation on devices without checking whether the entire 
bus supports it.

ECRC checking can be enabled on all devices, but ECRC generation on the 
root complex and switches should be enabled only if all devices support 
ECRC checking.

I'm thinking of making changes in this area to cover this gap with an 
ECRC=safe option.
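
Roughly what I have in mind for ECRC=safe, sketched in Python over a 
made-up device list (the capability flags stand in for the AER "ECRC 
Check/Generation Capable" bits; the helpers and dicts are illustrative, 
not real kernel API):

```python
# Illustrative sketch of an "ECRC=safe" policy: enable ECRC checking on
# every device that can check, but enable ECRC *generation* on bridges
# (root complex, switches) only if every endpoint below can check the
# digest.  Devices here are plain dicts standing in for config-space reads.

def ecrc_safe_enables(devices):
    """devices: list of {'name', 'check_capable', 'gen_capable', 'is_bridge'}.
    Returns {name: (enable_check, enable_gen)}."""
    everyone_checks = all(d['check_capable'] for d in devices if not d['is_bridge'])
    plan = {}
    for d in devices:
        enable_check = d['check_capable']
        # Generation on bridges only when all endpoints can check the digest.
        enable_gen = d['gen_capable'] and (everyone_checks or not d['is_bridge'])
        plan[d['name']] = (enable_check, enable_gen)
    return plan

bus = [
    {'name': 'root-port', 'check_capable': True,  'gen_capable': True,  'is_bridge': True},
    {'name': 'nic',       'check_capable': True,  'gen_capable': True,  'is_bridge': False},
    {'name': 'legacy',    'check_capable': False, 'gen_capable': False, 'is_bridge': False},
]
plan = ecrc_safe_enables(bus)
print(plan)
```

With the legacy card unable to check, the root port keeps checking 
enabled but does not generate, so the legacy card never sees a digest.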

The other problem I'm seeing concerns the maximum read request size. If 
I choose the PCI bus performance mode, the maximum read request size is 
limited to the maximum payload size.

I'd like to add a new mode where the read request size can be bigger 
than the maximum payload size.
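
For reference, both values live in the PCIe Device Control register and 
use the same power-of-two encoding, so nothing in the register format 
ties them together. A sketch (field masks mirror the definitions in 
include/uapi/linux/pci_regs.h; the starting register value is a made-up 
example):

```python
# How MPS and MRRS are encoded in the PCIe Device Control register.
# Field layouts mirror include/uapi/linux/pci_regs.h:
#   Max_Payload_Size      = bits 7:5
#   Max_Read_Request_Size = bits 14:12
# Both fields encode 128 << value, for values 0..5 (128..4096 bytes).

PCI_EXP_DEVCTL_PAYLOAD = 0x00e0  # MPS field mask
PCI_EXP_DEVCTL_READRQ  = 0x7000  # MRRS field mask

def encode_size(size_bytes):
    """Convert a byte size (128..4096, power of two) to the 3-bit field value."""
    v = size_bytes.bit_length() - 8  # 128 -> 0, 256 -> 1, ... 4096 -> 5
    assert 0 <= v <= 5 and size_bytes == 128 << v
    return v

def set_mps_mrrs(devctl, mps, mrrs):
    """Return a Device Control value with the MPS and MRRS fields replaced."""
    devctl &= ~(PCI_EXP_DEVCTL_PAYLOAD | PCI_EXP_DEVCTL_READRQ)
    devctl |= encode_size(mps) << 5
    devctl |= encode_size(mrrs) << 12
    return devctl

# MRRS larger than MPS is a legal combination: e.g. MPS=256, MRRS=4096.
print(hex(set_mps_mrrs(0x0000, 256, 4096)))  # -> 0x5020
```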

Opinions?

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: ECRC and Max Read Request Size
  2015-10-28 17:51 ECRC and Max Read Request Size Sinan Kaya
@ 2015-11-06 17:22 ` Bjorn Helgaas
  2015-11-06 17:54   ` Sinan Kaya
  2015-11-08 17:20   ` Sinan Kaya
  0 siblings, 2 replies; 11+ messages in thread
From: Bjorn Helgaas @ 2015-11-06 17:22 UTC (permalink / raw)
  To: Sinan Kaya; +Cc: linux-pci

Hi Sinan,

On Wed, Oct 28, 2015 at 01:51:48PM -0400, Sinan Kaya wrote:
> I'm seeing two problems with the current PCI framework when it comes
> to end-to-end CRC (ECRC) and Max Read Request Size.
> 
> The problem with ECRC is that it blindly enables ECRC generation on
> devices without checking if it is supported by the entire bus with
> ECRC=on option.
> 
> ECRC check can be enabled on all devices but ECRC generation on the
> root complex and switches needs to be set only if all devices
> support ECRC checking.
> 
> I'm thinking of making changes in this area to cover this gap with
> ECRC=safe option.

Sometimes they can't be avoided, but I'm generally not in favor of
adding command line parameters because they require too much of our
users and they introduce code paths that are rarely exercised.

What specific ECRC problem are you seeing?  Do you have devices that
don't operate correctly with the current code?  Or do you want to add
some new functionality, e.g., to enable ECRC in cases where we don't
enable it today?

> The other problem I'm seeing is about the maximum read request size.
> If I choose the PCI bus performance mode, maximum read request size
> is being limited to the maximum payload size.
> 
> I'd like to add a new mode where I can have bigger read request size
> than the maximum payload size.

I've never been thrilled about the way Linux ties MRRS and MPS
together.  I don't think the spec envisioned MRRS being used to
control segment size on the link.  My impression is that the purpose
of MRRS is to limit the amount of time one device can dominate a link.

I am sympathetic to the idea of having MRRS larger than MPS.  The
question is how to accomplish that.  I'm not really happy with the
current set of "pcie_bus_tune_*" parameters, so I'd hesitate to add
yet another one.  They feel like they're basically workarounds for the
fact that Linux can't optimize MPS directly itself.

Can you give any more specifics of your MRRS/MPS situation?  I guess
you hope to improve bandwidth to some device by reducing the number of
read requests?  Do you have any quantitative estimate of what you can
gain?

Bjorn


* Re: ECRC and Max Read Request Size
  2015-11-06 17:22 ` Bjorn Helgaas
@ 2015-11-06 17:54   ` Sinan Kaya
  2015-11-07  0:11     ` Bjorn Helgaas
  2015-11-09 19:15     ` Bjorn Helgaas
  2015-11-08 17:20   ` Sinan Kaya
  1 sibling, 2 replies; 11+ messages in thread
From: Sinan Kaya @ 2015-11-06 17:54 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: linux-pci



On 11/6/2015 12:22 PM, Bjorn Helgaas wrote:
> Hi Sinan,
>
> On Wed, Oct 28, 2015 at 01:51:48PM -0400, Sinan Kaya wrote:
>> I'm seeing two problems with the current PCI framework when it comes
>> to end-to-end CRC (ECRC) and Max Read Request Size.
>>
>> The problem with ECRC is that it blindly enables ECRC generation on
>> devices without checking if it is supported by the entire bus with
>> ECRC=on option.
>>
>> ECRC check can be enabled on all devices but ECRC generation on the
>> root complex and switches needs to be set only if all devices
>> support ECRC checking.
>>
>> I'm thinking of making changes in this area to cover this gap with
>> ECRC=safe option.
>
> Sometimes they can't be avoided, but I'm generally not in favor of
> adding command line parameters because they require too much of our
> users and they introduce code paths that are rarely exercised.
>
> What specific ECRC problem are you seeing?  Do you have devices that
> don't operate correctly with the current code?  Or do you want to add
> some new functionality, e.g., to enable ECRC in cases where we don't
> enable it today?

ECRC is an optional PCIe feature, and even ECRC support comes in flavors:

- A card can support ECRC checking.
- A card can support ECRC checking and generation.

Right now, the code enables both without checking whether they are 
supported at all.

I have some legacy PCIe cards that don't fully support ECRC even though 
the host bridge does. If ECRC checking and generation are enabled in 
this situation, I have problems communicating with the endpoint.

I would like to be able to turn on this feature all the time and not 
worry about whether things will break.

Maybe, if nobody objects, I can fix the code to enable it only when the 
entire bus supports it, instead of adding a new option.


>
>> The other problem I'm seeing is about the maximum read request size.
>> If I choose the PCI bus performance mode, maximum read request size
>> is being limited to the maximum payload size.
>>
>> I'd like to add a new mode where I can have bigger read request size
>> than the maximum payload size.
>
> I've never been thrilled about the way Linux ties MRRS and MPS
> together.  I don't think the spec envisioned MRRS being used to
> control segment size on the link.  My impression is that the purpose
> of MRRS is to limit the amount of time one device can dominate a link.
>
> I am sympathetic to the idea of having MRRS larger than MPS.  The
> question is how to accomplish that.  I'm not really happy with the
> current set of "pcie_bus_tune_*" parameters, so I'd hesitate to add
> yet another one.  They feel like they're basically workarounds for the
> fact that Linux can't optimize MPS directly itself.
>
> Can you give any more specifics of your MRRS/MPS situation?  I guess
> you hope to improve bandwidth to some device by reducing the number of
> read requests?  Do you have any quantitative estimate of what you can
> gain?

I talked to our performance team. They are saying that the max read 
request size does not gain you much compared to the max payload size in 
a single direction, but it helps tremendously if you are moving data 
back and forth to and from the card. I don't have real numbers, though.

They asked me to work on adding this feature. Most of the endpoints I 
have seen seem to support something around 4k for the max read request size.

I have one card directly connected to my system. I'd like to give that 
card maximum bandwidth, including the maximum payload size and a maximum 
read request size much bigger than the payload size.

Another option is to let the firmware do its magic before the OS starts, 
but then there is no way to apply the same settings after a hot-plug 
insertion.

>
> Bjorn
> --
> To unsubscribe from this list: send the line "unsubscribe linux-pci" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project


* Re: ECRC and Max Read Request Size
  2015-11-06 17:54   ` Sinan Kaya
@ 2015-11-07  0:11     ` Bjorn Helgaas
  2015-11-07  3:39       ` Sinan Kaya
  2015-11-09 19:15     ` Bjorn Helgaas
  1 sibling, 1 reply; 11+ messages in thread
From: Bjorn Helgaas @ 2015-11-07  0:11 UTC (permalink / raw)
  To: Sinan Kaya; +Cc: linux-pci

On Fri, Nov 06, 2015 at 12:54:07PM -0500, Sinan Kaya wrote:
> On 11/6/2015 12:22 PM, Bjorn Helgaas wrote:
> >On Wed, Oct 28, 2015 at 01:51:48PM -0400, Sinan Kaya wrote:
> >>I'm seeing two problems with the current PCI framework when it comes
> >>to end-to-end CRC (ECRC) and Max Read Request Size.
> >>
> >>The problem with ECRC is that it blindly enables ECRC generation on
> >>devices without checking if it is supported by the entire bus with
> >>ECRC=on option.
> >>
> >>ECRC check can be enabled on all devices but ECRC generation on the
> >>root complex and switches needs to be set only if all devices
> >>support ECRC checking.
> >>
> >>I'm thinking of making changes in this area to cover this gap with
> >>ECRC=safe option.
> >
> >Sometimes they can't be avoided, but I'm generally not in favor of
> >adding command line parameters because they require too much of our
> >users and they introduce code paths that are rarely exercised.
> >
> >What specific ECRC problem are you seeing?  Do you have devices that
> >don't operate correctly with the current code?  Or do you want to add
> >some new functionality, e.g., to enable ECRC in cases where we don't
> >enable it today?
> 
> ECRC is an optional PCIe feature. Even ECRC support has some flavors
> 
> - A card can support ECRC checking.
> - A card can support ECRC checking and generation.
> 
> Right now, the code is enabling both without checking if they are
> supported at all.
> 
> I have some legacy PCIe cards that don't support ECRC completely
> even though the host bridge supports it. If ECRC checking and
> generation is enabled under this situation, I have problems
> communicating to the endpoint.

That should definitely be fixed.  Do we enable ECRC unconditionally,
or only when we boot with "pci=ecrc=on"?  The doc
(Documentation/kernel-parameters.txt) suggests the latter.

It would be ideal if we could try to turn on ECRC all the time by
default for paths that support it.  I suppose there's some risk that
we'd trip over broken hardware.  If that's an issue, we could consider
turning it on all the time on machines newer than X (that's easy on
x86, where we have a DMI BIOS date, but maybe not so easy on other
arches).

> Another opinion is let the firmware do its magic before the OS
> starts but there is no way to apply the same settings after a
> hot-plug insertion.

The ACPI _HPX method does allow the BIOS to specify Device Control
register values, so it's conceivable that MPS and MRRS could be
configured that way for hot-added devices.  But we currently
explicitly ignore those _HPX fields (see 302328c00341) because I don't
think it's safe to use them blindly, at least for MPS.  If the Linux
MPS configuration stopped making assumptions about MRRS settings, I
think it probably would be safe to use _HPX MRRS values.
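
For reference, a _HPX Type 2 record expresses the Device Control 
settings as an AND mask plus an OR value, so applying one is just a 
masked register update. A rough sketch of what that would look like 
(the record values below are invented examples, not real firmware data):

```python
# Sketch of how an ACPI _HPX setting record could be applied to the PCIe
# Device Control register: clear the bits deselected by the AND mask,
# then OR in the firmware-provided value.  The example record below is
# hypothetical, not real firmware output.

PCI_EXP_DEVCTL_READRQ = 0x7000  # MRRS field, bits 14:12

def apply_hpx(devctl, and_mask, or_value):
    """Masked update as a _HPX Type 2 record describes it."""
    return (devctl & and_mask) | or_value

# Hypothetical record: firmware wants MRRS = 4096 (field value 5) and
# leaves every other Device Control bit alone.
hpx_and = 0xffff & ~PCI_EXP_DEVCTL_READRQ
hpx_or  = 5 << 12

print(hex(apply_hpx(0x2810, hpx_and, hpx_or)))  # -> 0x5810
```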

Bjorn


* Re: ECRC and Max Read Request Size
  2015-11-07  0:11     ` Bjorn Helgaas
@ 2015-11-07  3:39       ` Sinan Kaya
  2015-11-09 19:47         ` Bjorn Helgaas
  0 siblings, 1 reply; 11+ messages in thread
From: Sinan Kaya @ 2015-11-07  3:39 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: linux-pci



On 11/6/2015 7:11 PM, Bjorn Helgaas wrote:
> That should definitely be fixed.  Do we enable ECRC unconditionally,
> or only when we boot with "pci=ecrc=on"?  The doc
> (Documentation/kernel-parameters.txt) suggests the latter.
>
> It would be ideal if we could try to turn on ECRC all the time by
> default for paths that support it.  I suppose there's some risk that
> we'd trip over broken hardware.  If that's an issue, we could consider
> turning it on all the time on machines newer than X (that's easy on
> x86, where we have a DMI BIOS date, but maybe not so easy on other
> arches).

The ecrc=on kernel option works like a "force on" rather than a simple on.

I agree we should enable it by default.


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project


* Re: ECRC and Max Read Request Size
  2015-11-06 17:22 ` Bjorn Helgaas
  2015-11-06 17:54   ` Sinan Kaya
@ 2015-11-08 17:20   ` Sinan Kaya
  1 sibling, 0 replies; 11+ messages in thread
From: Sinan Kaya @ 2015-11-08 17:20 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: linux-pci



On 11/6/2015 12:22 PM, Bjorn Helgaas wrote:
> I've never been thrilled about the way Linux ties MRRS and MPS
> together.  I don't think the spec envisioned MRRS being used to
> control segment size on the link.  My impression is that the purpose
> of MRRS is to limit the amount of time one device can dominate a link.
>
> I am sympathetic to the idea of having MRRS larger than MPS.  The
> question is how to accomplish that.  I'm not really happy with the
> current set of "pcie_bus_tune_*" parameters, so I'd hesitate to add
> yet another one.  They feel like they're basically workarounds for the
> fact that Linux can't optimize MPS directly itself.
>
> Can you give any more specifics of your MRRS/MPS situation?  I guess
> you hope to improve bandwidth to some device by reducing the number of
> read requests?  Do you have any quantitative estimate of what you can
> gain?

Xilinx has a nice whitepaper about PCIe performance here; see the 
section about Maximum Read Request Size:

http://www.xilinx.com/support/documentation/white_papers/wp350.pdf

The benefit of a larger maximum read request size is usually seen when 
moving large amounts of data.

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project


* Re: ECRC and Max Read Request Size
  2015-11-06 17:54   ` Sinan Kaya
  2015-11-07  0:11     ` Bjorn Helgaas
@ 2015-11-09 19:15     ` Bjorn Helgaas
  2015-11-10  0:43       ` Sinan Kaya
  1 sibling, 1 reply; 11+ messages in thread
From: Bjorn Helgaas @ 2015-11-09 19:15 UTC (permalink / raw)
  To: Sinan Kaya; +Cc: linux-pci

On Fri, Nov 06, 2015 at 12:54:07PM -0500, Sinan Kaya wrote:
> ECRC is an optional PCIe feature. Even ECRC support has some flavors
> 
> - A card can support ECRC checking.
> - A card can support ECRC checking and generation.
> 
> Right now, the code is enabling both without checking if they are
> supported at all.
> 
> I have some legacy PCIe cards that don't support ECRC completely
> even though the host bridge supports it. If ECRC checking and
> generation is enabled under this situation, I have problems
> communicating to the endpoint.
> 
> I would like to be able to turn on this feature all the time and not
> think about if things will break or not.
> 
> Maybe, I can fix the code and enable it only when the entire bus
> supports it instead of adding a new feature if nobody objects.

I don't know whether this is a Linux kernel defect or a hardware
defect in the PCIe card.  The ECRC is in the TLP Digest, and per spec,
if a TLP receiver does not support ECRC, it must ignore the TLP Digest
(PCIe spec r3.0, sec 2.2.3).

If a card doesn't support ECRC checking at all, i.e., the AER "ECRC
Check Capable" bit is zero, I would expect the card to work fine even
if you enable ECRC at the Root Port.  If it doesn't, that sounds like
a hardware issue with the card.

It sounds like you're contemplating enabling ECRC only when the Root
Port and every device in the hierarchy below it supports ECRC
checking.  As I read the spec, that would be overly restrictive.  If I
understand it correctly, it should be safe to enable ECRC generation
on every device that supports it.  Devices where ECRC checking is
supported and enabled should check ECRC, and other devices should just
ignore it.

> >>The other problem I'm seeing is about the maximum read request size.
> >>If I choose the PCI bus performance mode, maximum read request size
> >>is being limited to the maximum payload size.
> >>
> >>I'd like to add a new mode where I can have bigger read request size
> >>than the maximum payload size.
> >
> >I've never been thrilled about the way Linux ties MRRS and MPS
> >together.  I don't think the spec envisioned MRRS being used to
> >control segment size on the link.  My impression is that the purpose
> >of MRRS is to limit the amount of time one device can dominate a link.
> >
> >I am sympathetic to the idea of having MRRS larger than MPS.  The
> >question is how to accomplish that.  I'm not really happy with the
> >current set of "pcie_bus_tune_*" parameters, so I'd hesitate to add
> >yet another one.  They feel like they're basically workarounds for the
> >fact that Linux can't optimize MPS directly itself.
> >
> >Can you give any more specifics of your MRRS/MPS situation?  I guess
> >you hope to improve bandwidth to some device by reducing the number of
> >read requests?  Do you have any quantitative estimate of what you can
> >gain?
> 
> I talked to our performance team. They are saying that max read
> request does not gain you much compared to max payload size single
> direction but it helps tremendously if you are moving data forth and
> back between the card. I don't have real numbers though.

I'm not enough of a hardware or performance person to visualize how
MRRS makes a tremendous difference in this situation.  Sample
timelines comparing small vs. large MRRS would help everybody
understand what's happening here.
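
As a rough, back-of-the-envelope model (not measured data): a DMA read 
of N bytes costs ceil(N / MRRS) read request TLPs, while the completion 
data is split at MPS boundaries either way, so raising MRRS cuts the 
number of requests the device must issue and track without changing the 
completion count:

```python
import math

# Back-of-the-envelope model of a large DMA read: the device issues
# ceil(total / MRRS) read requests; each request's data comes back split
# into MPS-sized completions, so the completion count is unchanged by MRRS.

def read_traffic(total, mrrs, mps):
    requests = math.ceil(total / mrrs)
    completions = math.ceil(total / mps)
    return requests, completions

total = 1 << 20  # 1 MiB transfer
for mrrs in (256, 512, 4096):
    reqs, cpls = read_traffic(total, mrrs, mps=256)
    print(f"MRRS={mrrs:5}: {reqs:5} read requests, {cpls} completions")
```

At MRRS=4096 the same 1 MiB transfer needs 256 requests instead of 
4096, which matters most when requests and writes contend for the same 
link.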

Bjorn


* Re: ECRC and Max Read Request Size
  2015-11-07  3:39       ` Sinan Kaya
@ 2015-11-09 19:47         ` Bjorn Helgaas
  2015-11-10  0:09           ` Sinan Kaya
  0 siblings, 1 reply; 11+ messages in thread
From: Bjorn Helgaas @ 2015-11-09 19:47 UTC (permalink / raw)
  To: Sinan Kaya; +Cc: linux-pci

On Fri, Nov 06, 2015 at 10:39:51PM -0500, Sinan Kaya wrote:
> On 11/6/2015 7:11 PM, Bjorn Helgaas wrote:
> >That should definitely be fixed.  Do we enable ECRC unconditionally,
> >or only when we boot with "pci=ecrc=on"?  The doc
> >(Documentation/kernel-parameters.txt) suggests the latter.
> >
> >It would be ideal if we could try to turn on ECRC all the time by
> >default for paths that support it.  I suppose there's some risk that
> >we'd trip over broken hardware.  If that's an issue, we could consider
> >turning it on all the time on machines newer than X (that's easy on
> >x86, where we have a DMI BIOS date, but maybe not so easy on other
> >arches).
> 
> ecrc=on kernel option works like "force on" rather than just a simple on.

What do you mean by "force on" as opposed to "simple on"?

> I agree we should enable it by default.

Part of the consideration is that enabling ECRC generation does
increase the size of every TLP by 4 bytes, which reduces the link
throughput slightly, so I'm not sure we want to blindly enable it.
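
To put a rough number on that cost (a back-of-the-envelope sketch; the 
per-TLP overhead constants below are approximations, assuming a 3DW 
header plus sequence number, LCRC, and framing):

```python
# Rough estimate of ECRC's bandwidth cost: the TLP Digest adds 4 bytes
# per TLP on top of the existing per-TLP overhead.  Overheads below are
# approximate: 3DW header = 12 bytes (4DW headers would be 16), plus
# ~8 bytes of sequence number + LCRC + framing.

HEADER = 12   # 3DW TLP header (approximate)
FRAMING = 8   # sequence number + LCRC + framing symbols (approximate)
ECRC = 4      # TLP Digest

def efficiency(payload, ecrc_on):
    """Fraction of link bytes that are payload for one data TLP."""
    overhead = HEADER + FRAMING + (ECRC if ecrc_on else 0)
    return payload / (payload + overhead)

for mps in (128, 256, 4096):
    off = efficiency(mps, False)
    on = efficiency(mps, True)
    print(f"MPS={mps:5}: {off:.1%} -> {on:.1%} ({off - on:.2%} loss)")
```

Under these assumptions the loss is a couple of percent at MPS=128 but 
only about 0.1% at 4096-byte payloads, so the cost depends heavily on 
the payload sizes actually in flight.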

If I understand the current Linux code correctly, by default (booting
with no parameter at all, or with "pci=ecrc=bios"), we don't touch
ECRC at all -- we don't even look to see whether it's supported, and
we don't enable or disable anything.

With "pci=ecrc=on", we turn on ECRC generation (if supported) and ECRC
checking (if supported) for every device, including those hot-added
after boot.

I do like the idea of letting platform firmware decide whether ECRC
should be enabled by default, because then we don't have to decide on
a policy in Linux.  Following the lead of the firmware lets OEMs
decide whether ECRC is a feature they want to market and support.

I think the current strategy could be improved for hot-added devices.
In the default ("pci=ecrc=bios") case, we leave them alone, which
means they won't check ECRC because the enable bits will be zero after
power-up.  If the platform firmware enabled ECRC generation in the
upstream Root Port, we're paying the cost of lower throughput without
getting any benefit for it.

It doesn't look like ECRC has any dependencies or ordering
requirements, so we could also consider adding sysfs knobs to
enable/disable ECRC at run-time.  I can imagine systems where we want
ECRC on some paths, e.g., to a disk or network, but not on others,
e.g., to a display.  Sysfs would be a way to allow that, although it
*is* an administrative burden.

Bjorn


* Re: ECRC and Max Read Request Size
  2015-11-09 19:47         ` Bjorn Helgaas
@ 2015-11-10  0:09           ` Sinan Kaya
  0 siblings, 0 replies; 11+ messages in thread
From: Sinan Kaya @ 2015-11-10  0:09 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: linux-pci



On 11/9/2015 2:47 PM, Bjorn Helgaas wrote:
> On Fri, Nov 06, 2015 at 10:39:51PM -0500, Sinan Kaya wrote:
>> On 11/6/2015 7:11 PM, Bjorn Helgaas wrote:
>>> That should definitely be fixed.  Do we enable ECRC unconditionally,
>>> or only when we boot with "pci=ecrc=on"?  The doc
>>> (Documentation/kernel-parameters.txt) suggests the latter.
>>>
>>> It would be ideal if we could try to turn on ECRC all the time by
>>> default for paths that support it.  I suppose there's some risk that
>>> we'd trip over broken hardware.  If that's an issue, we could consider
>>> turning it on all the time on machines newer than X (that's easy on
>>> x86, where we have a DMI BIOS date, but maybe not so easy on other
>>> arches).
>>
>> ecrc=on kernel option works like "force on" rather than just a simple on.
>
> What do you mean by "force on" as opposed to "simple on"?
>
>> I agree we should enable it by default.
>
> Part of the consideration is that enabling ECRC generation does
> increase the size of every TLP by 4 bytes, which reduces the link
> throughput slightly, so I'm not sure we want to blindly enable it.
>
> If I understand the current Linux code correctly, by default (booting
> with no parameter at all, or with "pci=ecrc=bios"), we don't touch
> ECRC at all -- we don't even look to see whether it's supported, and
> we don't enable or disable anything.
>
> With "pci=ecrc=on", we turn on ECRC generation (if supported) and ECRC
> checking (if supported) for every device, including those hot-added
> after boot.
>
> I do like the idea of letting platform firmware decide whether ECRC
> should be enabled by default, because then we don't have to decide on
> a policy in Linux.  Following the lead of the firmware lets OEMs
> decide whether ECRC is a feature they want to market and support.
>
> I think the current strategy could be improved for hot-added devices.
> In the default ("pci=ecrc=bios") case, we leave them alone, which
> means they won't check ECRC because the enable bits will be zero after
> power-up.  If the platform firmware enabled ECRC generation in the
> upstream Root Port, we're paying the cost of lower throughput without
> getting any benefit for it.
>
> It doesn't look like ECRC has any dependencies or ordering
> requirements, so we could also consider adding sysfs knobs to
> enable/disable ECRC at run-time.  I can imagine systems where we want
> ECRC on some paths, e.g., to a disk or network, but not on others,
> e.g., to a display.  Sysfs would be a way to allow that, although it
> *is* an administrative burden.
>

I had to go back and re-read the spec. My understanding was that you can 
always enable ECRC checking, but that every device needs ECRC checking 
enabled in order to generate ECRC in the downstream direction.

Now, looking at the spec section you mentioned, this turns out not to be 
true. ECRC checking and generation can be enabled all the time.

I'll enable ECRC and run it against multiple cards to confirm. I was 
using a Keysight PCIe exerciser when I was testing ECRC. Since it is a 
PCIe exerciser, I might have forgotten to enable something in the GUI. 
I'll double-check.

BTW, ECRC has negligible performance overhead.


> Bjorn

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project


* Re: ECRC and Max Read Request Size
  2015-11-09 19:15     ` Bjorn Helgaas
@ 2015-11-10  0:43       ` Sinan Kaya
  0 siblings, 0 replies; 11+ messages in thread
From: Sinan Kaya @ 2015-11-10  0:43 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: linux-pci



On 11/9/2015 2:15 PM, Bjorn Helgaas wrote:
>> I talked to our performance team. They are saying that max read
>> request does not gain you much compared to max payload size single
>> direction but it helps tremendously if you are moving data forth and
>> back between the card. I don't have real numbers though.
> I'm not enough of a hardware or performance person to visualize how
> MRRS makes a tremendous difference in this situation.  Sample
> timelines comparing small vs. large MRRS would help everybody
> understand what's happening here.
>
> Bjorn

The LSI SAS 9211i card supports a 4k maximum read request size. I 
manually forced the MRRS to 4k, instead of the platform's maximum 
payload size, on both the host bridge and the endpoint.

I did a quick test with:
dd if=/dev/zero of=/dev/sda bs=1G count=1

It gave me 9% higher write performance.

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project


* ECRC and Max Read Request Size
@ 2015-10-26 18:42 Sinan Kaya
  0 siblings, 0 replies; 11+ messages in thread
From: Sinan Kaya @ 2015-10-26 18:42 UTC (permalink / raw)
  To: linux-pci

I'm seeing two problems with the current PCI framework when it comes to 
end-to-end CRC (ECRC) and Max Read Request Size.

The problem with ECRC is that, with the ECRC=on option, it blindly 
enables ECRC generation on devices without checking whether the entire 
bus supports it.

ECRC checking can be enabled on all devices, but ECRC generation on the 
root complex and switches should be enabled only if all devices support 
ECRC checking.

I'm thinking of making changes in this area to cover this gap with an 
ECRC=safe option.

The other problem I'm seeing concerns the maximum read request size. If 
I choose the PCI bus performance mode, the maximum read request size is 
limited to the maximum payload size.

I'd like to add a new mode where the read request size can be bigger 
than the maximum payload size.

Opinions?

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project



end of thread, other threads:[~2015-11-10  0:43 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-28 17:51 ECRC and Max Read Request Size Sinan Kaya
2015-11-06 17:22 ` Bjorn Helgaas
2015-11-06 17:54   ` Sinan Kaya
2015-11-07  0:11     ` Bjorn Helgaas
2015-11-07  3:39       ` Sinan Kaya
2015-11-09 19:47         ` Bjorn Helgaas
2015-11-10  0:09           ` Sinan Kaya
2015-11-09 19:15     ` Bjorn Helgaas
2015-11-10  0:43       ` Sinan Kaya
2015-11-08 17:20   ` Sinan Kaya
  -- strict thread matches above, loose matches on Subject: below --
2015-10-26 18:42 Sinan Kaya
