* PCIe: can't set Max Payload Size to 256
@ 2021-04-16 17:31 Pali Rohár
  2021-04-16 20:29 ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Pali Rohár @ 2021-04-16 17:31 UTC (permalink / raw)
  To: linux-pci, Marek Behún

Hello! I'm getting the following error line in dmesg for an NVMe disk
with kernel v5.12-rc7:

[    3.226462] pci 0000:04:00.0: can't set Max Payload Size to 256; if necessary, use "pci=pcie_bus_safe" and report a bug

lspci output for this NVMe disk is:

04:00.0 Non-Volatile memory controller [0108]: Silicon Motion, Inc. Device [126f:2263] (rev 03) (prog-if 02 [NVM Express])
        Subsystem: Silicon Motion, Inc. Device [126f:2263]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 55
        Region 0: Memory at e8000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 26.000W
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM not supported
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s (downgraded), Width x1 (downgraded)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [b0] MSI-X: Enable+ Count=16 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=0 offset=00002100
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [158 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
                LaneErrStat: 0
        Capabilities: [178 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [180 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
                          PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=0us
                L1SubCtl2: T_PwrOn=10us
        Kernel driver in use: nvme

What I cannot understand: why is the kernel trying to set Max Payload
Size to 256 bytes when the NVMe disk reports in its Device Capabilities
register that it only supports a Max Payload Size of 128 bytes?
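For reference, the two values can be decoded by hand: the supported MPS
lives in bits [2:0] of the Device Capabilities register and the
currently programmed MPS in bits [7:5] of Device Control, each encoded
as 128 << n. A small sketch (the register values below are hypothetical
examples, not read from this machine; on real hardware they would come
from setpci):

```shell
#!/bin/sh
# Decode Max_Payload_Size Supported (DevCap bits [2:0]) and the
# currently programmed MPS (DevCtl bits [7:5]).  Both encode the
# size as 128 << n.  The values below are hypothetical examples;
# on real hardware you could read them with e.g.:
#   setpci -s 04:00.0 CAP_EXP+0x04.l   # Device Capabilities
#   setpci -s 04:00.0 CAP_EXP+0x08.w   # Device Control
devcap=0x00008fc0   # bits [2:0] = 000b -> 128 bytes supported
devctl=0x2810       # bits [7:5] = 000b -> 128 bytes programmed

mps_supported=$(( 128 << (devcap & 0x7) ))
mps_programmed=$(( 128 << ((devctl >> 5) & 0x7) ))
echo "MPS supported:  $mps_supported bytes"
echo "MPS programmed: $mps_programmed bytes"
```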


* Re: PCIe: can't set Max Payload Size to 256
  2021-04-16 17:31 PCIe: can't set Max Payload Size to 256 Pali Rohár
@ 2021-04-16 20:29 ` Keith Busch
  2021-04-16 23:04   ` Pali Rohár
  0 siblings, 1 reply; 6+ messages in thread
From: Keith Busch @ 2021-04-16 20:29 UTC (permalink / raw)
  To: Pali Rohár; +Cc: linux-pci, Marek Behún

On Fri, Apr 16, 2021 at 07:31:19PM +0200, Pali Rohár wrote:
> Hello! I'm getting the following error line in dmesg for an NVMe disk
> with kernel v5.12-rc7:
> 
> [    3.226462] pci 0000:04:00.0: can't set Max Payload Size to 256; if necessary, use "pci=pcie_bus_safe" and report a bug
> 
> lspci output for this NVMe disk is:
> 
> [...]
>         Capabilities: [70] Express (v2) Endpoint, MSI 00
>                 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
> [...]
>                         MaxPayload 128 bytes, MaxReadReq 512 bytes
> [...]
>         Kernel driver in use: nvme
> 
> What I cannot understand: why is the kernel trying to set Max Payload
> Size to 256 bytes when the NVMe disk reports in its Device Capabilities
> register that it only supports a Max Payload Size of 128 bytes?

The error indicates that the port your NVMe PCIe device is connected
to is not reporting a matching MPS. The kernel will attempt to tune the
port so they match if it is a Root Port (RP). If you see this error, it
means the RP setting wasn't successful.

If the SSD is connected to a bridge, you'll need to use the kernel
parameter to force retuning the bus.
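As a rough conceptual sketch (not the kernel's actual code), the "safe"
retuning mode finds the largest MPS every device in the hierarchy can
handle, i.e. the minimum Max_Payload_Size Supported along the path, and
programs everything to that; the MPSS values here are hypothetical:

```shell
#!/bin/sh
# Conceptual sketch of "pci=pcie_bus_safe": program every device to
# the smallest Max_Payload_Size *supported* anywhere in the fabric.
# Hypothetical MPSS values (bytes): root port, switch, endpoint.
safe=4096
for mpss in 256 512 128; do
    if [ "$mpss" -lt "$safe" ]; then
        safe=$mpss
    fi
done
echo "safe MPS for this hierarchy: $safe bytes"
```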


* Re: PCIe: can't set Max Payload Size to 256
  2021-04-16 20:29 ` Keith Busch
@ 2021-04-16 23:04   ` Pali Rohár
  2021-04-17  2:29     ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Pali Rohár @ 2021-04-16 23:04 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-pci, Marek Behún

On Saturday 17 April 2021 05:29:41 Keith Busch wrote:
> On Fri, Apr 16, 2021 at 07:31:19PM +0200, Pali Rohár wrote:
> > Hello! I'm getting the following error line in dmesg for an NVMe disk
> > with kernel v5.12-rc7:
> > 
> > [    3.226462] pci 0000:04:00.0: can't set Max Payload Size to 256; if necessary, use "pci=pcie_bus_safe" and report a bug
> > 
> > lspci output for this NVMe disk is:
> > 
> > [...]
> >         Capabilities: [70] Express (v2) Endpoint, MSI 00
> >                 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
> > [...]
> >                         MaxPayload 128 bytes, MaxReadReq 512 bytes
> > [...]
> >         Kernel driver in use: nvme
> > 
> > What I cannot understand: why is the kernel trying to set Max Payload
> > Size to 256 bytes when the NVMe disk reports in its Device Capabilities
> > register that it only supports a Max Payload Size of 128 bytes?
> 
> The error indicates that the port your NVMe PCIe device is connected
> to is not reporting a matching MPS. The kernel will attempt to tune the
> port so they match if it is a Root Port (RP). If you see this error, it
> means the RP setting wasn't successful.
> 
> If the SSD is connected to a bridge, you'll need to use the kernel
> parameter to force retuning the bus.

The above NVMe disk is connected to a PCIe packet switch (which acts as
a pair of Upstream and Downstream Ports of a PCI bridge), and the PCIe
packet switch is in turn connected to the Root Port.

I'm not sure what I should set or what to force.
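One way to see the bridges sitting above the endpoint is to walk the
device's sysfs path; the path below is a hypothetical example of such a
topology (root port, switch upstream port, switch downstream port,
endpoint), not taken from this board:

```shell
#!/bin/sh
# Hypothetical sysfs path for an endpoint behind a switch; each
# "0000:BB:DD.F" component is one PCI function in the chain:
# root port -> switch upstream -> switch downstream -> NVMe endpoint.
path="/sys/devices/pci0000:00/0000:00:00.0/0000:01:00.0/0000:02:04.0/0000:04:00.0"
echo "$path" | tr '/' '\n' | grep '^0000:'
```

On a live system the same chain can be seen with `lspci -t`.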


* Re: PCIe: can't set Max Payload Size to 256
  2021-04-16 23:04   ` Pali Rohár
@ 2021-04-17  2:29     ` Keith Busch
  2021-04-17  9:31       ` Pali Rohár
  0 siblings, 1 reply; 6+ messages in thread
From: Keith Busch @ 2021-04-17  2:29 UTC (permalink / raw)
  To: Pali Rohár; +Cc: linux-pci, Marek Behún

On Sat, Apr 17, 2021 at 01:04:30AM +0200, Pali Rohár wrote:
> The above NVMe disk is connected to a PCIe packet switch (which acts as
> a pair of Upstream and Downstream Ports of a PCI bridge), and the PCIe
> packet switch is in turn connected to the Root Port.
> 
> I'm not sure what I should set or what to force.

Try adding the suggested kernel parameter, "pci=pcie_bus_safe".

Unless this is a hot-plug scenario, it is odd that the OS was handed
mismatched PCIe settings. That usually indicates a platform BIOS issue,
and the kernel parameter is typically successful at working around it.
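A quick way to confirm the parameter actually reached the kernel is to
check the command line; sketched here against a hypothetical bootargs
string (on a live system you would grep /proc/cmdline instead):

```shell
#!/bin/sh
# Hypothetical kernel command line, so the check itself is visible;
# replace $cmdline with "$(cat /proc/cmdline)" on a running system.
cmdline="console=ttyMV0,115200 root=/dev/nvme0n1p1 pci=pcie_bus_safe"
case " $cmdline " in
    *" pci=pcie_bus_safe "*) result="pcie_bus_safe active" ;;
    *)                       result="pcie_bus_safe missing" ;;
esac
echo "$result"
```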


* Re: PCIe: can't set Max Payload Size to 256
  2021-04-17  2:29     ` Keith Busch
@ 2021-04-17  9:31       ` Pali Rohár
  2021-04-19 15:27         ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Pali Rohár @ 2021-04-17  9:31 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-pci, Marek Behún

On Saturday 17 April 2021 11:29:04 Keith Busch wrote:
> On Sat, Apr 17, 2021 at 01:04:30AM +0200, Pali Rohár wrote:
> > The above NVMe disk is connected to a PCIe packet switch (which acts as
> > a pair of Upstream and Downstream Ports of a PCI bridge), and the PCIe
> > packet switch is in turn connected to the Root Port.
> > 
> > I'm not sure what I should set or what to force.
> 
> Try adding the suggested kernel parameter, "pci=pcie_bus_safe".

Ok, I will try it.

> Unless this is a hot-plug scenario, it is odd that the OS was handed
> mismatched PCIe settings. That usually indicates a platform BIOS issue,
> and the kernel parameter is typically successful at working around it.

This is arm64, so there is no BIOS. The kernel uses the native
pci-aardvark.c host controller driver, which handles everything related
to PCIe.


* Re: PCIe: can't set Max Payload Size to 256
  2021-04-17  9:31       ` Pali Rohár
@ 2021-04-19 15:27         ` Keith Busch
  0 siblings, 0 replies; 6+ messages in thread
From: Keith Busch @ 2021-04-19 15:27 UTC (permalink / raw)
  To: Pali Rohár; +Cc: linux-pci, Marek Behún

On Sat, Apr 17, 2021 at 11:31:08AM +0200, Pali Rohár wrote:
> On Saturday 17 April 2021 11:29:04 Keith Busch wrote:
> > On Sat, Apr 17, 2021 at 01:04:30AM +0200, Pali Rohár wrote:
> > > The above NVMe disk is connected to a PCIe packet switch (which acts as
> > > a pair of Upstream and Downstream Ports of a PCI bridge), and the PCIe
> > > packet switch is in turn connected to the Root Port.
> > > 
> > > I'm not sure what I should set or what to force.
> > 
> > Try adding the suggested kernel parameter, "pci=pcie_bus_safe".
> 
> Ok, I will try it.
> 
> > Unless this is a hot-plug scenario, it is odd that the OS was handed
> > mismatched PCIe settings. That usually indicates a platform BIOS issue,
> > and the kernel parameter is typically successful at working around it.
> 
> This is arm64, so there is no BIOS. The kernel uses the native
> pci-aardvark.c host controller driver, which handles everything related
> to PCIe.

That also sounds odd. The default MPS value is 128 bytes, so something
changed your bridge to 256 bytes. The Linux PCI driver generally uses
the existing settings; the only times it attempts to change them are
when you used parameters telling it to do that, or when it detects a
mismatch, so I'm curious which component set the bridge as you've
observed.

In any case, the 'safe' parameter sounds like the most likely way to
work around it.
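The mismatch check itself is simple to sketch: the kernel only
complains (and tries to retune) when an endpoint's programmed MPS
differs from its upstream bridge's. With hypothetical values matching
this thread:

```shell
#!/bin/sh
# Hypothetical values matching this thread: something pre-programmed
# the bridge to 256 bytes while the endpoint only supports 128.
bridge_mps=256
dev_mps=128
if [ "$bridge_mps" -ne "$dev_mps" ]; then
    echo "mismatch: bridge=$bridge_mps dev=$dev_mps -> retune needed"
else
    echo "MPS settings consistent"
fi
```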

