From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from szxga04-in.huawei.com ([119.145.14.52]:48876 "EHLO
	szxga04-in.huawei.com" rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org
	with ESMTP id S1751472AbcG2Cy0 (ORCPT );
	Thu, 28 Jul 2016 22:54:26 -0400
Subject: Re: Question about cacheline size in PCIe SAS card
To: Bjorn Helgaas
References: <5799BF23.2020902@huawei.com> <20160728184306.GA12187@localhost>
From: wangyijing
Message-ID: <579AC545.2030306@huawei.com>
Date: Fri, 29 Jul 2016 10:53:57 +0800
MIME-Version: 1.0
In-Reply-To: <20160728184306.GA12187@localhost>
Content-Type: text/plain; charset="UTF-8"
Sender: linux-pci-owner@vger.kernel.org

Hi Bjorn, thanks for your comment!

On 2016/7/29 2:43, Bjorn Helgaas wrote:
> On Thu, Jul 28, 2016 at 04:15:31PM +0800, wangyijing wrote:
>> Hi all, we have a question about the PCIe cacheline size; by cacheline we mean
>> the config space register at offset 0x0C in the type 0 and type 1 config space
>> headers.
>>
>> We hotplugged a PCIe SAS controller on our platform. This SAS controller has
>> SSD disks whose sector size is 520 bytes. By default, the BIOS sets the
>> cacheline size to 64 bytes; testing IO reads (IO size 128K/256K), the bandwidth
>> is 6G. After hotplug, the cacheline size in the SAS controller changes to 0
>> (the default after #RST), and when we test IO reads again, the bandwidth drops
>> to 5.2G.
>>
>> We tested another SAS controller whose sector size is not 520 bytes and did not
>> see this issue. I also grepped for PCI_CACHE_LINE_SIZE in the kernel and found
>> that most of the code changing PCI_CACHE_LINE_SIZE is in device drivers (net,
>> ata) and some ARM PCI host controller drivers.
>>
>> The PCI 3.0 spec describes how the cacheline size relates to performance, but I
>> found nothing related to the cacheline size in the PCIe 3.0 spec.
>
> Not quite true: sec 7.5.1.3 of PCIe r3.0 says:
>
>   This field [Cache Line Size] is implemented by PCI Express devices
>   as a read-write field for legacy compatibility purposes but has no
>   effect on any PCI Express device behavior.

Oh, sorry, I only searched for the keyword "cacheline" in the PCIe spec. According
to this description, the register has no effect on any PCIe device.

> Unless your SAS controller is doing something wrong, I suspect
> something other than Cache Line Size is responsible for the difference
> in performance.
>
> After hot-add of your controller, Cache Line Size is probably zero
> because Linux doesn't set it.  What happens if you set it manually
> using "setpci"?  Does that affect the performance?

Yes, after hotplug the cacheline size is reset to 0 and Linux doesn't touch it. We
tried changing the cacheline size to 64 bytes with setpci: if we test IO right after
that, the bandwidth is still 5.2G, but if we reset the firmware after changing the
cacheline size to 64 bytes and then test again, the bandwidth reaches 6G again.

> You might look at the MPS and MRRS settings in the two scenarios also.

There is no difference in MPS and MRRS between the two scenarios; the hotplug driver
restores them to their original values.

> You could try collecting the output of "lspci -vvxxx" for the whole
> system in the default case and again after the hotplug, and then
> compare the two for differences.

Yes, I did, and found no other significant difference except the cacheline size.
I suspect an internal issue in the SAS controller hurts the performance.
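For reference, a minimal sketch of the kind of commands involved (assuming the SAS
controller at 13:00.0 shown below; setpci's CACHE_LINE_SIZE value is in DWORD units,
so 0x10 means 64 bytes, and the capture file names are arbitrary):

  # write Cache Line Size = 64 bytes (0x10 DWORDs), then read it back
  setpci -s 13:00.0 CACHE_LINE_SIZE=10
  setpci -s 13:00.0 CACHE_LINE_SIZE

  # check MPS/MRRS in the Device Control register
  lspci -s 13:00.0 -vv | grep -E 'MaxPayload|MaxReadReq'

  # capture the whole config space before and after the hotplug and compare
  lspci -vvxxx > lspci-before.txt
  # ... hotplug ...
  lspci -vvxxx > lspci-after.txt
  diff -u lspci-before.txt lspci-after.txt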
The normal config space after system boot up:

13:00.0 Serial Attached SCSI controller: PMC-Sierra Inc. Device 8072 (rev 06)
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- TAbort- SERR-

> Bjorn
>
> .
>