From: John Garry <john.garry@huawei.com>
To: Damien Le Moal <damien.lemoal@opensource.wdc.com>, Oliver Sang <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@lst.de>, "Martin K. Petersen" <martin.petersen@oracle.com>, LKML <linux-kernel@vger.kernel.org>, Linux Memory Management List <linux-mm@kvack.org>, <linux-ide@vger.kernel.org>, <lkp@lists.01.org>, <lkp@intel.com>, <ying.huang@intel.com>, <feng.tang@intel.com>, <zhengjun.xing@linux.intel.com>, <fengwei.yin@intel.com>
Subject: Re: [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0% regression
Date: Tue, 16 Aug 2022 17:38:43 +0100
Message-ID: <28d6e48b-f52f-9467-8260-262504a1a1ff@huawei.com>
In-Reply-To: <43eaa104-5b09-072c-56aa-6289569b0015@opensource.wdc.com>

On 16/08/2022 16:42, Damien Le Moal wrote:
> On 2022/08/16 3:35, John Garry wrote:
>> On 16/08/2022 07:57, Oliver Sang wrote:
>>>>> For me, a complete kernel log may help.
>>>> And since there is only 1 HDD, the output of the following would
>>>> be helpful:
>>>>
>>>> /sys/block/sda/queue/max_sectors_kb
>>>> /sys/block/sda/queue/max_hw_sectors_kb
>>>>
>>>> And for 5.19, if possible.
>>> for commit
>>> 0568e61225 ("ata: libata-scsi: cap ata_device->max_sectors according
>>> to shost->max_sectors")
>>>
>>> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
>>> 512
>>> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
>>> 512
>>>
>>> for both commit
>>> 4cbfca5f77 ("scsi: scsi_transport_sas: cap shost opt_sectors
>>> according to DMA optimal limit")
>>> and v5.19
>>>
>>> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
>>> 1280
>>> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
>>> 32767
>>>
>>
>> Thanks, I appreciate this.
>>
>> From the dmesg, I see 2x SATA disks - I was under the impression that
>> the system only has 1x.
>>
>> Anyway, both drives show LBA48, which explains the large
>> max_hw_sectors_kb of 32767 KB:
>> [   31.129629][ T1146] ata6.00: 1562824368 sectors, multi 1: LBA48 NCQ
>> (depth 32)
>>
>> So this is what I suspected: we are being capped by the default shost
>> max sectors (1024 sectors).
>>
>> This seems like the simplest fix for you:
>>
>> --- a/include/linux/libata.h
>> +++ b/include/linux/libata.h
>> @@ -1382,7 +1382,8 @@ extern const struct attribute_group *ata_common_sdev_groups[];
>>  	.proc_name		= drv_name,				\
>>  	.slave_destroy		= ata_scsi_slave_destroy,		\
>>  	.bios_param		= ata_std_bios_param,			\
>> -	.unlock_native_capacity	= ata_scsi_unlock_native_capacity
>> +	.unlock_native_capacity	= ata_scsi_unlock_native_capacity,	\
>> +	.max_sectors		= ATA_MAX_SECTORS_LBA48
>
> This is crazy large (65535 x 512 B sectors) and will never result in
> that being exposed as the actual max_sectors_kb, since other limits
> will apply first (mapping size).

Here is how I read the values above, as max_sectors_kb/max_hw_sectors_kb:

v5.19 + 0568e61225              : 512/512
v5.19 + 0568e61225 + 4cbfca5f77 : 512/512
v5.19                           : 1280/32767

That is what makes sense to me, at least. Oliver, can you confirm this?
Thanks!

On this basis, it appears that max_hw_sectors_kb is being capped by the
SCSI default of 1024 sectors by commit 0568e61225. If it were being
capped by the swiotlb mapping limit, then that would give us 512
sectors - that value is fixed.

So with my SHT change proposal I am just trying to restore the previous
behaviour from 5.19 - make max_hw_sectors_kb crazy big again.
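To spell out the arithmetic (a sketch from my reading of the code, not
the literal hunk, so treat the details as assumptions):

  /*
   * What commit 0568e61225 effectively does in ata_scsi_dev_config():
   * clamp the ATA device limit by the shost limit, which defaults to
   * SCSI_DEFAULT_MAX_SECTORS (1024) when the SHT does not set
   * .max_sectors.
   */
  dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);

  /* 1024 sectors  * 512 B =   512 KB -> max_hw_sectors_kb = 512   */
  /* 65535 sectors * 512 B = 32767 KB -> max_hw_sectors_kb = 32767 */

That matches the 512 vs 32767 split in Oliver's numbers above.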
> The regression may come not from commands becoming tiny, but from the
> fact that after the patch, max_sectors_kb is too large,

I don't think it is, but I need confirmation.

> causing a lot of overhead with qemu swiotlb mapping and slowing down
> IO processing.
>
> Above, it can be seen that we end up with max_sectors_kb being 1280,
> which is the default for most scsi disks (including ATA drives). That
> is normal. But before that, it was 512, which likely fits qemu swiotlb
> better and does not generate

Again, I don't think this is the case. I need confirmation.

> overhead. So the above fix will not change anything, I think...

Thanks,
John
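P.S. For completeness, the reason setting .max_sectors in the libata
SHT macro should restore the old value is the fallback in
scsi_host_alloc() - paraphrasing from my reading of
drivers/scsi/hosts.c, so double-check the exact code:

  if (sht->max_sectors)
          shost->max_sectors = sht->max_sectors;
  else
          shost->max_sectors = SCSI_DEFAULT_MAX_SECTORS;  /* 1024 */

Without an explicit SHT value we fall back to the 1024-sector default,
which 0568e61225 now propagates into the ATA device limit.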