From: John Garry <john.garry@huawei.com>
To: Oliver Sang <oliver.sang@intel.com>,
	Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	LKML <linux-kernel@vger.kernel.org>,
	"Linux Memory Management List" <linux-mm@kvack.org>,
	<linux-ide@vger.kernel.org>, <lkp@lists.01.org>, <lkp@intel.com>,
	<ying.huang@intel.com>, <feng.tang@intel.com>,
	<zhengjun.xing@linux.intel.com>, <fengwei.yin@intel.com>
Subject: Re: [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0% regression
Date: Tue, 16 Aug 2022 11:35:11 +0100	[thread overview]
Message-ID: <aabf7ed8-8d4d-dc68-1b8b-c91653701def@huawei.com> (raw)
In-Reply-To: <Yvs/w93KUkgD9f7/@xsang-OptiPlex-9020>

On 16/08/2022 07:57, Oliver Sang wrote:
>>> For me, a complete kernel log may help.
>> and since only 1HDD, the output of the following would be helpful:
>>
>> /sys/block/sda/queue/max_sectors_kb
>> /sys/block/sda/queue/max_hw_sectors_kb
>>
>> And for 5.19, if possible.
> for commit
> 0568e61225 ("ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors")
> 
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
> 512
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
> 512
> 
> for both commit
> 4cbfca5f77 ("scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit")
> and v5.19
> 
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
> 1280
> root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
> 32767
> 

Thanks, I appreciate this.

From the dmesg, I see 2x SATA disks - I was under the impression that 
the system only had one.

Anyway, both drives report LBA48 support, which explains the large 
max_hw_sectors_kb of 32767 KB:
[   31.129629][ T1146] ata6.00: 1562824368 sectors, multi 1: LBA48 NCQ (depth 32)

So this is what I suspected: we are being capped by the default shost 
max_sectors (1024 sectors, i.e. 512 KB).
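
To spell out the arithmetic (a rough sketch, assuming 512-byte logical 
sectors; the constants are as I read them from the kernel headers, so 
treat the comments as approximate):

#include <stdio.h>

int main(void)
{
	unsigned int lba48_sectors = 65535;	/* ATA_MAX_SECTORS_LBA48 */
	unsigned int blk_def_sectors = 2560;	/* BLK_DEF_MAX_SECTORS */
	unsigned int shost_def_sectors = 1024;	/* SCSI_DEFAULT_MAX_SECTORS */

	/* 65535 * 512 / 1024 = 32767 KB -> max_hw_sectors_kb on v5.19 */
	printf("LBA48 hw limit:      %u KB\n", lba48_sectors * 512 / 1024);
	/* 2560 * 512 / 1024 = 1280 KB -> max_sectors_kb on v5.19 */
	printf("block layer default: %u KB\n", blk_def_sectors * 512 / 1024);
	/* 1024 * 512 / 1024 = 512 KB -> both values after 0568e61225 */
	printf("shost default cap:   %u KB\n", shost_def_sectors * 512 / 1024);

	return 0;
}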

This seems like the simplest fix for you:

--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -1382,7 +1382,8 @@ extern const struct attribute_group *ata_common_sdev_groups[];
        .proc_name              = drv_name,                     \
        .slave_destroy          = ata_scsi_slave_destroy,       \
        .bios_param             = ata_std_bios_param,           \
-       .unlock_native_capacity = ata_scsi_unlock_native_capacity
+       .unlock_native_capacity = ata_scsi_unlock_native_capacity,\
+       .max_sectors = ATA_MAX_SECTORS_LBA48
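
For context, the reason setting .max_sectors in the template matters is 
the fallback when the host is allocated - roughly the following, 
paraphrased from my reading of scsi_host_alloc() (the function name below 
is made up; the if/else is the gist):

#include <scsi/scsi_host.h>

static void example_apply_template_max_sectors(struct Scsi_Host *shost,
				const struct scsi_host_template *sht)
{
	/* If the driver's template sets max_sectors, honour it; otherwise
	 * fall back to SCSI_DEFAULT_MAX_SECTORS (1024 sectors == 512 KB). */
	if (sht->max_sectors)
		shost->max_sectors = sht->max_sectors;
	else
		shost->max_sectors = SCSI_DEFAULT_MAX_SECTORS;
}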


A concern is that other drivers which use libata may have similar 
issues, as they rely on the default SCSI_DEFAULT_MAX_SECTORS for max_sectors:
hisi_sas
pm8001
aic94xx
mvsas
isci

So they may be needlessly hobbled for some SATA disks. However, I have a 
system with a hisi_sas controller and an attached LBA48 disk, and 
performance for v5.19 vs 6.0 was about the same for fio rw=read at 
~120K IOPS. I can test this further.
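
If those drivers do turn out to be affected, the equivalent change would 
be to set the field in their host templates. A hypothetical sketch 
(driver name made up, all other fields omitted; not a tested patch):

#include <linux/ata.h>
#include <scsi/scsi_host.h>

/* Illustration only: explicitly setting .max_sectors avoids the
 * SCSI_DEFAULT_MAX_SECTORS fallback shown earlier. */
static struct scsi_host_template example_sas_sht = {
	.name		= "example_sas",
	.max_sectors	= ATA_MAX_SECTORS_LBA48,
};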

Thanks,
John
