From: Oliver Sang <oliver.sang@intel.com>
To: John Garry <john.garry@huawei.com>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>,
	Christoph Hellwig <hch@lst.de>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	<linux-ide@vger.kernel.org>, <lkp@lists.01.org>, <lkp@intel.com>,
	<ying.huang@intel.com>, <feng.tang@intel.com>,
	<zhengjun.xing@linux.intel.com>, <fengwei.yin@intel.com>
Subject: Re: [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0% regression
Date: Tue, 16 Aug 2022 14:57:07 +0800
Message-ID: <Yvs/w93KUkgD9f7/@xsang-OptiPlex-9020>
In-Reply-To: <f1c3d717-339d-ba2b-9775-fc0e00f57ae3@huawei.com>

Hi John,

On Fri, Aug 12, 2022 at 03:58:14PM +0100, John Garry wrote:
> On 12/08/2022 12:13, John Garry wrote:
> > > On Tue, Aug 09, 2022 at 07:55:53AM -0700, Damien Le Moal wrote:
> > > > On 2022/08/09 2:58, John Garry wrote:
> > > > > On 08/08/2022 15:52, Damien Le Moal wrote:
> > > > > > On 2022/08/05 1:05, kernel test robot wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > Greetings,
> > > > > > > 
> > > > > > > FYI, we noticed a -15.0% regression of
> > > > > > > stress-ng.copy-file.ops_per_sec due to commit:
> > > > > > > 
> > > > > > > 
> > > > > > > commit: 0568e6122574dcc1aded2979cd0245038efe22b6 ("ata: libata-scsi:
> > > > > > > cap ata_device->max_sectors according to shost->max_sectors")
> > > > > > > https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
> > > > > > > 
> > > > > > > in testcase: stress-ng
> > > > > > > on test machine: 96 threads 2 sockets Ice Lake with 256G memory
> > > > > > > with following parameters:
> > > > > > > 
> > > > > > >     nr_threads: 10%
> > > > > > >     disk: 1HDD
> > > > > > >     testtime: 60s
> > > > > > >     fs: f2fs
> > > > > > >     class: filesystem
> > > > > > >     test: copy-file
> > > > > > >     cpufreq_governor: performance
> > > > > > >     ucode: 0xb000280
> > > > > > 
> > > > > > Without knowing what the device adapter is, hard to say where the problem
> > > > > > is. I suspect that with the patch applied, we may be ending up with a small
> > > > > > default max_sectors value, causing overhead due to more commands than
> > > > > > necessary.
> > > > > > 
> > > > > > Will check what I see with my test rig.
> > > > > 
> > > > > As far as I can see, this patch should not make a difference unless the
> > > > > ATA shost driver is setting the max_sectors value unnecessarily low.
> > > > 
> > > > That is my hunch too, hence my question about which host driver is being
> > > > used for this test... That is not apparent from the problem report.
> > > 
> > > We noticed that the commit is already in mainline now, and in our tests
> > > there is still a similar regression, also on other platforms.
> > > Could you guide us on how to check "which host driver is being used for
> > > this test"? We hope to supply some useful information.
> > > 
> > 
> > For me, a complete kernel log may help.
> 
> and since the test uses only 1 HDD, the output of the following would be helpful:
> 
> /sys/block/sda/queue/max_sectors_kb
> /sys/block/sda/queue/max_hw_sectors_kb
> 
> And for 5.19, if possible.

For commit 0568e61225 ("ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors"):

root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
512
root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
512

For both commit 4cbfca5f77 ("scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit") and v5.19:

root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
1280
root@lkp-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
32767
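
btw, is something like the following the right way to find out "which host
driver is being used"? (just a guess from our side, assuming sda is the disk
under test and that one of the hostN entries maps to it)

  cat /sys/class/scsi_host/host*/proc_name    # low-level driver name, e.g. "ahci"
  readlink -f /sys/block/sda/device           # resolves through the SCSI host path for sda
  lspci -nnk                                  # "Kernel driver in use" for the disk controller

If so, we can collect that output from the test machine as well.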

> 
> Thanks!

Thread overview:

2022-08-05  8:05 [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0% regression kernel test robot
2022-08-08 14:52 ` Damien Le Moal
2022-08-09  9:58   ` John Garry
2022-08-09 14:16     ` John Garry
2022-08-09 14:57       ` Damien Le Moal
2022-08-10  8:33         ` John Garry
2022-08-10 13:52           ` Damien Le Moal
2022-08-09 14:55     ` Damien Le Moal
2022-08-09 15:16       ` David Laight
2022-08-10 13:57         ` Damien Le Moal
2022-08-12  5:01       ` Oliver Sang
2022-08-12 11:13         ` John Garry
2022-08-12 14:58           ` John Garry
2022-08-16  6:57             ` Oliver Sang [this message]
2022-08-16 10:35               ` John Garry
2022-08-16 15:42                 ` Damien Le Moal
2022-08-16 16:38                   ` John Garry
2022-08-16 20:02                     ` Damien Le Moal
2022-08-16 20:44                       ` John Garry
2022-08-17 15:55                         ` Damien Le Moal
2022-08-17 13:51                     ` Oliver Sang
2022-08-17 14:04                       ` John Garry
2022-08-18  2:06                         ` Oliver Sang
2022-08-18  9:28                           ` John Garry
2022-08-19  6:24                             ` Oliver Sang
2022-08-19  7:54                               ` John Garry
2022-08-20 16:36                               ` Damien Le Moal
2022-08-12 15:41           ` Damien Le Moal
2022-08-12 17:17             ` John Garry
2022-08-12 18:27               ` Damien Le Moal
2022-08-13  7:23                 ` John Garry
2022-08-16  2:52           ` Oliver Sang
