* pm8001 performance degradation?
@ 2011-07-11 19:06 ersatz splatt
  2011-07-12  1:27 ` Jack Wang
  0 siblings, 1 reply; 8+ messages in thread
From: ersatz splatt @ 2011-07-11 19:06 UTC (permalink / raw)
  To: jack_wang, lindar_liu; +Cc: linux-scsi

Hello Jack Wang and Lindar Liu,


I am running the pm8001 driver (on applicable hardware including a
several core SMP server).

When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
via an fio test.

When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
same storage I get about 15Kiops running the same fio test.

Perhaps something has changed in the kernel that is not being accounted for?
Are you two still maintaining this driver?


Regards,
David


* Re: pm8001 performance degradation?
  2011-07-11 19:06 pm8001 performance degradation? ersatz splatt
@ 2011-07-12  1:27 ` Jack Wang
  2011-07-12  2:44   ` ersatz splatt
  0 siblings, 1 reply; 8+ messages in thread
From: Jack Wang @ 2011-07-12  1:27 UTC (permalink / raw)
  To: 'ersatz splatt', lindar_liu; +Cc: linux-scsi

> Hello Jack Wang and Lindar Liu,
> 
> 
> I am running the pm8001 driver (on applicable hardware including a
> several core SMP server).
> 
> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
> via an fio test.
> 
> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
> same storage I get about 15Kiops running the same fio test.
> 
> Perhaps something has changed in the kernel that is not being accounted for?
> Are you two still maintaining this driver?
> 
> 
> Regards,
> David
[Jack Wang]  Could you give your detailed topology? I will later try to
investigate the performance issue, but as I remember, an Intel developer
reported on the mailing list that some changes in the block layer led to
JBOD performance degradation.



* Re: pm8001 performance degradation?
  2011-07-12  1:27 ` Jack Wang
@ 2011-07-12  2:44   ` ersatz splatt
  2011-07-12  2:46     ` ersatz splatt
  2011-07-12  4:18     ` Jack Wang
  0 siblings, 2 replies; 8+ messages in thread
From: ersatz splatt @ 2011-07-12  2:44 UTC (permalink / raw)
  To: Jack Wang; +Cc: lindar_liu, linux-scsi

I have one HBA connected directly to 4 SAS drives ... using a single 1
to four cable.


On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@usish.com> wrote:
>> Hello Jack Wang and Lindar Liu,
>>
>>
>> I am running the pm8001 driver (on applicable hardware including a
>> several core SMP server).
>>
>> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
>> via an fio test.
>>
>> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
>> same storage I get about 15Kiops running the same fio test.
>>
>> Perhaps something has changed in the kernel that is not being accounted for?
>> Are you two still maintaining this driver?
>>
>>
>> Regards,
>> David
> [Jack Wang]  Could you give your detailed topology? I will later try to
> investigate the performance issue, but as I remember, an Intel developer
> reported on the mailing list that some changes in the block layer led to
> JBOD performance degradation.
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: pm8001 performance degradation?
  2011-07-12  2:44   ` ersatz splatt
@ 2011-07-12  2:46     ` ersatz splatt
  2011-07-12  4:18     ` Jack Wang
  1 sibling, 0 replies; 8+ messages in thread
From: ersatz splatt @ 2011-07-12  2:46 UTC (permalink / raw)
  To: Jack Wang; +Cc: lindar_liu, linux-scsi

Please let me know if you would like any more details.

Can you point me to the degradation report you mentioned in your earlier mail?



On Mon, Jul 11, 2011 at 7:44 PM, ersatz splatt <ersatzsplatt@gmail.com> wrote:
> I have one HBA connected directly to 4 SAS drives ... using a single 1
> to four cable.
>
>
> On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@usish.com> wrote:
>>> Hello Jack Wang and Lindar Liu,
>>>
>>>
>>> I am running the pm8001 driver (on applicable hardware including a
>>> several core SMP server).
>>>
>>> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
>>> via an fio test.
>>>
>>> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
>>> same storage I get about 15Kiops running the same fio test.
>>>
>>> Perhaps something has changed in the kernel that is not being accounted for?
>>> Are you two still maintaining this driver?
>>>
>>>
>>> Regards,
>>> David
>> [Jack Wang]  Could you give your detailed topology? I will later try to
>> investigate the performance issue, but as I remember, an Intel developer
>> reported on the mailing list that some changes in the block layer led to
>> JBOD performance degradation.
>>
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* RE: pm8001 performance degradation?
  2011-07-12  2:44   ` ersatz splatt
  2011-07-12  2:46     ` ersatz splatt
@ 2011-07-12  4:18     ` Jack Wang
  2011-07-12 19:34       ` ersatz splatt
  1 sibling, 1 reply; 8+ messages in thread
From: Jack Wang @ 2011-07-12  4:18 UTC (permalink / raw)
  To: 'ersatz splatt'; +Cc: lindar_liu, linux-scsi

Could you share your fio test scripts? Disk details and the HBA firmware
version would also help, if available.

Jack
> 
> I have one HBA connected directly to 4 SAS drives ... using a single 1
> to four cable.
> 
> 
> On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@usish.com> wrote:
> >> Hello Jack Wang and Lindar Liu,
> >>
> >>
> >> I am running the pm8001 driver (on applicable hardware including a
> >> several core SMP server).
> >>
> >> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
> >> via an fio test.
> >>
> >> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
> >> same storage I get about 15Kiops running the same fio test.
> >>
> >> Perhaps something has changed in the kernel that is not being accounted for?
> >> Are you two still maintaining this driver?
> >>
> >>
> >> Regards,
> >> David
> > [Jack Wang]  Could you give your detailed topology? I will later try to
> > investigate the performance issue, but as I remember, an Intel developer
> > reported on the mailing list that some changes in the block layer led to
> > JBOD performance degradation.
> >
> >
> --
> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: pm8001 performance degradation?
  2011-07-12  4:18     ` Jack Wang
@ 2011-07-12 19:34       ` ersatz splatt
  2011-07-13  3:15         ` ersatz splatt
  0 siblings, 1 reply; 8+ messages in thread
From: ersatz splatt @ 2011-07-12 19:34 UTC (permalink / raw)
  To: Jack Wang; +Cc: lindar_liu, linux-scsi

Jack,

fio script is:
[global]
rw=read
direct=1
time_based
runtime=1m
ioengine=libaio
iodepth=32
bs=512
[dB]
filename=/dev/sdb
cpus_allowed=2
[dC]
filename=/dev/sdc
cpus_allowed=3
[dD]
filename=/dev/sdd
cpus_allowed=4
[dE]
filename=/dev/sde
cpus_allowed=5

(keep in mind this is a system with several cores)
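
For reference, that is saved as an ordinary fio job file and launched directly
with the fio binary; the file name below is only a placeholder for wherever it
happens to live:

# job file name is just a placeholder
fio pm8001-4disk.fio

fio then reports a separate iops= figure for each of the four jobs in its
summary output, which is where the numbers I quoted come from.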


Before running the script I (of course) shut down coalescing:
echo "2"> /sys/block/sdb/queue/nomerges
echo "2"> /sys/block/sdc/queue/nomerges
echo "2"> /sys/block/sdd/queue/nomerges
echo "2"> /sys/block/sde/queue/nomerges

echo noop > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler
echo noop > /sys/block/sdd/queue/scheduler
echo noop > /sys/block/sde/queue/scheduler
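
The same settings can also be applied in one go with a small loop (just a
sketch, assuming the four target devices are sdb through sde as above):

for d in sdb sdc sdd sde; do
    echo 2    > /sys/block/$d/queue/nomerges    # disable request merging, as above
    echo noop > /sys/block/$d/queue/scheduler   # noop elevator
done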

As you know, disk details are shown in the log on driver load:
pm8001 0000:05:00.0: pm8001: driver version 0.1.36
pm8001 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
scsi4 : pm8001
scsi 4:0:0:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
sd 4:0:0:0: [sdb] 286749488 512-byte logical blocks: (146 GB/136 GiB)
sd 4:0:0:0: Attached scsi generic sg1 type 0
sd 4:0:0:0: [sdb] Write Protect is off
sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports
DPO and FUA
 sdb: unknown partition table
sd 4:0:0:0: [sdb] Attached SCSI disk
scsi 4:0:1:0: Direct-Access     SEAGATE  ST9146803SS      0006 PQ: 0 ANSI: 5
sd 4:0:1:0: Attached scsi generic sg2 type 0
sd 4:0:1:0: [sdc] 286749488 512-byte logical blocks: (146 GB/136 GiB)
sd 4:0:1:0: [sdc] Write Protect is off
sd 4:0:1:0: [sdc] Write cache: enabled, read cache: enabled, supports
DPO and FUA
 sdc: unknown partition table
sd 4:0:1:0: [sdc] Attached SCSI disk
scsi 4:0:2:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
sd 4:0:2:0: [sdd] 286749488 512-byte logical blocks: (146 GB/136 GiB)
sd 4:0:2:0: Attached scsi generic sg3 type 0
sd 4:0:2:0: [sdd] Write Protect is off
sd 4:0:2:0: [sdd] Write cache: enabled, read cache: enabled, supports
DPO and FUA
 sdd: unknown partition table
sd 4:0:2:0: [sdd] Attached SCSI disk
scsi 4:0:3:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
sd 4:0:3:0: [sde] 286749488 512-byte logical blocks: (146 GB/136 GiB)
sd 4:0:3:0: Attached scsi generic sg4 type 0
sd 4:0:3:0: [sde] Write Protect is off
sd 4:0:3:0: [sde] Write cache: enabled, read cache: enabled, supports
DPO and FUA
 sde: unknown partition table
sd 4:0:3:0: [sde] Attached SCSI disk


The firmware version is 1.11.

Let me know if you have any other questions.  Please let me know if
you can confirm the performance degradation with the driver as it is.


David


On Mon, Jul 11, 2011 at 9:18 PM, Jack Wang <jack_wang@usish.com> wrote:
> Could you share your fio test scripts? Disk details and the HBA firmware
> version would also help, if available.
>
> Jack
>>
>> I have one HBA connected directly to 4 SAS drives ... using a single 1
>> to four cable.
>>
>>
>> On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@usish.com> wrote:
>> >> Hello Jack Wang and Lindar Liu,
>> >>
>> >>
>> >> I am running the pm8001 driver (on applicable hardware including a
>> >> several core SMP server).
>> >>
>> >> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
>> >> via an fio test.
>> >>
>> >> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
>> >> same storage I get about 15Kiops running the same fio test.
>> >>
>> >> Perhaps something has changed in the kernel that is not being accounted for?
>> >> Are you two still maintaining this driver?
>> >>
>> >>
>> >> Regards,
>> >> David
>> > [Jack Wang]  Could you give your detailed topology? I will later try to
>> > investigate the performance issue, but as I remember, an Intel developer
>> > reported on the mailing list that some changes in the block layer led to
>> > JBOD performance degradation.
>> >
>> >
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: pm8001 performance degradation?
  2011-07-12 19:34       ` ersatz splatt
@ 2011-07-13  3:15         ` ersatz splatt
  2011-07-13  4:08           ` Jack Wang
  0 siblings, 1 reply; 8+ messages in thread
From: ersatz splatt @ 2011-07-13  3:15 UTC (permalink / raw)
  To: Jack Wang; +Cc: lindar_liu, linux-scsi

Jack,

I think the apparent degradation was the result of profiling flags in
the .config file.

I turned off TASKSTATS, AUDIT, OPTIMIZE_FOR_SIZE, PROFILING (including
OPROFILE), and GCOV_KERNEL.

Somewhere in there I got the performance back.
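
In case anyone wants to reproduce this, the same options can be switched off
from the top of the kernel tree before rebuilding. This is only a sketch: it
assumes the tree has the scripts/config helper, and the exact symbol names
(e.g. CC_OPTIMIZE_FOR_SIZE for the optimize-for-size option) can vary between
kernel versions, so menuconfig is the fallback:

# disable the profiling/accounting options named above in .config
scripts/config --disable TASKSTATS \
               --disable AUDIT \
               --disable CC_OPTIMIZE_FOR_SIZE \
               --disable PROFILING \
               --disable OPROFILE \
               --disable GCOV_KERNEL
make oldconfig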

Since I had no intention of running any of those tools during my tests, I
did not expect them to have consequences (I would only have expected that if
I were actually using a tool).

Apologies for any confusion I passed to others.


David



On Tue, Jul 12, 2011 at 12:34 PM, ersatz splatt <ersatzsplatt@gmail.com> wrote:
> Jack,
>
> fio script is:
> [global]
> rw=read
> direct=1
> time_based
> runtime=1m
> ioengine=libaio
> iodepth=32
> bs=512
> [dB]
> filename=/dev/sdb
> cpus_allowed=2
> [dC]
> filename=/dev/sdc
> cpus_allowed=3
> [dD]
> filename=/dev/sdd
> cpus_allowed=4
> [dE]
> filename=/dev/sde
> cpus_allowed=5
>
> (keep in mind this is a system with several cores)
>
>
> Before running the script I (of course) shut down coalescing:
> echo "2"> /sys/block/sdb/queue/nomerges
> echo "2"> /sys/block/sdc/queue/nomerges
> echo "2"> /sys/block/sdd/queue/nomerges
> echo "2"> /sys/block/sde/queue/nomerges
>
> echo noop > /sys/block/sdb/queue/scheduler
> echo noop > /sys/block/sdc/queue/scheduler
> echo noop > /sys/block/sdd/queue/scheduler
> echo noop > /sys/block/sde/queue/scheduler
>
> As you know, disk details are shown in the log on driver load:
> pm8001 0000:05:00.0: pm8001: driver version 0.1.36
> pm8001 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
> scsi4 : pm8001
> scsi 4:0:0:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> sd 4:0:0:0: [sdb] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:0:0: Attached scsi generic sg1 type 0
> sd 4:0:0:0: [sdb] Write Protect is off
> sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sdb: unknown partition table
> sd 4:0:0:0: [sdb] Attached SCSI disk
> scsi 4:0:1:0: Direct-Access     SEAGATE  ST9146803SS      0006 PQ: 0 ANSI: 5
> sd 4:0:1:0: Attached scsi generic sg2 type 0
> sd 4:0:1:0: [sdc] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:1:0: [sdc] Write Protect is off
> sd 4:0:1:0: [sdc] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sdc: unknown partition table
> sd 4:0:1:0: [sdc] Attached SCSI disk
> scsi 4:0:2:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> sd 4:0:2:0: [sdd] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:2:0: Attached scsi generic sg3 type 0
> sd 4:0:2:0: [sdd] Write Protect is off
> sd 4:0:2:0: [sdd] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sdd: unknown partition table
> sd 4:0:2:0: [sdd] Attached SCSI disk
> scsi 4:0:3:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> sd 4:0:3:0: [sde] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:3:0: Attached scsi generic sg4 type 0
> sd 4:0:3:0: [sde] Write Protect is off
> sd 4:0:3:0: [sde] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sde: unknown partition table
> sd 4:0:3:0: [sde] Attached SCSI disk
>
>
> The firmware version is 1.11.
>
> Let me know if you have any other questions.  Please let me know if
> you can confirm the performance degradation with the driver as it is.
>
>
> David
>
>
> On Mon, Jul 11, 2011 at 9:18 PM, Jack Wang <jack_wang@usish.com> wrote:
>> Could you share your fio test scripts? Disk details and the HBA firmware
>> version would also help, if available.
>>
>> Jack
>>>
>>> I have one HBA connected directly to 4 SAS drives ... using a single 1
>>> to four cable.
>>>
>>>
>>> On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@usish.com> wrote:
>>> >> Hello Jack Wang and Lindar Liu,
>>> >>
>>> >>
>>> >> I am running the pm8001 driver (on applicable hardware including a
>>> >> several core SMP server).
>>> >>
>>> >> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
>>> >> via an fio test.
>>> >>
>>> >> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
>>> >> same storage I get about 15Kiops running the same fio test.
>>> >>
>>> >> Perhaps something has changed in the kernel that is not being accounted for?
>>> >> Are you two still maintaining this driver?
>>> >>
>>> >>
>>> >> Regards,
>>> >> David
>>> > [Jack Wang]  Could you give your detailed topology? I will later try to
>>> > investigate the performance issue, but as I remember, an Intel developer
>>> > reported on the mailing list that some changes in the block layer led to
>>> > JBOD performance degradation.
>>> >
>>> >
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: pm8001 performance degradation?
  2011-07-13  3:15         ` ersatz splatt
@ 2011-07-13  4:08           ` Jack Wang
  0 siblings, 0 replies; 8+ messages in thread
From: Jack Wang @ 2011-07-13  4:08 UTC (permalink / raw)
  To: 'ersatz splatt'; +Cc: lindar_liu, linux-scsi

> Jack,
> 
> I think the apparent degradation was the result of profiling flags in
> the .config file.
> 
> I turned off TASKSTATS, AUDIT, OPTIMIZE_FOR_SIZE, PROFILING (including
> OPROFILE), and GCOV_KERNEL.
> 
> Somewhere in there I got the performance back.
> 
> Since I had no intention of running any of those tools during my tests, I
> did not expect them to have consequences (I would only have expected that if
> I were actually using a tool).
> 
> Apologies for any confusion I passed to others.
> 
> 
> David
> 
[Jack Wang] Nice to hear that.
> 
> 
> On Tue, Jul 12, 2011 at 12:34 PM, ersatz splatt <ersatzsplatt@gmail.com> wrote:
> > Jack,
> >
> > fio script is:
> > [global]
> > rw=read
> > direct=1
> > time_based
> > runtime=1m
> > ioengine=libaio
> > iodepth=32
> > bs=512
> > [dB]
> > filename=/dev/sdb
> > cpus_allowed=2
> > [dC]
> > filename=/dev/sdc
> > cpus_allowed=3
> > [dD]
> > filename=/dev/sdd
> > cpus_allowed=4
> > [dE]
> > filename=/dev/sde
> > cpus_allowed=5
> >
> > (keep in mind this is a system with several cores)
> >
> >
> > Before running the script I (of course) shut down coalescing:
> > echo "2"> /sys/block/sdb/queue/nomerges
> > echo "2"> /sys/block/sdc/queue/nomerges
> > echo "2"> /sys/block/sdd/queue/nomerges
> > echo "2"> /sys/block/sde/queue/nomerges
> >
> > echo noop > /sys/block/sdb/queue/scheduler
> > echo noop > /sys/block/sdc/queue/scheduler
> > echo noop > /sys/block/sdd/queue/scheduler
> > echo noop > /sys/block/sde/queue/scheduler
> >
> > As you know, disk details are shown in the log on driver load:
> > pm8001 0000:05:00.0: pm8001: driver version 0.1.36
> > pm8001 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
> > scsi4 : pm8001
> > scsi 4:0:0:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> > sd 4:0:0:0: [sdb] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> > sd 4:0:0:0: Attached scsi generic sg1 type 0
> > sd 4:0:0:0: [sdb] Write Protect is off
> > sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports
> > DPO and FUA
> >  sdb: unknown partition table
> > sd 4:0:0:0: [sdb] Attached SCSI disk
> > scsi 4:0:1:0: Direct-Access     SEAGATE  ST9146803SS      0006 PQ: 0 ANSI: 5
> > sd 4:0:1:0: Attached scsi generic sg2 type 0
> > sd 4:0:1:0: [sdc] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> > sd 4:0:1:0: [sdc] Write Protect is off
> > sd 4:0:1:0: [sdc] Write cache: enabled, read cache: enabled, supports
> > DPO and FUA
> >  sdc: unknown partition table
> > sd 4:0:1:0: [sdc] Attached SCSI disk
> > scsi 4:0:2:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> > sd 4:0:2:0: [sdd] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> > sd 4:0:2:0: Attached scsi generic sg3 type 0
> > sd 4:0:2:0: [sdd] Write Protect is off
> > sd 4:0:2:0: [sdd] Write cache: enabled, read cache: enabled, supports
> > DPO and FUA
> >  sdd: unknown partition table
> > sd 4:0:2:0: [sdd] Attached SCSI disk
> > scsi 4:0:3:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> > sd 4:0:3:0: [sde] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> > sd 4:0:3:0: Attached scsi generic sg4 type 0
> > sd 4:0:3:0: [sde] Write Protect is off
> > sd 4:0:3:0: [sde] Write cache: enabled, read cache: enabled, supports
> > DPO and FUA
> >  sde: unknown partition table
> > sd 4:0:3:0: [sde] Attached SCSI disk
> >
> >
> > The firmware version is 1.11.
> >
> > Let me know if you have any other questions.  Please let me know if
> > you can confirm the performance degradation with the driver as it is.
> >
> >
> > David
> >
> >
> > On Mon, Jul 11, 2011 at 9:18 PM, Jack Wang <jack_wang@usish.com> wrote:
> >> Could you share your fio test scripts? Disk details and the HBA firmware
> >> version would also help, if available.
> >>
> >> Jack
> >>>
> >>> I have one HBA connected directly to 4 SAS drives ... using a single 1
> >>> to four cable.
> >>>
> >>>
> >>> On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@usish.com> wrote:
> >>> >> Hello Jack Wang and Lindar Liu,
> >>> >>
> >>> >>
> >>> >> I am running the pm8001 driver (on applicable hardware including a
> >>> >> several core SMP server).
> >>> >>
> >>> >> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73Kiops
> >>> >> via an fio test.
> >>> >>
> >>> >> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
> >>> >> same storage I get about 15Kiops running the same fio test.
> >>> >>
> >>> >> Perhaps something has changed in the kernel that is not being accounted for?
> >>> >> Are you two still maintaining this driver?
> >>> >>
> >>> >>
> >>> >> Regards,
> >>> >> David
> >>> > [Jack Wang]  Could you give your detailed topology? I will later try to
> >>> > investigate the performance issue, but as I remember, an Intel developer
> >>> > reported on the mailing list that some changes in the block layer led to
> >>> > JBOD performance degradation.
> >>> >
> >>> >
> >>> --
> >>> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> >>> the body of a message to majordomo@vger.kernel.org
> >>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >>
> >>
> >
> --
> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Thread overview: 8+ messages
2011-07-11 19:06 pm8001 performance degradation? ersatz splatt
2011-07-12  1:27 ` Jack Wang
2011-07-12  2:44   ` ersatz splatt
2011-07-12  2:46     ` ersatz splatt
2011-07-12  4:18     ` Jack Wang
2011-07-12 19:34       ` ersatz splatt
2011-07-13  3:15         ` ersatz splatt
2011-07-13  4:08           ` Jack Wang
