* Scalability issue with multiple NVMe Devices with one core
@ 2016-10-13  5:44 Roy Shterman
  2016-10-13 14:18 ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Roy Shterman @ 2016-10-13  5:44 UTC (permalink / raw)



Hi,

The scenario: running traffic against one NVMe device with one core, I
get X IOPS and Y% core utilization.

My expectation was that adding more NVMe devices would scale those
results roughly linearly, but I'm seeing only a small improvement in
IOPS, and CPU utilization is still nowhere near 100%.

Any suggestions?

Thanks,

Roy

* Scalability issue with multiple NVMe Devices with one core
  2016-10-13 14:18 ` Keith Busch
@ 2016-10-13 14:11   ` Roy Shterman
  2016-10-13 14:44     ` Keith Busch
  2016-10-13 14:13   ` Roy Shterman
  1 sibling, 1 reply; 6+ messages in thread
From: Roy Shterman @ 2016-10-13 14:11 UTC (permalink / raw)




On 10/13/2016 5:18 PM, Keith Busch wrote:
> On Thu, Oct 13, 2016 at 08:44:37AM +0300, Roy Shterman wrote:
>> The scenario: running traffic against one NVMe device with one core,
>> I get X IOPS and Y% core utilization.
>>
>> My expectation was that adding more NVMe devices would scale those
>> results roughly linearly, but I'm seeing only a small improvement in
>> IOPS, and CPU utilization is still nowhere near 100%.
>>
>> Any suggestions?
> How are you generating IO?

fio --group_reporting --rw=randread --bs=4k --numjobs=1 --ramp_time=30 \
    --iodepth=1 --runtime=300 --direct=1 --time_based --loops=1 \
    --ioengine=libaio --invalidate=1 --randrepeat=1 --norandommap --exitall \
    --name task_nvme0n1 --filename=/dev/nvme0n1

* Scalability issue with multiple NVMe Devices with one core
  2016-10-13 14:18 ` Keith Busch
  2016-10-13 14:11   ` Roy Shterman
@ 2016-10-13 14:13   ` Roy Shterman
  1 sibling, 0 replies; 6+ messages in thread
From: Roy Shterman @ 2016-10-13 14:13 UTC (permalink / raw)




On 10/13/2016 5:18 PM, Keith Busch wrote:
> On Thu, Oct 13, 2016 at 08:44:37AM +0300, Roy Shterman wrote:
>> The scenario: running traffic against one NVMe device with one core,
>> I get X IOPS and Y% core utilization.
>>
>> My expectation was that adding more NVMe devices would scale those
>> results roughly linearly, but I'm seeing only a small improvement in
>> IOPS, and CPU utilization is still nowhere near 100%.
>>
>> Any suggestions?
> How are you generating IO?

Sorry, the command in my last response used --iodepth=1; I actually
change the iodepth to between 8 and 32.
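
For reference, a sketch of the same command with the deeper queue (only
--iodepth differs from the invocation posted earlier; 32 is one assumed
value from the 8-32 range mentioned above):

# identical to the earlier command, with --iodepth raised to 32
fio --group_reporting --rw=randread --bs=4k --numjobs=1 --ramp_time=30 \
    --iodepth=32 --runtime=300 --direct=1 --time_based --loops=1 \
    --ioengine=libaio --invalidate=1 --randrepeat=1 --norandommap --exitall \
    --name task_nvme0n1 --filename=/dev/nvme0n1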

* Scalability issue with multiple NVMe Devices with one core
  2016-10-13  5:44 Scalability issue with multiple NVMe Devices with one core Roy Shterman
@ 2016-10-13 14:18 ` Keith Busch
  2016-10-13 14:11   ` Roy Shterman
  2016-10-13 14:13   ` Roy Shterman
  0 siblings, 2 replies; 6+ messages in thread
From: Keith Busch @ 2016-10-13 14:18 UTC (permalink / raw)


On Thu, Oct 13, 2016 at 08:44:37AM +0300, Roy Shterman wrote:
> The scenario: running traffic against one NVMe device with one core, I
> get X IOPS and Y% core utilization.
> 
> My expectation was that adding more NVMe devices would scale those
> results roughly linearly, but I'm seeing only a small improvement in
> IOPS, and CPU utilization is still nowhere near 100%.
> 
> Any suggestions?

How are you generating IO?

* Scalability issue with multiple NVMe Devices with one core
  2016-10-13 14:11   ` Roy Shterman
@ 2016-10-13 14:44     ` Keith Busch
       [not found]       ` <2F56CFEA-4C6D-48B9-92B0-C4E74AF0B60B@mellanox.com>
  0 siblings, 1 reply; 6+ messages in thread
From: Keith Busch @ 2016-10-13 14:44 UTC (permalink / raw)


On Thu, Oct 13, 2016 at 05:11:58PM +0300, Roy Shterman wrote:
> On 10/13/2016 5:18 PM, Keith Busch wrote:
> > On Thu, Oct 13, 2016 at 08:44:37AM +0300, Roy Shterman wrote:
> > > The scenario: running traffic against one NVMe device with one
> > > core, I get X IOPS and Y% core utilization.
> > > 
> > > My expectation was that adding more NVMe devices would scale those
> > > results roughly linearly, but I'm seeing only a small improvement
> > > in IOPS, and CPU utilization is still nowhere near 100%.
> > > 
> > > Any suggestions?
> > How are you generating IO?
> 
> fio --group_reporting --rw=randread --bs=4k --numjobs=1 --ramp_time=30
> --iodepth=1 --runtime=300 --direct=1 --time_based --loops=1
> --ioengine=libaio --invalidate=1 --randrepeat=1 --norandommap --exitall
> --name task_nvme0n1 --filename=/dev/nvme0n1

And if you append "--name task_nvme1n1 --filename=/dev/nvme1n1" to this
command, you are not observing a meaningful IOPS improvement?
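
In other words (a sketch only, reusing the options quoted above; the
second --name/--filename pair is the addition being suggested here):

fio --group_reporting --rw=randread --bs=4k --numjobs=1 --ramp_time=30 \
    --iodepth=1 --runtime=300 --direct=1 --time_based --loops=1 \
    --ioengine=libaio --invalidate=1 --randrepeat=1 --norandommap --exitall \
    --name task_nvme0n1 --filename=/dev/nvme0n1 \
    --name task_nvme1n1 --filename=/dev/nvme1n1

Options given before the first --name apply to both job sections, so
both devices are driven with the same block size and queue depth.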

* Scalability issue with multiple NVMe Devices with one core
       [not found]       ` <2F56CFEA-4C6D-48B9-92B0-C4E74AF0B60B@mellanox.com>
@ 2016-10-13 15:07         ` Keith Busch
  0 siblings, 0 replies; 6+ messages in thread
From: Keith Busch @ 2016-10-13 15:07 UTC (permalink / raw)


On Thu, Oct 13, 2016 at 02:48:34PM +0000, Roy Shterman wrote:
>> And if you append "--name task_nvme1n1 --filename=/dev/nvme1n1" to this
>> command, you are not observing a
>> meaningful IOPS improvement?
> 
> I appended only --filename=/dev/nvme0n1 without the new --name. I'm
> not sure whether it makes any difference, but I can check.

If you don't add --name, fio isn't going to start another job, so you're
not accessing the devices in parallel and shouldn't expect an IOPS
improvement.
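
The job-file form may make the distinction clearer. A minimal sketch
assembled from the options in the command quoted earlier (each [section]
is one job, which is what --name creates on the command line; section
and device names are taken from that command, and the second device is
assumed to be /dev/nvme1n1):

; shared options for every job section
[global]
ioengine=libaio
rw=randread
bs=4k
iodepth=1
direct=1
numjobs=1
ramp_time=30
runtime=300
time_based
loops=1
invalidate=1
randrepeat=1
norandommap
exitall
group_reporting

; one job section per device, so fio drives both in parallel
[task_nvme0n1]
filename=/dev/nvme0n1

[task_nvme1n1]
filename=/dev/nvme1n1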

end of thread

Thread overview: 6+ messages
2016-10-13  5:44 Scalability issue with multiple NVMe Devices with one core Roy Shterman
2016-10-13 14:18 ` Keith Busch
2016-10-13 14:11   ` Roy Shterman
2016-10-13 14:44     ` Keith Busch
     [not found]       ` <2F56CFEA-4C6D-48B9-92B0-C4E74AF0B60B@mellanox.com>
2016-10-13 15:07         ` Keith Busch
2016-10-13 14:13   ` Roy Shterman
