* Extremely high context switches of i/o to NVM
@ 2015-07-24 20:00 Junjie Qian
  2015-07-24 20:27 ` Keith Busch
  0 siblings, 1 reply; 3+ messages in thread
From: Junjie Qian @ 2015-07-24 20:00 UTC (permalink / raw)


Hi List,

I ran an experiment with NVM on a NUMA machine and found that the number of context switches is extremely high.

The platform is: 1. Linux 4.1-rc7 with multi-queue enabled and kernel polling enabled (5 secs polling, though the results show little difference between polling and interrupts); 2. a 4-socket NUMA machine; 3. an Intel PC3700 NVM device.

The command is: sudo perf stat -e context-switches nice -n -20 numactl -C 0 fio-master/fio --name=1 --bs=4k --ioengine=libaio --iodepth=1 --rw=read --numjobs=1 --filename=/dev/nvme0n1 --thread --direct=1 --group_reporting --time_based=1 --runtime=60

The result is 3,567,428 context switches.
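
(For scale: 3,567,428 switches over the 60 s runtime is about 3,567,428 / 60 ≈ 59,500 context switches per second. Assuming roughly one switch per completed IO, that would correspond to on the order of 59k IOPS at queue depth 1; the IOPS fio actually reported for this run are not quoted here.)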

Could someone help me explain this? Is this number reasonable?
Thanks!
Best
Junjie


* Extremely high context switches of i/o to NVM
  2015-07-24 20:00 Extremely high context switches of i/o to NVM Junjie Qian
@ 2015-07-24 20:27 ` Keith Busch
  2015-07-24 23:03   ` Junjie Qian
  0 siblings, 1 reply; 3+ messages in thread
From: Keith Busch @ 2015-07-24 20:27 UTC (permalink / raw)


On Fri, 24 Jul 2015, Junjie Qian wrote:
> Hi List,
>
> I ran an experiment with NVM on a NUMA machine and found that the number of context switches is extremely high.
>
> The platform is: 1. Linux 4.1-rc7 with multi-queue enabled and kernel polling enabled (5 secs polling, though the results show little difference between polling and interrupts); 2. a 4-socket NUMA machine; 3. an Intel PC3700 NVM device.
>
> The command is: sudo perf stat -e context-switches nice -n -20 numactl -C 0 fio-master/fio --name=1 --bs=4k --ioengine=libaio --iodepth=1 --rw=read --numjobs=1 --filename=/dev/nvme0n1 --thread --direct=1 --group_reporting --time_based=1 --runtime=60
>
> The result is 3,567,428 context switches.
>
> Could someone help me explain this? Is this number reasonable?
> Thanks!

Sounds about right with an IO depth of 1. You're going to get a context
switch per IO, right?
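
To make the mechanism concrete, here is a minimal sketch of the pattern fio's libaio engine boils down to at iodepth=1 (this is not fio's code; the device path, block size, offset range and IO count are placeholders). Each IO is submitted with io_submit() and the thread then blocks in io_getevents() until the completion interrupt wakes it, so the submitting thread sleeps and is rescheduled roughly once per IO:

/* Sketch: 4k direct reads, one at a time, from an NVMe namespace via libaio.
 * Build: gcc qd1_read.c -laio -o qd1_read (needs the libaio headers).
 * The device path, block size, offsets and IO count are placeholders. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, 4096)) return 1; /* O_DIRECT wants aligned buffers */

        io_context_t ctx = 0;
        if (io_setup(1, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

        for (long i = 0; i < 100000; i++) {
                struct iocb cb, *cbs[1] = { &cb };
                struct io_event ev;

                /* queue depth 1: exactly one read in flight at a time */
                io_prep_pread(&cb, fd, buf, 4096, (i % 25600) * 4096LL);
                if (io_submit(ctx, 1, cbs) != 1) break;

                /* Blocks until the completion interrupt wakes this thread,
                 * so it is scheduled out and back in roughly once per IO. */
                if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) break;
        }

        io_destroy(ctx);
        close(fd);
        free(buf);
        return 0;
}

With a deeper queue the thread can submit a batch and wait once for several completions instead of once per IO, which is why the switch count tracks the IO count this closely only at shallow depths.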


* Extremely high context switches of i/o to NVM
  2015-07-24 20:27 ` Keith Busch
@ 2015-07-24 23:03   ` Junjie Qian
  0 siblings, 0 replies; 3+ messages in thread
From: Junjie Qian @ 2015-07-24 23:03 UTC (permalink / raw)


Hi Keith,

Thank you for the explanation! Yes, the number of context switches is equal to the number of IOs times the IO depth.

In my understanding, the IOs are submitted continuously by one thread, and a context switch only happens when that thread is scheduled out. I did not know that a context switch is needed between the IOs (the sketch below shows where it comes from).
Thanks!
Best
Junjie
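
If it helps to see that switch directly, here is a small, self-contained sketch (not fio; it uses plain blocking pread() with O_DIRECT instead of libaio, and the device path and IO count are placeholders) that counts the thread's voluntary context switches around a loop of direct reads using getrusage(RUSAGE_THREAD). With interrupt-driven completions the difference comes out close to the number of IOs, because the thread sleeps in each read until the device completion wakes it:

/* Sketch: each blocking direct read costs roughly one voluntary context switch.
 * Build: gcc vcsw.c -o vcsw (device path and IO count are placeholders). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, 4096)) return 1; /* O_DIRECT wants aligned buffers */

        struct rusage before, after;
        getrusage(RUSAGE_THREAD, &before);

        long ios = 10000;
        for (long i = 0; i < ios; i++)
                if (pread(fd, buf, 4096, (i % 25600) * 4096LL) != 4096) break;

        getrusage(RUSAGE_THREAD, &after);
        printf("%ld IOs -> %ld voluntary context switches\n",
               ios, after.ru_nvcsw - before.ru_nvcsw);

        free(buf);
        close(fd);
        return 0;
}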


----- Original Message -----
From: Keith Busch <keith.busch@intel.com>
To: Junjie Qian <junjie.qian at yahoo.com>
Cc: "linux-nvme at lists.infradead.org" <linux-nvme at lists.infradead.org>
Sent: Friday, July 24, 2015 3:27 PM
Subject: Re: Extremely high context switches of i/o to NVM

On Fri, 24 Jul 2015, Junjie Qian wrote:

> Hi List,
>
> I ran an experiment with NVM on a NUMA machine and found that the number of context switches is extremely high.
>
> The platform is: 1. Linux 4.1-rc7 with multi-queue enabled and kernel polling enabled (5 secs polling, though the results show little difference between polling and interrupts); 2. a 4-socket NUMA machine; 3. an Intel PC3700 NVM device.
>
> The command is: sudo perf stat -e context-switches nice -n -20 numactl -C 0 fio-master/fio --name=1 --bs=4k --ioengine=libaio --iodepth=1 --rw=read --numjobs=1 --filename=/dev/nvme0n1 --thread --direct=1 --group_reporting --time_based=1 --runtime=60
>
> The result is 3,567,428 context switches.
>
> Could someone help me explain this? Is this number reasonable?
> Thanks!

Sounds about right with an IO depth of 1. You're going to get a context
switch per IO, right?



