* [RFC] IO scheduler based io controller (V5)
@ 2009-06-19 20:37 ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz


Hi All,

Here is the V5 of the IO controller patches generated on top of 2.6.30.

Previous versions of the patches were posted here.

(V1) http://lkml.org/lkml/2009/3/11/486
(V2) http://lkml.org/lkml/2009/5/5/275
(V3) http://lkml.org/lkml/2009/5/26/472
(V4) http://lkml.org/lkml/2009/6/8/580

This patchset is still a work in progress, but I want to keep getting snapshots
of my tree out at regular intervals for feedback, hence V5.

Changes from V4
===============
- Implemented bdi_*_congested_group() functions to also determine whether a
  particular io group on a bdi is congested or not. So far we only determined
  whether the bdi as a whole was congested. But now there is one request list
  per group, and one also needs to check whether the particular io group the
  io is going into is congested.

- Fixed preemption logic in hierarchical mode. In hierarchical mode, one
  needs to traverse up the hierarchy so that the current queue and the new
  queue are at the same level before deciding whether preemption should be
  done or not. Took the idea and code from the CFS cpu scheduler.

- There were some tunables which were appearing under the
  /sys/block/<device>/queue dir but which actually belonged to the
  ioschedulers in hierarchical mode. Fixed it.
 
- Fixed another preemption issue where, if any RT queue was pending
  (busy_rt_queues), the current queue was being expired. Now this preemption
  is done only if there are busy_rt_queues in the same group.

  (Though I think that busy_rt_queues is redundant code, as the moment an RT
   request comes in we preempt the BE queue, so we should never run into the
   issue of an RT request pending while a BE queue is running. Keeping the
   code for the time being.)
 
- Applied the patch from Gui which gets rid of the only_root_group code and
  instead uses the cgroup's children list to determine whether the root group
  is the only group or there are children too.

- Applied a few cleanup patches from Gui.

- We store the device id (major, minor) in the io group. Previously I was
  retrieving that info from the bio. Switched to getting it from the
  backing device.

Limitations
===========

- This IO controller provides bandwidth control at the IO scheduler
  level (the leaf nodes in a stacked hierarchy of logical devices). So there
  can be cases (depending on configuration) where an application does not see
  proportional BW division at a higher level logical device.

  LWN has written an article about the issue here.

	http://lwn.net/Articles/332839/

How to solve the issue of fairness at higher level logical devices
==================================================================
A couple of suggestions have come forward.

- Implement IO control at the IO scheduler layer and then, with the help of
  some daemon, adjust the weights on underlying devices dynamically, depending
  on what kind of BW guarantees are to be achieved at the higher level logical
  block devices.

- Also implement a higher level IO controller along with the IO scheduler
  based controller and let the user choose one depending on his needs.

  A higher level controller does not know about the assumptions/policies
  of the underlying IO scheduler, hence it has the potential to break the
  IO scheduler's policy within a cgroup. A lower level controller
  can work with the IO scheduler much more closely and efficiently.
 
Other active IO controller developments
=======================================

IO throttling
-------------

  This is a max bandwidth controller and not a proportional one. Secondly,
  it is a second level controller which can break the IO scheduler's
  policy/assumptions within a cgroup.

dm-ioband
---------

 This is a proportional bandwidth controller implemented as a device mapper
 driver. It is also a second level controller which can break the
 IO scheduler's policy/assumptions within a cgroup.

Testing
=======

I have been able to do only very basic testing of reads and writes.

Test1 (Fairness for synchronous reads)
======================================
- Ran two "dd" threads in two cgroups with cgroup weights 1000 and 500 (with
  the CFQ scheduler and /sys/block/<device>/queue/fairness = 1).

dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null &
dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null &

234179072 bytes (234 MB) copied, 3.9065 s, 59.9 MB/s
234179072 bytes (234 MB) copied, 5.19232 s, 45.1 MB/s

group1 time=8 16 2471 group1 sectors=8 16 457840
group2 time=8 16 1220 group2 sectors=8 16 225736

The first two fields in the time and sectors statistics represent the major
and minor number of the device. The third field represents the disk time in
milliseconds and the number of sectors transferred, respectively.

This patchset tries to provide fairness in terms of disk time received. group1
got almost double the disk time of group2 (at the time the first dd finished).
These time and sectors statistics can be read using the io.disk_time and
io.disk_sectors files in the cgroup. More about them in the documentation file.

Test2 (Fairness for async writes)
=================================
Fairness for async writes is tricky, and the biggest reason is that async
writes are cached in higher layers (page cache) and possibly in the file
system layer as well (btrfs, xfs etc.), and are not necessarily dispatched to
the lower layers in a proportional manner.

For example, consider two dd threads reading /dev/zero as the input file and
writing out huge files. Very soon we will cross vm_dirty_ratio and the dd
threads will be forced to write out some pages to disk before more pages can
be dirtied. But the dirty pages picked are not necessarily those of the same
thread. Writeback can very well pick the inode of the lower priority dd thread
and do some writeout there. So effectively the higher weight dd is doing
writeouts of the lower weight dd's pages and we don't see service
differentiation.

IOW, the core problem with async write fairness is that the higher weight
thread does not throw enough IO traffic at the IO controller to keep its queue
continuously backlogged. In my testing, there are many 0.2 to 0.8 second
intervals where the higher weight queue is empty, and in that duration the
lower weight queue gets lots of work done, giving the impression that there
was no service differentiation.

In summary, from the IO controller's point of view, async write support is
there. But because the page cache has not been designed so that a higher
prio/weight writer can do more writeout than a lower prio/weight writer,
getting service differentiation is hard; it is visible in some cases and not
in others.

To get fairness for async writes in all cases, the higher layers need to be
fixed. That is probably a lot of work. Do we really care that much about
fairness between two writer cgroups? One can choose to do direct IO if
fairness for buffered writes really matters. I think we care more about
fairness in the following cases, and with this patchset we should be able to
achieve that.

- Read Vs Read
- Read Vs Writes (Buffered writes or direct IO writes)
	- Making sure that isolation is achieved between the reader and the
	  writer cgroups.
- All forms of direct IO.

The following is the only case where it is hard to ensure fairness between
cgroups because of the higher layer design.

- Buffered writes Vs Buffered Writes.

So to test async writes I generated lots of write traffic in two cgroups (50
fio jobs in each) and watched the disk time statistics of the respective
cgroups at an interval of 2 seconds. Thanks to Ryo Tsuruta for the test case.

*****************************************************************
sync
echo 3 > /proc/sys/vm/drop_caches

fio_args="--size=64m --rw=write --numjobs=50 --group_reporting"

echo $$ > /cgroup/bfqio/test1/tasks
fio $fio_args --name=test1 --directory=/mnt/sdd1/fio/ --output=/mnt/sdd1/fio/test1.log &

echo $$ > /cgroup/bfqio/test2/tasks
fio $fio_args --name=test2 --directory=/mnt/sdd2/fio/ --output=/mnt/sdd2/fio/test2.log &
*********************************************************************** 
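The fio script above assumes that the test1 and test2 cgroups already exist
under the bfqio hierarchy with the desired weights; a minimal sketch of that
setup (the weights are an assumption here, chosen to match the roughly 2:1
service split seen in the statistics below):

	mkdir -p /cgroup/bfqio/test1 /cgroup/bfqio/test2
	echo 1000 > /cgroup/bfqio/test1/io.weight
	echo 500 > /cgroup/bfqio/test2/io.weight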

And watched the disk time and sector statistics for both the cgroups every
2 seconds using a script (a sketch of such a script follows the output below).
Here is a snippet from the output.

test1 statistics: time=8 48 1315   sectors=8 48 55776 dq=8 48 1
test2 statistics: time=8 48 633   sectors=8 48 14720 dq=8 48 2

test1 statistics: time=8 48 5586   sectors=8 48 339064 dq=8 48 2
test2 statistics: time=8 48 2985   sectors=8 48 146656 dq=8 48 3

test1 statistics: time=8 48 9935   sectors=8 48 628728 dq=8 48 3
test2 statistics: time=8 48 5265   sectors=8 48 278688 dq=8 48 4

test1 statistics: time=8 48 14156   sectors=8 48 932488 dq=8 48 6
test2 statistics: time=8 48 7646   sectors=8 48 412704 dq=8 48 7

test1 statistics: time=8 48 18141   sectors=8 48 1231488 dq=8 48 10
test2 statistics: time=8 48 9820   sectors=8 48 548400 dq=8 48 8

test1 statistics: time=8 48 21953   sectors=8 48 1485632 dq=8 48 13
test2 statistics: time=8 48 12394   sectors=8 48 698288 dq=8 48 10

test1 statistics: time=8 48 25167   sectors=8 48 1705264 dq=8 48 13
test2 statistics: time=8 48 14042   sectors=8 48 817808 dq=8 48 10

The first two fields in the time and sectors statistics represent the major
and minor number of the device. The third field represents the disk time in
milliseconds and the number of sectors transferred, respectively.

So the disk time consumed by test1 is almost double that of test2.
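
For reference, a minimal sketch of the kind of watcher script used above. It
assumes the io.disk_time and io.disk_sectors cgroup files described in the
documentation patch, plus the dequeue debug statistics file (named
io.disk_dequeue here, available only with CONFIG_DEBUG_GROUP_IOSCHED=y) for
the "dq" column:

	while true; do
		for g in test1 test2; do
			echo "$g statistics:" \
			     "time=$(cat /cgroup/bfqio/$g/io.disk_time)" \
			     "sectors=$(cat /cgroup/bfqio/$g/io.disk_sectors)" \
			     "dq=$(cat /cgroup/bfqio/$g/io.disk_dequeue)"
		done
		echo
		sleep 2
	done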

TODO
====
- Lots of code cleanups, testing, bug fixing, optimizations, benchmarking
  etc...

- Work on a better interface (possibly cgroup based) for configuring per
  group request descriptor limits.

- Debug and fix some of the areas like page cache where higher weight cgroup
  async writes are stuck behind lower weight cgroup async writes.

Thanks
Vivek


* [PATCH 01/20] io-controller: Documentation
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal@redhat.com>
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 02/20] io-controller: Common flat fair queuing code in elevator layer Vivek Goyal
                     ` (20 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah, lizf
  Cc: akpm, snitzer, agk

o Documentation for io-controller.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 Documentation/block/00-INDEX          |    2 +
 Documentation/block/io-controller.txt |  360 +++++++++++++++++++++++++++++++++
 2 files changed, 362 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/block/io-controller.txt

diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX
index 961a051..dc8bf95 100644
--- a/Documentation/block/00-INDEX
+++ b/Documentation/block/00-INDEX
@@ -10,6 +10,8 @@ capability.txt
 	- Generic Block Device Capability (/sys/block/<disk>/capability)
 deadline-iosched.txt
 	- Deadline IO scheduler tunables
+io-controller.txt
+	- IO controller for providing hierarchical IO scheduling
 ioprio.txt
 	- Block io priorities (in CFQ scheduler)
 request.txt
diff --git a/Documentation/block/io-controller.txt b/Documentation/block/io-controller.txt
new file mode 100644
index 0000000..bf95bf7
--- /dev/null
+++ b/Documentation/block/io-controller.txt
@@ -0,0 +1,360 @@
+				IO Controller
+				=============
+
+Overview
+========
+
+This patchset implements a proportional weight IO controller. That is, one
+can create cgroups and assign prio/weights to those cgroups, and each task
+group will get access to the disk in proportion to the weight of the group.
+
+These patches modify the elevator layer and individual IO schedulers to do
+IO control, hence this io controller works only on block devices which use
+one of the standard io schedulers; it can not be used with an arbitrary
+logical block device.
+
+The assumption/thought behind modifying the IO schedulers is that resource
+control is needed only on the leaf nodes, where the actual contention for
+resources is present, and not on intermediate logical block devices.
+
+Consider the following hypothetical scenario. Let's say there are three
+physical disks, namely sda, sdb and sdc. Two logical volumes (lv0 and lv1)
+have been created on top of these. Some part of sdb is in lv0 and some in lv1.
+
+			    lv0      lv1
+			  /	\  /     \
+			sda      sdb      sdc
+
+Also consider following cgroup hierarchy
+
+				root
+				/   \
+			       A     B
+			      / \    / \
+			     T1 T2  T3  T4
+
+A and B are two cgroups and T1, T2, T3 and T4 are tasks within those cgroups.
+Assume T1, T2, T3 and T4 are doing IO on lv0 and lv1. These tasks should
+get their fair share of bandwidth on disks sda, sdb and sdc. There is no
+IO control on the intermediate logical block nodes (lv0, lv1).
+
+So if tasks T1 and T2 are doing IO on lv0 and T3 and T4 are doing IO on lv1
+only, there will not be any contention for resources between groups A and B
+if the IO is going to sda or sdc. But if the actual IO gets translated to disk
+sdb, then the IO scheduler associated with sdb will distribute disk bandwidth
+to groups A and B in proportion to their weights.
+
+CFQ already has the notion of fairness and it provides differential disk
+access based on the priority and class of the task. It is just that it is
+flat, and with cgroups it needs to be made hierarchical to achieve good
+hierarchical control on IO.
+
+The rest of the IO schedulers (noop, deadline and AS) don't have any notion
+of fairness among various threads. They maintain only one queue where all
+the IO gets queued (internally this queue is split into read and write queues
+for deadline and AS). With this patchset, we now maintain one queue per
+cgroup per device and then try to do fair queuing among those queues.
+
+One of the concerns raised with modifying IO schedulers was that we don't
+want to replicate the code in all the IO schedulers. These patches share
+the fair queuing code, which has been moved to a common layer (the elevator
+layer). Hence we don't end up replicating code across IO schedulers. The
+following diagram depicts the concept.
+
+			--------------------------------
+			| Elevator Layer + Fair Queuing |
+			--------------------------------
+			 |	     |	     |       |
+			NOOP     DEADLINE    AS     CFQ
+
+Design
+======
+This patchset primarily uses BFQ (Budget Fair Queuing) code to provide
+fairness among different IO queues. Fabio and Paolo implemented BFQ, which
+uses the B-WF2Q+ algorithm for fair queuing.
+
+Why BFQ?
+
+- Not sure if the weighted round robin logic of CFQ can be easily extended
+  for hierarchical mode. One of the issues is that we can not keep dividing
+  the time slice of a parent group among its children. The deeper we go in
+  the hierarchy, the smaller the time slice gets.
+
+  One of the ways to implement hierarchical support could be to keep track
+  of the virtual time and service provided to each queue/group and select a
+  queue/group for service based on any of the various available algorithms.
+
+  BFQ already had support for hierarchical scheduling, so taking those
+  patches was easier.
+
+- BFQ was designed to provide tighter bounds/delay w.r.t. the service provided
+  to a queue. Delay/jitter with BFQ is O(1).
+
+  Note: BFQ originally used the amount of IO done (number of sectors) as the
+        notion of service provided. IOW, it tried to provide fairness in terms
+        of actual IO done and not in terms of the actual time disk access was
+	given to a queue.
+
+	This patchset modified BFQ to provide fairness in the time domain
+	because that's what CFQ does. So the idea was to try not to deviate
+	too much from the CFQ behavior initially.
+
+	Providing fairness in the time domain makes accounting tricky because,
+	due to command queueing, at one time there might be multiple requests
+	from different queues and there is no easy way to find out how much
+	disk time was actually consumed by the requests of a particular
+	queue. More about this in the comments in the source code.
+
+We have taken the BFQ code as a starting point for providing fairness among
+groups because it already contained lots of features which we required to
+implement hierarchical IO scheduling. With this patch set, I am not trying to
+ensure O(1) delay here, as my goal is to provide fairness among groups. Most
+likely that will mean that latencies are not worse than what cfq currently
+provides (if not improved). Once fairness is ensured, one can look more into
+ensuring O(1) latencies.
+
+From a data structure point of view, one can think of a tree per device,
+where io groups and io queues hang and are scheduled using the B-WF2Q+
+algorithm. io_queue is the end queue where requests are actually stored and
+dispatched from (like cfqq).
+
+These io queues are primarily created and managed by the end io schedulers
+depending on their semantics. For example, the noop, deadline and AS
+ioschedulers keep one io queue per cgroup and cfq keeps one io queue per
+io_context in a cgroup (apart from async queues).
+
+A request is mapped to an io group by the elevator layer, and which io queue
+it is mapped to within the group depends on the ioscheduler. Currently the
+"current" task is used to determine the cgroup (hence io group) of the
+request. Down the line we need to make use of the bio-cgroup patches to map
+delayed writes to the right group.
+
+Going back to old behavior
+==========================
+In the new scheme of things we are essentially creating hierarchical fair
+queuing logic in the elevator layer and changing the IO schedulers to make
+use of that logic so that the end IO schedulers support hierarchical
+scheduling.
+
+The elevator layer continues to support the old interfaces. So even if fair
+queuing is enabled at the elevator layer, one can have both the new
+hierarchical scheduler and the old non-hierarchical scheduler operating.
+
+Also, noop, deadline and AS have the option of enabling hierarchical
+scheduling. If it is selected, fair queuing is done in a hierarchical manner.
+If hierarchical scheduling is disabled, noop, deadline and AS should retain
+their existing behavior.
+
+CFQ is the only exception where one can not disable fair queuing, as it is
+needed for providing fairness among various threads even in non-hierarchical
+mode.
+
+Various user visible config options
+===================================
+CONFIG_IOSCHED_NOOP_HIER
+	- Enables hierarchical fair queuing in noop. Not selecting this option
+	  leads to old behavior of noop.
+
+CONFIG_IOSCHED_DEADLINE_HIER
+	- Enables hierarchical fair queuing in deadline. Not selecting this
+	  option leads to old behavior of deadline.
+
+CONFIG_IOSCHED_AS_HIER
+	- Enables hierarchical fair queuing in AS. Not selecting this option
+	  leads to old behavior of AS.
+
+CONFIG_IOSCHED_CFQ_HIER
+	- Enables hierarchical fair queuing in CFQ. Not selecting this option
+	  still does fair queuing among various queues but it is flat and not
+	  hierarchical.
+
+CGROUP_BLKIO
+	- This option enables blkio-cgroup controller for IO tracking
+	  purposes. That means, with this controller one can attribute a write
+	  to the original cgroup and not assume that it belongs to the
+	  submitting thread.
+
+CONFIG_TRACK_ASYNC_CONTEXT
+	- Currently CFQ attributes the writes to the submitting thread and
+	  caches the async queue pointer in the io context of the process.
+	  If this option is set, it tells cfq and the elevator fair queuing
+	  logic to make use of the IO tracking patches for async writes and
+	  attribute writes to the original cgroup and not to the submitting
+	  thread.
+
+CONFIG_DEBUG_GROUP_IOSCHED
+	- Throws extra debug messages in the blktrace output, helpful in
+	  debugging a hierarchical setup.
+
+	- Also allows for the export of extra debug statistics, like group
+	  queue and dequeue statistics, on a device through the cgroup
+	  interface.
+
+Config options selected automatically
+=====================================
+These config options are not user visible and are selected/deselected
+automatically based on IO scheduler configurations.
+
+CONFIG_ELV_FAIR_QUEUING
+	- Enables/Disables the fair queuing logic at elevator layer.
+
+CONFIG_GROUP_IOSCHED
+	- Enables/Disables hierarchical queuing and associated cgroup bits.
+
+HOWTO
+=====
+So far I have done very simple testing of running two dd threads in two
+different cgroups. Here is what you can do.
+
+- Enable hierarchical scheduling in the io scheduler of your choice (say cfq).
+	CONFIG_IOSCHED_CFQ_HIER=y
+
+- Enable IO tracking for async writes.
+	CONFIG_TRACK_ASYNC_CONTEXT=y
+
+  (This will automatically select CGROUP_BLKIO)
+
+- Compile and boot into the kernel, then mount the IO controller and the
+  blkio io tracking controller.
+
+	mount -t cgroup -o io,blkio none /cgroup
+
+- Create two cgroups
+	mkdir -p /cgroup/test1/ /cgroup/test2
+
+- Set weights of group test1 and test2
+	echo 1000 > /cgroup/test1/io.weight
+	echo 500 > /cgroup/test2/io.weight
+
+- Set the "fairness" parameter to 1 for the disk you are testing.
+
+  echo 1 > /sys/block/<disk>/queue/iosched/fairness
+
+- Create two files of the same size (say 512MB each) on the same disk (file1,
+  file2) and launch two dd threads in different cgroups to read those files.
+  Make sure the right io scheduler is being used for the block device where
+  the files are present (the one you compiled in hierarchical mode).
+
+	sync
+	echo 3 > /proc/sys/vm/drop_caches
+
+	dd if=/mnt/sdb/zerofile1 of=/dev/null &
+	echo $! > /cgroup/test1/tasks
+	cat /cgroup/test1/tasks
+
+	dd if=/mnt/sdb/zerofile2 of=/dev/null &
+	echo $! > /cgroup/test2/tasks
+	cat /cgroup/test2/tasks
+
+- At the macro level, the first dd should finish first. To get more precise
+  data, keep looking (with the help of a script) at the io.disk_time and
+  io.disk_sectors files of both the test1 and test2 groups. This will tell how
+  much disk time (in milliseconds) each group got and how many sectors each
+  group dispatched to the disk. We provide fairness in terms of disk time, so
+  ideally io.disk_time of the cgroups should be in proportion to their
+  weights. (It is hard to achieve though :-)).
+
+Details of cgroup files
+=======================
+- io.ioprio_class
+	- Specifies the class of the cgroup (RT, BE, IDLE). This is the default
+	  io class of the group on all devices, unless overridden by a per
+	  device rule. (See io.policy.)
+
+	  1 = RT; 2 = BE, 3 = IDLE
+
+- io.weight
+	- Specifies the per cgroup weight. This is the default weight of the
+	  group on all devices, unless overridden by a per device rule.
+	  (See io.policy.)
+
+- io.disk_time
+	- Disk time allocated to the cgroup per device, in milliseconds. The
+	  first two fields specify the major and minor number of the device
+	  and the third field specifies the disk time allocated to the group
+	  in milliseconds.
+
+- io.disk_sectors
+	- Number of sectors transferred to/from disk by the group. The first
+	  two fields specify the major and minor number of the device and
+	  the third field specifies the number of sectors transferred by the
+	  group to/from the device.
+
+- io.disk_queue
+	- Debugging aid, only enabled if CONFIG_DEBUG_GROUP_IOSCHED=y. This
+	  gives statistics about how many times a group was queued on the
+	  service tree of the device. The first two fields specify the major
+	  and minor number of the device and the third field specifies the
+	  number of times the group was queued on that device.
+
+- io.disk_dequeue
+	- Debugging aid, only enabled if CONFIG_DEBUG_GROUP_IOSCHED=y. This
+	  gives statistics about how many times a group was de-queued
+	  or removed from the service tree of the device. This basically gives
+	  an idea of whether we can generate enough IO to create continuously
+	  backlogged groups. The first two fields specify the major and minor
+	  number of the device and the third field specifies the number
+	  of times the group was de-queued on that device.
+
+- io.policy
+	- One can specify per cgroup per device rules using this interface.
+	  These rules override the default value of group weight and class as
+	  specified by io.weight and io.ioprio_class.
+
+	  Following is the format.
+
+	# echo DEV:weight:ioprio_class > /path/to/cgroup/io.policy
+
+	weight=0 means removing a policy.
+
+	Examples:
+
+	Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
+	# echo /dev/hdb:300:2 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hdb 300 2
+
+	Configure weight=500 ioprio_class=1 on /dev/hda in this cgroup
+	# echo /dev/hda:500:1 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hda 500 1
+	/dev/hdb 300 2
+
+	Remove the policy for /dev/hda in this cgroup
+	# echo /dev/hda:0:1 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hdb 300 2
+
+About configuring request descriptors
+=====================================
+Traditionally there are 128 request descriptors allocated per request queue
+where an io scheduler is operating (/sys/block/<disk>/queue/nr_requests). If
+these request descriptors are exhausted, processes will be put to sleep and
+woken up once request descriptors are available.
+
+With the io controller and cgroups, one can not afford to allocate requests
+from a single pool, as one group might allocate lots of requests and then
+tasks from other groups might be put to sleep, and those other groups might
+have higher weights. Hence, to make sure that a group can always get the
+request descriptors it is entitled to, one needs to make the request
+descriptor limit per group on every queue.
+
+A new parameter /sys/block/<disk>/queue/nr_group_requests has been introduced
+and this parameter controls the maximum number of requests per group.
+nr_requests still continues to control the total number of request descriptors
+on the queue.
+
+Ideally one should set nr_requests as follows.
+
+nr_requests = number_of_cgroups * nr_group_requests
+
+This will make sure that at any point in time, nr_group_requests request
+descriptors will be available for each of the cgroups.
+
+Currently the defaults are nr_requests=512 and nr_group_requests=128. This
+makes sure that apart from the root group one can create 3 more groups without
+running into any issues. If one decides to create more cgroups, nr_requests
+and nr_group_requests should be adjusted accordingly.
+
+Probably a better way to assign limits to group request descriptors is through
+a sysfs interface. This is a future TODO item.
-- 
1.6.0.6
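
As a worked example of the nr_requests guideline at the end of the
documentation above (a sketch, assuming a hypothetical disk sdb, three cgroups
besides the root group, and the per-device tunables introduced by this
patchset):

	# nr_requests = number_of_cgroups * nr_group_requests
	echo 128 > /sys/block/sdb/queue/nr_group_requests
	echo 512 > /sys/block/sdb/queue/nr_requests	# (3 groups + root) * 128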

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 01/20] io-controller: Documentation
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o Documentation for io-controller.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 Documentation/block/00-INDEX          |    2 +
 Documentation/block/io-controller.txt |  360 +++++++++++++++++++++++++++++++++
 2 files changed, 362 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/block/io-controller.txt

diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX
index 961a051..dc8bf95 100644
--- a/Documentation/block/00-INDEX
+++ b/Documentation/block/00-INDEX
@@ -10,6 +10,8 @@ capability.txt
 	- Generic Block Device Capability (/sys/block/<disk>/capability)
 deadline-iosched.txt
 	- Deadline IO scheduler tunables
+io-controller.txt
+	- IO controller for provding hierarchical IO scheduling
 ioprio.txt
 	- Block io priorities (in CFQ scheduler)
 request.txt
diff --git a/Documentation/block/io-controller.txt b/Documentation/block/io-controller.txt
new file mode 100644
index 0000000..bf95bf7
--- /dev/null
+++ b/Documentation/block/io-controller.txt
@@ -0,0 +1,360 @@
+				IO Controller
+				=============
+
+Overview
+========
+
+This patchset implements a proportional weight IO controller. That is one
+can create cgroups and assign prio/weights to those cgroups and task group
+will get access to disk proportionate to the weight of the group.
+
+These patches modify elevator layer and individual IO schedulers to do
+IO control hence this io controller works only on block devices which use
+one of the standard io schedulers can not be used with any xyz logical block
+device.
+
+The assumption/thought behind modifying IO scheduler is that resource control
+is needed only on leaf nodes where the actual contention for resources is
+present and not on intertermediate logical block devices.
+
+Consider following hypothetical scenario. Lets say there are three physical
+disks, namely sda, sdb and sdc. Two logical volumes (lv0 and lv1) have been
+created on top of these. Some part of sdb is in lv0 and some part is in lv1.
+
+			    lv0      lv1
+			  /	\  /     \
+			sda      sdb      sdc
+
+Also consider following cgroup hierarchy
+
+				root
+				/   \
+			       A     B
+			      / \    / \
+			     T1 T2  T3  T4
+
+A and B are two cgroups and T1, T2, T3 and T4 are tasks with-in those cgroups.
+Assuming T1, T2, T3 and T4 are doing IO on lv0 and lv1. These tasks should
+get their fair share of bandwidth on disks sda, sdb and sdc. There is no
+IO control on intermediate logical block nodes (lv0, lv1).
+
+So if tasks T1 and T2 are doing IO on lv0 and T3 and T4 are doing IO on lv1
+only, there will not be any contetion for resources between group A and B if
+IO is going to sda or sdc. But if actual IO gets translated to disk sdb, then
+IO scheduler associated with the sdb will distribute disk bandwidth to
+group A and B proportionate to their weight.
+
+CFQ already has the notion of fairness and it provides differential disk
+access based on priority and class of the task. Just that it is flat and
+with cgroup stuff, it needs to be made hierarchical to achive a good
+hierarchical control on IO.
+
+Rest of the IO schedulers (noop, deadline and AS) don't have any notion
+of fairness among various threads. They maintain only one queue where all
+the IO gets queued (internally this queue is split in read and write queue
+for deadline and AS). With this patchset, now we maintain one queue per
+cgropu per device and then try to do fair queuing among those queues.
+
+One of the concerns raised with modifying IO schedulers was that we don't
+want to replicate the code in all the IO schedulers. These patches share
+the fair queuing code which has been moved to a common layer (elevator
+layer). Hence we don't end up replicating code across IO schedulers. Following
+diagram depicts the concept.
+
+			--------------------------------
+			| Elevator Layer + Fair Queuing |
+			--------------------------------
+			 |	     |	     |       |
+			NOOP     DEADLINE    AS     CFQ
+
+Design
+======
+This patchset primarily uses BFQ (Budget Fair Queuing) code to provide
+fairness among different IO queues. Fabio and Paolo implemented BFQ which uses
+B-WF2Q+ algorithm for fair queuing.
+
+Why BFQ?
+
+- Not sure if weighted round robin logic of CFQ can be easily extended for
+  hierarchical mode. One of the things is that we can not keep dividing
+  the time slice of parent group among childrens. Deeper we go in hierarchy
+  time slice will get smaller.
+
+  One of the ways to implement hierarchical support could be to keep track
+  of virtual time and service provided to queue/group and select a queue/group
+  for service based on any of the various available algoriths.
+
+  BFQ already had support for hierarchical scheduling, taking those patches
+  was easier.
+
+- BFQ was designed to provide tighter bounds/delay w.r.t service provided
+  to a queue. Delay/Jitter with BFQ is O(1).
+
+  Note: BFQ originally used amount of IO done (number of sectors) as notion
+        of service provided. IOW, it tried to provide fairness in terms of
+        actual IO done and not in terms of actual time disk access was
+	given to a queue.
+
+	This patcheset modified BFQ to provide fairness in time domain because
+	that's what CFQ does. So idea was try not to deviate too much from
+	the CFQ behavior initially.
+
+	Providing fairness in time domain makes accounting trciky because
+	due to command queueing, at one time there might be multiple requests
+	from different queues and there is no easy way to find out how much
+	disk time actually was consumed by the requests of a particular
+	queue. More about this in comments in source code.
+
+We have taken BFQ code as starting point for providing fairness among groups
+because it already contained lots of features which we required to implement
+hierarhical IO scheduling. With this patch set, I am not trying to ensure O(1)
+delay here as my goal is to provide fairness among groups. Most likely that
+will mean that latencies are not worse than what cfq currently provides (if
+not improved ones). Once fairness is ensured, one can look into  more in
+ensuring O(1) latencies.
+
+From data structure point of view, one can think of a tree per device, where
+io groups and io queues are hanging and are being scheduled using B-WF2Q+
+algorithm. io_queue, is end queue where requests are actually stored and
+dispatched from (like cfqq).
+
+These io queues are primarily created by and managed by end io schedulers
+depending on its semantics. For example, noop, deadline and AS ioschedulers
+keep one io queues per cgroup and cfqq keeps one io queue per io_context in
+a cgroup (apart from async queues).
+
+A request is mapped to an io group by elevator layer and which io queue it
+is mapped to with in group depends on ioscheduler. Currently "current" task
+is used to determine the cgroup (hence io group) of the request. Down the
+line we need to make use of bio-cgroup patches to map delayed writes to
+right group.
+
+Going back to old behavior
+==========================
+In new scheme of things essentially we are creating hierarchical fair
+queuing logic in elevator layer and chaning IO schedulers to make use of
+that logic so that end IO schedulers start supporting hierarchical scheduling.
+
+Elevator layer continues to support the old interfaces. So even if fair queuing
+is enabled at elevator layer, one can have both new hierchical scheduler as
+well as old non-hierarchical scheduler operating.
+
+Also noop, deadline and AS have option of enabling hierarchical scheduling.
+If it is selected, fair queuing is done in hierarchical manner. If hierarchical
+scheduling is disabled, noop, deadline and AS should retain their existing
+behavior.
+
+CFQ is the only exception where one can not disable fair queuing as it is
+needed for provding fairness among various threads even in non-hierarchical
+mode.
+
+Various user visible config options
+===================================
+CONFIG_IOSCHED_NOOP_HIER
+	- Enables hierchical fair queuing in noop. Not selecting this option
+	  leads to old behavior of noop.
+
+CONFIG_IOSCHED_DEADLINE_HIER
+	- Enables hierchical fair queuing in deadline. Not selecting this
+	  option leads to old behavior of deadline.
+
+CONFIG_IOSCHED_AS_HIER
+	- Enables hierchical fair queuing in AS. Not selecting this option
+	  leads to old behavior of AS.
+
+CONFIG_IOSCHED_CFQ_HIER
+	- Enables hierarchical fair queuing in CFQ. Not selecting this option
+	  still does fair queuing among various queus but it is flat and not
+	  hierarchical.
+
+CGROUP_BLKIO
+	- This option enables blkio-cgroup controller for IO tracking
+	  purposes. That means, by this controller one can attribute a write
+	  to the original cgroup and not assume that it belongs to submitting
+	  thread.
+
+CONFIG_TRACK_ASYNC_CONTEXT
+	- Currently CFQ attributes the writes to the submitting thread and
+	  caches the async queue pointer in the io context of the process.
+	  If this option is set, it tells cfq and elevator fair queuing logic
+	  that for async writes make use of IO tracking patches and attribute
+	  writes to original cgroup and not to write submitting thread.
+
+CONFIG_DEBUG_GROUP_IOSCHED
+	- Throws extra debug messages in blktrace output helpful in doing
+	  doing debugging in hierarchical setup.
+
+	- Also allows for export of extra debug statistics like group queue
+	  and dequeue statistics on device through cgroup interface.
+
+Config options selected automatically
+=====================================
+These config options are not user visible and are selected/deselected
+automatically based on IO scheduler configurations.
+
+CONFIG_ELV_FAIR_QUEUING
+	- Enables/Disables the fair queuing logic at elevator layer.
+
+CONFIG_GROUP_IOSCHED
+	- Enables/Disables hierarchical queuing and associated cgroup bits.
+
+HOWTO
+=====
+So far I have done very simple testing of running two dd threads in two
+different cgroups. Here is what you can do.
+
+- Enable hierarchical scheduling in io scheuduler of your choice (say cfq).
+	CONFIG_IOSCHED_CFQ_HIER=y
+
+- Enable IO tracking for async writes.
+	CONFIG_TRACK_ASYNC_CONTEXT=y
+
+  (This will automatically select CGROUP_BLKIO)
+
+- Compile and boot into kernel and mount IO controller and blkio io tracking
+  controller.
+
+	mount -t cgroup -o io,blkio none /cgroup
+
+- Create two cgroups
+	mkdir -p /cgroup/test1/ /cgroup/test2
+
+- Set weights of group test1 and test2
+	echo 1000 > /cgroup/test1/io.weight
+	echo 500 > /cgroup/test2/io.weight
+
+- Set "fairness" parameter to 1 at the disk you are testing.
+
+  echo 1 > /sys/block/<disk>/queue/iosched/fairness
+
+- Create two same size files (say 512MB each) on same disk (file1, file2) and
+  launch two dd threads in different cgroup to read those files. Make sure
+  right io scheduler is being used for the block device where files are
+  present (the one you compiled in hierarchical mode).
+
+	sync
+	echo 3 > /proc/sys/vm/drop_caches
+
+	dd if=/mnt/sdb/zerofile1 of=/dev/null &
+	echo $! > /cgroup/test1/tasks
+	cat /cgroup/test1/tasks
+
+	dd if=/mnt/sdb/zerofile2 of=/dev/null &
+	echo $! > /cgroup/test2/tasks
+	cat /cgroup/test2/tasks
+
+- At macro level, first dd should finish first. To get more precise data, keep
+  on looking at (with the help of script), at io.disk_time and io.disk_sectors
+  files of both test1 and test2 groups. This will tell how much disk time
+  (in milli seconds), each group got and how many secotors each group
+  dispatched to the disk. We provide fairness in terms of disk time, so
+  ideally io.disk_time of cgroups should be in proportion to the weight.
+  (It is hard to achieve though :-)).
+
+Details of cgroup files
+=======================
+- io.ioprio_class
+	- Specifies class of the cgroup (RT, BE, IDLE). This is default io
+	  class of the group on all the devices until and unless overridden by
+	  per device rule. (See io.policy).
+
+	  1 = RT; 2 = BE, 3 = IDLE
+
+- io.weight
+	- Specifies per cgroup weight. This is default weight of the group
+	  on all the devices until and unless overridden by per device rule.
+	  (See io.policy).
+
+- io.disk_time
+	- disk time allocated to cgroup per device in milliseconds. First
+	  two fields specify the major and minor number of the device and
+	  third field specifies the disk time allocated to group in
+	  milliseconds.
+
+- io.disk_sectors
+	- number of sectors transferred to/from disk by the group. First
+	  two fields specify the major and minor number of the device and
+	  third field specifies the number of sectors transferred by the
+	  group to/from the device.
+
+- io.disk_queue
+	- Debugging aid only enabled if CONFIG_DEBUG_GROUP_IOSCHED=y. This
+	  gives the statistics about how many a times a group was queued
+	  on service tree of the device. First two fields specify the major
+	  and minor number of the device and third field specifies the number
+	  of times a group was queued on a particular device.
+
+- io.disk_queue
+	- Debugging aid only enabled if CONFIG_DEBUG_GROUP_IOSCHED=y. This
+	  gives the statistics about how many a times a group was de-queued
+	  or removed from the service tree of the device. This basically gives
+	  and idea if we can generate enough IO to create continuously
+	  backlogged groups. First two fields specify the major and minor
+	  number of the device and third field specifies the number
+	  of times a group was de-queued on a particular device.
+
+- io.policy
+	- One can specify per cgroup per device rules using this interface.
+	  These rules override the default value of group weight and class as
+	  specified by io.weight and io.ioprio_class.
+
+	  Following is the format.
+
+	#echo DEV:weight:ioprio_class > /patch/to/cgroup/io.policy
+
+	weight=0 means removing a policy.
+
+	Examples:
+
+	Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
+	# echo /dev/hdb:300:2 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hdb 300 2
+
+	Configure weight=500 ioprio_class=1 on /dev/hda in this cgroup
+	# echo /dev/hda:500:1 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hda 500 1
+	/dev/hdb 300 2
+
+	Remove the policy for /dev/hda in this cgroup
+	# echo /dev/hda:0:1 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hdb 300 2
+
+About configuring request desriptors
+====================================
+Traditionally there are 128 request desriptors allocated per request queue
+where io scheduler is operating (/sys/block/<disk>/queue/nr_requests). If these
+request descriptors are exhausted, processes will put to sleep and woken
+up once request descriptors are available.
+
+With io controller and cgroup stuff, one can not afford to allocate requests
+from single pool as one group might allocate lots of requests and then tasks
+from other groups might be put to sleep and this other group might be a
+higher weight group. Hence to make sure that a group always can get the
+request descriptors it is entitled to, one needs to make request descriptor
+limit per group on every queue.
+
+A new parameter /sys/block/<disk>/queue/nr_group_requests has been introduced
+and this parameter controlls the maximum number of requests per group.
+nr_requests still continues to control total number of request descriptors
+on the queue.
+
+Ideally one should set nr_requests to be following.
+
+nr_requests = number_of_cgroups * nr_group_requests
+
+This will make sure that at any point of time nr_group_requests number of
+request descriptors will be available for any of the cgroups.
+
+Currently default nr_requests=512 and nr_group_requests=128. This will make
+sure that apart from root group one can create 3 more group without running
+into any issues. If one decides to create more cgorus, nr_requests and
+nr_group_requests should be adjusted accordingly.
+
+Probably a better way to assign limit to group request descriptors is through
+sysfs interface. This is a future TODO item.
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 01/20] io-controller: Documentation
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o Documentation for io-controller.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 Documentation/block/00-INDEX          |    2 +
 Documentation/block/io-controller.txt |  360 +++++++++++++++++++++++++++++++++
 2 files changed, 362 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/block/io-controller.txt

diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX
index 961a051..dc8bf95 100644
--- a/Documentation/block/00-INDEX
+++ b/Documentation/block/00-INDEX
@@ -10,6 +10,8 @@ capability.txt
 	- Generic Block Device Capability (/sys/block/<disk>/capability)
 deadline-iosched.txt
 	- Deadline IO scheduler tunables
+io-controller.txt
+	- IO controller for provding hierarchical IO scheduling
 ioprio.txt
 	- Block io priorities (in CFQ scheduler)
 request.txt
diff --git a/Documentation/block/io-controller.txt b/Documentation/block/io-controller.txt
new file mode 100644
index 0000000..bf95bf7
--- /dev/null
+++ b/Documentation/block/io-controller.txt
@@ -0,0 +1,360 @@
+				IO Controller
+				=============
+
+Overview
+========
+
+This patchset implements a proportional weight IO controller. That is one
+can create cgroups and assign prio/weights to those cgroups and task group
+will get access to disk proportionate to the weight of the group.
+
+These patches modify elevator layer and individual IO schedulers to do
+IO control hence this io controller works only on block devices which use
+one of the standard io schedulers can not be used with any xyz logical block
+device.
+
+The assumption/thought behind modifying IO scheduler is that resource control
+is needed only on leaf nodes where the actual contention for resources is
+present and not on intertermediate logical block devices.
+
+Consider following hypothetical scenario. Lets say there are three physical
+disks, namely sda, sdb and sdc. Two logical volumes (lv0 and lv1) have been
+created on top of these. Some part of sdb is in lv0 and some part is in lv1.
+
+			    lv0      lv1
+			  /	\  /     \
+			sda      sdb      sdc
+
+Also consider following cgroup hierarchy
+
+				root
+				/   \
+			       A     B
+			      / \    / \
+			     T1 T2  T3  T4
+
+A and B are two cgroups and T1, T2, T3 and T4 are tasks with-in those cgroups.
+Assuming T1, T2, T3 and T4 are doing IO on lv0 and lv1. These tasks should
+get their fair share of bandwidth on disks sda, sdb and sdc. There is no
+IO control on intermediate logical block nodes (lv0, lv1).
+
+So if tasks T1 and T2 are doing IO on lv0 and T3 and T4 are doing IO on lv1
+only, there will not be any contetion for resources between group A and B if
+IO is going to sda or sdc. But if actual IO gets translated to disk sdb, then
+IO scheduler associated with the sdb will distribute disk bandwidth to
+group A and B proportionate to their weight.
+
+CFQ already has the notion of fairness and it provides differential disk
+access based on priority and class of the task. Just that it is flat and
+with cgroup stuff, it needs to be made hierarchical to achive a good
+hierarchical control on IO.
+
+The rest of the IO schedulers (noop, deadline and AS) don't have any notion
+of fairness among various threads. They maintain only one queue where all
+the IO gets queued (internally this queue is split into read and write queues
+for deadline and AS). With this patchset, we now maintain one queue per
+cgroup per device and then try to do fair queuing among those queues.
+
+One of the concerns raised with modifying IO schedulers was that we don't
+want to replicate the code in all the IO schedulers. These patches share
+the fair queuing code which has been moved to a common layer (elevator
+layer). Hence we don't end up replicating code across IO schedulers. The
+following diagram depicts the concept.
+
+			--------------------------------
+			| Elevator Layer + Fair Queuing |
+			--------------------------------
+			 |	     |	     |       |
+			NOOP     DEADLINE    AS     CFQ
+
+Design
+======
+This patchset primarily uses BFQ (Budget Fair Queuing) code to provide
+fairness among different IO queues. Fabio and Paolo implemented BFQ which uses
+B-WF2Q+ algorithm for fair queuing.
+
+Why BFQ?
+
+- Not sure if the weighted round robin logic of CFQ can be easily extended to
+  hierarchical mode. One of the issues is that we can not keep dividing
+  the time slice of a parent group among its children; the deeper we go in
+  the hierarchy, the smaller the time slice gets.
+
+  One of the ways to implement hierarchical support could be to keep track
+  of the virtual time and service provided to a queue/group and select a
+  queue/group for service based on any of the various available algorithms.
+
+  BFQ already had support for hierarchical scheduling, so taking those
+  patches was easier.
+
+- BFQ was designed to provide tighter bounds/delay w.r.t. the service provided
+  to a queue. Delay/Jitter with BFQ is O(1).
+
+  Note: BFQ originally used the amount of IO done (number of sectors) as the
+	notion of service provided. IOW, it tried to provide fairness in terms
+	of the actual IO done and not in terms of the actual time disk access
+	was given to a queue.
+
+	This patchset modifies BFQ to provide fairness in the time domain
+	because that's what CFQ does. So the idea was to not deviate too much
+	from CFQ behavior initially.
+
+	Providing fairness in the time domain makes accounting tricky because,
+	due to command queueing, at one time there might be multiple requests
+	from different queues and there is no easy way to find out how much
+	disk time was actually consumed by the requests of a particular
+	queue. More about this in the comments in the source code.
+
+We have taken the BFQ code as a starting point for providing fairness among
+groups because it already contained lots of features which we required to
+implement hierarchical IO scheduling. With this patch set, I am not trying to
+ensure O(1) delay, as my goal is to provide fairness among groups. Most likely
+that will mean that latencies are not worse than what cfq currently provides
+(if not improved). Once fairness is ensured, one can look more into ensuring
+O(1) latencies.
+
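+As a rough illustration of the B-WF2Q+ idea (only a simplified sketch,
+ignoring the fixed-point scaling used in the actual code), every queue/group
+entity gets a start time S and a finish time F = S + budget/weight, and the
+scheduler picks, among the entities whose start time is not ahead of the
+current virtual time, the one with the smallest finish time. For example, for
+two queues with weights 2 and 1 and equal budgets of 60ms, the first finish
+times are 30 and 60, so the weight-2 queue is served first; in the longer run
+it gets selected twice for every selection of the weight-1 queue, i.e. it
+receives roughly twice the disk time, which is what the 2:1 weights ask for.
+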
+From a data structure point of view, one can think of a tree per device, on
+which io groups and io queues hang and are scheduled using the B-WF2Q+
+algorithm. An io_queue is the end queue where requests are actually stored
+and dispatched from (like cfqq).
+
+These io queues are primarily created and managed by the end io schedulers
+depending on their semantics. For example, the noop, deadline and AS
+ioschedulers keep one io queue per cgroup while cfq keeps one io queue per
+io_context in a cgroup (apart from async queues).
+
+A request is mapped to an io group by the elevator layer, and which io queue
+it is mapped to within the group depends on the ioscheduler. Currently the
+"current" task is used to determine the cgroup (hence io group) of the
+request. Down the line we need to make use of the bio-cgroup patches to map
+delayed writes to the right group.
+
+Going back to old behavior
+==========================
+In the new scheme of things we are essentially creating hierarchical fair
+queuing logic in the elevator layer and changing the IO schedulers to make use
+of that logic so that end IO schedulers start supporting hierarchical
+scheduling.
+
+The elevator layer continues to support the old interfaces. So even if fair
+queuing is enabled at the elevator layer, one can have both the new
+hierarchical schedulers and the old non-hierarchical schedulers operating.
+
+Also, noop, deadline and AS have the option of enabling hierarchical
+scheduling. If it is selected, fair queuing is done in a hierarchical manner.
+If hierarchical scheduling is disabled, noop, deadline and AS should retain
+their existing behavior.
+
+CFQ is the only exception where one can not disable fair queuing, as it is
+needed for providing fairness among various threads even in non-hierarchical
+mode.
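+
+For example, one can still check or switch the active scheduler on a device
+through the existing scheduler sysfs file (this interface is unchanged by
+these patches; the exact list shown depends on which schedulers are compiled
+in):
+
+	# cat /sys/block/<disk>/queue/scheduler
+	noop anticipatory deadline [cfq]
+	# echo deadline > /sys/block/<disk>/queue/scheduler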
+
+Various user visible config options
+===================================
+CONFIG_IOSCHED_NOOP_HIER
+	- Enables hierarchical fair queuing in noop. Not selecting this option
+	  leads to the old behavior of noop.
+
+CONFIG_IOSCHED_DEADLINE_HIER
+	- Enables hierarchical fair queuing in deadline. Not selecting this
+	  option leads to the old behavior of deadline.
+
+CONFIG_IOSCHED_AS_HIER
+	- Enables hierarchical fair queuing in AS. Not selecting this option
+	  leads to the old behavior of AS.
+
+CONFIG_IOSCHED_CFQ_HIER
+	- Enables hierarchical fair queuing in CFQ. Not selecting this option
+	  still does fair queuing among various queues but it is flat and not
+	  hierarchical.
+
+CGROUP_BLKIO
+	- This option enables the blkio-cgroup controller for IO tracking
+	  purposes. That means that with this controller one can attribute a
+	  write to the original cgroup and not assume that it belongs to the
+	  submitting thread.
+
+CONFIG_TRACK_ASYNC_CONTEXT
+	- Currently CFQ attributes the writes to the submitting thread and
+	  caches the async queue pointer in the io context of the process.
+	  If this option is set, it tells cfq and the elevator fair queuing
+	  logic to make use of the IO tracking patches for async writes and to
+	  attribute writes to the original cgroup and not to the submitting
+	  thread.
+
+CONFIG_DEBUG_GROUP_IOSCHED
+	- Throws extra debug messages in the blktrace output, helpful for
+	  debugging in a hierarchical setup.
+
+	- Also allows for export of extra debug statistics like group queue
+	  and dequeue statistics on device through cgroup interface.
+
+Config options selected automatically
+=====================================
+These config options are not user visible and are selected/deselected
+automatically based on IO scheduler configurations.
+
+CONFIG_ELV_FAIR_QUEUING
+	- Enables/Disables the fair queuing logic at elevator layer.
+
+CONFIG_GROUP_IOSCHED
+	- Enables/Disables hierarchical queuing and associated cgroup bits.
+
+HOWTO
+=====
+So far I have done very simple testing of running two dd threads in two
+different cgroups. Here is what you can do.
+
+- Enable hierarchical scheduling in the io scheduler of your choice (say cfq).
+	CONFIG_IOSCHED_CFQ_HIER=y
+
+- Enable IO tracking for async writes.
+	CONFIG_TRACK_ASYNC_CONTEXT=y
+
+  (This will automatically select CGROUP_BLKIO)
+
+- Compile and boot into kernel and mount IO controller and blkio io tracking
+  controller.
+
+	mount -t cgroup -o io,blkio none /cgroup
+
+- Create two cgroups
+	mkdir -p /cgroup/test1/ /cgroup/test2
+
+- Set weights of group test1 and test2
+	echo 1000 > /cgroup/test1/io.weight
+	echo 500 > /cgroup/test2/io.weight
+
+- Set "fairness" parameter to 1 at the disk you are testing.
+
+  echo 1 > /sys/block/<disk>/queue/iosched/fairness
+
+- Create two files of the same size (say 512MB each) on the same disk (in the
+  example below, /mnt/sdb/zerofile1 and /mnt/sdb/zerofile2) and launch two dd
+  threads in different cgroups to read those files. Make sure the right io
+  scheduler is being used for the block device where the files are present
+  (the one you compiled in hierarchical mode).
+
+	sync
+	echo 3 > /proc/sys/vm/drop_caches
+
+	dd if=/mnt/sdb/zerofile1 of=/dev/null &
+	echo $! > /cgroup/test1/tasks
+	cat /cgroup/test1/tasks
+
+	dd if=/mnt/sdb/zerofile2 of=/dev/null &
+	echo $! > /cgroup/test2/tasks
+	cat /cgroup/test2/tasks
+
+- At a macro level, the first dd should finish first. To get more precise
+  data, keep looking (with the help of a script, see the sketch below) at the
+  io.disk_time and io.disk_sectors files of both the test1 and test2 groups.
+  This will tell how much disk time (in milliseconds) each group got and how
+  many sectors each group dispatched to the disk. We provide fairness in terms
+  of disk time, so ideally io.disk_time of the cgroups should be in proportion
+  to the weights. (It is hard to achieve though :-)).
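+
+  A minimal monitoring sketch (only an illustration; it assumes the cgroups
+  are mounted at /cgroup as in the steps above):
+
+	while true; do
+		echo "== test1 =="
+		cat /cgroup/test1/io.disk_time /cgroup/test1/io.disk_sectors
+		echo "== test2 =="
+		cat /cgroup/test2/io.disk_time /cgroup/test2/io.disk_sectors
+		sleep 5
+	done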
+
+Details of cgroup files
+=======================
+- io.ioprio_class
+	- Specifies the class of the cgroup (RT, BE, IDLE). This is the default
+	  io class of the group on all devices unless overridden by a
+	  per-device rule (see io.policy).
+
+	  1 = RT, 2 = BE, 3 = IDLE
+
+- io.weight
+	- Specifies the per cgroup weight. This is the default weight of the
+	  group on all devices unless overridden by a per-device rule
+	  (see io.policy).
+
+- io.disk_time
+	- Disk time allocated to the cgroup per device, in milliseconds. The
+	  first two fields specify the major and minor number of the device
+	  and the third field specifies the disk time allocated to the group
+	  in milliseconds.
+
+- io.disk_sectors
+	- Number of sectors transferred to/from disk by the group. The first
+	  two fields specify the major and minor number of the device and the
+	  third field specifies the number of sectors transferred by the
+	  group to/from the device.
+
+- io.disk_queue
+	- Debugging aid, only enabled if CONFIG_DEBUG_GROUP_IOSCHED=y. This
+	  gives statistics about how many times a group was queued on the
+	  service tree of the device. The first two fields specify the major
+	  and minor number of the device and the third field specifies the
+	  number of times the group was queued on a particular device.
+
+- io.disk_dequeue
+	- Debugging aid, only enabled if CONFIG_DEBUG_GROUP_IOSCHED=y. This
+	  gives statistics about how many times a group was de-queued or
+	  removed from the service tree of the device. This basically gives
+	  an idea whether we can generate enough IO to create continuously
+	  backlogged groups. The first two fields specify the major and minor
+	  number of the device and the third field specifies the number of
+	  times the group was de-queued from a particular device.
+
+- io.policy
+	- One can specify per cgroup per device rules using this interface.
+	  These rules override the default values of group weight and class as
+	  specified by io.weight and io.ioprio_class.
+
+	  The format is as follows.
+
+	# echo DEV:weight:ioprio_class > /path/to/cgroup/io.policy
+
+	weight=0 means removing a policy.
+
+	Examples:
+
+	Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
+	# echo /dev/hdb:300:2 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hdb 300 2
+
+	Configure weight=500 ioprio_class=1 on /dev/hda in this cgroup
+	# echo /dev/hda:500:1 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hda 500 1
+	/dev/hdb 300 2
+
+	Remove the policy for /dev/hda in this cgroup
+	# echo /dev/hda:0:1 > io.policy
+	# cat io.policy
+	dev weight class
+	/dev/hdb 300 2
+
+About configuring request descriptors
+=====================================
+Traditionally there are 128 request descriptors allocated per request queue
+where an io scheduler is operating (/sys/block/<disk>/queue/nr_requests). If
+these request descriptors are exhausted, processes will be put to sleep and
+woken up once request descriptors are available again.
+
+With the io controller and cgroups, one can not afford to allocate requests
+from a single pool, as one group might allocate lots of requests and then
+tasks from other groups, possibly of higher weight, might be put to sleep.
+Hence, to make sure that a group can always get the request descriptors it is
+entitled to, one needs to make the request descriptor limit per group on
+every queue.
+
+A new parameter /sys/block/<disk>/queue/nr_group_requests has been introduced
+and this parameter controls the maximum number of requests per group.
+nr_requests still continues to control the total number of request
+descriptors on the queue.
+
+Ideally one should set nr_requests as follows.
+
+nr_requests = number_of_cgroups * nr_group_requests
+
+This will make sure that at any point of time nr_group_requests number of
+request descriptors will be available for any of the cgroups.
+
+Currently the defaults are nr_requests=512 and nr_group_requests=128. This
+makes sure that apart from the root group one can create 3 more groups without
+running into any issues. If one decides to create more cgroups, nr_requests
+and nr_group_requests should be adjusted accordingly, as in the sketch below.
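+
+For example, a rough sizing sketch (only an illustration; <disk> is a
+placeholder and 16 is just an assumed number of cgroups):
+
+	# keep the per-group limit at its default of 128
+	echo 128 > /sys/block/<disk>/queue/nr_group_requests
+	# 16 groups * 128 requests per group = 2048 total request descriptors
+	echo $((16 * 128)) > /sys/block/<disk>/queue/nr_requests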
+
+Probably a better way to assign limits to group request descriptors is through
+a sysfs interface. This is a future TODO item.
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2009-06-19 20:37   ` [PATCH 01/20] io-controller: Documentation Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 03/20] io-controller: Charge for time slice based on average disk rate Vivek Goyal
                     ` (19 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

This is the common fair queuing code in the elevator layer. It is controlled by
the config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
flat fair queuing support, where there is only one group, the "root group", and
all the tasks belong to it.

These elevator layer changes are backward compatible. That means any ioscheduler
using the old interfaces will continue to work.

This code is essentially the CFQ code for fair queuing. The primary difference
is that the flat round robin algorithm of CFQ has been replaced with BFQ
(WF2Q+).

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
Signed-off-by: Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
Signed-off-by: Aristeu Rozanski <aris-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched    |   13 +
 block/Makefile           |    1 +
 block/elevator-fq.c      | 2015 ++++++++++++++++++++++++++++++++++++++++++++++
 block/elevator-fq.h      |  473 +++++++++++
 block/elevator.c         |   46 +-
 include/linux/blkdev.h   |    5 +
 include/linux/elevator.h |   51 ++
 7 files changed, 2593 insertions(+), 11 deletions(-)
 create mode 100644 block/elevator-fq.c
 create mode 100644 block/elevator-fq.h

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 7e803fc..3398134 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -2,6 +2,19 @@ if BLOCK
 
 menu "IO Schedulers"
 
+config ELV_FAIR_QUEUING
+	bool "Elevator Fair Queuing Support"
+	default n
+	---help---
+	  Traditionally only cfq had the notion of multiple queues and it did
+	  fair queuing on its own. With cgroups and the need to control IO,
+	  now even the simple io schedulers like noop, deadline and AS will
+	  have one queue per cgroup and will need hierarchical fair queuing.
+	  Instead of every io scheduler implementing its own fair queuing
+	  logic, this option enables fair queuing in the elevator layer so that
+	  other ioschedulers can make use of it.
+	  If unsure, say N.
+
 config IOSCHED_NOOP
 	bool
 	default y
diff --git a/block/Makefile b/block/Makefile
index e9fa4dd..94bfc6e 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
 
 obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
 obj-$(CONFIG_BLK_DEV_INTEGRITY)	+= blk-integrity.o
+obj-$(CONFIG_ELV_FAIR_QUEUING)	+= elevator-fq.o
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
new file mode 100644
index 0000000..9357fb0
--- /dev/null
+++ b/block/elevator-fq.c
@@ -0,0 +1,2015 @@
+/*
+ * BFQ: Hierarchical B-WF2Q+ scheduler.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>
+ *
+ * Copyright (C) 2008 Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
+ *		      Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
+ * Copyright (C) 2009 Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
+ * 	              Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
+ */
+
+#include <linux/blkdev.h>
+#include "elevator-fq.h"
+#include <linux/blktrace_api.h>
+
+/* Values taken from cfq */
+const int elv_slice_sync = HZ / 10;
+int elv_slice_async = HZ / 25;
+const int elv_slice_async_rq = 2;
+int elv_slice_idle = HZ / 125;
+static struct kmem_cache *elv_ioq_pool;
+
+#define ELV_SLICE_SCALE		(5)
+#define ELV_HW_QUEUE_MIN	(5)
+#define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
+				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
+
+static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
+					struct io_queue *ioq, int probe);
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract);
+
+static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
+					unsigned short prio)
+{
+	const int base_slice = efqd->elv_slice[sync];
+
+	WARN_ON(prio >= IOPRIO_BE_NR);
+
+	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
+}
+
+static inline int
+elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
+{
+	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
+}
+
+/* Mainly the BFQ scheduling code follows */
+
+/*
+ * Shift for timestamp calculations.  This actually limits the maximum
+ * service allowed in one timestamp delta (small shift values increase it),
+ * the maximum total weight that can be used for the queues in the system
+ * (big shift values increase it), and the period of virtual time wraparounds.
+ */
+#define WFQ_SERVICE_SHIFT	22
+
+/**
+ * bfq_gt - compare two timestamps.
+ * @a: first ts.
+ * @b: second ts.
+ *
+ * Return @a > @b, dealing with wrapping correctly.
+ */
+static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
+{
+	return (s64)(a - b) > 0;
+}
+
+/**
+ * bfq_delta - map service into the virtual time domain.
+ * @service: amount of service.
+ * @weight: scale factor.
+ */
+static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
+					bfq_weight_t weight)
+{
+	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
+
+	do_div(d, weight);
+	return d;
+}
+
+/**
+ * bfq_calc_finish - assign the finish time to an entity.
+ * @entity: the entity to act upon.
+ * @service: the service to be charged to the entity.
+ */
+static inline void bfq_calc_finish(struct io_entity *entity,
+				   bfq_service_t service)
+{
+	BUG_ON(entity->weight == 0);
+
+	entity->finish = entity->start + bfq_delta(service, entity->weight);
+}
+
+static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
+{
+	struct io_queue *ioq = NULL;
+
+	BUG_ON(entity == NULL);
+	if (entity->my_sched_data == NULL)
+		ioq = container_of(entity, struct io_queue, entity);
+	return ioq;
+}
+
+/**
+ * bfq_entity_of - get an entity from a node.
+ * @node: the node field of the entity.
+ *
+ * Convert a node pointer to the relative entity.  This is used only
+ * to simplify the logic of some functions and not as the generic
+ * conversion mechanism because, e.g., in the tree walking functions,
+ * the check for a %NULL value would be redundant.
+ */
+static inline struct io_entity *bfq_entity_of(struct rb_node *node)
+{
+	struct io_entity *entity = NULL;
+
+	if (node != NULL)
+		entity = rb_entry(node, struct io_entity, rb_node);
+
+	return entity;
+}
+
+/**
+ * bfq_extract - remove an entity from a tree.
+ * @root: the tree root.
+ * @entity: the entity to remove.
+ */
+static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
+{
+	BUG_ON(entity->tree != root);
+
+	entity->tree = NULL;
+	rb_erase(&entity->rb_node, root);
+}
+
+/**
+ * bfq_idle_extract - extract an entity from the idle tree.
+ * @st: the service tree of the owning @entity.
+ * @entity: the entity being removed.
+ */
+static void bfq_idle_extract(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct rb_node *next;
+
+	BUG_ON(entity->tree != &st->idle);
+
+	if (entity == st->first_idle) {
+		next = rb_next(&entity->rb_node);
+		st->first_idle = bfq_entity_of(next);
+	}
+
+	if (entity == st->last_idle) {
+		next = rb_prev(&entity->rb_node);
+		st->last_idle = bfq_entity_of(next);
+	}
+
+	bfq_extract(&st->idle, entity);
+}
+
+/**
+ * bfq_insert - generic tree insertion.
+ * @root: tree root.
+ * @entity: entity to insert.
+ *
+ * This is used for the idle and the active tree, since they are both
+ * ordered by finish time.
+ */
+static void bfq_insert(struct rb_root *root, struct io_entity *entity)
+{
+	struct io_entity *entry;
+	struct rb_node **node = &root->rb_node;
+	struct rb_node *parent = NULL;
+
+	BUG_ON(entity->tree != NULL);
+
+	while (*node != NULL) {
+		parent = *node;
+		entry = rb_entry(parent, struct io_entity, rb_node);
+
+		if (bfq_gt(entry->finish, entity->finish))
+			node = &parent->rb_left;
+		else
+			node = &parent->rb_right;
+	}
+
+	rb_link_node(&entity->rb_node, parent, node);
+	rb_insert_color(&entity->rb_node, root);
+
+	entity->tree = root;
+}
+
+/**
+ * bfq_update_min - update the min_start field of a entity.
+ * @entity: the entity to update.
+ * @node: one of its children.
+ *
+ * This function is called when @entity may store an invalid value for
+ * min_start due to updates to the active tree.  The function  assumes
+ * that the subtree rooted at @node (which may be its left or its right
+ * child) has a valid min_start value.
+ */
+static inline void bfq_update_min(struct io_entity *entity,
+					struct rb_node *node)
+{
+	struct io_entity *child;
+
+	if (node != NULL) {
+		child = rb_entry(node, struct io_entity, rb_node);
+		if (bfq_gt(entity->min_start, child->min_start))
+			entity->min_start = child->min_start;
+	}
+}
+
+/**
+ * bfq_update_active_node - recalculate min_start.
+ * @node: the node to update.
+ *
+ * @node may have changed position or one of its children may have moved,
+ * this function updates its min_start value.  The left and right subtrees
+ * are assumed to hold a correct min_start value.
+ */
+static inline void bfq_update_active_node(struct rb_node *node)
+{
+	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
+
+	entity->min_start = entity->start;
+	bfq_update_min(entity, node->rb_right);
+	bfq_update_min(entity, node->rb_left);
+}
+
+/**
+ * bfq_update_active_tree - update min_start for the whole active tree.
+ * @node: the starting node.
+ *
+ * @node must be the deepest modified node after an update.  This function
+ * updates its min_start using the values held by its children, assuming
+ * that they did not change, and then updates all the nodes that may have
+ * changed in the path to the root.  The only nodes that may have changed
+ * are the ones in the path or their siblings.
+ */
+static void bfq_update_active_tree(struct rb_node *node)
+{
+	struct rb_node *parent;
+
+up:
+	bfq_update_active_node(node);
+
+	parent = rb_parent(node);
+	if (parent == NULL)
+		return;
+
+	if (node == parent->rb_left && parent->rb_right != NULL)
+		bfq_update_active_node(parent->rb_right);
+	else if (parent->rb_left != NULL)
+		bfq_update_active_node(parent->rb_left);
+
+	node = parent;
+	goto up;
+}
+
+/**
+ * bfq_active_insert - insert an entity in the active tree of its group/device.
+ * @st: the service tree of the entity.
+ * @entity: the entity being inserted.
+ *
+ * The active tree is ordered by finish time, but an extra key is kept
+ * per each node, containing the minimum value for the start times of
+ * its children (and the node itself), so it's possible to search for
+ * the eligible node with the lowest finish time in logarithmic time.
+ */
+static void bfq_active_insert(struct io_service_tree *st,
+					struct io_entity *entity)
+{
+	struct rb_node *node = &entity->rb_node;
+
+	bfq_insert(&st->active, entity);
+
+	if (node->rb_left != NULL)
+		node = node->rb_left;
+	else if (node->rb_right != NULL)
+		node = node->rb_right;
+
+	bfq_update_active_tree(node);
+}
+
+/**
+ * bfq_ioprio_to_weight - calc a weight from an ioprio.
+ * @ioprio: the ioprio value to convert.
+ */
+static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
+{
+	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+	return IOPRIO_BE_NR - ioprio;
+}
+
+void bfq_get_entity(struct io_entity *entity)
+{
+	struct io_queue *ioq = io_entity_to_ioq(entity);
+
+	if (ioq)
+		elv_get_ioq(ioq);
+}
+
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->sched_data = &iog->sched_data;
+}
+
+/**
+ * bfq_find_deepest - find the deepest node that an extraction can modify.
+ * @node: the node being removed.
+ *
+ * Do the first step of an extraction in an rb tree, looking for the
+ * node that will replace @node, and returning the deepest node that
+ * the following modifications to the tree can touch.  If @node is the
+ * last node in the tree return %NULL.
+ */
+static struct rb_node *bfq_find_deepest(struct rb_node *node)
+{
+	struct rb_node *deepest;
+
+	if (node->rb_right == NULL && node->rb_left == NULL)
+		deepest = rb_parent(node);
+	else if (node->rb_right == NULL)
+		deepest = node->rb_left;
+	else if (node->rb_left == NULL)
+		deepest = node->rb_right;
+	else {
+		deepest = rb_next(node);
+		if (deepest->rb_right != NULL)
+			deepest = deepest->rb_right;
+		else if (rb_parent(deepest) != node)
+			deepest = rb_parent(deepest);
+	}
+
+	return deepest;
+}
+
+/**
+ * bfq_active_extract - remove an entity from the active tree.
+ * @st: the service_tree containing the tree.
+ * @entity: the entity being removed.
+ */
+static void bfq_active_extract(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct rb_node *node;
+
+	node = bfq_find_deepest(&entity->rb_node);
+	bfq_extract(&st->active, entity);
+
+	if (node != NULL)
+		bfq_update_active_tree(node);
+}
+
+/**
+ * bfq_idle_insert - insert an entity into the idle tree.
+ * @st: the service tree containing the tree.
+ * @entity: the entity to insert.
+ */
+static void bfq_idle_insert(struct io_service_tree *st,
+					struct io_entity *entity)
+{
+	struct io_entity *first_idle = st->first_idle;
+	struct io_entity *last_idle = st->last_idle;
+
+	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
+		st->first_idle = entity;
+	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
+		st->last_idle = entity;
+
+	bfq_insert(&st->idle, entity);
+}
+
+/**
+ * bfq_forget_entity - remove an entity from the wfq trees.
+ * @st: the service tree.
+ * @entity: the entity being removed.
+ *
+ * Update the device status and forget everything about @entity, putting
+ * the device reference to it, if it is a queue.  Entities belonging to
+ * groups are not refcounted.
+ */
+static void bfq_forget_entity(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct io_queue *ioq = NULL;
+
+	BUG_ON(!entity->on_st);
+	entity->on_st = 0;
+	st->wsum -= entity->weight;
+	ioq = io_entity_to_ioq(entity);
+	if (!ioq)
+		return;
+	elv_put_ioq(ioq);
+}
+
+/**
+ * bfq_put_idle_entity - release the idle tree ref of an entity.
+ * @st: service tree for the entity.
+ * @entity: the entity being released.
+ */
+void bfq_put_idle_entity(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	bfq_idle_extract(st, entity);
+	bfq_forget_entity(st, entity);
+}
+
+/**
+ * bfq_forget_idle - update the idle tree if necessary.
+ * @st: the service tree to act upon.
+ *
+ * To preserve the global O(log N) complexity we only remove one entry here;
+ * as the idle tree will not grow indefinitely this can be done safely.
+ */
+void bfq_forget_idle(struct io_service_tree *st)
+{
+	struct io_entity *first_idle = st->first_idle;
+	struct io_entity *last_idle = st->last_idle;
+
+	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
+	    !bfq_gt(last_idle->finish, st->vtime)) {
+		/*
+		 * Active tree is empty. Pull back vtime to finish time of
+		 * last idle entity on idle tree.
+		 * The rationale seems to be that it reduces the possibility of
+		 * vtime wraparound (bfq_gt(V-F) < 0).
+		 */
+		st->vtime = last_idle->finish;
+	}
+
+	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
+		bfq_put_idle_entity(st, first_idle);
+}
+
+
+static struct io_service_tree *
+__bfq_entity_update_prio(struct io_service_tree *old_st,
+				struct io_entity *entity)
+{
+	struct io_service_tree *new_st = old_st;
+	struct io_queue *ioq = io_entity_to_ioq(entity);
+
+	if (entity->ioprio_changed) {
+		entity->ioprio = entity->new_ioprio;
+		entity->ioprio_class = entity->new_ioprio_class;
+		entity->ioprio_changed = 0;
+
+		/*
+		 * Also update the scaled budget for ioq. Group will get the
+		 * updated budget once ioq is selected to run next.
+		 */
+		if (ioq) {
+			struct elv_fq_data *efqd = ioq->efqd;
+			entity->budget = elv_prio_to_slice(efqd, ioq);
+		}
+
+		old_st->wsum -= entity->weight;
+		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
+
+		/*
+		 * NOTE: here we may be changing the weight too early,
+		 * this will cause unfairness.  The correct approach
+		 * would have required additional complexity to defer
+		 * weight changes to the proper time instants (i.e.,
+		 * when entity->finish <= old_st->vtime).
+		 */
+		new_st = io_entity_service_tree(entity);
+		new_st->wsum += entity->weight;
+
+		if (new_st != old_st)
+			entity->start = new_st->vtime;
+	}
+
+	return new_st;
+}
+
+/**
+ * __bfq_activate_entity - activate an entity.
+ * @entity: the entity being activated.
+ *
+ * Called whenever an entity is activated, i.e., it is not active and one
+ * of its children receives a new request, or has to be reactivated due to
+ * budget exhaustion.  It uses the current budget of the entity (and the
+ * service received if @entity is active) of the queue to calculate its
+ * timestamps.
+ */
+static void __bfq_activate_entity(struct io_entity *entity, int add_front)
+{
+	struct io_sched_data *sd = entity->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+
+	if (entity == sd->active_entity) {
+		BUG_ON(entity->tree != NULL);
+		/*
+		 * If we are requeueing the current entity we have
+		 * to take care of not charging to it service it has
+		 * not received.
+		 */
+		bfq_calc_finish(entity, entity->service);
+		entity->start = entity->finish;
+		sd->active_entity = NULL;
+	} else if (entity->tree == &st->active) {
+		/*
+		 * Requeueing an entity due to a change of some
+		 * next_active entity below it.  We reuse the old
+		 * start time.
+		 */
+		bfq_active_extract(st, entity);
+	} else if (entity->tree == &st->idle) {
+		/*
+		 * Must be on the idle tree, bfq_idle_extract() will
+		 * check for that.
+		 */
+		bfq_idle_extract(st, entity);
+		entity->start = bfq_gt(st->vtime, entity->finish) ?
+				       st->vtime : entity->finish;
+	} else {
+		/*
+		 * The finish time of the entity may be invalid, and
+		 * it is in the past for sure, otherwise the queue
+		 * would have been on the idle tree.
+		 */
+		entity->start = st->vtime;
+		st->wsum += entity->weight;
+		bfq_get_entity(entity);
+
+		BUG_ON(entity->on_st);
+		entity->on_st = 1;
+	}
+
+	st = __bfq_entity_update_prio(st, entity);
+	/*
+	 * This is to emulate cfq like functionality where preemption can
+	 * happen with-in same class, like sync queue preempting async queue
+	 * Maybe this is not a very good idea from a fairness point of view,
+	 * as the preempting queue gains share. Keeping it for now.
+	 */
+	if (add_front) {
+		struct io_entity *next_entity;
+
+		/*
+		 * Determine the entity which will be dispatched next
+		 * Use sd->next_active once hierarchical patch is applied
+		 */
+		next_entity = bfq_lookup_next_entity(sd, 0);
+
+		if (next_entity && next_entity != entity) {
+			struct io_service_tree *new_st;
+			bfq_timestamp_t delta;
+
+			new_st = io_entity_service_tree(next_entity);
+
+			/*
+			 * At this point, both entities should belong to
+			 * same service tree as cross service tree preemption
+			 * is automatically taken care by algorithm
+			 */
+			BUG_ON(new_st != st);
+			entity->finish = next_entity->finish - 1;
+			delta = bfq_delta(entity->budget, entity->weight);
+			entity->start = entity->finish - delta;
+			if (bfq_gt(entity->start, st->vtime))
+				entity->start = st->vtime;
+		}
+	} else {
+		bfq_calc_finish(entity, entity->budget);
+	}
+	bfq_active_insert(st, entity);
+}
+
+/**
+ * bfq_activate_entity - activate an entity.
+ * @entity: the entity to activate.
+ */
+void bfq_activate_entity(struct io_entity *entity, int add_front)
+{
+	__bfq_activate_entity(entity, add_front);
+}
+
+/**
+ * __bfq_deactivate_entity - deactivate an entity from its service tree.
+ * @entity: the entity to deactivate.
+ * @requeue: if false, the entity will not be put into the idle tree.
+ *
+ * Deactivate an entity, independently from its previous state.  If the
+ * entity was not on a service tree just return, otherwise if it is on
+ * any scheduler tree, extract it from that tree, and if necessary
+ * and if the caller did not specify @requeue, put it on the idle tree.
+ *
+ */
+int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
+{
+	struct io_sched_data *sd = entity->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+	int was_active = entity == sd->active_entity;
+	int ret = 0;
+
+	if (!entity->on_st)
+		return 0;
+
+	BUG_ON(was_active && entity->tree != NULL);
+
+	if (was_active) {
+		bfq_calc_finish(entity, entity->service);
+		sd->active_entity = NULL;
+	} else if (entity->tree == &st->active)
+		bfq_active_extract(st, entity);
+	else if (entity->tree == &st->idle)
+		bfq_idle_extract(st, entity);
+	else if (entity->tree != NULL)
+		BUG();
+
+	if (!requeue || !bfq_gt(entity->finish, st->vtime))
+		bfq_forget_entity(st, entity);
+	else
+		bfq_idle_insert(st, entity);
+
+	BUG_ON(sd->active_entity == entity);
+
+	return ret;
+}
+
+/**
+ * bfq_deactivate_entity - deactivate an entity.
+ * @entity: the entity to deactivate.
+ * @requeue: true if the entity can be put on the idle tree
+ */
+void bfq_deactivate_entity(struct io_entity *entity, int requeue)
+{
+	__bfq_deactivate_entity(entity, requeue);
+}
+
+/**
+ * bfq_update_vtime - update vtime if necessary.
+ * @st: the service tree to act upon.
+ *
+ * If necessary update the service tree vtime to have at least one
+ * eligible entity, skipping to its start time.  Assumes that the
+ * active tree of the device is not empty.
+ *
+ * NOTE: this hierarchical implementation updates vtimes quite often,
+ * we may end up with reactivated tasks getting timestamps after a
+ * vtime skip done because we needed a ->first_active entity on some
+ * intermediate node.
+ */
+static void bfq_update_vtime(struct io_service_tree *st)
+{
+	struct io_entity *entry;
+	struct rb_node *node = st->active.rb_node;
+
+	entry = rb_entry(node, struct io_entity, rb_node);
+	if (bfq_gt(entry->min_start, st->vtime)) {
+		st->vtime = entry->min_start;
+		bfq_forget_idle(st);
+	}
+}
+
+/**
+ * bfq_first_active - find the eligible entity with the smallest finish time
+ * @st: the service tree to select from.
+ *
+ * This function searches the first schedulable entity, starting from the
+ * root of the tree and going on the left every time on this side there is
+ * a subtree with at least one eligible (start <= vtime) entity.  The path
+ * on the right is followed only if a) the left subtree contains no eligible
+ * entities and b) no eligible entity has been found yet.
+ */
+static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
+{
+	struct io_entity *entry, *first = NULL;
+	struct rb_node *node = st->active.rb_node;
+
+	while (node != NULL) {
+		entry = rb_entry(node, struct io_entity, rb_node);
+left:
+		if (!bfq_gt(entry->start, st->vtime))
+			first = entry;
+
+		BUG_ON(bfq_gt(entry->min_start, st->vtime));
+
+		if (node->rb_left != NULL) {
+			entry = rb_entry(node->rb_left,
+					 struct io_entity, rb_node);
+			if (!bfq_gt(entry->min_start, st->vtime)) {
+				node = node->rb_left;
+				goto left;
+			}
+		}
+		if (first != NULL)
+			break;
+		node = node->rb_right;
+	}
+
+	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
+	return first;
+}
+
+/**
+ * __bfq_lookup_next_entity - return the first eligible entity in @st.
+ * @st: the service tree.
+ *
+ * Update the virtual time in @st and return the first eligible entity
+ * it contains.
+ */
+static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
+{
+	struct io_entity *entity;
+
+	if (RB_EMPTY_ROOT(&st->active))
+		return NULL;
+
+	bfq_update_vtime(st);
+	entity = bfq_first_active_entity(st);
+	BUG_ON(bfq_gt(entity->start, st->vtime));
+
+	return entity;
+}
+
+/**
+ * bfq_lookup_next_entity - return the first eligible entity in @sd.
+ * @sd: the sched_data.
+ * @extract: if true the returned entity will be also extracted from @sd.
+ *
+ * NOTE: since we cache the next_active entity at each level of the
+ * hierarchy, the complexity of the lookup can be decreased with
+ * absolutely no effort just returning the cached next_active value;
+ * we prefer to do full lookups to test the consistency of the data
+ * structures.
+ */
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract)
+{
+	struct io_service_tree *st = sd->service_tree;
+	struct io_entity *entity;
+	int i;
+
+	/*
+	 * We should not call lookup when an entity is active, as doing lookup
+	 * can result in an erroneous vtime jump.
+	 */
+	BUG_ON(sd->active_entity != NULL);
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
+		entity = __bfq_lookup_next_entity(st);
+		if (entity != NULL) {
+			if (extract) {
+				bfq_active_extract(st, entity);
+				sd->active_entity = entity;
+			}
+			break;
+		}
+	}
+
+	return entity;
+}
+
+void entity_served(struct io_entity *entity, bfq_service_t served)
+{
+	struct io_service_tree *st;
+
+	st = io_entity_service_tree(entity);
+	entity->service += served;
+	BUG_ON(st->wsum == 0);
+	st->vtime += bfq_delta(served, st->wsum);
+	bfq_forget_idle(st);
+}
+
+/**
+ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
+ * @st: the service tree being flushed.
+ */
+void io_flush_idle_tree(struct io_service_tree *st)
+{
+	struct io_entity *entity = st->first_idle;
+
+	for (; entity != NULL; entity = st->first_idle)
+		__bfq_deactivate_entity(entity, 0);
+}
+
+/* Elevator fair queuing function */
+struct io_queue *rq_ioq(struct request *rq)
+{
+	return rq->ioq;
+}
+
+static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
+{
+	return e->efqd.active_queue;
+}
+
+void *elv_active_sched_queue(struct elevator_queue *e)
+{
+	return ioq_sched_queue(elv_active_ioq(e));
+}
+EXPORT_SYMBOL(elv_active_sched_queue);
+
+int elv_nr_busy_ioq(struct elevator_queue *e)
+{
+	return e->efqd.busy_queues;
+}
+EXPORT_SYMBOL(elv_nr_busy_ioq);
+
+int elv_hw_tag(struct elevator_queue *e)
+{
+	return e->efqd.hw_tag;
+}
+EXPORT_SYMBOL(elv_hw_tag);
+
+/* Helper functions for operating on elevator idle slice timer */
+int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+
+	return mod_timer(&efqd->idle_slice_timer, expires);
+}
+EXPORT_SYMBOL(elv_mod_idle_slice_timer);
+
+int elv_del_idle_slice_timer(struct elevator_queue *eq)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+
+	return del_timer(&efqd->idle_slice_timer);
+}
+EXPORT_SYMBOL(elv_del_idle_slice_timer);
+
+unsigned int elv_get_slice_idle(struct elevator_queue *eq)
+{
+	return eq->efqd.elv_slice_idle;
+}
+EXPORT_SYMBOL(elv_get_slice_idle);
+
+void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
+{
+	entity_served(&ioq->entity, served);
+}
+
+/* Tells whether ioq is queued in root group or not */
+static inline int is_root_group_ioq(struct request_queue *q,
+					struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
+}
+
+/*
+ * sysfs parts below -->
+ */
+static ssize_t
+elv_var_show(unsigned int var, char *page)
+{
+	return sprintf(page, "%d\n", var);
+}
+
+static ssize_t
+elv_var_store(unsigned int *var, const char *page, size_t count)
+{
+	char *p = (char *) page;
+
+	*var = simple_strtoul(p, &p, 10);
+	return count;
+}
+
+#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
+ssize_t __FUNC(struct elevator_queue *e, char *page)		\
+{									\
+	struct elv_fq_data *efqd = &e->efqd;				\
+	unsigned int __data = __VAR;					\
+	if (__CONV)							\
+		__data = jiffies_to_msecs(__data);			\
+	return elv_var_show(__data, (page));				\
+}
+SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
+EXPORT_SYMBOL(elv_slice_idle_show);
+SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
+EXPORT_SYMBOL(elv_slice_sync_show);
+SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
+EXPORT_SYMBOL(elv_slice_async_show);
+#undef SHOW_FUNCTION
+
+#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
+ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
+{									\
+	struct elv_fq_data *efqd = &e->efqd;				\
+	unsigned int __data;						\
+	int ret = elv_var_store(&__data, (page), count);		\
+	if (__data < (MIN))						\
+		__data = (MIN);						\
+	else if (__data > (MAX))					\
+		__data = (MAX);						\
+	if (__CONV)							\
+		*(__PTR) = msecs_to_jiffies(__data);			\
+	else								\
+		*(__PTR) = __data;					\
+	return ret;							\
+}
+STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_idle_store);
+STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_sync_store);
+STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_async_store);
+#undef STORE_FUNCTION
+
+void elv_schedule_dispatch(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (elv_nr_busy_ioq(q->elevator)) {
+		elv_log(efqd, "schedule dispatch");
+		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
+	}
+}
+EXPORT_SYMBOL(elv_schedule_dispatch);
+
+void elv_kick_queue(struct work_struct *work)
+{
+	struct elv_fq_data *efqd =
+		container_of(work, struct elv_fq_data, unplug_work);
+	struct request_queue *q = efqd->queue;
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	blk_start_queueing(q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+void elv_shutdown_timer_wq(struct elevator_queue *e)
+{
+	del_timer_sync(&e->efqd.idle_slice_timer);
+	cancel_work_sync(&e->efqd.unplug_work);
+}
+EXPORT_SYMBOL(elv_shutdown_timer_wq);
+
+void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	ioq->slice_end = jiffies + ioq->entity.budget;
+	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
+}
+
+static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = ioq->efqd;
+	unsigned long elapsed = jiffies - ioq->last_end_request;
+	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
+
+	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
+	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
+	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
+}
+
+/*
+ * Disable idle window if the process thinks too long.
+ * This idle flag can also be updated by io scheduler.
+ */
+static void elv_ioq_update_idle_window(struct elevator_queue *eq,
+				struct io_queue *ioq, struct request *rq)
+{
+	int old_idle, enable_idle;
+	struct elv_fq_data *efqd = ioq->efqd;
+
+	/*
+	 * Don't idle for async or idle io prio class
+	 */
+	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
+		return;
+
+	enable_idle = old_idle = elv_ioq_idle_window(ioq);
+
+	if (!efqd->elv_slice_idle)
+		enable_idle = 0;
+	else if (ioq_sample_valid(ioq->ttime_samples)) {
+		if (ioq->ttime_mean > efqd->elv_slice_idle)
+			enable_idle = 0;
+		else
+			enable_idle = 1;
+	}
+
+	/*
+	 * From think time perspective idle should be enabled. Check with
+	 * io scheduler if it wants to disable idling based on additional
+	 * considerations like seek pattern.
+	 */
+	if (enable_idle) {
+		if (eq->ops->elevator_update_idle_window_fn)
+			enable_idle = eq->ops->elevator_update_idle_window_fn(
+						eq, ioq->sched_queue, rq);
+		if (!enable_idle)
+			elv_log_ioq(efqd, ioq, "iosched disabled idle");
+	}
+
+	if (old_idle != enable_idle) {
+		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
+		if (enable_idle)
+			elv_mark_ioq_idle_window(ioq);
+		else
+			elv_clear_ioq_idle_window(ioq);
+	}
+}
+
+struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
+{
+	struct io_queue *ioq = NULL;
+
+	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
+	return ioq;
+}
+EXPORT_SYMBOL(elv_alloc_ioq);
+
+void elv_free_ioq(struct io_queue *ioq)
+{
+	kmem_cache_free(elv_ioq_pool, ioq);
+}
+EXPORT_SYMBOL(elv_free_ioq);
+
+int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
+			void *sched_queue, int ioprio_class, int ioprio,
+			int is_sync)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
+
+	RB_CLEAR_NODE(&ioq->entity.rb_node);
+	atomic_set(&ioq->ref, 0);
+	ioq->efqd = efqd;
+	elv_ioq_set_ioprio_class(ioq, ioprio_class);
+	elv_ioq_set_ioprio(ioq, ioprio);
+	ioq->pid = current->pid;
+	ioq->sched_queue = sched_queue;
+	if (is_sync && !elv_ioq_class_idle(ioq))
+		elv_mark_ioq_idle_window(ioq);
+	bfq_init_entity(&ioq->entity, iog);
+	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
+	if (is_sync)
+		ioq->last_end_request = jiffies;
+
+	return 0;
+}
+EXPORT_SYMBOL(elv_init_ioq);
+
+void elv_put_ioq(struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = ioq->efqd;
+	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
+						efqd);
+
+	BUG_ON(atomic_read(&ioq->ref) <= 0);
+	if (!atomic_dec_and_test(&ioq->ref))
+		return;
+	BUG_ON(ioq->nr_queued);
+	BUG_ON(ioq->entity.tree != NULL);
+	BUG_ON(elv_ioq_busy(ioq));
+	BUG_ON(efqd->active_queue == ioq);
+
+	/* Can be called by outgoing elevator. Don't use q */
+	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
+
+	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
+	elv_log_ioq(efqd, ioq, "put_queue");
+	elv_free_ioq(ioq);
+}
+EXPORT_SYMBOL(elv_put_ioq);
+
+void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
+{
+	struct io_queue *ioq = *ioq_ptr;
+
+	if (ioq != NULL) {
+		/* Drop the reference taken by the io group */
+		elv_put_ioq(ioq);
+		*ioq_ptr = NULL;
+	}
+}
+
+/*
+ * Normally next io queue to be served is selected from the service tree.
+ * This function allows one to choose a specific io queue to run next
+ * out of order. This is primarily to accommodate the close_cooperator
+ * feature of cfq.
+ *
+ * Currently this is done only at the root level, as to begin with we support
+ * the close cooperator feature only for the root group, to make sure the
+ * default cfq behavior in a flat hierarchy is not changed.
+ */
+void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	struct io_sched_data *sd = &efqd->root_group->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+
+	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
+	BUG_ON(!efqd->busy_queues);
+	BUG_ON(sd != entity->sched_data);
+	BUG_ON(!st);
+
+	bfq_update_vtime(st);
+	bfq_active_extract(st, entity);
+	sd->active_entity = entity;
+	entity->service = 0;
+	elv_log_ioq(efqd, ioq, "set_next_ioq");
+}
+
+/* Get next queue for service. */
+struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = NULL;
+	struct io_queue *ioq = NULL;
+	struct io_sched_data *sd;
+
+	/*
+	 * We should not call lookup when an entity is active, as doing
+	 * lookup can result in an erroneous vtime jump.
+	 */
+	BUG_ON(efqd->active_queue != NULL);
+
+	if (!efqd->busy_queues)
+		return NULL;
+
+	sd = &efqd->root_group->sched_data;
+	entity = bfq_lookup_next_entity(sd, 1);
+
+	BUG_ON(!entity);
+	if (extract)
+		entity->service = 0;
+	ioq = io_entity_to_ioq(entity);
+
+	return ioq;
+}
+
+/*
+ * coop tells that io scheduler selected a queue for us and we did not
+ * select the next queue based on fairness.
+ */
+static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int coop)
+{
+	struct request_queue *q = efqd->queue;
+
+	if (ioq) {
+		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
+							efqd->busy_queues);
+		ioq->slice_end = 0;
+
+		elv_clear_ioq_wait_request(ioq);
+		elv_clear_ioq_must_dispatch(ioq);
+		elv_mark_ioq_slice_new(ioq);
+
+		del_timer(&efqd->idle_slice_timer);
+	}
+
+	efqd->active_queue = ioq;
+
+	/* Let iosched know if it wants to take some action */
+	if (ioq) {
+		if (q->elevator->ops->elevator_active_ioq_set_fn)
+			q->elevator->ops->elevator_active_ioq_set_fn(q,
+							ioq->sched_queue, coop);
+	}
+}
+
+/* Get and set a new active queue for service. */
+struct io_queue *elv_set_active_ioq(struct request_queue *q,
+						struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	int coop = 0;
+
+	if (!ioq)
+		ioq = elv_get_next_ioq(q, 1);
+	else {
+		elv_set_next_ioq(q, ioq);
+		/*
+		 * io scheduler selected the next queue for us. Pass this
+		 * info back to the io scheduler. cfq currently uses it
+		 * to reset coop flag on the queue.
+		 */
+		coop = 1;
+	}
+	__elv_set_active_ioq(efqd, ioq, coop);
+	return ioq;
+}
+
+void elv_reset_active_ioq(struct elv_fq_data *efqd)
+{
+	struct request_queue *q = efqd->queue;
+	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
+
+	if (q->elevator->ops->elevator_active_ioq_reset_fn)
+		q->elevator->ops->elevator_active_ioq_reset_fn(q,
+							ioq->sched_queue);
+	efqd->active_queue = NULL;
+	del_timer(&efqd->idle_slice_timer);
+}
+
+void elv_activate_ioq(struct io_queue *ioq, int add_front)
+{
+	bfq_activate_entity(&ioq->entity, add_front);
+}
+
+void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int requeue)
+{
+	bfq_deactivate_entity(&ioq->entity, requeue);
+}
+
+/* Called when an inactive queue receives a new request. */
+void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
+{
+	BUG_ON(elv_ioq_busy(ioq));
+	BUG_ON(ioq == efqd->active_queue);
+	elv_log_ioq(efqd, ioq, "add to busy");
+	elv_activate_ioq(ioq, 0);
+	elv_mark_ioq_busy(ioq);
+	efqd->busy_queues++;
+	if (elv_ioq_class_rt(ioq)) {
+		struct io_group *iog = ioq_to_io_group(ioq);
+		iog->busy_rt_queues++;
+	}
+}
+
+void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
+					int requeue)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	BUG_ON(!elv_ioq_busy(ioq));
+	BUG_ON(ioq->nr_queued);
+	elv_log_ioq(efqd, ioq, "del from busy");
+	elv_clear_ioq_busy(ioq);
+	BUG_ON(efqd->busy_queues == 0);
+	efqd->busy_queues--;
+	if (elv_ioq_class_rt(ioq)) {
+		struct io_group *iog = ioq_to_io_group(ioq);
+		iog->busy_rt_queues--;
+	}
+
+	elv_deactivate_ioq(efqd, ioq, requeue);
+}
+
+/*
+ * Do the accounting. Determine how much service (in terms of time slices)
+ * current queue used and adjust the start, finish time of queue and vtime
+ * of the tree accordingly.
+ *
+ * Determining the service used in terms of time is tricky in certain
+ * situations. Especially when underlying device supports command queuing
+ * and requests from multiple queues can be there at same time, then it
+ * is not clear which queue consumed how much of disk time.
+ *
+ * To mitigate this problem, cfq starts the time slice of the queue only
+ * after the first request from the queue has completed. This does not work
+ * very well if we expire the queue before waiting for the first (and further)
+ * requests from the queue to finish. For seeky queues, we will expire the
+ * queue after dispatching a few requests, without waiting, and start
+ * dispatching from the next queue.
+ *
+ * Not sure how to determine the time consumed by queue in such scenarios.
+ * Currently as a crude approximation, we are charging 25% of time slice
+ * for such cases. A better mechanism is needed for accurate accounting.
+ */
+void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
+
+	assert_spin_locked(q->queue_lock);
+	elv_log_ioq(efqd, ioq, "slice expired");
+
+	if (elv_ioq_wait_request(ioq))
+		del_timer(&efqd->idle_slice_timer);
+
+	elv_clear_ioq_wait_request(ioq);
+
+	/*
+	 * If ioq->slice_end == 0, that means the queue was expired before the
+	 * first request from it got completed. Of course we are not planning
+	 * to idle on the queue otherwise we would not have expired it.
+	 *
+	 * Charge for the 25% slice in such cases. This is not the best thing
+	 * to do but at the same time not very sure what's the next best
+	 * thing to do.
+	 *
+	 * This arises from the fact that we don't have the notion of
+	 * one queue being operational at one time. io scheduler can dispatch
+	 * requests from multiple queues in one dispatch round. Ideally for
+	 * more accurate accounting of exact disk time used by disk, one
+	 * should dispatch requests from only one queue and wait for all
+	 * the requests to finish. But this will reduce throughput.
+	 */
+	if (!ioq->slice_end)
+		slice_used = entity->budget/4;
+	else {
+		if (time_after(ioq->slice_end, jiffies)) {
+			slice_unused = ioq->slice_end - jiffies;
+			if (slice_unused == entity->budget) {
+				/*
+				 * queue got expired immediately after
+				 * completing first request. Charge 25% of
+				 * slice.
+				 */
+				slice_used = entity->budget/4;
+			} else
+				slice_used = entity->budget - slice_unused;
+		} else {
+			slice_overshoot = jiffies - ioq->slice_end;
+			slice_used = entity->budget + slice_overshoot;
+		}
+	}
+
+	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
+			jiffies);
+	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
+				slice_used, entity->budget, slice_overshoot);
+	elv_ioq_served(ioq, slice_used);
+
+	BUG_ON(ioq != efqd->active_queue);
+	elv_reset_active_ioq(efqd);
+
+	if (!ioq->nr_queued)
+		elv_del_ioq_busy(q->elevator, ioq, 1);
+	else
+		elv_activate_ioq(ioq, 0);
+}
+EXPORT_SYMBOL(__elv_ioq_slice_expired);
+
+/*
+ *  Expire the ioq.
+ */
+void elv_ioq_slice_expired(struct request_queue *q)
+{
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+
+	if (ioq)
+		__elv_ioq_slice_expired(q, ioq);
+}
+
+/*
+ * Check if new_ioq should preempt the currently active queue. Return 0 for
+ * no or if we aren't sure, a 1 will cause a preemption attempt.
+ */
+int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
+			struct request *rq)
+{
+	struct io_queue *ioq;
+	struct elevator_queue *eq = q->elevator;
+	struct io_entity *entity, *new_entity;
+
+	ioq = elv_active_ioq(eq);
+
+	if (!ioq)
+		return 0;
+
+	entity = &ioq->entity;
+	new_entity = &new_ioq->entity;
+
+	/*
+	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
+	 */
+
+	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
+	    && entity->ioprio_class != IOPRIO_CLASS_RT)
+		return 1;
+	/*
+	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
+	 */
+
+	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
+	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
+		return 1;
+
+	/*
+	 * Check with io scheduler if it has additional criterion based on
+	 * which it wants to preempt existing queue.
+	 */
+	if (eq->ops->elevator_should_preempt_fn)
+		return eq->ops->elevator_should_preempt_fn(q,
+						ioq_sched_queue(new_ioq), rq);
+
+	return 0;
+}
+
+static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
+{
+	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
+	elv_ioq_slice_expired(q);
+
+	/*
+	 * Put the new queue at the front of the current list,
+	 * so we know that it will be selected next.
+	 */
+
+	elv_activate_ioq(ioq, 1);
+	elv_ioq_set_slice_end(ioq, 0);
+	elv_mark_ioq_slice_new(ioq);
+}
+
+void elv_ioq_request_add(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	BUG_ON(!efqd);
+	BUG_ON(!ioq);
+	efqd->rq_queued++;
+	ioq->nr_queued++;
+
+	if (!elv_ioq_busy(ioq))
+		elv_add_ioq_busy(efqd, ioq);
+
+	elv_ioq_update_io_thinktime(ioq);
+	elv_ioq_update_idle_window(q->elevator, ioq, rq);
+
+	if (ioq == elv_active_ioq(q->elevator)) {
+		/*
+		 * Remember that we saw a request from this process, but
+		 * don't start queuing just yet. Otherwise we risk seeing lots
+		 * of tiny requests, because we disrupt the normal plugging
+		 * and merging. If the request is already larger than a single
+		 * page, let it rip immediately. For that case we assume that
+		 * merging is already done. Ditto for a busy system that
+		 * has other work pending, don't risk delaying until the
+		 * idle timer unplug to continue working.
+		 */
+		if (elv_ioq_wait_request(ioq)) {
+			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
+			    efqd->busy_queues > 1) {
+				del_timer(&efqd->idle_slice_timer);
+				blk_start_queueing(q);
+			}
+			elv_mark_ioq_must_dispatch(ioq);
+		}
+	} else if (elv_should_preempt(q, ioq, rq)) {
+		/*
+		 * not the active queue - expire current slice if it is
+		 * idle and has expired its mean thinktime, or this new queue
+		 * has some old slice time left and is of higher priority or
+		 * this new queue is RT and the current one is BE
+		 */
+		elv_preempt_queue(q, ioq);
+		blk_start_queueing(q);
+	}
+}
+
+void elv_idle_slice_timer(unsigned long data)
+{
+	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
+	struct io_queue *ioq;
+	unsigned long flags;
+	struct request_queue *q = efqd->queue;
+
+	elv_log(efqd, "idle timer fired");
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	ioq = efqd->active_queue;
+
+	if (ioq) {
+
+		/*
+		 * We saw a request before the queue expired, let it through
+		 */
+		if (elv_ioq_must_dispatch(ioq))
+			goto out_kick;
+
+		/*
+		 * expired
+		 */
+		if (elv_ioq_slice_used(ioq))
+			goto expire;
+
+		/*
+		 * only expire and reinvoke request handler, if there are
+		 * other queues with pending requests
+		 */
+		if (!elv_nr_busy_ioq(q->elevator))
+			goto out_cont;
+
+		/*
+		 * not expired and it has a request pending, let it dispatch
+		 */
+		if (ioq->nr_queued)
+			goto out_kick;
+	}
+expire:
+	elv_ioq_slice_expired(q);
+out_kick:
+	elv_schedule_dispatch(q);
+out_cont:
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+void elv_ioq_arm_slice_timer(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+	unsigned long sl;
+
+	BUG_ON(!ioq);
+
+	/*
+	 * SSD device without seek penalty, disable idling. But only do so
+	 * for devices that support queuing, otherwise we still have a problem
+	 * with sync vs async workloads.
+	 */
+	if (blk_queue_nonrot(q) && efqd->hw_tag)
+		return;
+
+	/*
+	 * there are still requests with the driver, don't idle
+	 */
+	if (efqd->rq_in_driver)
+		return;
+
+	/*
+	 * idle is disabled, either manually or by past process history
+	 */
+	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+		return;
+
+	/*
+	 * The iosched may have its own idling logic. In that case the io
+	 * scheduler will take care of arming the timer, if need be.
+	 */
+	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
+		q->elevator->ops->elevator_arm_slice_timer_fn(q,
+						ioq->sched_queue);
+	} else {
+		elv_mark_ioq_wait_request(ioq);
+		sl = efqd->elv_slice_idle;
+		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
+		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
+	}
+}
+
+/* Common layer function to select the next queue to dispatch from */
+void *elv_fq_select_ioq(struct request_queue *q, int force)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
+	struct io_group *iog;
+
+	if (!elv_nr_busy_ioq(q->elevator))
+		return NULL;
+
+	if (ioq == NULL)
+		goto new_queue;
+
+	/*
+	 * Force dispatch. Continue to dispatch from current queue as long
+	 * as it has requests.
+	 */
+	if (unlikely(force)) {
+		if (ioq->nr_queued)
+			goto keep_queue;
+		else
+			goto expire;
+	}
+
+	/*
+	 * The active queue has run out of time, expire it and select new.
+	 */
+	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
+		goto expire;
+
+	/*
+	 * If we have an RT queue waiting, then we pre-empt the current non-RT
+	 * queue.
+	 */
+	iog = ioq_to_io_group(ioq);
+
+	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
+		/*
+		 * We simulate this as if the queue timed out, so that it gets
+		 * to bank the remainder of its time slice.
+		 */
+		elv_log_ioq(efqd, ioq, "preempt");
+		goto expire;
+	}
+
+	/*
+	 * The active queue has requests and isn't expired, allow it to
+	 * dispatch.
+	 */
+
+	if (ioq->nr_queued)
+		goto keep_queue;
+
+	/*
+	 * If another queue has a request waiting within our mean seek
+	 * distance, let it run.  The expire code will check for close
+	 * cooperators and put the close queue at the front of the service
+	 * tree.
+	 */
+	new_ioq = elv_close_cooperator(q, ioq, 0);
+	if (new_ioq)
+		goto expire;
+
+	/*
+	 * No requests pending. If the active queue still has requests in
+	 * flight or is idling for a new request, allow either of these
+	 * conditions to happen (or time out) before selecting a new queue.
+	 */
+
+	if (timer_pending(&efqd->idle_slice_timer) ||
+	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
+expire:
+	elv_ioq_slice_expired(q);
+new_queue:
+	ioq = elv_set_active_ioq(q, new_ioq);
+keep_queue:
+	return ioq;
+}
+
+/* A request got removed from io_queue. Do the accounting */
+void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
+{
+	struct io_queue *ioq;
+	struct elv_fq_data *efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	ioq = rq->ioq;
+	BUG_ON(!ioq);
+	ioq->nr_queued--;
+
+	efqd = ioq->efqd;
+	BUG_ON(!efqd);
+	efqd->rq_queued--;
+
+	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
+		elv_del_ioq_busy(e, ioq, 1);
+}
+
+/* A request got dispatched. Do the accounting. */
+void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
+{
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	BUG_ON(!ioq);
+	elv_ioq_request_dispatched(ioq);
+	elv_ioq_request_removed(e, rq);
+	elv_clear_ioq_must_dispatch(ioq);
+}
+
+void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	efqd->rq_in_driver++;
+	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
+						efqd->rq_in_driver);
+}
+
+void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	WARN_ON(!efqd->rq_in_driver);
+	efqd->rq_in_driver--;
+	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
+						efqd->rq_in_driver);
+}
+
+/*
+ * Update hw_tag based on peak queue depth over 50 samples under
+ * sufficient load.
+ */
+static void elv_update_hw_tag(struct elv_fq_data *efqd)
+{
+	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
+		efqd->rq_in_driver_peak = efqd->rq_in_driver;
+
+	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
+	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
+		return;
+
+	if (efqd->hw_tag_samples++ < 50)
+		return;
+
+	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
+		efqd->hw_tag = 1;
+	else
+		efqd->hw_tag = 0;
+
+	efqd->hw_tag_samples = 0;
+	efqd->rq_in_driver_peak = 0;
+}
+
+/*
+ * If the ioscheduler keeps track of close cooperators, check with it
+ * whether it has a closely co-operating queue.
+ */
+static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
+					struct io_queue *ioq, int probe)
+{
+	struct elevator_queue *e = q->elevator;
+	struct io_queue *new_ioq = NULL;
+
+	/*
+	 * Currently this feature is supported only for flat hierarchy or
+	 * root group queues so that default cfq behavior is not changed.
+	 */
+	if (!is_root_group_ioq(q, ioq))
+		return NULL;
+
+	if (q->elevator->ops->elevator_close_cooperator_fn)
+		new_ioq = e->ops->elevator_close_cooperator_fn(q,
+						ioq->sched_queue, probe);
+
+	/* Only select co-operating queue if it belongs to root group */
+	if (new_ioq && !is_root_group_ioq(q, new_ioq))
+		return NULL;
+
+	return new_ioq;
+}
+
+/* A request got completed from io_queue. Do the accounting. */
+void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
+{
+	const int sync = rq_is_sync(rq);
+	struct io_queue *ioq;
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	ioq = rq->ioq;
+
+	elv_log_ioq(efqd, ioq, "complete");
+
+	elv_update_hw_tag(efqd);
+
+	WARN_ON(!efqd->rq_in_driver);
+	WARN_ON(!ioq->dispatched);
+	efqd->rq_in_driver--;
+	ioq->dispatched--;
+
+	if (sync)
+		ioq->last_end_request = jiffies;
+
+	/*
+	 * If this is the active queue, check if it needs to be expired,
+	 * or if we want to idle in case it has no pending requests.
+	 */
+
+	if (elv_active_ioq(q->elevator) == ioq) {
+		if (elv_ioq_slice_new(ioq)) {
+			elv_ioq_set_prio_slice(q, ioq);
+			elv_clear_ioq_slice_new(ioq);
+		}
+		/*
+		 * If there are no requests waiting in this queue, and
+		 * there are other queues ready to issue requests, AND
+		 * those other queues are issuing requests within our
+		 * mean seek distance, give them a chance to run instead
+		 * of idling.
+		 */
+		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
+			elv_ioq_slice_expired(q);
+		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
+			 && sync && !rq_noidle(rq))
+			elv_ioq_arm_slice_timer(q);
+	}
+
+	if (!efqd->rq_in_driver)
+		elv_schedule_dispatch(q);
+}
+
+struct io_group *io_lookup_io_group_current(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	return efqd->root_group;
+}
+EXPORT_SYMBOL(io_lookup_io_group_current);
+
+void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
+					int ioprio)
+{
+	struct io_queue *ioq = NULL;
+
+	switch (ioprio_class) {
+	case IOPRIO_CLASS_RT:
+		ioq = iog->async_queue[0][ioprio];
+		break;
+	case IOPRIO_CLASS_BE:
+		ioq = iog->async_queue[1][ioprio];
+		break;
+	case IOPRIO_CLASS_IDLE:
+		ioq = iog->async_idle_queue;
+		break;
+	default:
+		BUG();
+	}
+
+	if (ioq)
+		return ioq->sched_queue;
+	return NULL;
+}
+EXPORT_SYMBOL(io_group_async_queue_prio);
+
+void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
+					int ioprio, struct io_queue *ioq)
+{
+	switch (ioprio_class) {
+	case IOPRIO_CLASS_RT:
+		iog->async_queue[0][ioprio] = ioq;
+		break;
+	case IOPRIO_CLASS_BE:
+		iog->async_queue[1][ioprio] = ioq;
+		break;
+	case IOPRIO_CLASS_IDLE:
+		iog->async_idle_queue = ioq;
+		break;
+	default:
+		BUG();
+	}
+
+	/*
+	 * Take the group reference and pin the queue. Group exit will
+	 * clean it up.
+	 */
+	elv_get_ioq(ioq);
+}
+EXPORT_SYMBOL(io_group_set_async_queue);
+
+/*
+ * Release all the io group references to its async queues.
+ */
+void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
+{
+	int i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < IOPRIO_BE_NR; j++)
+			elv_release_ioq(e, &iog->async_queue[i][j]);
+
+	/* Free up async idle queue */
+	elv_release_ioq(e, &iog->async_idle_queue);
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	return iog;
+}
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_group *iog = e->efqd.root_group;
+	struct io_service_tree *st;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	kfree(iog);
+}
+
+static void elv_slab_kill(void)
+{
+	/*
+	 * Caller already ensured that pending RCU callbacks are completed,
+	 * so we should have no busy allocations at this point.
+	 */
+	if (elv_ioq_pool)
+		kmem_cache_destroy(elv_ioq_pool);
+}
+
+static int __init elv_slab_setup(void)
+{
+	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
+	if (!elv_ioq_pool)
+		goto fail;
+
+	return 0;
+fail:
+	elv_slab_kill();
+	return -ENOMEM;
+}
+
+/* Initialize fair queueing data associated with elevator */
+int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
+{
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return 0;
+
+	iog = io_alloc_root_group(q, e, efqd);
+	if (iog == NULL)
+		return 1;
+
+	efqd->root_group = iog;
+	efqd->queue = q;
+
+	init_timer(&efqd->idle_slice_timer);
+	efqd->idle_slice_timer.function = elv_idle_slice_timer;
+	efqd->idle_slice_timer.data = (unsigned long) efqd;
+
+	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
+
+	efqd->elv_slice[0] = elv_slice_async;
+	efqd->elv_slice[1] = elv_slice_sync;
+	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->hw_tag = 1;
+
+	return 0;
+}
+
+/*
+ * elv_exit_fq_data is called before we call elevator_exit_fn. Before
+ * we ask the elevator to clean up its queues, we do the cleanup here so
+ * that all the group and idle tree references to ioq are dropped. Later,
+ * during elevator cleanup, the ioc reference will be dropped, which will
+ * lead to removal of the ioscheduler queue as well as the associated ioq
+ * object.
+ */
+void elv_exit_fq_data(struct elevator_queue *e)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	elv_shutdown_timer_wq(e);
+
+	BUG_ON(timer_pending(&efqd->idle_slice_timer));
+	io_free_root_group(e);
+}
+
+/*
+ * This is called after the io scheduler has cleaned up its data structures.
+ * I don't think that this function is required. Right now just keeping it
+ * because cfq cleans up the timer and work queue again after freeing up
+ * io contexts. To me the io scheduler has already been drained out, and the
+ * active queue has already been expired, so the timer and work queue should
+ * not be activated during the cleanup process.
+ *
+ * Keeping it here for the time being. Will get rid of it later.
+ */
+void elv_exit_fq_data_post(struct elevator_queue *e)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	elv_shutdown_timer_wq(e);
+	BUG_ON(timer_pending(&efqd->idle_slice_timer));
+}
+
+
+static int __init elv_fq_init(void)
+{
+	if (elv_slab_setup())
+		return -ENOMEM;
+
+	/* could be 0 on HZ < 1000 setups */
+
+	if (!elv_slice_async)
+		elv_slice_async = 1;
+
+	if (!elv_slice_idle)
+		elv_slice_idle = 1;
+
+	return 0;
+}
+
+module_init(elv_fq_init);
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
new file mode 100644
index 0000000..5b6c1cc
--- /dev/null
+++ b/block/elevator-fq.h
@@ -0,0 +1,473 @@
+/*
+ * BFQ: data structures and common functions prototypes.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>
+ *
+ * Copyright (C) 2008 Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
+ *		      Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
+ * Copyright (C) 2009 Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
+ * 	              Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
+ */
+
+#include <linux/blkdev.h>
+
+#ifndef _BFQ_SCHED_H
+#define _BFQ_SCHED_H
+
+#define IO_IOPRIO_CLASSES	3
+
+typedef u64 bfq_timestamp_t;
+typedef unsigned long bfq_weight_t;
+typedef unsigned long bfq_service_t;
+struct io_entity;
+struct io_queue;
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+
+#define ELV_ATTR(name) \
+	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
+
+/**
+ * struct bfq_service_tree - per ioprio_class service tree.
+ * @active: tree for active entities (i.e., those backlogged).
+ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
+ * @first_idle: idle entity with minimum F_i.
+ * @last_idle: idle entity with maximum F_i.
+ * @vtime: scheduler virtual time.
+ * @wsum: scheduler weight sum; active and idle entities contribute to it.
+ *
+ * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
+ * ioprio_class has its own independent scheduler, and so its own
+ * bfq_service_tree.  All the fields are protected by the queue lock
+ * of the containing efqd.
+ */
+struct io_service_tree {
+	struct rb_root active;
+	struct rb_root idle;
+
+	struct io_entity *first_idle;
+	struct io_entity *last_idle;
+
+	bfq_timestamp_t vtime;
+	bfq_weight_t wsum;
+};
+
+/**
+ * struct bfq_sched_data - multi-class scheduler.
+ * @active_entity: entity under service.
+ * @next_active: head-of-the-line entity in the scheduler.
+ * @service_tree: array of service trees, one per ioprio_class.
+ *
+ * bfq_sched_data is the basic scheduler queue.  It supports three
+ * ioprio_classes, and can be used either as a toplevel queue or as
+ * an intermediate queue on a hierarchical setup.
+ * @next_active points to the active entity of the sched_data service
+ * trees that will be scheduled next.
+ *
+ * The supported ioprio_classes are the same as in CFQ, in descending
+ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
+ * Requests from higher priority classes are served before all the
+ * requests from lower priority classes; within the same class, service
+ * is distributed among the queues according to B-WF2Q+.
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+struct io_sched_data {
+	struct io_entity *active_entity;
+	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
+};
+
+/**
+ * struct bfq_entity - schedulable entity.
+ * @rb_node: service_tree member.
+ * @on_st: flag, true if the entity is on a tree (either the active or
+ *         the idle one of its service_tree).
+ * @finish: B-WF2Q+ finish timestamp (aka F_i).
+ * @start: B-WF2Q+ start timestamp (aka S_i).
+ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
+ * @min_start: minimum start time of the (active) subtree rooted at
+ *             this entity; used for O(log N) lookups into active trees.
+ * @service: service received during the last round of service.
+ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
+ * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
+ * @parent: parent entity, for hierarchical scheduling.
+ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
+ *                 associated scheduler queue, %NULL on leaf nodes.
+ * @sched_data: the scheduler queue this entity belongs to.
+ * @ioprio: the ioprio in use.
+ * @new_ioprio: when an ioprio change is requested, the new ioprio value
+ * @ioprio_class: the ioprio_class in use.
+ * @new_ioprio_class: when an ioprio_class change is requested, the new
+ *                    ioprio_class value.
+ * @ioprio_changed: flag, true when the user requested an ioprio or
+ *                  ioprio_class change.
+ *
+ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
+ * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
+ * entity belongs to the sched_data of the parent group in the cgroup
+ * hierarchy.  Non-leaf entities have also their own sched_data, stored
+ * in @my_sched_data.
+ *
+ * Each entity stores independently its priority values; this would allow
+ * different weights on different devices, but this functionality is not
+ * exported to userspace by now.  Priorities are updated lazily, first
+ * storing the new values into the new_* fields, then setting the
+ * @ioprio_changed flag.  As soon as there is a transition in the entity
+ * state that allows the priority update to take place the effective and
+ * the requested priority values are synchronized.
+ *
+ * The weight value is calculated from the ioprio to export the same
+ * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
+ * queues that do not spend too much time to consume their budget and
+ * have true sequential behavior, and when there are no external factors
+ * breaking anticipation) the relative weights at each level of the
+ * cgroups hierarchy should be guaranteed.
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+struct io_entity {
+	struct rb_node rb_node;
+
+	int on_st;
+
+	bfq_timestamp_t finish;
+	bfq_timestamp_t start;
+
+	struct rb_root *tree;
+
+	bfq_timestamp_t min_start;
+
+	bfq_service_t service, budget;
+	bfq_weight_t weight;
+
+	struct io_entity *parent;
+
+	struct io_sched_data *my_sched_data;
+	struct io_sched_data *sched_data;
+
+	unsigned short ioprio, new_ioprio;
+	unsigned short ioprio_class, new_ioprio_class;
+
+	int ioprio_changed;
+};
+
+/*
+ * A common structure embedded by every io scheduler into its respective
+ * queue structure.
+ */
+struct io_queue {
+	struct io_entity entity;
+	atomic_t ref;
+	unsigned int flags;
+
+	/* Pointer to generic elevator data structure */
+	struct elv_fq_data *efqd;
+	pid_t pid;
+
+	/* Number of requests queued on this io queue */
+	unsigned long nr_queued;
+
+	/* Requests dispatched from this queue */
+	int dispatched;
+
+	/* Keep track of the think time of processes in this queue */
+	unsigned long last_end_request;
+	unsigned long ttime_total;
+	unsigned long ttime_samples;
+	unsigned long ttime_mean;
+
+	unsigned long slice_end;
+
+	/* Pointer to io scheduler's queue */
+	void *sched_queue;
+};
+
+struct io_group {
+	struct io_sched_data sched_data;
+
+	/* async_queue and idle_queue are used only for cfq */
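+	/* First index: 0 for the RT class, 1 for the BE class */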
+	struct io_queue *async_queue[2][IOPRIO_BE_NR];
+	struct io_queue *async_idle_queue;
+
+	/*
+	 * Used to track any pending rt requests so we can pre-empt current
+	 * non-RT cfqq in service when this value is non-zero.
+	 */
+	unsigned int busy_rt_queues;
+};
+
+struct elv_fq_data {
+	struct io_group *root_group;
+
+	struct request_queue *queue;
+	unsigned int busy_queues;
+
+	/* Number of requests queued */
+	int rq_queued;
+
+	/* Pointer to the ioscheduler queue being served */
+	void *active_queue;
+
+	int rq_in_driver;
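+	/* hw_tag is set when the device appears to support command queuing,
+	 * based on the peak queue depth sampled in elv_update_hw_tag(). */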
+	int hw_tag;
+	int hw_tag_samples;
+	int rq_in_driver_peak;
+
+	/*
+	 * The elevator fair queuing layer can provide idling to ensure
+	 * fairness for processes doing dependent reads. This might be needed
+	 * to ensure fairness between two processes doing synchronous reads
+	 * in two different cgroups. noop and deadline don't have any notion
+	 * of anticipation/idling, so as of now they are the users of this
+	 * functionality.
+	 */
+	unsigned int elv_slice_idle;
+	struct timer_list idle_slice_timer;
+	struct work_struct unplug_work;
+
+	unsigned int elv_slice[2];
+};
+
+extern int elv_slice_idle;
+extern int elv_slice_async;
+
+/* Logging facilities. */
+#define elv_log_ioq(efqd, ioq, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
+				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
+
+#define elv_log(efqd, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
+
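+/* Require a minimum number of samples before trusting the collected statistics */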
+#define ioq_sample_valid(samples)   ((samples) > 80)
+
+/* Some shared queue flag manipulation functions among elevators */
+
+enum elv_queue_state_flags {
+	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
+	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
+	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
+	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
+	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
+	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
+	ELV_QUEUE_FLAG_NR,
+};
+
+#define ELV_IO_QUEUE_FLAG_FNS(name)					\
+static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
+{                                                                       \
+	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
+}                                                                       \
+static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
+{                                                                       \
+	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
+}                                                                       \
+static inline int elv_ioq_##name(struct io_queue *ioq)         		\
+{                                                                       \
+	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
+}
+
+ELV_IO_QUEUE_FLAG_FNS(busy)
+ELV_IO_QUEUE_FLAG_FNS(sync)
+ELV_IO_QUEUE_FLAG_FNS(wait_request)
+ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
+ELV_IO_QUEUE_FLAG_FNS(idle_window)
+ELV_IO_QUEUE_FLAG_FNS(slice_new)
+
+static inline struct io_service_tree *
+io_entity_service_tree(struct io_entity *entity)
+{
+	struct io_sched_data *sched_data = entity->sched_data;
+	unsigned int idx = entity->ioprio_class - 1;
+
+	BUG_ON(idx >= IO_IOPRIO_CLASSES);
+	BUG_ON(sched_data == NULL);
+
+	return sched_data->service_tree + idx;
+}
+
+/* A request got dispatched from the io_queue. Do the accounting. */
+static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
+{
+	ioq->dispatched++;
+}
+
+static inline int elv_ioq_slice_used(struct io_queue *ioq)
+{
+	if (elv_ioq_slice_new(ioq))
+		return 0;
+	if (time_before(jiffies, ioq->slice_end))
+		return 0;
+
+	return 1;
+}
+
+/* How many requests are currently dispatched from the queue */
+static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
+{
+	return ioq->dispatched;
+}
+
+/* How many requests are currently queued in the queue */
+static inline int elv_ioq_nr_queued(struct io_queue *ioq)
+{
+	return ioq->nr_queued;
+}
+
+static inline void elv_get_ioq(struct io_queue *ioq)
+{
+	atomic_inc(&ioq->ref);
+}
+
+static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
+						unsigned long slice_end)
+{
+	ioq->slice_end = slice_end;
+}
+
+static inline int elv_ioq_class_idle(struct io_queue *ioq)
+{
+	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
+}
+
+static inline int elv_ioq_class_rt(struct io_queue *ioq)
+{
+	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
+}
+
+static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
+{
+	return ioq->entity.new_ioprio_class;
+}
+
+static inline int elv_ioq_ioprio(struct io_queue *ioq)
+{
+	return ioq->entity.new_ioprio;
+}
+
+static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
+						int ioprio_class)
+{
+	ioq->entity.new_ioprio_class = ioprio_class;
+	ioq->entity.ioprio_changed = 1;
+}
+
+static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
+{
+	ioq->entity.new_ioprio = ioprio;
+	ioq->entity.ioprio_changed = 1;
+}
+
+static inline void *ioq_sched_queue(struct io_queue *ioq)
+{
+	if (ioq)
+		return ioq->sched_queue;
+	return NULL;
+}
+
+static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
+{
+	return container_of(ioq->entity.sched_data, struct io_group,
+						sched_data);
+}
+
+extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
+						size_t count);
+extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
+						size_t count);
+extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
+						size_t count);
+
+/* Functions used by elevator.c */
+extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
+extern void elv_exit_fq_data(struct elevator_queue *e);
+extern void elv_exit_fq_data_post(struct elevator_queue *e);
+
+extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
+extern void elv_ioq_request_removed(struct elevator_queue *e,
+					struct request *rq);
+extern void elv_fq_dispatched_request(struct elevator_queue *e,
+					struct request *rq);
+
+extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
+extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
+
+extern void elv_ioq_completed_request(struct request_queue *q,
+				struct request *rq);
+
+extern void *elv_fq_select_ioq(struct request_queue *q, int force);
+extern struct io_queue *rq_ioq(struct request *rq);
+
+/* Functions used by io schedulers */
+extern void elv_put_ioq(struct io_queue *ioq);
+extern void __elv_ioq_slice_expired(struct request_queue *q,
+					struct io_queue *ioq);
+extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
+		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
+extern void elv_schedule_dispatch(struct request_queue *q);
+extern int elv_hw_tag(struct elevator_queue *e);
+extern void *elv_active_sched_queue(struct elevator_queue *e);
+extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
+					unsigned long expires);
+extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
+extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
+extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
+					int ioprio);
+extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
+					int ioprio, struct io_queue *ioq);
+extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
+extern int elv_nr_busy_ioq(struct elevator_queue *e);
+extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
+extern void elv_free_ioq(struct io_queue *ioq);
+
+#else /* CONFIG_ELV_FAIR_QUEUING */
+
+static inline int elv_init_fq_data(struct request_queue *q,
+					struct elevator_queue *e)
+{
+	return 0;
+}
+
+static inline void elv_exit_fq_data(struct elevator_queue *e) {}
+static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
+
+static inline void elv_fq_activate_rq(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_fq_deactivate_rq(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_fq_dispatched_request(struct elevator_queue *e,
+						struct request *rq)
+{
+}
+
+static inline void elv_ioq_request_removed(struct elevator_queue *e,
+						struct request *rq)
+{
+}
+
+static inline void elv_ioq_request_add(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_ioq_completed_request(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
+static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
+static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
+{
+	return NULL;
+}
+#endif /* CONFIG_ELV_FAIR_QUEUING */
+#endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index 7073a90..c2f07f5 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
 	for (i = 0; i < ELV_HASH_ENTRIES; i++)
 		INIT_HLIST_HEAD(&eq->hash[i]);
 
+	if (elv_init_fq_data(q, eq))
+		goto err;
+
 	return eq;
 err:
 	kfree(eq);
@@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
 void elevator_exit(struct elevator_queue *e)
 {
 	mutex_lock(&e->sysfs_lock);
+	elv_exit_fq_data(e);
 	if (e->ops->elevator_exit_fn)
 		e->ops->elevator_exit_fn(e);
 	e->ops = NULL;
+	elv_exit_fq_data_post(e);
 	mutex_unlock(&e->sysfs_lock);
 
 	kobject_put(&e->kobj);
@@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	elv_fq_activate_rq(q, rq);
+
 	if (e->ops->elevator_activate_req_fn)
 		e->ops->elevator_activate_req_fn(q, rq);
 }
@@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	elv_fq_deactivate_rq(q, rq);
+
 	if (e->ops->elevator_deactivate_req_fn)
 		e->ops->elevator_deactivate_req_fn(q, rq);
 }
@@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 	elv_rqhash_del(q, rq);
 
 	q->nr_sorted--;
+	elv_fq_dispatched_request(q->elevator, rq);
 
 	boundary = q->end_sector;
 	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
@@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
 	elv_rqhash_del(q, rq);
 
 	q->nr_sorted--;
+	elv_fq_dispatched_request(q->elevator, rq);
 
 	q->end_sector = rq_end_sector(rq);
 	q->boundary_rq = rq;
@@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
 	elv_rqhash_del(q, next);
 
 	q->nr_sorted--;
+	elv_ioq_request_removed(e, next);
 	q->last_merge = rq;
 }
 
@@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 				q->last_merge = rq;
 		}
 
-		/*
-		 * Some ioscheds (cfq) run q->request_fn directly, so
-		 * rq cannot be accessed after calling
-		 * elevator_add_req_fn.
-		 */
 		q->elevator->ops->elevator_add_req_fn(q, rq);
+		elv_ioq_request_add(q, rq);
 		break;
 
 	case ELEVATOR_INSERT_REQUEUE:
@@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
 
 int elv_queue_empty(struct request_queue *q)
 {
-	struct elevator_queue *e = q->elevator;
-
 	if (!list_empty(&q->queue_head))
 		return 0;
 
-	if (e->ops->elevator_queue_empty_fn)
-		return e->ops->elevator_queue_empty_fn(q);
+	/* Hopefully nr_sorted is accurate and there is no need to call queue_empty_fn */
+	if (q->nr_sorted)
+		return 0;
 
 	return 1;
 }
@@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 	 */
 	if (blk_account_rq(rq)) {
 		q->in_flight--;
-		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
-			e->ops->elevator_completed_req_fn(q, rq);
+		if (blk_sorted_rq(rq)) {
+			if (e->ops->elevator_completed_req_fn)
+				e->ops->elevator_completed_req_fn(q, rq);
+			elv_ioq_completed_request(q, rq);
+		}
 	}
 
 	/*
@@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
 	return NULL;
 }
 EXPORT_SYMBOL(elv_rb_latter_request);
+
+/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq */
+void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
+{
+	return ioq_sched_queue(rq_ioq(rq));
+}
+EXPORT_SYMBOL(elv_get_sched_queue);
+
+/* Select an ioscheduler queue to dispatch request from. */
+void *elv_select_sched_queue(struct request_queue *q, int force)
+{
+	return ioq_sched_queue(elv_fq_select_ioq(q, force));
+}
+EXPORT_SYMBOL(elv_select_sched_queue);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b4f71f1..96a94c9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -245,6 +245,11 @@ struct request {
 
 	/* for bidi */
 	struct request *next_rq;
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	/* io queue request belongs to */
+	struct io_queue *ioq;
+#endif
 };
 
 static inline unsigned short req_get_ioprio(struct request *req)
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index c59b769..679c149 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -2,6 +2,7 @@
 #define _LINUX_ELEVATOR_H
 
 #include <linux/percpu.h>
+#include "../../block/elevator-fq.h"
 
 #ifdef CONFIG_BLOCK
 
@@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
+#ifdef CONFIG_ELV_FAIR_QUEUING
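+/* Optional hooks an io scheduler can provide when it uses the common fair
+ * queuing (ELV_FAIR_QUEUING) layer. */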
+typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
+typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
+typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
+typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
+typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
+						struct request*);
+typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
+						struct request*);
+typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
+						void*, int probe);
+#endif
 
 struct elevator_ops
 {
@@ -56,6 +69,17 @@ struct elevator_ops
 	elevator_init_fn *elevator_init_fn;
 	elevator_exit_fn *elevator_exit_fn;
 	void (*trim)(struct io_context *);
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
+	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
+	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
+
+	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
+	elevator_should_preempt_fn *elevator_should_preempt_fn;
+	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
+	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
+#endif
 };
 
 #define ELV_NAME_MAX	(16)
@@ -76,6 +100,9 @@ struct elevator_type
 	struct elv_fs_entry *elevator_attrs;
 	char elevator_name[ELV_NAME_MAX];
 	struct module *elevator_owner;
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	int elevator_features;
+#endif
 };
 
 /*
@@ -89,6 +116,10 @@ struct elevator_queue
 	struct elevator_type *elevator_type;
 	struct mutex sysfs_lock;
 	struct hlist_head *hash;
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	/* fair queuing data */
+	struct elv_fq_data efqd;
+#endif
 };
 
 /*
@@ -209,5 +240,25 @@ enum {
 	__val;							\
 })
 
+/* An iosched can let the elevator know its feature set/capability */
+#ifdef CONFIG_ELV_FAIR_QUEUING
+
+/* iosched wants to use fq logic of elevator layer */
+#define	ELV_IOSCHED_NEED_FQ	1
+
+static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
+{
+	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
+}
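+
+/*
+ * Illustration only (iosched_foo is hypothetical): an io scheduler opts in
+ * to the common fair queuing layer by setting this flag in its
+ * elevator_type, e.g.
+ *
+ *	static struct elevator_type iosched_foo = {
+ *		...
+ *		.elevator_features = ELV_IOSCHED_NEED_FQ,
+ *	};
+ */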
+
+#else /* ELV_IOSCHED_FAIR_QUEUING */
+
+static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
+{
+	return 0;
+}
+#endif /* ELV_IOSCHED_FAIR_QUEUING */
+extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
+extern void *elv_select_sched_queue(struct request_queue *q, int force);
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 02/20] io-controller: Common flat fair queuing code in elevator layer
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

This is the common fair queuing code in the elevator layer. It is controlled by
the config option CONFIG_ELV_FAIR_QUEUING. This patch initially introduces only
flat fair queuing support, where there is a single group, the "root group", and
all tasks belong to it.

These elevator layer changes are backward compatible. That means any ioscheduler
using the old interfaces will continue to work.

This code is essentially the CFQ code for fair queuing. The primary difference
is that the flat round-robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
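
To make the difference concrete, here is a much simplified sketch (not code
from this patch) of the B-WF2Q+ idea: every backlogged queue carries virtual
start/finish timestamps, and the scheduler always dispatches from the eligible
queue with the smallest finish time, so disk time is shared in proportion to
weight instead of being handed out round robin.

	/*
	 * Simplified: the real bfq_calc_finish()/bfq_delta() below scale the
	 * division with WFQ_SERVICE_SHIFT and use do_div(). weight is assumed
	 * to be non-zero.
	 */
	static unsigned long long sketch_calc_finish(unsigned long long start,
						     unsigned long service,
						     unsigned long weight)
	{
		/* F_i = S_i + service/weight */
		return start + service / weight;
	}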

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   13 +
 block/Makefile           |    1 +
 block/elevator-fq.c      | 2015 ++++++++++++++++++++++++++++++++++++++++++++++
 block/elevator-fq.h      |  473 +++++++++++
 block/elevator.c         |   46 +-
 include/linux/blkdev.h   |    5 +
 include/linux/elevator.h |   51 ++
 7 files changed, 2593 insertions(+), 11 deletions(-)
 create mode 100644 block/elevator-fq.c
 create mode 100644 block/elevator-fq.h

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 7e803fc..3398134 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -2,6 +2,19 @@ if BLOCK
 
 menu "IO Schedulers"
 
+config ELV_FAIR_QUEUING
+	bool "Elevator Fair Queuing Support"
+	default n
+	---help---
+	  Traditionally only cfq had the notion of multiple queues and did
+	  fair queuing on its own. With cgroups and the need to control IO,
+	  even the simple io schedulers like noop, deadline and as will have
+	  one queue per cgroup and will need hierarchical fair queuing.
+	  Instead of every io scheduler implementing its own fair queuing
+	  logic, this option enables fair queuing in the elevator layer so
+	  that other ioschedulers can make use of it.
+	  If unsure, say N.
+
 config IOSCHED_NOOP
 	bool
 	default y
diff --git a/block/Makefile b/block/Makefile
index e9fa4dd..94bfc6e 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
 
 obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
 obj-$(CONFIG_BLK_DEV_INTEGRITY)	+= blk-integrity.o
+obj-$(CONFIG_ELV_FAIR_QUEUING)	+= elevator-fq.o
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
new file mode 100644
index 0000000..9357fb0
--- /dev/null
+++ b/block/elevator-fq.c
@@ -0,0 +1,2015 @@
+/*
+ * BFQ: Hierarchical B-WF2Q+ scheduler.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+ *
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ *		      Paolo Valente <paolo.valente@unimore.it>
+ * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
+ * 	              Nauman Rafique <nauman@google.com>
+ */
+
+#include <linux/blkdev.h>
+#include "elevator-fq.h"
+#include <linux/blktrace_api.h>
+
+/* Values taken from cfq */
+const int elv_slice_sync = HZ / 10;
+int elv_slice_async = HZ / 25;
+const int elv_slice_async_rq = 2;
+int elv_slice_idle = HZ / 125;
+static struct kmem_cache *elv_ioq_pool;
+
+#define ELV_SLICE_SCALE		(5)
+#define ELV_HW_QUEUE_MIN	(5)
+#define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
+				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
+
+static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
+					struct io_queue *ioq, int probe);
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract);
+
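+/*
+ * Scale the base slice by io priority: prio 4 (the default) gets the base
+ * slice, each step towards prio 0 adds base_slice/ELV_SLICE_SCALE and each
+ * step towards prio 7 subtracts it (the same scaling CFQ uses for its slices).
+ */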
+static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
+					unsigned short prio)
+{
+	const int base_slice = efqd->elv_slice[sync];
+
+	WARN_ON(prio >= IOPRIO_BE_NR);
+
+	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
+}
+
+static inline int
+elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
+{
+	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
+}
+
+/* Mainly the BFQ scheduling code follows */
+
+/*
+ * Shift for timestamp calculations.  This actually limits the maximum
+ * service allowed in one timestamp delta (small shift values increase it),
+ * the maximum total weight that can be used for the queues in the system
+ * (big shift values increase it), and the period of virtual time wraparounds.
+ */
+#define WFQ_SERVICE_SHIFT	22
+
+/**
+ * bfq_gt - compare two timestamps.
+ * @a: first ts.
+ * @b: second ts.
+ *
+ * Return @a > @b, dealing with wrapping correctly.
+ */
+static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
+{
+	return (s64)(a - b) > 0;
+}
+
+/**
+ * bfq_delta - map service into the virtual time domain.
+ * @service: amount of service.
+ * @weight: scale factor.
+ */
+static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
+					bfq_weight_t weight)
+{
+	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
+
+	do_div(d, weight);
+	return d;
+}
+
+/**
+ * bfq_calc_finish - assign the finish time to an entity.
+ * @entity: the entity to act upon.
+ * @service: the service to be charged to the entity.
+ */
+static inline void bfq_calc_finish(struct io_entity *entity,
+				   bfq_service_t service)
+{
+	BUG_ON(entity->weight == 0);
+
+	entity->finish = entity->start + bfq_delta(service, entity->weight);
+}
+
+static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
+{
+	struct io_queue *ioq = NULL;
+
+	BUG_ON(entity == NULL);
+	if (entity->my_sched_data == NULL)
+		ioq = container_of(entity, struct io_queue, entity);
+	return ioq;
+}
+
+/**
+ * bfq_entity_of - get an entity from a node.
+ * @node: the node field of the entity.
+ *
+ * Convert a node pointer to the relative entity.  This is used only
+ * to simplify the logic of some functions and not as the generic
+ * conversion mechanism because, e.g., in the tree walking functions,
+ * the check for a %NULL value would be redundant.
+ */
+static inline struct io_entity *bfq_entity_of(struct rb_node *node)
+{
+	struct io_entity *entity = NULL;
+
+	if (node != NULL)
+		entity = rb_entry(node, struct io_entity, rb_node);
+
+	return entity;
+}
+
+/**
+ * bfq_extract - remove an entity from a tree.
+ * @root: the tree root.
+ * @entity: the entity to remove.
+ */
+static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
+{
+	BUG_ON(entity->tree != root);
+
+	entity->tree = NULL;
+	rb_erase(&entity->rb_node, root);
+}
+
+/**
+ * bfq_idle_extract - extract an entity from the idle tree.
+ * @st: the service tree of the owning @entity.
+ * @entity: the entity being removed.
+ */
+static void bfq_idle_extract(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct rb_node *next;
+
+	BUG_ON(entity->tree != &st->idle);
+
+	if (entity == st->first_idle) {
+		next = rb_next(&entity->rb_node);
+		st->first_idle = bfq_entity_of(next);
+	}
+
+	if (entity == st->last_idle) {
+		next = rb_prev(&entity->rb_node);
+		st->last_idle = bfq_entity_of(next);
+	}
+
+	bfq_extract(&st->idle, entity);
+}
+
+/**
+ * bfq_insert - generic tree insertion.
+ * @root: tree root.
+ * @entity: entity to insert.
+ *
+ * This is used for the idle and the active tree, since they are both
+ * ordered by finish time.
+ */
+static void bfq_insert(struct rb_root *root, struct io_entity *entity)
+{
+	struct io_entity *entry;
+	struct rb_node **node = &root->rb_node;
+	struct rb_node *parent = NULL;
+
+	BUG_ON(entity->tree != NULL);
+
+	while (*node != NULL) {
+		parent = *node;
+		entry = rb_entry(parent, struct io_entity, rb_node);
+
+		if (bfq_gt(entry->finish, entity->finish))
+			node = &parent->rb_left;
+		else
+			node = &parent->rb_right;
+	}
+
+	rb_link_node(&entity->rb_node, parent, node);
+	rb_insert_color(&entity->rb_node, root);
+
+	entity->tree = root;
+}
+
+/**
+ * bfq_update_min - update the min_start field of a entity.
+ * @entity: the entity to update.
+ * @node: one of its children.
+ *
+ * This function is called when @entity may store an invalid value for
+ * min_start due to updates to the active tree.  The function  assumes
+ * that the subtree rooted at @node (which may be its left or its right
+ * child) has a valid min_start value.
+ */
+static inline void bfq_update_min(struct io_entity *entity,
+					struct rb_node *node)
+{
+	struct io_entity *child;
+
+	if (node != NULL) {
+		child = rb_entry(node, struct io_entity, rb_node);
+		if (bfq_gt(entity->min_start, child->min_start))
+			entity->min_start = child->min_start;
+	}
+}
+
+/**
+ * bfq_update_active_node - recalculate min_start.
+ * @node: the node to update.
+ *
+ * @node may have changed position or one of its children may have moved,
+ * this function updates its min_start value.  The left and right subtrees
+ * are assumed to hold a correct min_start value.
+ */
+static inline void bfq_update_active_node(struct rb_node *node)
+{
+	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
+
+	entity->min_start = entity->start;
+	bfq_update_min(entity, node->rb_right);
+	bfq_update_min(entity, node->rb_left);
+}
+
+/**
+ * bfq_update_active_tree - update min_start for the whole active tree.
+ * @node: the starting node.
+ *
+ * @node must be the deepest modified node after an update.  This function
+ * updates its min_start using the values held by its children, assuming
+ * that they did not change, and then updates all the nodes that may have
+ * changed in the path to the root.  The only nodes that may have changed
+ * are the ones in the path or their siblings.
+ */
+static void bfq_update_active_tree(struct rb_node *node)
+{
+	struct rb_node *parent;
+
+up:
+	bfq_update_active_node(node);
+
+	parent = rb_parent(node);
+	if (parent == NULL)
+		return;
+
+	if (node == parent->rb_left && parent->rb_right != NULL)
+		bfq_update_active_node(parent->rb_right);
+	else if (parent->rb_left != NULL)
+		bfq_update_active_node(parent->rb_left);
+
+	node = parent;
+	goto up;
+}
+
+/**
+ * bfq_active_insert - insert an entity in the active tree of its group/device.
+ * @st: the service tree of the entity.
+ * @entity: the entity being inserted.
+ *
+ * The active tree is ordered by finish time, but an extra key is kept
+ * per each node, containing the minimum value for the start times of
+ * its children (and the node itself), so it's possible to search for
+ * the eligible node with the lowest finish time in logarithmic time.
+ */
+static void bfq_active_insert(struct io_service_tree *st,
+					struct io_entity *entity)
+{
+	struct rb_node *node = &entity->rb_node;
+
+	bfq_insert(&st->active, entity);
+
+	if (node->rb_left != NULL)
+		node = node->rb_left;
+	else if (node->rb_right != NULL)
+		node = node->rb_right;
+
+	bfq_update_active_tree(node);
+}
+
+/**
+ * bfq_ioprio_to_weight - calc a weight from an ioprio.
+ * @ioprio: the ioprio value to convert.
+ */
+static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
+{
+	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+	return IOPRIO_BE_NR - ioprio;
+}
+
+void bfq_get_entity(struct io_entity *entity)
+{
+	struct io_queue *ioq = io_entity_to_ioq(entity);
+
+	if (ioq)
+		elv_get_ioq(ioq);
+}
+
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->sched_data = &iog->sched_data;
+}
+
+/**
+ * bfq_find_deepest - find the deepest node that an extraction can modify.
+ * @node: the node being removed.
+ *
+ * Do the first step of an extraction in an rb tree, looking for the
+ * node that will replace @node, and returning the deepest node that
+ * the following modifications to the tree can touch.  If @node is the
+ * last node in the tree return %NULL.
+ */
+static struct rb_node *bfq_find_deepest(struct rb_node *node)
+{
+	struct rb_node *deepest;
+
+	if (node->rb_right == NULL && node->rb_left == NULL)
+		deepest = rb_parent(node);
+	else if (node->rb_right == NULL)
+		deepest = node->rb_left;
+	else if (node->rb_left == NULL)
+		deepest = node->rb_right;
+	else {
+		deepest = rb_next(node);
+		if (deepest->rb_right != NULL)
+			deepest = deepest->rb_right;
+		else if (rb_parent(deepest) != node)
+			deepest = rb_parent(deepest);
+	}
+
+	return deepest;
+}
+
+/**
+ * bfq_active_extract - remove an entity from the active tree.
+ * @st: the service_tree containing the tree.
+ * @entity: the entity being removed.
+ */
+static void bfq_active_extract(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct rb_node *node;
+
+	node = bfq_find_deepest(&entity->rb_node);
+	bfq_extract(&st->active, entity);
+
+	if (node != NULL)
+		bfq_update_active_tree(node);
+}
+
+/**
+ * bfq_idle_insert - insert an entity into the idle tree.
+ * @st: the service tree containing the tree.
+ * @entity: the entity to insert.
+ */
+static void bfq_idle_insert(struct io_service_tree *st,
+					struct io_entity *entity)
+{
+	struct io_entity *first_idle = st->first_idle;
+	struct io_entity *last_idle = st->last_idle;
+
+	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
+		st->first_idle = entity;
+	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
+		st->last_idle = entity;
+
+	bfq_insert(&st->idle, entity);
+}
+
+/**
+ * bfq_forget_entity - remove an entity from the wfq trees.
+ * @st: the service tree.
+ * @entity: the entity being removed.
+ *
+ * Update the device status and forget everything about @entity, putting
+ * the device reference to it, if it is a queue.  Entities belonging to
+ * groups are not refcounted.
+ */
+static void bfq_forget_entity(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct io_queue *ioq = NULL;
+
+	BUG_ON(!entity->on_st);
+	entity->on_st = 0;
+	st->wsum -= entity->weight;
+	ioq = io_entity_to_ioq(entity);
+	if (!ioq)
+		return;
+	elv_put_ioq(ioq);
+}
+
+/**
+ * bfq_put_idle_entity - release the idle tree ref of an entity.
+ * @st: service tree for the entity.
+ * @entity: the entity being released.
+ */
+void bfq_put_idle_entity(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	bfq_idle_extract(st, entity);
+	bfq_forget_entity(st, entity);
+}
+
+/**
+ * bfq_forget_idle - update the idle tree if necessary.
+ * @st: the service tree to act upon.
+ *
+ * To preserve the global O(log N) complexity we only remove one entry here;
+ * as the idle tree will not grow indefinitely this can be done safely.
+ */
+void bfq_forget_idle(struct io_service_tree *st)
+{
+	struct io_entity *first_idle = st->first_idle;
+	struct io_entity *last_idle = st->last_idle;
+
+	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
+	    !bfq_gt(last_idle->finish, st->vtime)) {
+		/*
+		 * Active tree is empty. Pull back vtime to finish time of
+		 * last idle entity on idle tree.
+		 * The rationale seems to be that it reduces the possibility of
+		 * vtime wraparound (bfq_gt(V-F) < 0).
+		 */
+		st->vtime = last_idle->finish;
+	}
+
+	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
+		bfq_put_idle_entity(st, first_idle);
+}
+
+
+static struct io_service_tree *
+__bfq_entity_update_prio(struct io_service_tree *old_st,
+				struct io_entity *entity)
+{
+	struct io_service_tree *new_st = old_st;
+	struct io_queue *ioq = io_entity_to_ioq(entity);
+
+	if (entity->ioprio_changed) {
+		entity->ioprio = entity->new_ioprio;
+		entity->ioprio_class = entity->new_ioprio_class;
+		entity->ioprio_changed = 0;
+
+		/*
+		 * Also update the scaled budget for ioq. Group will get the
+		 * updated budget once ioq is selected to run next.
+		 */
+		if (ioq) {
+			struct elv_fq_data *efqd = ioq->efqd;
+			entity->budget = elv_prio_to_slice(efqd, ioq);
+		}
+
+		old_st->wsum -= entity->weight;
+		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
+
+		/*
+		 * NOTE: here we may be changing the weight too early,
+		 * this will cause unfairness.  The correct approach
+		 * would have required additional complexity to defer
+		 * weight changes to the proper time instants (i.e.,
+		 * when entity->finish <= old_st->vtime).
+		 */
+		new_st = io_entity_service_tree(entity);
+		new_st->wsum += entity->weight;
+
+		if (new_st != old_st)
+			entity->start = new_st->vtime;
+	}
+
+	return new_st;
+}
+
+/**
+ * __bfq_activate_entity - activate an entity.
+ * @entity: the entity being activated.
+ *
+ * Called whenever an entity is activated, i.e., it is not active and one
+ * of its children receives a new request, or has to be reactivated due to
+ * budget exhaustion.  It uses the current budget of the entity (and the
+ * service received if @entity is active) of the queue to calculate its
+ * timestamps.
+ */
+static void __bfq_activate_entity(struct io_entity *entity, int add_front)
+{
+	struct io_sched_data *sd = entity->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+
+	if (entity == sd->active_entity) {
+		BUG_ON(entity->tree != NULL);
+		/*
+		 * If we are requeueing the current entity we have
+		 * to take care of not charging to it service it has
+		 * not received.
+		 */
+		bfq_calc_finish(entity, entity->service);
+		entity->start = entity->finish;
+		sd->active_entity = NULL;
+	} else if (entity->tree == &st->active) {
+		/*
+		 * Requeueing an entity due to a change of some
+		 * next_active entity below it.  We reuse the old
+		 * start time.
+		 */
+		bfq_active_extract(st, entity);
+	} else if (entity->tree == &st->idle) {
+		/*
+		 * Must be on the idle tree, bfq_idle_extract() will
+		 * check for that.
+		 */
+		bfq_idle_extract(st, entity);
+		entity->start = bfq_gt(st->vtime, entity->finish) ?
+				       st->vtime : entity->finish;
+	} else {
+		/*
+		 * The finish time of the entity may be invalid, and
+		 * it is in the past for sure, otherwise the queue
+		 * would have been on the idle tree.
+		 */
+		entity->start = st->vtime;
+		st->wsum += entity->weight;
+		bfq_get_entity(entity);
+
+		BUG_ON(entity->on_st);
+		entity->on_st = 1;
+	}
+
+	st = __bfq_entity_update_prio(st, entity);
+	/*
+	 * This is to emulate cfq-like functionality where preemption can
+	 * happen within the same class, e.g. a sync queue preempting an
+	 * async queue. This may not be a very good idea from a fairness
+	 * point of view, as the preempting queue gains share. Keeping it for now.
+	 */
+	if (add_front) {
+		struct io_entity *next_entity;
+
+		/*
+		 * Determine the entity which will be dispatched next.
+		 * Use sd->next_active once the hierarchical patch is applied.
+		 */
+		next_entity = bfq_lookup_next_entity(sd, 0);
+
+		if (next_entity && next_entity != entity) {
+			struct io_service_tree *new_st;
+			bfq_timestamp_t delta;
+
+			new_st = io_entity_service_tree(next_entity);
+
+			/*
+			 * At this point, both entities should belong to the
+			 * same service tree, as cross service tree preemption
+			 * is automatically taken care of by the algorithm.
+			 */
+			BUG_ON(new_st != st);
+			entity->finish = next_entity->finish - 1;
+			delta = bfq_delta(entity->budget, entity->weight);
+			entity->start = entity->finish - delta;
+			if (bfq_gt(entity->start, st->vtime))
+				entity->start = st->vtime;
+		}
+	} else {
+		bfq_calc_finish(entity, entity->budget);
+	}
+	bfq_active_insert(st, entity);
+}
+
+/**
+ * bfq_activate_entity - activate an entity.
+ * @entity: the entity to activate.
+ */
+void bfq_activate_entity(struct io_entity *entity, int add_front)
+{
+	__bfq_activate_entity(entity, add_front);
+}
+
+/**
+ * __bfq_deactivate_entity - deactivate an entity from its service tree.
+ * @entity: the entity to deactivate.
+ * @requeue: if false, the entity will not be put into the idle tree.
+ *
+ * Deactivate an entity, independently from its previous state.  If the
+ * entity was not on a service tree just return, otherwise if it is on
+ * any scheduler tree, extract it from that tree, and if necessary
+ * and if the caller specified @requeue, put it on the idle tree.
+ *
+ */
+int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
+{
+	struct io_sched_data *sd = entity->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+	int was_active = entity == sd->active_entity;
+	int ret = 0;
+
+	if (!entity->on_st)
+		return 0;
+
+	BUG_ON(was_active && entity->tree != NULL);
+
+	if (was_active) {
+		bfq_calc_finish(entity, entity->service);
+		sd->active_entity = NULL;
+	} else if (entity->tree == &st->active)
+		bfq_active_extract(st, entity);
+	else if (entity->tree == &st->idle)
+		bfq_idle_extract(st, entity);
+	else if (entity->tree != NULL)
+		BUG();
+
+	if (!requeue || !bfq_gt(entity->finish, st->vtime))
+		bfq_forget_entity(st, entity);
+	else
+		bfq_idle_insert(st, entity);
+
+	BUG_ON(sd->active_entity == entity);
+
+	return ret;
+}
+
+/**
+ * bfq_deactivate_entity - deactivate an entity.
+ * @entity: the entity to deactivate.
+ * @requeue: true if the entity can be put on the idle tree
+ */
+void bfq_deactivate_entity(struct io_entity *entity, int requeue)
+{
+	__bfq_deactivate_entity(entity, requeue);
+}
+
+/**
+ * bfq_update_vtime - update vtime if necessary.
+ * @st: the service tree to act upon.
+ *
+ * If necessary update the service tree vtime to have at least one
+ * eligible entity, skipping to its start time.  Assumes that the
+ * active tree of the device is not empty.
+ *
+ * NOTE: this hierarchical implementation updates vtimes quite often,
+ * we may end up with reactivated tasks getting timestamps after a
+ * vtime skip done because we needed a ->first_active entity on some
+ * intermediate node.
+ */
+static void bfq_update_vtime(struct io_service_tree *st)
+{
+	struct io_entity *entry;
+	struct rb_node *node = st->active.rb_node;
+
+	entry = rb_entry(node, struct io_entity, rb_node);
+	if (bfq_gt(entry->min_start, st->vtime)) {
+		st->vtime = entry->min_start;
+		bfq_forget_idle(st);
+	}
+}
+
+/**
+ * bfq_first_active_entity - find the eligible entity with the smallest finish time
+ * @st: the service tree to select from.
+ *
+ * This function searches for the first schedulable entity, starting from
+ * the root of the tree and descending to the left whenever the left subtree
+ * contains at least one eligible (start <= vtime) entity.  The path
+ * on the right is followed only if a) the left subtree contains no eligible
+ * entities and b) no eligible entity has been found yet.
+ */
+static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
+{
+	struct io_entity *entry, *first = NULL;
+	struct rb_node *node = st->active.rb_node;
+
+	while (node != NULL) {
+		entry = rb_entry(node, struct io_entity, rb_node);
+left:
+		if (!bfq_gt(entry->start, st->vtime))
+			first = entry;
+
+		BUG_ON(bfq_gt(entry->min_start, st->vtime));
+
+		if (node->rb_left != NULL) {
+			entry = rb_entry(node->rb_left,
+					 struct io_entity, rb_node);
+			if (!bfq_gt(entry->min_start, st->vtime)) {
+				node = node->rb_left;
+				goto left;
+			}
+		}
+		if (first != NULL)
+			break;
+		node = node->rb_right;
+	}
+
+	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
+	return first;
+}
+
+/**
+ * __bfq_lookup_next_entity - return the first eligible entity in @st.
+ * @st: the service tree.
+ *
+ * Update the virtual time in @st and return the first eligible entity
+ * it contains.
+ */
+static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
+{
+	struct io_entity *entity;
+
+	if (RB_EMPTY_ROOT(&st->active))
+		return NULL;
+
+	bfq_update_vtime(st);
+	entity = bfq_first_active_entity(st);
+	BUG_ON(bfq_gt(entity->start, st->vtime));
+
+	return entity;
+}
+
+/**
+ * bfq_lookup_next_entity - return the first eligible entity in @sd.
+ * @sd: the sched_data.
+ * @extract: if true the returned entity will be also extracted from @sd.
+ *
+ * NOTE: since we cache the next_active entity at each level of the
+ * hierarchy, the complexity of the lookup can be decreased with
+ * absolutely no effort just returning the cached next_active value;
+ * we prefer to do full lookups to test the consistency of the data
+ * structures.
+ */
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract)
+{
+	struct io_service_tree *st = sd->service_tree;
+	struct io_entity *entity;
+	int i;
+
+	/*
+	 * We should not call lookup when an entity is active, as doing lookup
+	 * can result in an erroneous vtime jump.
+	 */
+	BUG_ON(sd->active_entity != NULL);
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
+		entity = __bfq_lookup_next_entity(st);
+		if (entity != NULL) {
+			if (extract) {
+				bfq_active_extract(st, entity);
+				sd->active_entity = entity;
+			}
+			break;
+		}
+	}
+
+	return entity;
+}
+
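+/*
+ * Charge @served units of service to @entity: add them to the entity's
+ * service counter, advance the service tree vtime by
+ * bfq_delta(served, st->wsum), and reclaim idle entities if possible.
+ */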
+void entity_served(struct io_entity *entity, bfq_service_t served)
+{
+	struct io_service_tree *st;
+
+	st = io_entity_service_tree(entity);
+	entity->service += served;
+	BUG_ON(st->wsum == 0);
+	st->vtime += bfq_delta(served, st->wsum);
+	bfq_forget_idle(st);
+}
+
+/**
+ * io_flush_idle_tree - deactivate any entity on the idle tree of @st.
+ * @st: the service tree being flushed.
+ */
+void io_flush_idle_tree(struct io_service_tree *st)
+{
+	struct io_entity *entity = st->first_idle;
+
+	for (; entity != NULL; entity = st->first_idle)
+		__bfq_deactivate_entity(entity, 0);
+}
+
+/* Elevator fair queuing functions */
+struct io_queue *rq_ioq(struct request *rq)
+{
+	return rq->ioq;
+}
+
+static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
+{
+	return e->efqd.active_queue;
+}
+
+void *elv_active_sched_queue(struct elevator_queue *e)
+{
+	return ioq_sched_queue(elv_active_ioq(e));
+}
+EXPORT_SYMBOL(elv_active_sched_queue);
+
+int elv_nr_busy_ioq(struct elevator_queue *e)
+{
+	return e->efqd.busy_queues;
+}
+EXPORT_SYMBOL(elv_nr_busy_ioq);
+
+int elv_hw_tag(struct elevator_queue *e)
+{
+	return e->efqd.hw_tag;
+}
+EXPORT_SYMBOL(elv_hw_tag);
+
+/* Helper functions for operating on elevator idle slice timer */
+int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+
+	return mod_timer(&efqd->idle_slice_timer, expires);
+}
+EXPORT_SYMBOL(elv_mod_idle_slice_timer);
+
+int elv_del_idle_slice_timer(struct elevator_queue *eq)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+
+	return del_timer(&efqd->idle_slice_timer);
+}
+EXPORT_SYMBOL(elv_del_idle_slice_timer);
+
+unsigned int elv_get_slice_idle(struct elevator_queue *eq)
+{
+	return eq->efqd.elv_slice_idle;
+}
+EXPORT_SYMBOL(elv_get_slice_idle);
+
+void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
+{
+	entity_served(&ioq->entity, served);
+}
+
+/* Tells whether ioq is queued in root group or not */
+static inline int is_root_group_ioq(struct request_queue *q,
+					struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
+}
+
+/*
+ * sysfs parts below -->
+ */
+static ssize_t
+elv_var_show(unsigned int var, char *page)
+{
+	return sprintf(page, "%d\n", var);
+}
+
+static ssize_t
+elv_var_store(unsigned int *var, const char *page, size_t count)
+{
+	char *p = (char *) page;
+
+	*var = simple_strtoul(p, &p, 10);
+	return count;
+}
+
+#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
+ssize_t __FUNC(struct elevator_queue *e, char *page)		\
+{									\
+	struct elv_fq_data *efqd = &e->efqd;				\
+	unsigned int __data = __VAR;					\
+	if (__CONV)							\
+		__data = jiffies_to_msecs(__data);			\
+	return elv_var_show(__data, (page));				\
+}
+SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
+EXPORT_SYMBOL(elv_slice_idle_show);
+SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
+EXPORT_SYMBOL(elv_slice_sync_show);
+SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
+EXPORT_SYMBOL(elv_slice_async_show);
+#undef SHOW_FUNCTION
+
+#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
+ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
+{									\
+	struct elv_fq_data *efqd = &e->efqd;				\
+	unsigned int __data;						\
+	int ret = elv_var_store(&__data, (page), count);		\
+	if (__data < (MIN))						\
+		__data = (MIN);						\
+	else if (__data > (MAX))					\
+		__data = (MAX);						\
+	if (__CONV)							\
+		*(__PTR) = msecs_to_jiffies(__data);			\
+	else								\
+		*(__PTR) = __data;					\
+	return ret;							\
+}
+STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_idle_store);
+STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_sync_store);
+STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_async_store);
+#undef STORE_FUNCTION
+
+void elv_schedule_dispatch(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (elv_nr_busy_ioq(q->elevator)) {
+		elv_log(efqd, "schedule dispatch");
+		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
+	}
+}
+EXPORT_SYMBOL(elv_schedule_dispatch);
+
+void elv_kick_queue(struct work_struct *work)
+{
+	struct elv_fq_data *efqd =
+		container_of(work, struct elv_fq_data, unplug_work);
+	struct request_queue *q = efqd->queue;
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	blk_start_queueing(q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+void elv_shutdown_timer_wq(struct elevator_queue *e)
+{
+	del_timer_sync(&e->efqd.idle_slice_timer);
+	cancel_work_sync(&e->efqd.unplug_work);
+}
+EXPORT_SYMBOL(elv_shutdown_timer_wq);
+
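+/* Start a new time slice: slice_end = jiffies + the entity's budget. */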
+void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	ioq->slice_end = jiffies + ioq->entity.budget;
+	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
+}
+
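+/*
+ * Update the exponentially weighted think time of the queue, i.e. the time
+ * between the completion of a request and the arrival of the next one,
+ * capped at twice the idle slice.
+ */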
+static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = ioq->efqd;
+	unsigned long elapsed = jiffies - ioq->last_end_request;
+	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
+
+	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
+	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
+	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
+}
+
+/*
+ * Disable idle window if the process thinks too long.
+ * This idle flag can also be updated by io scheduler.
+ */
+static void elv_ioq_update_idle_window(struct elevator_queue *eq,
+				struct io_queue *ioq, struct request *rq)
+{
+	int old_idle, enable_idle;
+	struct elv_fq_data *efqd = ioq->efqd;
+
+	/*
+	 * Don't idle for async or idle io prio class
+	 */
+	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
+		return;
+
+	enable_idle = old_idle = elv_ioq_idle_window(ioq);
+
+	if (!efqd->elv_slice_idle)
+		enable_idle = 0;
+	else if (ioq_sample_valid(ioq->ttime_samples)) {
+		if (ioq->ttime_mean > efqd->elv_slice_idle)
+			enable_idle = 0;
+		else
+			enable_idle = 1;
+	}
+
+	/*
+	 * From a think time perspective idling should be enabled. Check with
+	 * the io scheduler if it wants to disable idling based on additional
+	 * considerations like seek pattern.
+	 */
+	if (enable_idle) {
+		if (eq->ops->elevator_update_idle_window_fn)
+			enable_idle = eq->ops->elevator_update_idle_window_fn(
+						eq, ioq->sched_queue, rq);
+		if (!enable_idle)
+			elv_log_ioq(efqd, ioq, "iosched disabled idle");
+	}
+
+	if (old_idle != enable_idle) {
+		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
+		if (enable_idle)
+			elv_mark_ioq_idle_window(ioq);
+		else
+			elv_clear_ioq_idle_window(ioq);
+	}
+}
+
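+/* Allocate an io queue from the slab cache, on the request queue's node. */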
+struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
+{
+	struct io_queue *ioq = NULL;
+
+	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
+	return ioq;
+}
+EXPORT_SYMBOL(elv_alloc_ioq);
+
+void elv_free_ioq(struct io_queue *ioq)
+{
+	kmem_cache_free(elv_ioq_pool, ioq);
+}
+EXPORT_SYMBOL(elv_free_ioq);
+
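+/*
+ * Initialize a newly allocated io queue: set its io priority and class,
+ * attach the io scheduler's private queue, enable idling for sync queues
+ * and assign the initial budget derived from the priority.
+ */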
+int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
+			void *sched_queue, int ioprio_class, int ioprio,
+			int is_sync)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
+
+	RB_CLEAR_NODE(&ioq->entity.rb_node);
+	atomic_set(&ioq->ref, 0);
+	ioq->efqd = efqd;
+	elv_ioq_set_ioprio_class(ioq, ioprio_class);
+	elv_ioq_set_ioprio(ioq, ioprio);
+	ioq->pid = current->pid;
+	ioq->sched_queue = sched_queue;
+	if (is_sync && !elv_ioq_class_idle(ioq))
+		elv_mark_ioq_idle_window(ioq);
+	bfq_init_entity(&ioq->entity, iog);
+	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
+	if (is_sync)
+		ioq->last_end_request = jiffies;
+
+	return 0;
+}
+EXPORT_SYMBOL(elv_init_ioq);
+
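+/*
+ * Drop a reference to the io queue. When the last reference is dropped,
+ * ask the io scheduler to free its private queue and free the ioq itself.
+ */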
+void elv_put_ioq(struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = ioq->efqd;
+	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
+						efqd);
+
+	BUG_ON(atomic_read(&ioq->ref) <= 0);
+	if (!atomic_dec_and_test(&ioq->ref))
+		return;
+	BUG_ON(ioq->nr_queued);
+	BUG_ON(ioq->entity.tree != NULL);
+	BUG_ON(elv_ioq_busy(ioq));
+	BUG_ON(efqd->active_queue == ioq);
+
+	/* Can be called by outgoing elevator. Don't use q */
+	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
+
+	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
+	elv_log_ioq(efqd, ioq, "put_queue");
+	elv_free_ioq(ioq);
+}
+EXPORT_SYMBOL(elv_put_ioq);
+
+void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
+{
+	struct io_queue *ioq = *ioq_ptr;
+
+	if (ioq != NULL) {
+		/* Drop the reference taken by the io group */
+		elv_put_ioq(ioq);
+		*ioq_ptr = NULL;
+	}
+}
+
+/*
+ * Normally the next io queue to be served is selected from the service tree.
+ * This function allows one to choose a specific io queue to run next,
+ * out of order. This is primarily to accommodate the close_cooperator
+ * feature of cfq.
+ *
+ * Currently this is done only at the root level; to begin with, the close
+ * cooperator feature is supported only for the root group, so that the
+ * default cfq behavior in a flat hierarchy is not changed.
+ */
+void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	struct io_sched_data *sd = &efqd->root_group->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+
+	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
+	BUG_ON(!efqd->busy_queues);
+	BUG_ON(sd != entity->sched_data);
+	BUG_ON(!st);
+
+	bfq_update_vtime(st);
+	bfq_active_extract(st, entity);
+	sd->active_entity = entity;
+	entity->service = 0;
+	elv_log_ioq(efqd, ioq, "set_next_ioq");
+}
+
+/* Get next queue for service. */
+struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = NULL;
+	struct io_queue *ioq = NULL;
+	struct io_sched_data *sd;
+
+	/*
+	 * We should not call lookup when an entity is active, as doing
+	 * lookup can result in an erroneous vtime jump.
+	 */
+	BUG_ON(efqd->active_queue != NULL);
+
+	if (!efqd->busy_queues)
+		return NULL;
+
+	sd = &efqd->root_group->sched_data;
+	entity = bfq_lookup_next_entity(sd, 1);
+
+	BUG_ON(!entity);
+	if (extract)
+		entity->service = 0;
+	ioq = io_entity_to_ioq(entity);
+
+	return ioq;
+}
+
+/*
+ * coop indicates that the io scheduler selected this queue for us, i.e.,
+ * we did not select the next queue based on fairness.
+ */
+static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int coop)
+{
+	struct request_queue *q = efqd->queue;
+
+	if (ioq) {
+		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
+							efqd->busy_queues);
+		ioq->slice_end = 0;
+
+		elv_clear_ioq_wait_request(ioq);
+		elv_clear_ioq_must_dispatch(ioq);
+		elv_mark_ioq_slice_new(ioq);
+
+		del_timer(&efqd->idle_slice_timer);
+	}
+
+	efqd->active_queue = ioq;
+
+	/* Let iosched know if it wants to take some action */
+	if (ioq) {
+		if (q->elevator->ops->elevator_active_ioq_set_fn)
+			q->elevator->ops->elevator_active_ioq_set_fn(q,
+							ioq->sched_queue, coop);
+	}
+}
+
+/* Get and set a new active queue for service. */
+struct io_queue *elv_set_active_ioq(struct request_queue *q,
+						struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	int coop = 0;
+
+	if (!ioq)
+		ioq = elv_get_next_ioq(q, 1);
+	else {
+		elv_set_next_ioq(q, ioq);
+		/*
+		 * io scheduler selected the next queue for us. Pass this
+		 * info back to the io scheduler. cfq currently uses it
+		 * to reset the coop flag on the queue.
+		 */
+		coop = 1;
+	}
+	__elv_set_active_ioq(efqd, ioq, coop);
+	return ioq;
+}
+
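+/* Clear the active queue, notify the io scheduler and stop idling. */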
+void elv_reset_active_ioq(struct elv_fq_data *efqd)
+{
+	struct request_queue *q = efqd->queue;
+	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
+
+	if (q->elevator->ops->elevator_active_ioq_reset_fn)
+		q->elevator->ops->elevator_active_ioq_reset_fn(q,
+							ioq->sched_queue);
+	efqd->active_queue = NULL;
+	del_timer(&efqd->idle_slice_timer);
+}
+
+void elv_activate_ioq(struct io_queue *ioq, int add_front)
+{
+	bfq_activate_entity(&ioq->entity, add_front);
+}
+
+void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int requeue)
+{
+	bfq_deactivate_entity(&ioq->entity, requeue);
+}
+
+/* Called when an inactive queue receives a new request. */
+void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
+{
+	BUG_ON(elv_ioq_busy(ioq));
+	BUG_ON(ioq == efqd->active_queue);
+	elv_log_ioq(efqd, ioq, "add to busy");
+	elv_activate_ioq(ioq, 0);
+	elv_mark_ioq_busy(ioq);
+	efqd->busy_queues++;
+	if (elv_ioq_class_rt(ioq)) {
+		struct io_group *iog = ioq_to_io_group(ioq);
+		iog->busy_rt_queues++;
+	}
+}
+
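+/* Remove the queue from busy status and deactivate its entity. */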
+void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
+					int requeue)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	BUG_ON(!elv_ioq_busy(ioq));
+	BUG_ON(ioq->nr_queued);
+	elv_log_ioq(efqd, ioq, "del from busy");
+	elv_clear_ioq_busy(ioq);
+	BUG_ON(efqd->busy_queues == 0);
+	efqd->busy_queues--;
+	if (elv_ioq_class_rt(ioq)) {
+		struct io_group *iog = ioq_to_io_group(ioq);
+		iog->busy_rt_queues--;
+	}
+
+	elv_deactivate_ioq(efqd, ioq, requeue);
+}
+
+/*
+ * Do the accounting. Determine how much service (in terms of time slices)
+ * the current queue used and adjust the start/finish time of the queue and
+ * the vtime of the tree accordingly.
+ *
+ * Determining the service used in terms of time is tricky in certain
+ * situations, especially when the underlying device supports command
+ * queuing and requests from multiple queues can be outstanding at the same
+ * time; then it is not clear which queue consumed how much disk time.
+ *
+ * To mitigate this problem, cfq starts the time slice of a queue only
+ * after the first request from the queue has completed. This does not work
+ * very well if we expire the queue before waiting for the first (and
+ * further) requests from the queue to finish. For seeky queues, we will
+ * expire the queue after dispatching a few requests without waiting and
+ * start dispatching from the next queue.
+ *
+ * It is not clear how to determine the time consumed by the queue in such
+ * scenarios. Currently, as a crude approximation, we charge 25% of the time
+ * slice for such cases. A better mechanism is needed for accurate accounting.
+ */
+void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
+
+	assert_spin_locked(q->queue_lock);
+	elv_log_ioq(efqd, ioq, "slice expired");
+
+	if (elv_ioq_wait_request(ioq))
+		del_timer(&efqd->idle_slice_timer);
+
+	elv_clear_ioq_wait_request(ioq);
+
+	/*
+	 * If ioq->slice_end == 0, the queue was expired before the first
+	 * request from the queue got completed. Of course we are not planning
+	 * to idle on the queue, otherwise we would not have expired it.
+	 *
+	 * Charge 25% of the slice in such cases. This is not the best thing
+	 * to do, but it is not clear what the next best thing would be.
+	 *
+	 * This arises from the fact that we don't have the notion of only
+	 * one queue being operational at a time. The io scheduler can
+	 * dispatch requests from multiple queues in one dispatch round.
+	 * Ideally, for more accurate accounting of the disk time actually
+	 * used, one should dispatch requests from only one queue and wait
+	 * for all those requests to finish. But this would reduce throughput.
+	 */
+	if (!ioq->slice_end)
+		slice_used = entity->budget/4;
+	else {
+		if (time_after(ioq->slice_end, jiffies)) {
+			slice_unused = ioq->slice_end - jiffies;
+			if (slice_unused == entity->budget) {
+				/*
+				 * queue got expired immediately after
+				 * completing first request. Charge 25% of
+				 * slice.
+				 */
+				slice_used = entity->budget/4;
+			} else
+				slice_used = entity->budget - slice_unused;
+		} else {
+			slice_overshoot = jiffies - ioq->slice_end;
+			slice_used = entity->budget + slice_overshoot;
+		}
+	}
+
+	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
+			jiffies);
+	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
+				slice_used, entity->budget, slice_overshoot);
+	elv_ioq_served(ioq, slice_used);
+
+	BUG_ON(ioq != efqd->active_queue);
+	elv_reset_active_ioq(efqd);
+
+	if (!ioq->nr_queued)
+		elv_del_ioq_busy(q->elevator, ioq, 1);
+	else
+		elv_activate_ioq(ioq, 0);
+}
+EXPORT_SYMBOL(__elv_ioq_slice_expired);
+
+/*
+ *  Expire the ioq.
+ */
+void elv_ioq_slice_expired(struct request_queue *q)
+{
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+
+	if (ioq)
+		__elv_ioq_slice_expired(q, ioq);
+}
+
+/*
+ * Check if new_ioq should preempt the currently active queue. Return 0 for
+ * no (or if we aren't sure); returning 1 will cause a preemption attempt.
+ */
+int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
+			struct request *rq)
+{
+	struct io_queue *ioq;
+	struct elevator_queue *eq = q->elevator;
+	struct io_entity *entity, *new_entity;
+
+	ioq = elv_active_ioq(eq);
+
+	if (!ioq)
+		return 0;
+
+	entity = &ioq->entity;
+	new_entity = &new_ioq->entity;
+
+	/*
+	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
+	 */
+
+	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
+	    && entity->ioprio_class != IOPRIO_CLASS_RT)
+		return 1;
+	/*
+	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
+	 */
+
+	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
+	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
+		return 1;
+
+	/*
+	 * Check with io scheduler if it has additional criterion based on
+	 * which it wants to preempt existing queue.
+	 */
+	if (eq->ops->elevator_should_preempt_fn)
+		return eq->ops->elevator_should_preempt_fn(q,
+						ioq_sched_queue(new_ioq), rq);
+
+	return 0;
+}
+
+static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
+{
+	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
+	elv_ioq_slice_expired(q);
+
+	/*
+	 * Put the new queue at the front of the current list,
+	 * so we know that it will be selected next.
+	 */
+
+	elv_activate_ioq(ioq, 1);
+	elv_ioq_set_slice_end(ioq, 0);
+	elv_mark_ioq_slice_new(ioq);
+}
+
+void elv_ioq_request_add(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	BUG_ON(!efqd);
+	BUG_ON(!ioq);
+	efqd->rq_queued++;
+	ioq->nr_queued++;
+
+	if (!elv_ioq_busy(ioq))
+		elv_add_ioq_busy(efqd, ioq);
+
+	elv_ioq_update_io_thinktime(ioq);
+	elv_ioq_update_idle_window(q->elevator, ioq, rq);
+
+	if (ioq == elv_active_ioq(q->elevator)) {
+		/*
+		 * Remember that we saw a request from this process, but
+		 * don't start queuing just yet. Otherwise we risk seeing lots
+		 * of tiny requests, because we disrupt the normal plugging
+		 * and merging. If the request is already larger than a single
+		 * page, let it rip immediately. For that case we assume that
+		 * merging is already done. Ditto for a busy system that
+		 * has other work pending, don't risk delaying until the
+		 * idle timer unplug to continue working.
+		 */
+		if (elv_ioq_wait_request(ioq)) {
+			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
+			    efqd->busy_queues > 1) {
+				del_timer(&efqd->idle_slice_timer);
+				blk_start_queueing(q);
+			}
+			elv_mark_ioq_must_dispatch(ioq);
+		}
+	} else if (elv_should_preempt(q, ioq, rq)) {
+		/*
+		 * not the active queue - expire current slice if it is
+		 * idle and has expired its mean thinktime, or this new queue
+		 * has some old slice time left and is of higher priority or
+		 * this new queue is RT and the current one is BE
+		 */
+		elv_preempt_queue(q, ioq);
+		blk_start_queueing(q);
+	}
+}
+
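+/*
+ * Timer handler fired when we were idling, waiting for the next request of
+ * the active queue. Depending on the queue state, either expire the active
+ * queue and schedule a dispatch, or let the queue continue.
+ */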
+void elv_idle_slice_timer(unsigned long data)
+{
+	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
+	struct io_queue *ioq;
+	unsigned long flags;
+	struct request_queue *q = efqd->queue;
+
+	elv_log(efqd, "idle timer fired");
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	ioq = efqd->active_queue;
+
+	if (ioq) {
+
+		/*
+		 * We saw a request before the queue expired, let it through
+		 */
+		if (elv_ioq_must_dispatch(ioq))
+			goto out_kick;
+
+		/*
+		 * expired
+		 */
+		if (elv_ioq_slice_used(ioq))
+			goto expire;
+
+		/*
+		 * only expire and reinvoke request handler, if there are
+		 * other queues with pending requests
+		 */
+		if (!elv_nr_busy_ioq(q->elevator))
+			goto out_cont;
+
+		/*
+		 * not expired and it has a request pending, let it dispatch
+		 */
+		if (ioq->nr_queued)
+			goto out_kick;
+	}
+expire:
+	elv_ioq_slice_expired(q);
+out_kick:
+	elv_schedule_dispatch(q);
+out_cont:
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
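+/*
+ * Arm the idle slice timer for the active queue so that we wait a while for
+ * its next request instead of immediately switching to another queue. The
+ * io scheduler can override this with its own idling logic.
+ */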
+void elv_ioq_arm_slice_timer(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+	unsigned long sl;
+
+	BUG_ON(!ioq);
+
+	/*
+	 * SSD device without seek penalty, disable idling. But only do so
+	 * for devices that support queuing, otherwise we still have a problem
+	 * with sync vs async workloads.
+	 */
+	if (blk_queue_nonrot(q) && efqd->hw_tag)
+		return;
+
+	/*
+	 * still requests with the driver, don't idle
+	 */
+	if (efqd->rq_in_driver)
+		return;
+
+	/*
+	 * idle is disabled, either manually or by past process history
+	 */
+	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+		return;
+
+	/*
+	 * The iosched may have its own idling logic. In that case the io
+	 * scheduler will take care of arming the timer, if need be.
+	 */
+	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
+		q->elevator->ops->elevator_arm_slice_timer_fn(q,
+						ioq->sched_queue);
+	} else {
+		elv_mark_ioq_wait_request(ioq);
+		sl = efqd->elv_slice_idle;
+		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
+		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
+	}
+}
+
+/* Common layer function to select the next queue to dispatch from */
+void *elv_fq_select_ioq(struct request_queue *q, int force)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
+	struct io_group *iog;
+
+	if (!elv_nr_busy_ioq(q->elevator))
+		return NULL;
+
+	if (ioq == NULL)
+		goto new_queue;
+
+	/*
+	 * Force dispatch. Continue to dispatch from current queue as long
+	 * as it has requests.
+	 */
+	if (unlikely(force)) {
+		if (ioq->nr_queued)
+			goto keep_queue;
+		else
+			goto expire;
+	}
+
+	/*
+	 * The active queue has run out of time, expire it and select new.
+	 */
+	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
+		goto expire;
+
+	/*
+	 * If we have an RT queue waiting, then we pre-empt the current
+	 * non-RT queue.
+	 */
+	iog = ioq_to_io_group(ioq);
+
+	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
+		/*
+		 * We simulate this as the queue having timed out so that it
+		 * gets to bank the remainder of its time slice.
+		 */
+		elv_log_ioq(efqd, ioq, "preempt");
+		goto expire;
+	}
+
+	/*
+	 * The active queue has requests and isn't expired, allow it to
+	 * dispatch.
+	 */
+
+	if (ioq->nr_queued)
+		goto keep_queue;
+
+	/*
+	 * If another queue has a request waiting within our mean seek
+	 * distance, let it run.  The expire code will check for close
+	 * cooperators and put the close queue at the front of the service
+	 * tree.
+	 */
+	new_ioq = elv_close_cooperator(q, ioq, 0);
+	if (new_ioq)
+		goto expire;
+
+	/*
+	 * No requests pending. If the active queue still has requests in
+	 * flight or is idling for a new request, allow either of these
+	 * conditions to happen (or time out) before selecting a new queue.
+	 */
+
+	if (timer_pending(&efqd->idle_slice_timer) ||
+	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
+expire:
+	elv_ioq_slice_expired(q);
+new_queue:
+	ioq = elv_set_active_ioq(q, new_ioq);
+keep_queue:
+	return ioq;
+}
+
+/* A request got removed from io_queue. Do the accounting */
+void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
+{
+	struct io_queue *ioq;
+	struct elv_fq_data *efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	ioq = rq->ioq;
+	BUG_ON(!ioq);
+	ioq->nr_queued--;
+
+	efqd = ioq->efqd;
+	BUG_ON(!efqd);
+	efqd->rq_queued--;
+
+	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
+		elv_del_ioq_busy(e, ioq, 1);
+}
+
+/* A request got dispatched. Do the accounting. */
+void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
+{
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	BUG_ON(!ioq);
+	elv_ioq_request_dispatched(ioq);
+	elv_ioq_request_removed(e, rq);
+	elv_clear_ioq_must_dispatch(ioq);
+}
+
+void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	efqd->rq_in_driver++;
+	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
+						efqd->rq_in_driver);
+}
+
+void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	WARN_ON(!efqd->rq_in_driver);
+	efqd->rq_in_driver--;
+	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
+						efqd->rq_in_driver);
+}
+
+/*
+ * Update hw_tag based on peak queue depth over 50 samples under
+ * sufficient load.
+ */
+static void elv_update_hw_tag(struct elv_fq_data *efqd)
+{
+	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
+		efqd->rq_in_driver_peak = efqd->rq_in_driver;
+
+	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
+	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
+		return;
+
+	if (efqd->hw_tag_samples++ < 50)
+		return;
+
+	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
+		efqd->hw_tag = 1;
+	else
+		efqd->hw_tag = 0;
+
+	efqd->hw_tag_samples = 0;
+	efqd->rq_in_driver_peak = 0;
+}
+
+/*
+ * If the io scheduler keeps track of close cooperators, check with it
+ * whether it has a closely co-operating queue.
+ */
+static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
+					struct io_queue *ioq, int probe)
+{
+	struct elevator_queue *e = q->elevator;
+	struct io_queue *new_ioq = NULL;
+
+	/*
+	 * Currently this feature is supported only for flat hierarchy or
+	 * root group queues so that default cfq behavior is not changed.
+	 */
+	if (!is_root_group_ioq(q, ioq))
+		return NULL;
+
+	if (q->elevator->ops->elevator_close_cooperator_fn)
+		new_ioq = e->ops->elevator_close_cooperator_fn(q,
+						ioq->sched_queue, probe);
+
+	/* Only select co-operating queue if it belongs to root group */
+	if (new_ioq && !is_root_group_ioq(q, new_ioq))
+		return NULL;
+
+	return new_ioq;
+}
+
+/* A request got completed from io_queue. Do the accounting. */
+void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
+{
+	const int sync = rq_is_sync(rq);
+	struct io_queue *ioq;
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	ioq = rq->ioq;
+
+	elv_log_ioq(efqd, ioq, "complete");
+
+	elv_update_hw_tag(efqd);
+
+	WARN_ON(!efqd->rq_in_driver);
+	WARN_ON(!ioq->dispatched);
+	efqd->rq_in_driver--;
+	ioq->dispatched--;
+
+	if (sync)
+		ioq->last_end_request = jiffies;
+
+	/*
+	 * If this is the active queue, check if it needs to be expired,
+	 * or if we want to idle in case it has no pending requests.
+	 */
+
+	if (elv_active_ioq(q->elevator) == ioq) {
+		if (elv_ioq_slice_new(ioq)) {
+			elv_ioq_set_prio_slice(q, ioq);
+			elv_clear_ioq_slice_new(ioq);
+		}
+		/*
+		 * If there are no requests waiting in this queue, and
+		 * there are other queues ready to issue requests, AND
+		 * those other queues are issuing requests within our
+		 * mean seek distance, give them a chance to run instead
+		 * of idling.
+		 */
+		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
+			elv_ioq_slice_expired(q);
+		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
+			 && sync && !rq_noidle(rq))
+			elv_ioq_arm_slice_timer(q);
+	}
+
+	if (!efqd->rq_in_driver)
+		elv_schedule_dispatch(q);
+}
+
+struct io_group *io_lookup_io_group_current(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	return efqd->root_group;
+}
+EXPORT_SYMBOL(io_lookup_io_group_current);
+
+void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
+					int ioprio)
+{
+	struct io_queue *ioq = NULL;
+
+	switch (ioprio_class) {
+	case IOPRIO_CLASS_RT:
+		ioq = iog->async_queue[0][ioprio];
+		break;
+	case IOPRIO_CLASS_BE:
+		ioq = iog->async_queue[1][ioprio];
+		break;
+	case IOPRIO_CLASS_IDLE:
+		ioq = iog->async_idle_queue;
+		break;
+	default:
+		BUG();
+	}
+
+	if (ioq)
+		return ioq->sched_queue;
+	return NULL;
+}
+EXPORT_SYMBOL(io_group_async_queue_prio);
+
+void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
+					int ioprio, struct io_queue *ioq)
+{
+	switch (ioprio_class) {
+	case IOPRIO_CLASS_RT:
+		iog->async_queue[0][ioprio] = ioq;
+		break;
+	case IOPRIO_CLASS_BE:
+		iog->async_queue[1][ioprio] = ioq;
+		break;
+	case IOPRIO_CLASS_IDLE:
+		iog->async_idle_queue = ioq;
+		break;
+	default:
+		BUG();
+	}
+
+	/*
+	 * Take the group reference and pin the queue. Group exit will
+	 * clean it up
+	 */
+	elv_get_ioq(ioq);
+}
+EXPORT_SYMBOL(io_group_set_async_queue);
+
+/*
+ * Release all the io group references to its async queues.
+ */
+void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
+{
+	int i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < IOPRIO_BE_NR; j++)
+			elv_release_ioq(e, &iog->async_queue[i][j]);
+
+	/* Free up async idle queue */
+	elv_release_ioq(e, &iog->async_idle_queue);
+}
+
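+/* Allocate the root io group and initialize its per-class service trees. */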
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	return iog;
+}
+
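+/*
+ * Free the root io group: flush the idle trees, release the async queue
+ * references and free the group itself.
+ */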
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_group *iog = e->efqd.root_group;
+	struct io_service_tree *st;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	kfree(iog);
+}
+
+static void elv_slab_kill(void)
+{
+	/*
+	 * Caller already ensured that pending RCU callbacks are completed,
+	 * so we should have no busy allocations at this point.
+	 */
+	if (elv_ioq_pool)
+		kmem_cache_destroy(elv_ioq_pool);
+}
+
+static int __init elv_slab_setup(void)
+{
+	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
+	if (!elv_ioq_pool)
+		goto fail;
+
+	return 0;
+fail:
+	elv_slab_kill();
+	return -ENOMEM;
+}
+
+/* Initialize fair queueing data associated with elevator */
+int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
+{
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return 0;
+
+	iog = io_alloc_root_group(q, e, efqd);
+	if (iog == NULL)
+		return 1;
+
+	efqd->root_group = iog;
+	efqd->queue = q;
+
+	init_timer(&efqd->idle_slice_timer);
+	efqd->idle_slice_timer.function = elv_idle_slice_timer;
+	efqd->idle_slice_timer.data = (unsigned long) efqd;
+
+	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
+
+	efqd->elv_slice[0] = elv_slice_async;
+	efqd->elv_slice[1] = elv_slice_sync;
+	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->hw_tag = 1;
+
+	return 0;
+}
+
+/*
+ * elv_exit_fq_data is called before we call elevator_exit_fn. Before
+ * we ask the elevator to clean up its queues, we do the cleanup here so
+ * that all the group and idle tree references to ioq are dropped. Later,
+ * during elevator cleanup, the ioc reference will be dropped, which leads
+ * to removal of the ioscheduler queue as well as the associated ioq object.
+ */
+void elv_exit_fq_data(struct elevator_queue *e)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	elv_shutdown_timer_wq(e);
+
+	BUG_ON(timer_pending(&efqd->idle_slice_timer));
+	io_free_root_group(e);
+}
+
+/*
+ * This is called after the io scheduler has cleaned up its data structures.
+ * I don't think this function is required. Right now we keep it only
+ * because cfq cleans up the timer and work queue again after freeing up
+ * io contexts. By this point the io scheduler has already been drained and
+ * all the active queues have already been expired, so the timer and work
+ * queue should not have been activated during the cleanup process.
+ *
+ * Keeping it here for the time being. Will get rid of it later.
+ */
+void elv_exit_fq_data_post(struct elevator_queue *e)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	elv_shutdown_timer_wq(e);
+	BUG_ON(timer_pending(&efqd->idle_slice_timer));
+}
+
+
+static int __init elv_fq_init(void)
+{
+	if (elv_slab_setup())
+		return -ENOMEM;
+
+	/* could be 0 on HZ < 1000 setups */
+
+	if (!elv_slice_async)
+		elv_slice_async = 1;
+
+	if (!elv_slice_idle)
+		elv_slice_idle = 1;
+
+	return 0;
+}
+
+module_init(elv_fq_init);
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
new file mode 100644
index 0000000..5b6c1cc
--- /dev/null
+++ b/block/elevator-fq.h
@@ -0,0 +1,473 @@
+/*
+ * BFQ: data structures and common functions prototypes.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+ *
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ *		      Paolo Valente <paolo.valente@unimore.it>
+ * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
+ * 	              Nauman Rafique <nauman@google.com>
+ */
+
+#include <linux/blkdev.h>
+
+#ifndef _BFQ_SCHED_H
+#define _BFQ_SCHED_H
+
+#define IO_IOPRIO_CLASSES	3
+
+typedef u64 bfq_timestamp_t;
+typedef unsigned long bfq_weight_t;
+typedef unsigned long bfq_service_t;
+struct io_entity;
+struct io_queue;
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+
+#define ELV_ATTR(name) \
+	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
+
+/**
+ * struct io_service_tree - per ioprio_class service tree.
+ * @active: tree for active entities (i.e., those backlogged).
+ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
+ * @first_idle: idle entity with minimum F_i.
+ * @last_idle: idle entity with maximum F_i.
+ * @vtime: scheduler virtual time.
+ * @wsum: scheduler weight sum; active and idle entities contribute to it.
+ *
+ * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
+ * ioprio_class has its own independent scheduler, and so its own
+ * io_service_tree.  All the fields are protected by the queue lock
+ * of the containing efqd.
+ */
+struct io_service_tree {
+	struct rb_root active;
+	struct rb_root idle;
+
+	struct io_entity *first_idle;
+	struct io_entity *last_idle;
+
+	bfq_timestamp_t vtime;
+	bfq_weight_t wsum;
+};
+
+/**
+ * struct io_sched_data - multi-class scheduler.
+ * @active_entity: entity under service.
+ * @next_active: head-of-the-line entity in the scheduler.
+ * @service_tree: array of service trees, one per ioprio_class.
+ *
+ * io_sched_data is the basic scheduler queue.  It supports three
+ * ioprio_classes, and can be used either as a toplevel queue or as
+ * an intermediate queue on a hierarchical setup.
+ * @next_active points to the active entity of the sched_data service
+ * trees that will be scheduled next.
+ *
+ * The supported ioprio_classes are the same as in CFQ, in descending
+ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
+ * Requests from higher priority queues are served before all the
+ * requests from lower priority queues; among requests of the same
+ * queue requests are served according to B-WF2Q+.
+ * All the fields are protected by the queue lock of the containing efqd.
+ */
+struct io_sched_data {
+	struct io_entity *active_entity;
+	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
+};
+
+/**
+ * struct io_entity - schedulable entity.
+ * @rb_node: service_tree member.
+ * @on_st: flag, true if the entity is on a tree (either the active or
+ *         the idle one of its service_tree).
+ * @finish: B-WF2Q+ finish timestamp (aka F_i).
+ * @start: B-WF2Q+ start timestamp (aka S_i).
+ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
+ * @min_start: minimum start time of the (active) subtree rooted at
+ *             this entity; used for O(log N) lookups into active trees.
+ * @service: service received during the last round of service.
+ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
+ * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
+ * @parent: parent entity, for hierarchical scheduling.
+ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
+ *                 associated scheduler queue, %NULL on leaf nodes.
+ * @sched_data: the scheduler queue this entity belongs to.
+ * @ioprio: the ioprio in use.
+ * @new_ioprio: when an ioprio change is requested, the new ioprio value
+ * @ioprio_class: the ioprio_class in use.
+ * @new_ioprio_class: when an ioprio_class change is requested, the new
+ *                    ioprio_class value.
+ * @ioprio_changed: flag, true when the user requested an ioprio or
+ *                  ioprio_class change.
+ *
+ * An io_entity is used to represent either an io_queue (leaf node in the
+ * cgroup hierarchy) or an io_group in the upper level scheduler.  Each
+ * entity belongs to the sched_data of the parent group in the cgroup
+ * hierarchy.  Non-leaf entities also have their own sched_data, stored
+ * in @my_sched_data.
+ *
+ * Each entity stores independently its priority values; this would allow
+ * different weights on different devices, but this functionality is not
+ * exported to userspace by now.  Priorities are updated lazily, first
+ * storing the new values into the new_* fields, then setting the
+ * @ioprio_changed flag.  As soon as there is a transition in the entity
+ * state that allows the priority update to take place the effective and
+ * the requested priority values are synchronized.
+ *
+ * The weight value is calculated from the ioprio to export the same
+ * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
+ * queues that do not spend too much time to consume their budget and
+ * have true sequential behavior, and when there are no external factors
+ * breaking anticipation) the relative weights at each level of the
+ * cgroups hierarchy should be guaranteed.
+ * All the fields are protected by the queue lock of the containing efqd.
+ */
+struct io_entity {
+	struct rb_node rb_node;
+
+	int on_st;
+
+	bfq_timestamp_t finish;
+	bfq_timestamp_t start;
+
+	struct rb_root *tree;
+
+	bfq_timestamp_t min_start;
+
+	bfq_service_t service, budget;
+	bfq_weight_t weight;
+
+	struct io_entity *parent;
+
+	struct io_sched_data *my_sched_data;
+	struct io_sched_data *sched_data;
+
+	unsigned short ioprio, new_ioprio;
+	unsigned short ioprio_class, new_ioprio_class;
+
+	int ioprio_changed;
+};
+
+/*
+ * A common structure embedded by every io scheduler into their respective
+ * queue structure.
+ */
+struct io_queue {
+	struct io_entity entity;
+	atomic_t ref;
+	unsigned int flags;
+
+	/* Pointer to generic elevator data structure */
+	struct elv_fq_data *efqd;
+	pid_t pid;
+
+	/* Number of requests queued on this io queue */
+	unsigned long nr_queued;
+
+	/* Requests dispatched from this queue */
+	int dispatched;
+
+	/* Keep track of the think time of processes in this queue */
+	unsigned long last_end_request;
+	unsigned long ttime_total;
+	unsigned long ttime_samples;
+	unsigned long ttime_mean;
+
+	unsigned long slice_end;
+
+	/* Pointer to io scheduler's queue */
+	void *sched_queue;
+};
+
+struct io_group {
+	struct io_sched_data sched_data;
+
+	/* async_queue and idle_queue are used only for cfq */
+	struct io_queue *async_queue[2][IOPRIO_BE_NR];
+	struct io_queue *async_idle_queue;
+
+	/*
+	 * Used to track any pending RT requests so we can pre-empt the
+	 * current non-RT queue in service when this value is non-zero.
+	 */
+	unsigned int busy_rt_queues;
+};
+
+struct elv_fq_data {
+	struct io_group *root_group;
+
+	struct request_queue *queue;
+	unsigned int busy_queues;
+
+	/* Number of requests queued */
+	int rq_queued;
+
+	/* Pointer to the ioscheduler queue being served */
+	void *active_queue;
+
+	int rq_in_driver;
+	int hw_tag;
+	int hw_tag_samples;
+	int rq_in_driver_peak;
+
+	/*
+	 * elevator fair queuing layer has the capability to provide idling
+	 * The elevator fair queuing layer has the capability to provide
+	 * idling to ensure fairness for processes doing dependent reads.
+	 * This might be needed to ensure fairness between two processes
+	 * doing synchronous reads in two different cgroups. noop and
+	 * deadline don't have any notion of anticipation/idling; as of now,
+	 * they are the users of this functionality.
+	unsigned int elv_slice_idle;
+	struct timer_list idle_slice_timer;
+	struct work_struct unplug_work;
+
+	unsigned int elv_slice[2];
+};
+
+extern int elv_slice_idle;
+extern int elv_slice_async;
+
+/* Logging facilities. */
+#define elv_log_ioq(efqd, ioq, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
+				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
+
+#define elv_log(efqd, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
+
+#define ioq_sample_valid(samples)   ((samples) > 80)
+
+/* Some shared queue flag manipulation functions among elevators */
+
+enum elv_queue_state_flags {
+	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
+	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
+	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
+	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
+	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
+	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
+	ELV_QUEUE_FLAG_NR,
+};
+
+#define ELV_IO_QUEUE_FLAG_FNS(name)					\
+static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
+{                                                                       \
+	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
+}                                                                       \
+static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
+{                                                                       \
+	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
+}                                                                       \
+static inline int elv_ioq_##name(struct io_queue *ioq)         		\
+{                                                                       \
+	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
+}
+
+ELV_IO_QUEUE_FLAG_FNS(busy)
+ELV_IO_QUEUE_FLAG_FNS(sync)
+ELV_IO_QUEUE_FLAG_FNS(wait_request)
+ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
+ELV_IO_QUEUE_FLAG_FNS(idle_window)
+ELV_IO_QUEUE_FLAG_FNS(slice_new)
+
+static inline struct io_service_tree *
+io_entity_service_tree(struct io_entity *entity)
+{
+	struct io_sched_data *sched_data = entity->sched_data;
+	unsigned int idx = entity->ioprio_class - 1;
+
+	BUG_ON(idx >= IO_IOPRIO_CLASSES);
+	BUG_ON(sched_data == NULL);
+
+	return sched_data->service_tree + idx;
+}
+
+/* A request got dispatched from the io_queue. Do the accounting. */
+static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
+{
+	ioq->dispatched++;
+}
+
+static inline int elv_ioq_slice_used(struct io_queue *ioq)
+{
+	if (elv_ioq_slice_new(ioq))
+		return 0;
+	if (time_before(jiffies, ioq->slice_end))
+		return 0;
+
+	return 1;
+}
+
+/* How many requests are currently dispatched from the queue */
+static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
+{
+	return ioq->dispatched;
+}
+
+/* How many requests are currently queued in the queue */
+static inline int elv_ioq_nr_queued(struct io_queue *ioq)
+{
+	return ioq->nr_queued;
+}
+
+static inline void elv_get_ioq(struct io_queue *ioq)
+{
+	atomic_inc(&ioq->ref);
+}
+
+static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
+						unsigned long slice_end)
+{
+	ioq->slice_end = slice_end;
+}
+
+static inline int elv_ioq_class_idle(struct io_queue *ioq)
+{
+	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
+}
+
+static inline int elv_ioq_class_rt(struct io_queue *ioq)
+{
+	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
+}
+
+static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
+{
+	return ioq->entity.new_ioprio_class;
+}
+
+static inline int elv_ioq_ioprio(struct io_queue *ioq)
+{
+	return ioq->entity.new_ioprio;
+}
+
+static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
+						int ioprio_class)
+{
+	ioq->entity.new_ioprio_class = ioprio_class;
+	ioq->entity.ioprio_changed = 1;
+}
+
+static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
+{
+	ioq->entity.new_ioprio = ioprio;
+	ioq->entity.ioprio_changed = 1;
+}
+
+static inline void *ioq_sched_queue(struct io_queue *ioq)
+{
+	if (ioq)
+		return ioq->sched_queue;
+	return NULL;
+}
+
+static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
+{
+	return container_of(ioq->entity.sched_data, struct io_group,
+						sched_data);
+}
+
+extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
+						size_t count);
+extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
+						size_t count);
+extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
+						size_t count);
+
+/* Functions used by elevator.c */
+extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
+extern void elv_exit_fq_data(struct elevator_queue *e);
+extern void elv_exit_fq_data_post(struct elevator_queue *e);
+
+extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
+extern void elv_ioq_request_removed(struct elevator_queue *e,
+					struct request *rq);
+extern void elv_fq_dispatched_request(struct elevator_queue *e,
+					struct request *rq);
+
+extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
+extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
+
+extern void elv_ioq_completed_request(struct request_queue *q,
+				struct request *rq);
+
+extern void *elv_fq_select_ioq(struct request_queue *q, int force);
+extern struct io_queue *rq_ioq(struct request *rq);
+
+/* Functions used by io schedulers */
+extern void elv_put_ioq(struct io_queue *ioq);
+extern void __elv_ioq_slice_expired(struct request_queue *q,
+					struct io_queue *ioq);
+extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
+		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
+extern void elv_schedule_dispatch(struct request_queue *q);
+extern int elv_hw_tag(struct elevator_queue *e);
+extern void *elv_active_sched_queue(struct elevator_queue *e);
+extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
+					unsigned long expires);
+extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
+extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
+extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
+					int ioprio);
+extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
+					int ioprio, struct io_queue *ioq);
+extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
+extern int elv_nr_busy_ioq(struct elevator_queue *e);
+extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
+extern void elv_free_ioq(struct io_queue *ioq);
+
+#else /* CONFIG_ELV_FAIR_QUEUING */
+
+static inline int elv_init_fq_data(struct request_queue *q,
+					struct elevator_queue *e)
+{
+	return 0;
+}
+
+static inline void elv_exit_fq_data(struct elevator_queue *e) {}
+static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
+
+static inline void elv_fq_activate_rq(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_fq_deactivate_rq(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_fq_dispatched_request(struct elevator_queue *e,
+						struct request *rq)
+{
+}
+
+static inline void elv_ioq_request_removed(struct elevator_queue *e,
+						struct request *rq)
+{
+}
+
+static inline void elv_ioq_request_add(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_ioq_completed_request(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
+static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
+static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
+{
+	return NULL;
+}
+#endif /* CONFIG_ELV_FAIR_QUEUING */
+#endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index 7073a90..c2f07f5 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
 	for (i = 0; i < ELV_HASH_ENTRIES; i++)
 		INIT_HLIST_HEAD(&eq->hash[i]);
 
+	if (elv_init_fq_data(q, eq))
+		goto err;
+
 	return eq;
 err:
 	kfree(eq);
@@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
 void elevator_exit(struct elevator_queue *e)
 {
 	mutex_lock(&e->sysfs_lock);
+	elv_exit_fq_data(e);
 	if (e->ops->elevator_exit_fn)
 		e->ops->elevator_exit_fn(e);
 	e->ops = NULL;
+	elv_exit_fq_data_post(e);
 	mutex_unlock(&e->sysfs_lock);
 
 	kobject_put(&e->kobj);
@@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	elv_fq_activate_rq(q, rq);
+
 	if (e->ops->elevator_activate_req_fn)
 		e->ops->elevator_activate_req_fn(q, rq);
 }
@@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	elv_fq_deactivate_rq(q, rq);
+
 	if (e->ops->elevator_deactivate_req_fn)
 		e->ops->elevator_deactivate_req_fn(q, rq);
 }
@@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 	elv_rqhash_del(q, rq);
 
 	q->nr_sorted--;
+	elv_fq_dispatched_request(q->elevator, rq);
 
 	boundary = q->end_sector;
 	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
@@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
 	elv_rqhash_del(q, rq);
 
 	q->nr_sorted--;
+	elv_fq_dispatched_request(q->elevator, rq);
 
 	q->end_sector = rq_end_sector(rq);
 	q->boundary_rq = rq;
@@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
 	elv_rqhash_del(q, next);
 
 	q->nr_sorted--;
+	elv_ioq_request_removed(e, next);
 	q->last_merge = rq;
 }
 
@@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 				q->last_merge = rq;
 		}
 
-		/*
-		 * Some ioscheds (cfq) run q->request_fn directly, so
-		 * rq cannot be accessed after calling
-		 * elevator_add_req_fn.
-		 */
 		q->elevator->ops->elevator_add_req_fn(q, rq);
+		elv_ioq_request_add(q, rq);
 		break;
 
 	case ELEVATOR_INSERT_REQUEUE:
@@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
 
 int elv_queue_empty(struct request_queue *q)
 {
-	struct elevator_queue *e = q->elevator;
-
 	if (!list_empty(&q->queue_head))
 		return 0;
 
-	if (e->ops->elevator_queue_empty_fn)
-		return e->ops->elevator_queue_empty_fn(q);
+	/* Hopefully nr_sorted works and there is no need to call queue_empty_fn */
+	if (q->nr_sorted)
+		return 0;
 
 	return 1;
 }
@@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 	 */
 	if (blk_account_rq(rq)) {
 		q->in_flight--;
-		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
-			e->ops->elevator_completed_req_fn(q, rq);
+		if (blk_sorted_rq(rq)) {
+			if (e->ops->elevator_completed_req_fn)
+				e->ops->elevator_completed_req_fn(q, rq);
+			elv_ioq_completed_request(q, rq);
+		}
 	}
 
 	/*
@@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
 	return NULL;
 }
 EXPORT_SYMBOL(elv_rb_latter_request);
+
+/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq */
+void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
+{
+	return ioq_sched_queue(rq_ioq(rq));
+}
+EXPORT_SYMBOL(elv_get_sched_queue);
+
+/* Select an ioscheduler queue to dispatch request from. */
+void *elv_select_sched_queue(struct request_queue *q, int force)
+{
+	return ioq_sched_queue(elv_fq_select_ioq(q, force));
+}
+EXPORT_SYMBOL(elv_select_sched_queue);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b4f71f1..96a94c9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -245,6 +245,11 @@ struct request {
 
 	/* for bidi */
 	struct request *next_rq;
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	/* io queue request belongs to */
+	struct io_queue *ioq;
+#endif
 };
 
 static inline unsigned short req_get_ioprio(struct request *req)
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index c59b769..679c149 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -2,6 +2,7 @@
 #define _LINUX_ELEVATOR_H
 
 #include <linux/percpu.h>
+#include "../../block/elevator-fq.h"
 
 #ifdef CONFIG_BLOCK
 
@@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
+#ifdef CONFIG_ELV_FAIR_QUEUING
+typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
+typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
+typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
+typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
+typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
+						struct request*);
+typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
+						struct request*);
+typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
+						void*, int probe);
+#endif
 
 struct elevator_ops
 {
@@ -56,6 +69,17 @@ struct elevator_ops
 	elevator_init_fn *elevator_init_fn;
 	elevator_exit_fn *elevator_exit_fn;
 	void (*trim)(struct io_context *);
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
+	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
+	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
+
+	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
+	elevator_should_preempt_fn *elevator_should_preempt_fn;
+	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
+	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
+#endif
 };
 
 #define ELV_NAME_MAX	(16)
@@ -76,6 +100,9 @@ struct elevator_type
 	struct elv_fs_entry *elevator_attrs;
 	char elevator_name[ELV_NAME_MAX];
 	struct module *elevator_owner;
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	int elevator_features;
+#endif
 };
 
 /*
@@ -89,6 +116,10 @@ struct elevator_queue
 	struct elevator_type *elevator_type;
 	struct mutex sysfs_lock;
 	struct hlist_head *hash;
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	/* fair queuing data */
+	struct elv_fq_data efqd;
+#endif
 };
 
 /*
@@ -209,5 +240,25 @@ enum {
 	__val;							\
 })
 
+/* An iosched can let the elevator know its feature set/capabilities */
+#ifdef CONFIG_ELV_FAIR_QUEUING
+
+/* iosched wants to use fq logic of elevator layer */
+#define	ELV_IOSCHED_NEED_FQ	1
+
+static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
+{
+	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
+}
+
+#else /* CONFIG_ELV_FAIR_QUEUING */
+
+static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
+{
+	return 0;
+}
+#endif /* CONFIG_ELV_FAIR_QUEUING */
+extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
+extern void *elv_select_sched_queue(struct request_queue *q, int force);
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 02/20] io-controller: Common flat fair queuing code in elevator layer
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

This is the common fair queuing code in the elevator layer, controlled by the
config option CONFIG_ELV_FAIR_QUEUING. This patch initially introduces only
flat fair queuing support, where there is a single group, the "root group",
and all tasks belong to it.

These elevator layer changes are backward compatible: any ioscheduler using
the old interfaces will continue to work.

This code is essentially the CFQ code for fair queuing. The primary difference
is that the flat round-robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
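
As a rough illustration (not part of this patch), an io scheduler opts into
the common fair queuing logic by setting ELV_IOSCHED_NEED_FQ in its
elevator_features and by asking the elevator layer which of its queues to
dispatch from. The hypothetical "foo" scheduler below is only a sketch built
on top of the elv_select_sched_queue() helper introduced here:

	#include <linux/blkdev.h>
	#include <linux/elevator.h>
	#include <linux/module.h>

	static int foo_dispatch(struct request_queue *q, int force)
	{
		/* let the common fair queuing layer pick the queue to serve */
		void *sched_q = elv_select_sched_queue(q, force);

		if (!sched_q)
			return 0;

		/* ... dispatch one request from sched_q ... */
		return 1;
	}

	static struct elevator_type iosched_foo = {
		.ops = {
			.elevator_dispatch_fn	= foo_dispatch,
			/* ... remaining hooks as usual ... */
		},
		.elevator_name		= "foo",
		.elevator_owner		= THIS_MODULE,
		/* request common fair queuing from the elevator layer */
		.elevator_features	= ELV_IOSCHED_NEED_FQ,
	};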

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   13 +
 block/Makefile           |    1 +
 block/elevator-fq.c      | 2015 ++++++++++++++++++++++++++++++++++++++++++++++
 block/elevator-fq.h      |  473 +++++++++++
 block/elevator.c         |   46 +-
 include/linux/blkdev.h   |    5 +
 include/linux/elevator.h |   51 ++
 7 files changed, 2593 insertions(+), 11 deletions(-)
 create mode 100644 block/elevator-fq.c
 create mode 100644 block/elevator-fq.h

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 7e803fc..3398134 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -2,6 +2,19 @@ if BLOCK
 
 menu "IO Schedulers"
 
+config ELV_FAIR_QUEUING
+	bool "Elevator Fair Queuing Support"
+	default n
+	---help---
+	  Traditionally only cfq had the notion of multiple queues and did
+	  fair queuing on its own. With cgroups and the need to control IO,
+	  even the simple io schedulers like noop, deadline and AS will have
+	  one queue per cgroup and will need hierarchical fair queuing.
+	  Instead of every io scheduler implementing its own fair queuing
+	  logic, this option enables fair queuing in the elevator layer so
+	  that other ioschedulers can make use of it.
+	  If unsure, say N.
+
 config IOSCHED_NOOP
 	bool
 	default y
diff --git a/block/Makefile b/block/Makefile
index e9fa4dd..94bfc6e 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
 
 obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
 obj-$(CONFIG_BLK_DEV_INTEGRITY)	+= blk-integrity.o
+obj-$(CONFIG_ELV_FAIR_QUEUING)	+= elevator-fq.o
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
new file mode 100644
index 0000000..9357fb0
--- /dev/null
+++ b/block/elevator-fq.c
@@ -0,0 +1,2015 @@
+/*
+ * BFQ: Hierarchical B-WF2Q+ scheduler.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+ *
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ *		      Paolo Valente <paolo.valente@unimore.it>
+ * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
+ * 	              Nauman Rafique <nauman@google.com>
+ */
+
+#include <linux/blkdev.h>
+#include "elevator-fq.h"
+#include <linux/blktrace_api.h>
+
+/* Values taken from cfq */
+const int elv_slice_sync = HZ / 10;
+int elv_slice_async = HZ / 25;
+const int elv_slice_async_rq = 2;
+int elv_slice_idle = HZ / 125;
+static struct kmem_cache *elv_ioq_pool;
+
+#define ELV_SLICE_SCALE		(5)
+#define ELV_HW_QUEUE_MIN	(5)
+#define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
+				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
+
+static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
+					struct io_queue *ioq, int probe);
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract);
+
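+/*
+ * Slice length scales linearly with priority: prio 4 gets the base slice,
+ * each step towards prio 0 adds base/ELV_SLICE_SCALE and each step towards
+ * prio 7 subtracts it (mirrors cfq's prio-to-slice scaling).
+ */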
+static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
+					unsigned short prio)
+{
+	const int base_slice = efqd->elv_slice[sync];
+
+	WARN_ON(prio >= IOPRIO_BE_NR);
+
+	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
+}
+
+static inline int
+elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
+{
+	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
+}
+
+/* Mainly the BFQ scheduling code follows */
+
+/*
+ * Shift for timestamp calculations.  This actually limits the maximum
+ * service allowed in one timestamp delta (small shift values increase it),
+ * the maximum total weight that can be used for the queues in the system
+ * (big shift values increase it), and the period of virtual time wraparounds.
+ */
+#define WFQ_SERVICE_SHIFT	22
+
+/**
+ * bfq_gt - compare two timestamps.
+ * @a: first ts.
+ * @b: second ts.
+ *
+ * Return @a > @b, dealing with wrapping correctly.
+ */
+static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
+{
+	return (s64)(a - b) > 0;
+}
+
+/**
+ * bfq_delta - map service into the virtual time domain.
+ * @service: amount of service.
+ * @weight: scale factor.
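+ *
+ * For a given amount of service the returned delta is inversely
+ * proportional to @weight: halving an entity's weight doubles the virtual
+ * time charged for the same service, so in the long run the entity
+ * receives half the share of the device.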
+ */
+static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
+					bfq_weight_t weight)
+{
+	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
+
+	do_div(d, weight);
+	return d;
+}
+
+/**
+ * bfq_calc_finish - assign the finish time to an entity.
+ * @entity: the entity to act upon.
+ * @service: the service to be charged to the entity.
+ */
+static inline void bfq_calc_finish(struct io_entity *entity,
+				   bfq_service_t service)
+{
+	BUG_ON(entity->weight == 0);
+
+	entity->finish = entity->start + bfq_delta(service, entity->weight);
+}
+
+static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
+{
+	struct io_queue *ioq = NULL;
+
+	BUG_ON(entity == NULL);
+	if (entity->my_sched_data == NULL)
+		ioq = container_of(entity, struct io_queue, entity);
+	return ioq;
+}
+
+/**
+ * bfq_entity_of - get an entity from a node.
+ * @node: the node field of the entity.
+ *
+ * Convert a node pointer to the relative entity.  This is used only
+ * to simplify the logic of some functions and not as the generic
+ * conversion mechanism because, e.g., in the tree walking functions,
+ * the check for a %NULL value would be redundant.
+ */
+static inline struct io_entity *bfq_entity_of(struct rb_node *node)
+{
+	struct io_entity *entity = NULL;
+
+	if (node != NULL)
+		entity = rb_entry(node, struct io_entity, rb_node);
+
+	return entity;
+}
+
+/**
+ * bfq_extract - remove an entity from a tree.
+ * @root: the tree root.
+ * @entity: the entity to remove.
+ */
+static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
+{
+	BUG_ON(entity->tree != root);
+
+	entity->tree = NULL;
+	rb_erase(&entity->rb_node, root);
+}
+
+/**
+ * bfq_idle_extract - extract an entity from the idle tree.
+ * @st: the service tree of the owning @entity.
+ * @entity: the entity being removed.
+ */
+static void bfq_idle_extract(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct rb_node *next;
+
+	BUG_ON(entity->tree != &st->idle);
+
+	if (entity == st->first_idle) {
+		next = rb_next(&entity->rb_node);
+		st->first_idle = bfq_entity_of(next);
+	}
+
+	if (entity == st->last_idle) {
+		next = rb_prev(&entity->rb_node);
+		st->last_idle = bfq_entity_of(next);
+	}
+
+	bfq_extract(&st->idle, entity);
+}
+
+/**
+ * bfq_insert - generic tree insertion.
+ * @root: tree root.
+ * @entity: entity to insert.
+ *
+ * This is used for the idle and the active tree, since they are both
+ * ordered by finish time.
+ */
+static void bfq_insert(struct rb_root *root, struct io_entity *entity)
+{
+	struct io_entity *entry;
+	struct rb_node **node = &root->rb_node;
+	struct rb_node *parent = NULL;
+
+	BUG_ON(entity->tree != NULL);
+
+	while (*node != NULL) {
+		parent = *node;
+		entry = rb_entry(parent, struct io_entity, rb_node);
+
+		if (bfq_gt(entry->finish, entity->finish))
+			node = &parent->rb_left;
+		else
+			node = &parent->rb_right;
+	}
+
+	rb_link_node(&entity->rb_node, parent, node);
+	rb_insert_color(&entity->rb_node, root);
+
+	entity->tree = root;
+}
+
+/**
+ * bfq_update_min - update the min_start field of a entity.
+ * @entity: the entity to update.
+ * @node: one of its children.
+ *
+ * This function is called when @entity may store an invalid value for
+ * min_start due to updates to the active tree.  The function  assumes
+ * that the subtree rooted at @node (which may be its left or its right
+ * child) has a valid min_start value.
+ */
+static inline void bfq_update_min(struct io_entity *entity,
+					struct rb_node *node)
+{
+	struct io_entity *child;
+
+	if (node != NULL) {
+		child = rb_entry(node, struct io_entity, rb_node);
+		if (bfq_gt(entity->min_start, child->min_start))
+			entity->min_start = child->min_start;
+	}
+}
+
+/**
+ * bfq_update_active_node - recalculate min_start.
+ * @node: the node to update.
+ *
+ * @node may have changed position or one of its children may have moved,
+ * this function updates its min_start value.  The left and right subtrees
+ * are assumed to hold a correct min_start value.
+ */
+static inline void bfq_update_active_node(struct rb_node *node)
+{
+	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
+
+	entity->min_start = entity->start;
+	bfq_update_min(entity, node->rb_right);
+	bfq_update_min(entity, node->rb_left);
+}
+
+/**
+ * bfq_update_active_tree - update min_start for the whole active tree.
+ * @node: the starting node.
+ *
+ * @node must be the deepest modified node after an update.  This function
+ * updates its min_start using the values held by its children, assuming
+ * that they did not change, and then updates all the nodes that may have
+ * changed in the path to the root.  The only nodes that may have changed
+ * are the ones in the path or their siblings.
+ */
+static void bfq_update_active_tree(struct rb_node *node)
+{
+	struct rb_node *parent;
+
+up:
+	bfq_update_active_node(node);
+
+	parent = rb_parent(node);
+	if (parent == NULL)
+		return;
+
+	if (node == parent->rb_left && parent->rb_right != NULL)
+		bfq_update_active_node(parent->rb_right);
+	else if (parent->rb_left != NULL)
+		bfq_update_active_node(parent->rb_left);
+
+	node = parent;
+	goto up;
+}
+
+/**
+ * bfq_active_insert - insert an entity in the active tree of its group/device.
+ * @st: the service tree of the entity.
+ * @entity: the entity being inserted.
+ *
+ * The active tree is ordered by finish time, but an extra key is kept
+ * per each node, containing the minimum value for the start times of
+ * its children (and the node itself), so it's possible to search for
+ * the eligible node with the lowest finish time in logarithmic time.
+ */
+static void bfq_active_insert(struct io_service_tree *st,
+					struct io_entity *entity)
+{
+	struct rb_node *node = &entity->rb_node;
+
+	bfq_insert(&st->active, entity);
+
+	if (node->rb_left != NULL)
+		node = node->rb_left;
+	else if (node->rb_right != NULL)
+		node = node->rb_right;
+
+	bfq_update_active_tree(node);
+}
+
+/**
+ * bfq_ioprio_to_weight - calc a weight from an ioprio.
+ * @ioprio: the ioprio value to convert.
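+ *
+ * With IOPRIO_BE_NR == 8 this maps best-effort priorities 0..7 to weights
+ * 8..1, so the highest priority gets eight times the weight (and hence
+ * roughly eight times the share) of the lowest.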
+ */
+static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
+{
+	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+	return IOPRIO_BE_NR - ioprio;
+}
+
+void bfq_get_entity(struct io_entity *entity)
+{
+	struct io_queue *ioq = io_entity_to_ioq(entity);
+
+	if (ioq)
+		elv_get_ioq(ioq);
+}
+
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->sched_data = &iog->sched_data;
+}
+
+/**
+ * bfq_find_deepest - find the deepest node that an extraction can modify.
+ * @node: the node being removed.
+ *
+ * Do the first step of an extraction in an rb tree, looking for the
+ * node that will replace @node, and returning the deepest node that
+ * the following modifications to the tree can touch.  If @node is the
+ * last node in the tree return %NULL.
+ */
+static struct rb_node *bfq_find_deepest(struct rb_node *node)
+{
+	struct rb_node *deepest;
+
+	if (node->rb_right == NULL && node->rb_left == NULL)
+		deepest = rb_parent(node);
+	else if (node->rb_right == NULL)
+		deepest = node->rb_left;
+	else if (node->rb_left == NULL)
+		deepest = node->rb_right;
+	else {
+		deepest = rb_next(node);
+		if (deepest->rb_right != NULL)
+			deepest = deepest->rb_right;
+		else if (rb_parent(deepest) != node)
+			deepest = rb_parent(deepest);
+	}
+
+	return deepest;
+}
+
+/**
+ * bfq_active_extract - remove an entity from the active tree.
+ * @st: the service_tree containing the tree.
+ * @entity: the entity being removed.
+ */
+static void bfq_active_extract(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct rb_node *node;
+
+	node = bfq_find_deepest(&entity->rb_node);
+	bfq_extract(&st->active, entity);
+
+	if (node != NULL)
+		bfq_update_active_tree(node);
+}
+
+/**
+ * bfq_idle_insert - insert an entity into the idle tree.
+ * @st: the service tree containing the tree.
+ * @entity: the entity to insert.
+ */
+static void bfq_idle_insert(struct io_service_tree *st,
+					struct io_entity *entity)
+{
+	struct io_entity *first_idle = st->first_idle;
+	struct io_entity *last_idle = st->last_idle;
+
+	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
+		st->first_idle = entity;
+	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
+		st->last_idle = entity;
+
+	bfq_insert(&st->idle, entity);
+}
+
+/**
+ * bfq_forget_entity - remove an entity from the wfq trees.
+ * @st: the service tree.
+ * @entity: the entity being removed.
+ *
+ * Update the device status and forget everything about @entity, putting
+ * the device reference to it, if it is a queue.  Entities belonging to
+ * groups are not refcounted.
+ */
+static void bfq_forget_entity(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	struct io_queue *ioq = NULL;
+
+	BUG_ON(!entity->on_st);
+	entity->on_st = 0;
+	st->wsum -= entity->weight;
+	ioq = io_entity_to_ioq(entity);
+	if (!ioq)
+		return;
+	elv_put_ioq(ioq);
+}
+
+/**
+ * bfq_put_idle_entity - release the idle tree ref of an entity.
+ * @st: service tree for the entity.
+ * @entity: the entity being released.
+ */
+void bfq_put_idle_entity(struct io_service_tree *st,
+				struct io_entity *entity)
+{
+	bfq_idle_extract(st, entity);
+	bfq_forget_entity(st, entity);
+}
+
+/**
+ * bfq_forget_idle - update the idle tree if necessary.
+ * @st: the service tree to act upon.
+ *
+ * To preserve the global O(log N) complexity we only remove one entry here;
+ * as the idle tree will not grow indefinitely this can be done safely.
+ */
+void bfq_forget_idle(struct io_service_tree *st)
+{
+	struct io_entity *first_idle = st->first_idle;
+	struct io_entity *last_idle = st->last_idle;
+
+	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
+	    !bfq_gt(last_idle->finish, st->vtime)) {
+		/*
+		 * Active tree is empty. Pull back vtime to finish time of
+		 * last idle entity on idle tree.
+		 * The rationale seems to be that it reduces the possibility of
+		 * vtime wraparound (bfq_gt(V-F) < 0).
+		 */
+		st->vtime = last_idle->finish;
+	}
+
+	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
+		bfq_put_idle_entity(st, first_idle);
+}
+
+
+static struct io_service_tree *
+__bfq_entity_update_prio(struct io_service_tree *old_st,
+				struct io_entity *entity)
+{
+	struct io_service_tree *new_st = old_st;
+	struct io_queue *ioq = io_entity_to_ioq(entity);
+
+	if (entity->ioprio_changed) {
+		entity->ioprio = entity->new_ioprio;
+		entity->ioprio_class = entity->new_ioprio_class;
+		entity->ioprio_changed = 0;
+
+		/*
+		 * Also update the scaled budget for ioq. Group will get the
+		 * updated budget once ioq is selected to run next.
+		 */
+		if (ioq) {
+			struct elv_fq_data *efqd = ioq->efqd;
+			entity->budget = elv_prio_to_slice(efqd, ioq);
+		}
+
+		old_st->wsum -= entity->weight;
+		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
+
+		/*
+		 * NOTE: here we may be changing the weight too early,
+		 * this will cause unfairness.  The correct approach
+		 * would have required additional complexity to defer
+		 * weight changes to the proper time instants (i.e.,
+		 * when entity->finish <= old_st->vtime).
+		 */
+		new_st = io_entity_service_tree(entity);
+		new_st->wsum += entity->weight;
+
+		if (new_st != old_st)
+			entity->start = new_st->vtime;
+	}
+
+	return new_st;
+}
+
+/**
+ * __bfq_activate_entity - activate an entity.
+ * @entity: the entity being activated.
+ *
+ * Called whenever an entity is activated, i.e., it is not active and one
+ * of its children receives a new request, or has to be reactivated due to
+ * budget exhaustion.  It uses the current budget of the entity (and the
+ * service received if @entity is active) of the queue to calculate its
+ * timestamps.
+ */
+static void __bfq_activate_entity(struct io_entity *entity, int add_front)
+{
+	struct io_sched_data *sd = entity->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+
+	if (entity == sd->active_entity) {
+		BUG_ON(entity->tree != NULL);
+		/*
+		 * If we are requeueing the current entity, we have to
+		 * take care not to charge it for service it has not
+		 * received.
+		 */
+		bfq_calc_finish(entity, entity->service);
+		entity->start = entity->finish;
+		sd->active_entity = NULL;
+	} else if (entity->tree == &st->active) {
+		/*
+		 * Requeueing an entity due to a change of some
+		 * next_active entity below it.  We reuse the old
+		 * start time.
+		 */
+		bfq_active_extract(st, entity);
+	} else if (entity->tree == &st->idle) {
+		/*
+		 * Must be on the idle tree, bfq_idle_extract() will
+		 * check for that.
+		 */
+		bfq_idle_extract(st, entity);
+		entity->start = bfq_gt(st->vtime, entity->finish) ?
+				       st->vtime : entity->finish;
+	} else {
+		/*
+		 * The finish time of the entity may be invalid, and
+		 * it is in the past for sure, otherwise the queue
+		 * would have been on the idle tree.
+		 */
+		entity->start = st->vtime;
+		st->wsum += entity->weight;
+		bfq_get_entity(entity);
+
+		BUG_ON(entity->on_st);
+		entity->on_st = 1;
+	}
+
+	st = __bfq_entity_update_prio(st, entity);
+	/*
+	 * This is to emulate cfq-like functionality where preemption can
+	 * happen within the same class, like a sync queue preempting an async
+	 * queue. Maybe this is not a very good idea from a fairness point of
+	 * view, as the preempting queue gains share. Keeping it for now.
+	 */
+	if (add_front) {
+		struct io_entity *next_entity;
+
+		/*
+		 * Determine the entity which will be dispatched next.
+		 * Use sd->next_active once the hierarchical patch is applied.
+		 */
+		next_entity = bfq_lookup_next_entity(sd, 0);
+
+		if (next_entity && next_entity != entity) {
+			struct io_service_tree *new_st;
+			bfq_timestamp_t delta;
+
+			new_st = io_entity_service_tree(next_entity);
+
+			/*
+			 * At this point, both entities should belong to
+			 * same service tree as cross service tree preemption
+			 * is automatically taken care by algorithm
+			 */
+			BUG_ON(new_st != st);
+			entity->finish = next_entity->finish - 1;
+			delta = bfq_delta(entity->budget, entity->weight);
+			entity->start = entity->finish - delta;
+			if (bfq_gt(entity->start, st->vtime))
+				entity->start = st->vtime;
+		}
+	} else {
+		bfq_calc_finish(entity, entity->budget);
+	}
+	bfq_active_insert(st, entity);
+}
+
+/**
+ * bfq_activate_entity - activate an entity.
+ * @entity: the entity to activate.
+ */
+void bfq_activate_entity(struct io_entity *entity, int add_front)
+{
+	__bfq_activate_entity(entity, add_front);
+}
+
+/**
+ * __bfq_deactivate_entity - deactivate an entity from its service tree.
+ * @entity: the entity to deactivate.
+ * @requeue: if false, the entity will not be put into the idle tree.
+ *
+ * Deactivate an entity, independently from its previous state.  If the
+ * entity was not on a service tree just return, otherwise if it is on
+ * any scheduler tree, extract it from that tree, and if necessary
+ * and if the caller did not specify @requeue, put it on the idle tree.
+ *
+ */
+int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
+{
+	struct io_sched_data *sd = entity->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+	int was_active = entity == sd->active_entity;
+	int ret = 0;
+
+	if (!entity->on_st)
+		return 0;
+
+	BUG_ON(was_active && entity->tree != NULL);
+
+	if (was_active) {
+		bfq_calc_finish(entity, entity->service);
+		sd->active_entity = NULL;
+	} else if (entity->tree == &st->active)
+		bfq_active_extract(st, entity);
+	else if (entity->tree == &st->idle)
+		bfq_idle_extract(st, entity);
+	else if (entity->tree != NULL)
+		BUG();
+
+	if (!requeue || !bfq_gt(entity->finish, st->vtime))
+		bfq_forget_entity(st, entity);
+	else
+		bfq_idle_insert(st, entity);
+
+	BUG_ON(sd->active_entity == entity);
+
+	return ret;
+}
+
+/**
+ * bfq_deactivate_entity - deactivate an entity.
+ * @entity: the entity to deactivate.
+ * @requeue: true if the entity can be put on the idle tree
+ */
+void bfq_deactivate_entity(struct io_entity *entity, int requeue)
+{
+	__bfq_deactivate_entity(entity, requeue);
+}
+
+/**
+ * bfq_update_vtime - update vtime if necessary.
+ * @st: the service tree to act upon.
+ *
+ * If necessary update the service tree vtime to have at least one
+ * eligible entity, skipping to its start time.  Assumes that the
+ * active tree of the device is not empty.
+ *
+ * NOTE: this hierarchical implementation updates vtimes quite often,
+ * we may end up with reactivated tasks getting timestamps after a
+ * vtime skip done because we needed a ->first_active entity on some
+ * intermediate node.
+ */
+static void bfq_update_vtime(struct io_service_tree *st)
+{
+	struct io_entity *entry;
+	struct rb_node *node = st->active.rb_node;
+
+	entry = rb_entry(node, struct io_entity, rb_node);
+	if (bfq_gt(entry->min_start, st->vtime)) {
+		st->vtime = entry->min_start;
+		bfq_forget_idle(st);
+	}
+}
+
+/**
+ * bfq_first_active - find the eligible entity with the smallest finish time
+ * @st: the service tree to select from.
+ *
+ * This function searches the first schedulable entity, starting from the
+ * root of the tree and going on the left every time on this side there is
+ * a subtree with at least one eligible (start <= vtime) entity.  The path
+ * on the right is followed only if a) the left subtree contains no eligible
+ * entities and b) no eligible entity has been found yet.
+ */
+static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
+{
+	struct io_entity *entry, *first = NULL;
+	struct rb_node *node = st->active.rb_node;
+
+	while (node != NULL) {
+		entry = rb_entry(node, struct io_entity, rb_node);
+left:
+		if (!bfq_gt(entry->start, st->vtime))
+			first = entry;
+
+		BUG_ON(bfq_gt(entry->min_start, st->vtime));
+
+		if (node->rb_left != NULL) {
+			entry = rb_entry(node->rb_left,
+					 struct io_entity, rb_node);
+			if (!bfq_gt(entry->min_start, st->vtime)) {
+				node = node->rb_left;
+				goto left;
+			}
+		}
+		if (first != NULL)
+			break;
+		node = node->rb_right;
+	}
+
+	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
+	return first;
+}
+
+/**
+ * __bfq_lookup_next_entity - return the first eligible entity in @st.
+ * @st: the service tree.
+ *
+ * Update the virtual time in @st and return the first eligible entity
+ * it contains.
+ */
+static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
+{
+	struct io_entity *entity;
+
+	if (RB_EMPTY_ROOT(&st->active))
+		return NULL;
+
+	bfq_update_vtime(st);
+	entity = bfq_first_active_entity(st);
+	BUG_ON(bfq_gt(entity->start, st->vtime));
+
+	return entity;
+}
+
+/**
+ * bfq_lookup_next_entity - return the first eligible entity in @sd.
+ * @sd: the sched_data.
+ * @extract: if true the returned entity will be also extracted from @sd.
+ *
+ * NOTE: since we cache the next_active entity at each level of the
+ * hierarchy, the complexity of the lookup can be decreased with
+ * absolutely no effort just returning the cached next_active value;
+ * we prefer to do full lookups to test the consistency of the data
+ * structures.
+ */
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract)
+{
+	struct io_service_tree *st = sd->service_tree;
+	struct io_entity *entity;
+	int i;
+
+	/*
+	 * We should not call lookup when an entity is active, as doing lookup
+	 * can result in an erroneous vtime jump.
+	 */
+	BUG_ON(sd->active_entity != NULL);
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
+		entity = __bfq_lookup_next_entity(st);
+		if (entity != NULL) {
+			if (extract) {
+				bfq_active_extract(st, entity);
+				sd->active_entity = entity;
+			}
+			break;
+		}
+	}
+
+	return entity;
+}
+
+void entity_served(struct io_entity *entity, bfq_service_t served)
+{
+	struct io_service_tree *st;
+
+	st = io_entity_service_tree(entity);
+	entity->service += served;
+	BUG_ON(st->wsum == 0);
+	st->vtime += bfq_delta(served, st->wsum);
+	bfq_forget_idle(st);
+}
+
+/**
+ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
+ * @st: the service tree being flushed.
+ */
+void io_flush_idle_tree(struct io_service_tree *st)
+{
+	struct io_entity *entity = st->first_idle;
+
+	for (; entity != NULL; entity = st->first_idle)
+		__bfq_deactivate_entity(entity, 0);
+}
+
+/* Elevator fair queuing function */
+struct io_queue *rq_ioq(struct request *rq)
+{
+	return rq->ioq;
+}
+
+static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
+{
+	return e->efqd.active_queue;
+}
+
+void *elv_active_sched_queue(struct elevator_queue *e)
+{
+	return ioq_sched_queue(elv_active_ioq(e));
+}
+EXPORT_SYMBOL(elv_active_sched_queue);
+
+int elv_nr_busy_ioq(struct elevator_queue *e)
+{
+	return e->efqd.busy_queues;
+}
+EXPORT_SYMBOL(elv_nr_busy_ioq);
+
+int elv_hw_tag(struct elevator_queue *e)
+{
+	return e->efqd.hw_tag;
+}
+EXPORT_SYMBOL(elv_hw_tag);
+
+/* Helper functions for operating on elevator idle slice timer */
+int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+
+	return mod_timer(&efqd->idle_slice_timer, expires);
+}
+EXPORT_SYMBOL(elv_mod_idle_slice_timer);
+
+int elv_del_idle_slice_timer(struct elevator_queue *eq)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+
+	return del_timer(&efqd->idle_slice_timer);
+}
+EXPORT_SYMBOL(elv_del_idle_slice_timer);
+
+unsigned int elv_get_slice_idle(struct elevator_queue *eq)
+{
+	return eq->efqd.elv_slice_idle;
+}
+EXPORT_SYMBOL(elv_get_slice_idle);
+
+void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
+{
+	entity_served(&ioq->entity, served);
+}
+
+/* Tells whether ioq is queued in root group or not */
+static inline int is_root_group_ioq(struct request_queue *q,
+					struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
+}
+
+/*
+ * sysfs parts below -->
+ */
+static ssize_t
+elv_var_show(unsigned int var, char *page)
+{
+	return sprintf(page, "%d\n", var);
+}
+
+static ssize_t
+elv_var_store(unsigned int *var, const char *page, size_t count)
+{
+	char *p = (char *) page;
+
+	*var = simple_strtoul(p, &p, 10);
+	return count;
+}
+
+#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
+ssize_t __FUNC(struct elevator_queue *e, char *page)		\
+{									\
+	struct elv_fq_data *efqd = &e->efqd;				\
+	unsigned int __data = __VAR;					\
+	if (__CONV)							\
+		__data = jiffies_to_msecs(__data);			\
+	return elv_var_show(__data, (page));				\
+}
+SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
+EXPORT_SYMBOL(elv_slice_idle_show);
+SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
+EXPORT_SYMBOL(elv_slice_sync_show);
+SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
+EXPORT_SYMBOL(elv_slice_async_show);
+#undef SHOW_FUNCTION
+
+#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
+ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
+{									\
+	struct elv_fq_data *efqd = &e->efqd;				\
+	unsigned int __data;						\
+	int ret = elv_var_store(&__data, (page), count);		\
+	if (__data < (MIN))						\
+		__data = (MIN);						\
+	else if (__data > (MAX))					\
+		__data = (MAX);						\
+	if (__CONV)							\
+		*(__PTR) = msecs_to_jiffies(__data);			\
+	else								\
+		*(__PTR) = __data;					\
+	return ret;							\
+}
+STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_idle_store);
+STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_sync_store);
+STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_slice_async_store);
+#undef STORE_FUNCTION
+
+void elv_schedule_dispatch(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (elv_nr_busy_ioq(q->elevator)) {
+		elv_log(efqd, "schedule dispatch");
+		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
+	}
+}
+EXPORT_SYMBOL(elv_schedule_dispatch);
+
+void elv_kick_queue(struct work_struct *work)
+{
+	struct elv_fq_data *efqd =
+		container_of(work, struct elv_fq_data, unplug_work);
+	struct request_queue *q = efqd->queue;
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	blk_start_queueing(q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+void elv_shutdown_timer_wq(struct elevator_queue *e)
+{
+	del_timer_sync(&e->efqd.idle_slice_timer);
+	cancel_work_sync(&e->efqd.unplug_work);
+}
+EXPORT_SYMBOL(elv_shutdown_timer_wq);
+
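+/* Start a fresh time slice for ioq, of length equal to its entity budget. */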
+void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	ioq->slice_end = jiffies + ioq->entity.budget;
+	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
+}
+
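+/*
+ * Track the queue's mean think time (the gap between a request completing
+ * and the next request arriving, clamped to twice elv_slice_idle) as an
+ * exponentially weighted moving average: each new sample contributes 1/8,
+ * the history 7/8.
+ */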
+static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = ioq->efqd;
+	unsigned long elapsed = jiffies - ioq->last_end_request;
+	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
+
+	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
+	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
+	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
+}
+
+/*
+ * Disable idle window if the process thinks too long.
+ * This idle flag can also be updated by the io scheduler.
+ */
+static void elv_ioq_update_idle_window(struct elevator_queue *eq,
+				struct io_queue *ioq, struct request *rq)
+{
+	int old_idle, enable_idle;
+	struct elv_fq_data *efqd = ioq->efqd;
+
+	/*
+	 * Don't idle for async or idle io prio class
+	 */
+	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
+		return;
+
+	enable_idle = old_idle = elv_ioq_idle_window(ioq);
+
+	if (!efqd->elv_slice_idle)
+		enable_idle = 0;
+	else if (ioq_sample_valid(ioq->ttime_samples)) {
+		if (ioq->ttime_mean > efqd->elv_slice_idle)
+			enable_idle = 0;
+		else
+			enable_idle = 1;
+	}
+
+	/*
+	 * From a think time perspective, idling should be enabled. Check with
+	 * the io scheduler if it wants to disable idling based on additional
+	 * considerations like seek pattern.
+	 */
+	if (enable_idle) {
+		if (eq->ops->elevator_update_idle_window_fn)
+			enable_idle = eq->ops->elevator_update_idle_window_fn(
+						eq, ioq->sched_queue, rq);
+		if (!enable_idle)
+			elv_log_ioq(efqd, ioq, "iosched disabled idle");
+	}
+
+	if (old_idle != enable_idle) {
+		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
+		if (enable_idle)
+			elv_mark_ioq_idle_window(ioq);
+		else
+			elv_clear_ioq_idle_window(ioq);
+	}
+}
+
+struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
+{
+	struct io_queue *ioq = NULL;
+
+	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
+	return ioq;
+}
+EXPORT_SYMBOL(elv_alloc_ioq);
+
+void elv_free_ioq(struct io_queue *ioq)
+{
+	kmem_cache_free(elv_ioq_pool, ioq);
+}
+EXPORT_SYMBOL(elv_free_ioq);
+
+int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
+			void *sched_queue, int ioprio_class, int ioprio,
+			int is_sync)
+{
+	struct elv_fq_data *efqd = &eq->efqd;
+	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
+
+	RB_CLEAR_NODE(&ioq->entity.rb_node);
+	atomic_set(&ioq->ref, 0);
+	ioq->efqd = efqd;
+	elv_ioq_set_ioprio_class(ioq, ioprio_class);
+	elv_ioq_set_ioprio(ioq, ioprio);
+	ioq->pid = current->pid;
+	ioq->sched_queue = sched_queue;
+	if (is_sync && !elv_ioq_class_idle(ioq))
+		elv_mark_ioq_idle_window(ioq);
+	bfq_init_entity(&ioq->entity, iog);
+	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
+	if (is_sync)
+		ioq->last_end_request = jiffies;
+
+	return 0;
+}
+EXPORT_SYMBOL(elv_init_ioq);
+
+void elv_put_ioq(struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = ioq->efqd;
+	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
+						efqd);
+
+	BUG_ON(atomic_read(&ioq->ref) <= 0);
+	if (!atomic_dec_and_test(&ioq->ref))
+		return;
+	BUG_ON(ioq->nr_queued);
+	BUG_ON(ioq->entity.tree != NULL);
+	BUG_ON(elv_ioq_busy(ioq));
+	BUG_ON(efqd->active_queue == ioq);
+
+	/* Can be called by outgoing elevator. Don't use q */
+	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
+
+	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
+	elv_log_ioq(efqd, ioq, "put_queue");
+	elv_free_ioq(ioq);
+}
+EXPORT_SYMBOL(elv_put_ioq);
+
+void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
+{
+	struct io_queue *ioq = *ioq_ptr;
+
+	if (ioq != NULL) {
+		/* Drop the reference taken by the io group */
+		elv_put_ioq(ioq);
+		*ioq_ptr = NULL;
+	}
+}
+
+/*
+ * Normally the next io queue to be served is selected from the service tree.
+ * This function allows one to choose a specific io queue to run next, out of
+ * order. This is primarily to accommodate the close_cooperator feature of
+ * cfq.
+ *
+ * Currently this is done only at the root level; to begin with, the close
+ * cooperator feature is supported only for the root group, to make sure
+ * default cfq behavior in a flat hierarchy is not changed.
+ */
+void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	struct io_sched_data *sd = &efqd->root_group->sched_data;
+	struct io_service_tree *st = io_entity_service_tree(entity);
+
+	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
+	BUG_ON(!efqd->busy_queues);
+	BUG_ON(sd != entity->sched_data);
+	BUG_ON(!st);
+
+	bfq_update_vtime(st);
+	bfq_active_extract(st, entity);
+	sd->active_entity = entity;
+	entity->service = 0;
+	elv_log_ioq(efqd, ioq, "set_next_ioq");
+}
+
+/* Get next queue for service. */
+struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = NULL;
+	struct io_queue *ioq = NULL;
+	struct io_sched_data *sd;
+
+	/*
+	 * We should not call lookup when an entity is active, as doing
+	 * lookup can result in an erroneous vtime jump.
+	 */
+	BUG_ON(efqd->active_queue != NULL);
+
+	if (!efqd->busy_queues)
+		return NULL;
+
+	sd = &efqd->root_group->sched_data;
+	entity = bfq_lookup_next_entity(sd, 1);
+
+	BUG_ON(!entity);
+	if (extract)
+		entity->service = 0;
+	ioq = io_entity_to_ioq(entity);
+
+	return ioq;
+}
+
+/*
+ * coop indicates that the io scheduler selected a queue for us and we did
+ * not select the next queue based on fairness.
+ */
+static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int coop)
+{
+	struct request_queue *q = efqd->queue;
+
+	if (ioq) {
+		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
+							efqd->busy_queues);
+		ioq->slice_end = 0;
+
+		elv_clear_ioq_wait_request(ioq);
+		elv_clear_ioq_must_dispatch(ioq);
+		elv_mark_ioq_slice_new(ioq);
+
+		del_timer(&efqd->idle_slice_timer);
+	}
+
+	efqd->active_queue = ioq;
+
+	/* Let iosched know if it wants to take some action */
+	if (ioq) {
+		if (q->elevator->ops->elevator_active_ioq_set_fn)
+			q->elevator->ops->elevator_active_ioq_set_fn(q,
+							ioq->sched_queue, coop);
+	}
+}
+
+/* Get and set a new active queue for service. */
+struct io_queue *elv_set_active_ioq(struct request_queue *q,
+						struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	int coop = 0;
+
+	if (!ioq)
+		ioq = elv_get_next_ioq(q, 1);
+	else {
+		elv_set_next_ioq(q, ioq);
+		/*
+		 * io scheduler selected the next queue for us. Pass this
+		 * info back to the io scheduler. cfq currently uses it
+		 * to reset the coop flag on the queue.
+		 */
+		coop = 1;
+	}
+	__elv_set_active_ioq(efqd, ioq, coop);
+	return ioq;
+}
+
+void elv_reset_active_ioq(struct elv_fq_data *efqd)
+{
+	struct request_queue *q = efqd->queue;
+	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
+
+	if (q->elevator->ops->elevator_active_ioq_reset_fn)
+		q->elevator->ops->elevator_active_ioq_reset_fn(q,
+							ioq->sched_queue);
+	efqd->active_queue = NULL;
+	del_timer(&efqd->idle_slice_timer);
+}
+
+void elv_activate_ioq(struct io_queue *ioq, int add_front)
+{
+	bfq_activate_entity(&ioq->entity, add_front);
+}
+
+void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int requeue)
+{
+	bfq_deactivate_entity(&ioq->entity, requeue);
+}
+
+/* Called when an inactive queue receives a new request. */
+void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
+{
+	BUG_ON(elv_ioq_busy(ioq));
+	BUG_ON(ioq == efqd->active_queue);
+	elv_log_ioq(efqd, ioq, "add to busy");
+	elv_activate_ioq(ioq, 0);
+	elv_mark_ioq_busy(ioq);
+	efqd->busy_queues++;
+	if (elv_ioq_class_rt(ioq)) {
+		struct io_group *iog = ioq_to_io_group(ioq);
+		iog->busy_rt_queues++;
+	}
+}
+
+void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
+					int requeue)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	BUG_ON(!elv_ioq_busy(ioq));
+	BUG_ON(ioq->nr_queued);
+	elv_log_ioq(efqd, ioq, "del from busy");
+	elv_clear_ioq_busy(ioq);
+	BUG_ON(efqd->busy_queues == 0);
+	efqd->busy_queues--;
+	if (elv_ioq_class_rt(ioq)) {
+		struct io_group *iog = ioq_to_io_group(ioq);
+		iog->busy_rt_queues--;
+	}
+
+	elv_deactivate_ioq(efqd, ioq, requeue);
+}
+
+/*
+ * Do the accounting. Determine how much service (in terms of time slices)
+ * the current queue used and adjust the start and finish time of the queue
+ * and the vtime of the tree accordingly.
+ *
+ * Determining the service used in terms of time is tricky in certain
+ * situations, especially when the underlying device supports command queuing
+ * and requests from multiple queues can be in flight at the same time; it is
+ * then not clear which queue consumed how much disk time.
+ *
+ * To mitigate this problem, cfq starts the time slice of the queue only
+ * after the first request from the queue has completed. This does not work
+ * very well if we expire the queue before the first (or any further) request
+ * from the queue has finished. For seeky queues, we will expire the queue
+ * after dispatching a few requests, without waiting, and start dispatching
+ * from the next queue.
+ *
+ * It is not clear how to determine the time consumed by the queue in such
+ * scenarios. Currently, as a crude approximation, we charge 25% of the time
+ * slice for such cases. A better mechanism is needed for accurate accounting.
+ */
+void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
+
+	assert_spin_locked(q->queue_lock);
+	elv_log_ioq(efqd, ioq, "slice expired");
+
+	if (elv_ioq_wait_request(ioq))
+		del_timer(&efqd->idle_slice_timer);
+
+	elv_clear_ioq_wait_request(ioq);
+
+	/*
+	 * If ioq->slice_end == 0, the queue was expired before the first
+	 * request from the queue got completed. Of course we are not planning
+	 * to idle on the queue, otherwise we would not have expired it.
+	 *
+	 * Charge for the 25% slice in such cases. This is not the best thing
+	 * to do but at the same time not very sure what's the next best
+	 * thing to do.
+	 *
+	 * This arises from the fact that we don't have the notion of
+	 * one queue being operational at one time. io scheduler can dispatch
+	 * requests from multiple queues in one dispatch round. Ideally for
+	 * more accurate accounting of the exact disk time used, one
+	 * should dispatch requests from only one queue and wait for all
+	 * the requests to finish. But this will reduce throughput.
+	 */
+	if (!ioq->slice_end)
+		slice_used = entity->budget/4;
+	else {
+		if (time_after(ioq->slice_end, jiffies)) {
+			slice_unused = ioq->slice_end - jiffies;
+			if (slice_unused == entity->budget) {
+				/*
+				 * queue got expired immediately after
+				 * completing first request. Charge 25% of
+				 * slice.
+				 */
+				slice_used = entity->budget/4;
+			} else
+				slice_used = entity->budget - slice_unused;
+		} else {
+			slice_overshoot = jiffies - ioq->slice_end;
+			slice_used = entity->budget + slice_overshoot;
+		}
+	}
+
+	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
+			jiffies);
+	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
+				slice_used, entity->budget, slice_overshoot);
+	elv_ioq_served(ioq, slice_used);
+
+	BUG_ON(ioq != efqd->active_queue);
+	elv_reset_active_ioq(efqd);
+
+	if (!ioq->nr_queued)
+		elv_del_ioq_busy(q->elevator, ioq, 1);
+	else
+		elv_activate_ioq(ioq, 0);
+}
+EXPORT_SYMBOL(__elv_ioq_slice_expired);
+
+/*
+ *  Expire the ioq.
+ */
+void elv_ioq_slice_expired(struct request_queue *q)
+{
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+
+	if (ioq)
+		__elv_ioq_slice_expired(q, ioq);
+}
+
+/*
+ * Check if new_ioq should preempt the currently active queue. Return 0 for
+ * no, or if we aren't sure; a 1 will cause a preemption attempt.
+ */
+int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
+			struct request *rq)
+{
+	struct io_queue *ioq;
+	struct elevator_queue *eq = q->elevator;
+	struct io_entity *entity, *new_entity;
+
+	ioq = elv_active_ioq(eq);
+
+	if (!ioq)
+		return 0;
+
+	entity = &ioq->entity;
+	new_entity = &new_ioq->entity;
+
+	/*
+	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
+	 */
+
+	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
+	    && entity->ioprio_class != IOPRIO_CLASS_RT)
+		return 1;
+	/*
+	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
+	 */
+
+	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
+	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
+		return 1;
+
+	/*
+	 * Check with io scheduler if it has additional criterion based on
+	 * which it wants to preempt existing queue.
+	 */
+	if (eq->ops->elevator_should_preempt_fn)
+		return eq->ops->elevator_should_preempt_fn(q,
+						ioq_sched_queue(new_ioq), rq);
+
+	return 0;
+}
+
+static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
+{
+	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
+	elv_ioq_slice_expired(q);
+
+	/*
+	 * Put the new queue at the front of the current list,
+	 * so we know that it will be selected next.
+	 */
+
+	elv_activate_ioq(ioq, 1);
+	elv_ioq_set_slice_end(ioq, 0);
+	elv_mark_ioq_slice_new(ioq);
+}
+
+void elv_ioq_request_add(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	BUG_ON(!efqd);
+	BUG_ON(!ioq);
+	efqd->rq_queued++;
+	ioq->nr_queued++;
+
+	if (!elv_ioq_busy(ioq))
+		elv_add_ioq_busy(efqd, ioq);
+
+	elv_ioq_update_io_thinktime(ioq);
+	elv_ioq_update_idle_window(q->elevator, ioq, rq);
+
+	if (ioq == elv_active_ioq(q->elevator)) {
+		/*
+		 * Remember that we saw a request from this process, but
+		 * don't start queuing just yet. Otherwise we risk seeing lots
+		 * of tiny requests, because we disrupt the normal plugging
+		 * and merging. If the request is already larger than a single
+		 * page, let it rip immediately. For that case we assume that
+		 * merging is already done. Ditto for a busy system that
+		 * has other work pending, don't risk delaying until the
+		 * idle timer unplug to continue working.
+		 */
+		if (elv_ioq_wait_request(ioq)) {
+			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
+			    efqd->busy_queues > 1) {
+				del_timer(&efqd->idle_slice_timer);
+				blk_start_queueing(q);
+			}
+			elv_mark_ioq_must_dispatch(ioq);
+		}
+	} else if (elv_should_preempt(q, ioq, rq)) {
+		/*
+		 * not the active queue - expire current slice if it is
+		 * idle and has expired its mean thinktime or this new queue
+		 * has some old slice time left and is of higher priority or
+		 * this new queue is RT and the current one is BE
+		 */
+		elv_preempt_queue(q, ioq);
+		blk_start_queueing(q);
+	}
+}
+
+void elv_idle_slice_timer(unsigned long data)
+{
+	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
+	struct io_queue *ioq;
+	unsigned long flags;
+	struct request_queue *q = efqd->queue;
+
+	elv_log(efqd, "idle timer fired");
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	ioq = efqd->active_queue;
+
+	if (ioq) {
+
+		/*
+		 * We saw a request before the queue expired, let it through
+		 */
+		if (elv_ioq_must_dispatch(ioq))
+			goto out_kick;
+
+		/*
+		 * expired
+		 */
+		if (elv_ioq_slice_used(ioq))
+			goto expire;
+
+		/*
+		 * only expire and reinvoke request handler, if there are
+		 * other queues with pending requests
+		 */
+		if (!elv_nr_busy_ioq(q->elevator))
+			goto out_cont;
+
+		/*
+		 * not expired and it has a request pending, let it dispatch
+		 */
+		if (ioq->nr_queued)
+			goto out_kick;
+	}
+expire:
+	elv_ioq_slice_expired(q);
+out_kick:
+	elv_schedule_dispatch(q);
+out_cont:
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+void elv_ioq_arm_slice_timer(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+	unsigned long sl;
+
+	BUG_ON(!ioq);
+
+	/*
+	 * SSD device without seek penalty, disable idling. But only do so
+	 * for devices that support queuing, otherwise we still have a problem
+	 * with sync vs async workloads.
+	 */
+	if (blk_queue_nonrot(q) && efqd->hw_tag)
+		return;
+
+	/*
+	 * still requests with the driver, don't idle
+	 */
+	if (efqd->rq_in_driver)
+		return;
+
+	/*
+	 * idle is disabled, either manually or by past process history
+	 */
+	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+		return;
+
+	/*
+	 * Maybe the iosched has got its own idling logic. In that case the io
+	 * scheduler will take care of arming the timer, if need be.
+	 */
+	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
+		q->elevator->ops->elevator_arm_slice_timer_fn(q,
+						ioq->sched_queue);
+	} else {
+		elv_mark_ioq_wait_request(ioq);
+		sl = efqd->elv_slice_idle;
+		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
+		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
+	}
+}
+
+/* Common layer function to select the next queue to dispatch from */
+void *elv_fq_select_ioq(struct request_queue *q, int force)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
+	struct io_group *iog;
+
+	if (!elv_nr_busy_ioq(q->elevator))
+		return NULL;
+
+	if (ioq == NULL)
+		goto new_queue;
+
+	/*
+	 * Force dispatch. Continue to dispatch from current queue as long
+	 * as it has requests.
+	 */
+	if (unlikely(force)) {
+		if (ioq->nr_queued)
+			goto keep_queue;
+		else
+			goto expire;
+	}
+
+	/*
+	 * The active queue has run out of time, expire it and select new.
+	 */
+	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
+		goto expire;
+
+	/*
+	 * If we have an RT cfqq waiting, then we pre-empt the current non-rt
+	 * cfqq.
+	 */
+	iog = ioq_to_io_group(ioq);
+
+	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
+		/*
+		 * We simulate this as cfqq timed out so that it gets to bank
+		 * the remainder of its time slice.
+		 */
+		elv_log_ioq(efqd, ioq, "preempt");
+		goto expire;
+	}
+
+	/*
+	 * The active queue has requests and isn't expired, allow it to
+	 * dispatch.
+	 */
+
+	if (ioq->nr_queued)
+		goto keep_queue;
+
+	/*
+	 * If another queue has a request waiting within our mean seek
+	 * distance, let it run.  The expire code will check for close
+	 * cooperators and put the close queue at the front of the service
+	 * tree.
+	 */
+	new_ioq = elv_close_cooperator(q, ioq, 0);
+	if (new_ioq)
+		goto expire;
+
+	/*
+	 * No requests pending. If the active queue still has requests in
+	 * flight or is idling for a new request, allow either of these
+	 * conditions to happen (or time out) before selecting a new queue.
+	 */
+
+	if (timer_pending(&efqd->idle_slice_timer) ||
+	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
+expire:
+	elv_ioq_slice_expired(q);
+new_queue:
+	ioq = elv_set_active_ioq(q, new_ioq);
+keep_queue:
+	return ioq;
+}
+
+/* A request got removed from io_queue. Do the accounting */
+void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
+{
+	struct io_queue *ioq;
+	struct elv_fq_data *efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	ioq = rq->ioq;
+	BUG_ON(!ioq);
+	ioq->nr_queued--;
+
+	efqd = ioq->efqd;
+	BUG_ON(!efqd);
+	efqd->rq_queued--;
+
+	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
+		elv_del_ioq_busy(e, ioq, 1);
+}
+
+/* A request got dispatched. Do the accounting. */
+void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
+{
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	BUG_ON(!ioq);
+	elv_ioq_request_dispatched(ioq);
+	elv_ioq_request_removed(e, rq);
+	elv_clear_ioq_must_dispatch(ioq);
+}
+
+void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	efqd->rq_in_driver++;
+	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
+						efqd->rq_in_driver);
+}
+
+void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	WARN_ON(!efqd->rq_in_driver);
+	efqd->rq_in_driver--;
+	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
+						efqd->rq_in_driver);
+}
+
+/*
+ * Update hw_tag based on peak queue depth over 50 samples under
+ * sufficient load.
+ */
+static void elv_update_hw_tag(struct elv_fq_data *efqd)
+{
+	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
+		efqd->rq_in_driver_peak = efqd->rq_in_driver;
+
+	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
+	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
+		return;
+
+	if (efqd->hw_tag_samples++ < 50)
+		return;
+
+	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
+		efqd->hw_tag = 1;
+	else
+		efqd->hw_tag = 0;
+
+	efqd->hw_tag_samples = 0;
+	efqd->rq_in_driver_peak = 0;
+}
+
+/*
+ * If the io scheduler keeps track of close cooperators, check with it
+ * whether it has a closely co-operating queue.
+ */
+static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
+					struct io_queue *ioq, int probe)
+{
+	struct elevator_queue *e = q->elevator;
+	struct io_queue *new_ioq = NULL;
+
+	/*
+	 * Currently this feature is supported only for flat hierarchy or
+	 * root group queues so that default cfq behavior is not changed.
+	 */
+	if (!is_root_group_ioq(q, ioq))
+		return NULL;
+
+	if (q->elevator->ops->elevator_close_cooperator_fn)
+		new_ioq = e->ops->elevator_close_cooperator_fn(q,
+						ioq->sched_queue, probe);
+
+	/* Only select co-operating queue if it belongs to root group */
+	if (new_ioq && !is_root_group_ioq(q, new_ioq))
+		return NULL;
+
+	return new_ioq;
+}
+
+/* A request got completed from io_queue. Do the accounting. */
+void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
+{
+	const int sync = rq_is_sync(rq);
+	struct io_queue *ioq;
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	ioq = rq->ioq;
+
+	elv_log_ioq(efqd, ioq, "complete");
+
+	elv_update_hw_tag(efqd);
+
+	WARN_ON(!efqd->rq_in_driver);
+	WARN_ON(!ioq->dispatched);
+	efqd->rq_in_driver--;
+	ioq->dispatched--;
+
+	if (sync)
+		ioq->last_end_request = jiffies;
+
+	/*
+	 * If this is the active queue, check if it needs to be expired,
+	 * or if we want to idle in case it has no pending requests.
+	 */
+
+	if (elv_active_ioq(q->elevator) == ioq) {
+		if (elv_ioq_slice_new(ioq)) {
+			elv_ioq_set_prio_slice(q, ioq);
+			elv_clear_ioq_slice_new(ioq);
+		}
+		/*
+		 * If there are no requests waiting in this queue, and
+		 * there are other queues ready to issue requests, AND
+		 * those other queues are issuing requests within our
+		 * mean seek distance, give them a chance to run instead
+		 * of idling.
+		 */
+		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
+			elv_ioq_slice_expired(q);
+		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
+			 && sync && !rq_noidle(rq))
+			elv_ioq_arm_slice_timer(q);
+	}
+
+	if (!efqd->rq_in_driver)
+		elv_schedule_dispatch(q);
+}
+
+struct io_group *io_lookup_io_group_current(struct request_queue *q)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	return efqd->root_group;
+}
+EXPORT_SYMBOL(io_lookup_io_group_current);
+
+void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
+					int ioprio)
+{
+	struct io_queue *ioq = NULL;
+
+	switch (ioprio_class) {
+	case IOPRIO_CLASS_RT:
+		ioq = iog->async_queue[0][ioprio];
+		break;
+	case IOPRIO_CLASS_BE:
+		ioq = iog->async_queue[1][ioprio];
+		break;
+	case IOPRIO_CLASS_IDLE:
+		ioq = iog->async_idle_queue;
+		break;
+	default:
+		BUG();
+	}
+
+	if (ioq)
+		return ioq->sched_queue;
+	return NULL;
+}
+EXPORT_SYMBOL(io_group_async_queue_prio);
+
+void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
+					int ioprio, struct io_queue *ioq)
+{
+	switch (ioprio_class) {
+	case IOPRIO_CLASS_RT:
+		iog->async_queue[0][ioprio] = ioq;
+		break;
+	case IOPRIO_CLASS_BE:
+		iog->async_queue[1][ioprio] = ioq;
+		break;
+	case IOPRIO_CLASS_IDLE:
+		iog->async_idle_queue = ioq;
+		break;
+	default:
+		BUG();
+	}
+
+	/*
+	 * Take the group reference and pin the queue. Group exit will
+	 * clean it up
+	 */
+	elv_get_ioq(ioq);
+}
+EXPORT_SYMBOL(io_group_set_async_queue);
+
+/*
+ * Release all the io group references to its async queues.
+ */
+void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
+{
+	int i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < IOPRIO_BE_NR; j++)
+			elv_release_ioq(e, &iog->async_queue[i][j]);
+
+	/* Free up async idle queue */
+	elv_release_ioq(e, &iog->async_idle_queue);
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	return iog;
+}
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_group *iog = e->efqd.root_group;
+	struct io_service_tree *st;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	kfree(iog);
+}
+
+static void elv_slab_kill(void)
+{
+	/*
+	 * Caller already ensured that pending RCU callbacks are completed,
+	 * so we should have no busy allocations at this point.
+	 */
+	if (elv_ioq_pool)
+		kmem_cache_destroy(elv_ioq_pool);
+}
+
+static int __init elv_slab_setup(void)
+{
+	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
+	if (!elv_ioq_pool)
+		goto fail;
+
+	return 0;
+fail:
+	elv_slab_kill();
+	return -ENOMEM;
+}
+
+/* Initialize fair queueing data associated with elevator */
+int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
+{
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return 0;
+
+	iog = io_alloc_root_group(q, e, efqd);
+	if (iog == NULL)
+		return 1;
+
+	efqd->root_group = iog;
+	efqd->queue = q;
+
+	init_timer(&efqd->idle_slice_timer);
+	efqd->idle_slice_timer.function = elv_idle_slice_timer;
+	efqd->idle_slice_timer.data = (unsigned long) efqd;
+
+	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
+
+	efqd->elv_slice[0] = elv_slice_async;
+	efqd->elv_slice[1] = elv_slice_sync;
+	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->hw_tag = 1;
+
+	return 0;
+}
+
+/*
+ * elv_exit_fq_data is called before we call elevator_exit_fn. Before
+ * we ask the elevator to clean up its queues, we do the cleanup here so
+ * that all the group and idle tree references to an ioq are dropped. Later,
+ * during elevator cleanup, the ioc reference is dropped, which leads to
+ * removal of the ioscheduler queue as well as the associated ioq object.
+ */
+void elv_exit_fq_data(struct elevator_queue *e)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	elv_shutdown_timer_wq(e);
+
+	BUG_ON(timer_pending(&efqd->idle_slice_timer));
+	io_free_root_group(e);
+}
+
+/*
+ * This is called after the io scheduler has cleaned up its data structures.
+ * I don't think that this function is required. Right now just keeping it
+ * because cfq cleans up the timer and work queue again after freeing up
+ * io contexts. To me the io scheduler has already been drained out, and all
+ * the active queues have already been expired, so the timer and work queue
+ * should not have been activated during the cleanup process.
+ *
+ * Keeping it here for the time being. Will get rid of it later.
+ */
+void elv_exit_fq_data_post(struct elevator_queue *e)
+{
+	struct elv_fq_data *efqd = &e->efqd;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return;
+
+	elv_shutdown_timer_wq(e);
+	BUG_ON(timer_pending(&efqd->idle_slice_timer));
+}
+
+
+static int __init elv_fq_init(void)
+{
+	if (elv_slab_setup())
+		return -ENOMEM;
+
+	/* could be 0 on HZ < 1000 setups */
+
+	if (!elv_slice_async)
+		elv_slice_async = 1;
+
+	if (!elv_slice_idle)
+		elv_slice_idle = 1;
+
+	return 0;
+}
+
+module_init(elv_fq_init);
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
new file mode 100644
index 0000000..5b6c1cc
--- /dev/null
+++ b/block/elevator-fq.h
@@ -0,0 +1,473 @@
+/*
+ * BFQ: data structures and common functions prototypes.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+ *
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ *		      Paolo Valente <paolo.valente@unimore.it>
+ * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
+ * 	              Nauman Rafique <nauman@google.com>
+ */
+
+#include <linux/blkdev.h>
+
+#ifndef _BFQ_SCHED_H
+#define _BFQ_SCHED_H
+
+#define IO_IOPRIO_CLASSES	3
+
+typedef u64 bfq_timestamp_t;
+typedef unsigned long bfq_weight_t;
+typedef unsigned long bfq_service_t;
+struct io_entity;
+struct io_queue;
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+
+#define ELV_ATTR(name) \
+	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
+
+/**
+ * struct bfq_service_tree - per ioprio_class service tree.
+ * @active: tree for active entities (i.e., those backlogged).
+ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
+ * @first_idle: idle entity with minimum F_i.
+ * @last_idle: idle entity with maximum F_i.
+ * @vtime: scheduler virtual time.
+ * @wsum: scheduler weight sum; active and idle entities contribute to it.
+ *
+ * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
+ * ioprio_class has its own independent scheduler, and so its own
+ * bfq_service_tree.  All the fields are protected by the queue lock
+ * of the containing efqd.
+ */
+struct io_service_tree {
+	struct rb_root active;
+	struct rb_root idle;
+
+	struct io_entity *first_idle;
+	struct io_entity *last_idle;
+
+	bfq_timestamp_t vtime;
+	bfq_weight_t wsum;
+};
+
+/**
+ * struct bfq_sched_data - multi-class scheduler.
+ * @active_entity: entity under service.
+ * @next_active: head-of-the-line entity in the scheduler.
+ * @service_tree: array of service trees, one per ioprio_class.
+ *
+ * bfq_sched_data is the basic scheduler queue.  It supports three
+ * ioprio_classes, and can be used either as a toplevel queue or as
+ * an intermediate queue on a hierarchical setup.
+ * @next_active points to the active entity of the sched_data service
+ * trees that will be scheduled next.
+ *
+ * The supported ioprio_classes are the same as in CFQ, in descending
+ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
+ * Requests from higher priority queues are served before all the
+ * requests from lower priority queues; among queues of the same
+ * class, requests are served according to B-WF2Q+.
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+struct io_sched_data {
+	struct io_entity *active_entity;
+	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
+};
+
+/**
+ * struct bfq_entity - schedulable entity.
+ * @rb_node: service_tree member.
+ * @on_st: flag, true if the entity is on a tree (either the active or
+ *         the idle one of its service_tree).
+ * @finish: B-WF2Q+ finish timestamp (aka F_i).
+ * @start: B-WF2Q+ start timestamp (aka S_i).
+ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
+ * @min_start: minimum start time of the (active) subtree rooted at
+ *             this entity; used for O(log N) lookups into active trees.
+ * @service: service received during the last round of service.
+ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
+ * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
+ * @parent: parent entity, for hierarchical scheduling.
+ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
+ *                 associated scheduler queue, %NULL on leaf nodes.
+ * @sched_data: the scheduler queue this entity belongs to.
+ * @ioprio: the ioprio in use.
+ * @new_ioprio: when an ioprio change is requested, the new ioprio value
+ * @ioprio_class: the ioprio_class in use.
+ * @new_ioprio_class: when an ioprio_class change is requested, the new
+ *                    ioprio_class value.
+ * @ioprio_changed: flag, true when the user requested an ioprio or
+ *                  ioprio_class change.
+ *
+ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
+ * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
+ * entity belongs to the sched_data of the parent group in the cgroup
+ * hierarchy.  Non-leaf entities have also their own sched_data, stored
+ * in @my_sched_data.
+ *
+ * Each entity stores independently its priority values; this would allow
+ * different weights on different devices, but this functionality is not
+ * exported to userspace yet.  Priorities are updated lazily, first
+ * storing the new values into the new_* fields, then setting the
+ * @ioprio_changed flag.  As soon as there is a transition in the entity
+ * state that allows the priority update to take place the effective and
+ * the requested priority values are synchronized.
+ *
+ * The weight value is calculated from the ioprio to export the same
+ * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
+ * queues that do not spend too much time to consume their budget and
+ * have true sequential behavior, and when there are no external factors
+ * breaking anticipation) the relative weights at each level of the
+ * cgroups hierarchy should be guaranteed.
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+struct io_entity {
+	struct rb_node rb_node;
+
+	int on_st;
+
+	bfq_timestamp_t finish;
+	bfq_timestamp_t start;
+
+	struct rb_root *tree;
+
+	bfq_timestamp_t min_start;
+
+	bfq_service_t service, budget;
+	bfq_weight_t weight;
+
+	struct io_entity *parent;
+
+	struct io_sched_data *my_sched_data;
+	struct io_sched_data *sched_data;
+
+	unsigned short ioprio, new_ioprio;
+	unsigned short ioprio_class, new_ioprio_class;
+
+	int ioprio_changed;
+};
+
+/*
+ * A common structure embedded by every io scheduler into its respective
+ * queue structure.
+ */
+struct io_queue {
+	struct io_entity entity;
+	atomic_t ref;
+	unsigned int flags;
+
+	/* Pointer to generic elevator data structure */
+	struct elv_fq_data *efqd;
+	pid_t pid;
+
+	/* Number of requests queued on this io queue */
+	unsigned long nr_queued;
+
+	/* Requests dispatched from this queue */
+	int dispatched;
+
+	/* Keep a track of think time of processes in this queue */
+	unsigned long last_end_request;
+	unsigned long ttime_total;
+	unsigned long ttime_samples;
+	unsigned long ttime_mean;
+
+	unsigned long slice_end;
+
+	/* Pointer to io scheduler's queue */
+	void *sched_queue;
+};
+
+struct io_group {
+	struct io_sched_data sched_data;
+
+	/* async_queue and idle_queue are used only for cfq */
+	struct io_queue *async_queue[2][IOPRIO_BE_NR];
+	struct io_queue *async_idle_queue;
+
+	/*
+	 * Used to track any pending rt requests so we can pre-empt current
+	 * non-RT cfqq in service when this value is non-zero.
+	 */
+	unsigned int busy_rt_queues;
+};
+
+struct elv_fq_data {
+	struct io_group *root_group;
+
+	struct request_queue *queue;
+	unsigned int busy_queues;
+
+	/* Number of requests queued */
+	int rq_queued;
+
+	/* Pointer to the ioscheduler queue being served */
+	void *active_queue;
+
+	int rq_in_driver;
+	int hw_tag;
+	int hw_tag_samples;
+	int rq_in_driver_peak;
+
+	/*
+	 * The elevator fair queuing layer has the capability to provide
+	 * idling to ensure fairness for processes doing dependent reads.
+	 * This might be needed to ensure fairness between two processes doing
+	 * synchronous reads in two different cgroups. noop and deadline don't
+	 * have any notion of anticipation/idling. As of now, these are the
+	 * users of this functionality.
+	 */
+	unsigned int elv_slice_idle;
+	struct timer_list idle_slice_timer;
+	struct work_struct unplug_work;
+
+	unsigned int elv_slice[2];
+};
+
+extern int elv_slice_idle;
+extern int elv_slice_async;
+
+/* Logging facilities. */
+#define elv_log_ioq(efqd, ioq, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
+				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
+
+#define elv_log(efqd, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
+
+#define ioq_sample_valid(samples)   ((samples) > 80)
+
+/* Some shared queue flag manipulation functions among elevators */
+
+enum elv_queue_state_flags {
+	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
+	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
+	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
+	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
+	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
+	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
+	ELV_QUEUE_FLAG_NR,
+};
+
+#define ELV_IO_QUEUE_FLAG_FNS(name)					\
+static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
+{                                                                       \
+	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
+}                                                                       \
+static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
+{                                                                       \
+	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
+}                                                                       \
+static inline int elv_ioq_##name(struct io_queue *ioq)         		\
+{                                                                       \
+	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
+}
+
+ELV_IO_QUEUE_FLAG_FNS(busy)
+ELV_IO_QUEUE_FLAG_FNS(sync)
+ELV_IO_QUEUE_FLAG_FNS(wait_request)
+ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
+ELV_IO_QUEUE_FLAG_FNS(idle_window)
+ELV_IO_QUEUE_FLAG_FNS(slice_new)
+
+static inline struct io_service_tree *
+io_entity_service_tree(struct io_entity *entity)
+{
+	struct io_sched_data *sched_data = entity->sched_data;
+	unsigned int idx = entity->ioprio_class - 1;
+
+	BUG_ON(idx >= IO_IOPRIO_CLASSES);
+	BUG_ON(sched_data == NULL);
+
+	return sched_data->service_tree + idx;
+}
+
+/* A request got dispatched from the io_queue. Do the accounting. */
+static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
+{
+	ioq->dispatched++;
+}
+
+static inline int elv_ioq_slice_used(struct io_queue *ioq)
+{
+	if (elv_ioq_slice_new(ioq))
+		return 0;
+	if (time_before(jiffies, ioq->slice_end))
+		return 0;
+
+	return 1;
+}
+
+/* How many requests are currently dispatched from the queue */
+static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
+{
+	return ioq->dispatched;
+}
+
+/* How many requests are currently queued in the queue */
+static inline int elv_ioq_nr_queued(struct io_queue *ioq)
+{
+	return ioq->nr_queued;
+}
+
+static inline void elv_get_ioq(struct io_queue *ioq)
+{
+	atomic_inc(&ioq->ref);
+}
+
+static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
+						unsigned long slice_end)
+{
+	ioq->slice_end = slice_end;
+}
+
+static inline int elv_ioq_class_idle(struct io_queue *ioq)
+{
+	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
+}
+
+static inline int elv_ioq_class_rt(struct io_queue *ioq)
+{
+	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
+}
+
+static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
+{
+	return ioq->entity.new_ioprio_class;
+}
+
+static inline int elv_ioq_ioprio(struct io_queue *ioq)
+{
+	return ioq->entity.new_ioprio;
+}
+
+static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
+						int ioprio_class)
+{
+	ioq->entity.new_ioprio_class = ioprio_class;
+	ioq->entity.ioprio_changed = 1;
+}
+
+static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
+{
+	ioq->entity.new_ioprio = ioprio;
+	ioq->entity.ioprio_changed = 1;
+}
+
+static inline void *ioq_sched_queue(struct io_queue *ioq)
+{
+	if (ioq)
+		return ioq->sched_queue;
+	return NULL;
+}
+
+static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
+{
+	return container_of(ioq->entity.sched_data, struct io_group,
+						sched_data);
+}
+
+extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
+						size_t count);
+extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
+						size_t count);
+extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
+						size_t count);
+
+/* Functions used by elevator.c */
+extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
+extern void elv_exit_fq_data(struct elevator_queue *e);
+extern void elv_exit_fq_data_post(struct elevator_queue *e);
+
+extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
+extern void elv_ioq_request_removed(struct elevator_queue *e,
+					struct request *rq);
+extern void elv_fq_dispatched_request(struct elevator_queue *e,
+					struct request *rq);
+
+extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
+extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
+
+extern void elv_ioq_completed_request(struct request_queue *q,
+				struct request *rq);
+
+extern void *elv_fq_select_ioq(struct request_queue *q, int force);
+extern struct io_queue *rq_ioq(struct request *rq);
+
+/* Functions used by io schedulers */
+extern void elv_put_ioq(struct io_queue *ioq);
+extern void __elv_ioq_slice_expired(struct request_queue *q,
+					struct io_queue *ioq);
+extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
+		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
+extern void elv_schedule_dispatch(struct request_queue *q);
+extern int elv_hw_tag(struct elevator_queue *e);
+extern void *elv_active_sched_queue(struct elevator_queue *e);
+extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
+					unsigned long expires);
+extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
+extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
+extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
+					int ioprio);
+extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
+					int ioprio, struct io_queue *ioq);
+extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
+extern int elv_nr_busy_ioq(struct elevator_queue *e);
+extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
+extern void elv_free_ioq(struct io_queue *ioq);
+
+#else /* CONFIG_ELV_FAIR_QUEUING */
+
+static inline int elv_init_fq_data(struct request_queue *q,
+					struct elevator_queue *e)
+{
+	return 0;
+}
+
+static inline void elv_exit_fq_data(struct elevator_queue *e) {}
+static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
+
+static inline void elv_fq_activate_rq(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_fq_deactivate_rq(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_fq_dispatched_request(struct elevator_queue *e,
+						struct request *rq)
+{
+}
+
+static inline void elv_ioq_request_removed(struct elevator_queue *e,
+						struct request *rq)
+{
+}
+
+static inline void elv_ioq_request_add(struct request_queue *q,
+					struct request *rq)
+{
+}
+
+static inline void elv_ioq_completed_request(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
+static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
+static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
+{
+	return NULL;
+}
+#endif /* CONFIG_ELV_FAIR_QUEUING */
+#endif /* _BFQ_SCHED_H */
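
To make the timestamping described in the struct io_entity comment above
concrete, here is a minimal userspace sketch (illustration only, not part of
this patch) of how a B-WF2Q+ finish timestamp follows from the entity's
budget and its ioprio-derived weight:

#include <stdio.h>

#define IOPRIO_BE_NR	8

/* weight as described above: IOPRIO_BE_NR - ioprio, so ioprio 0 is heaviest */
static unsigned long ioprio_to_weight(unsigned short ioprio)
{
	return IOPRIO_BE_NR - ioprio;
}

/* F_i = S_i + budget/weight: heavier entities accrue finish time more slowly */
static unsigned long long entity_finish(unsigned long long start,
					unsigned long budget,
					unsigned long weight)
{
	return start + budget / weight;
}

int main(void)
{
	unsigned long budget = 100;	/* arbitrary service units */

	printf("ioprio 0: finish = %llu\n",
	       entity_finish(0, budget, ioprio_to_weight(0)));	/* 0 + 100/8 */
	printf("ioprio 7: finish = %llu\n",
	       entity_finish(0, budget, ioprio_to_weight(7)));	/* 0 + 100/1 */
	return 0;
}

The service trees then order entities by these finish times, so for the same
start time and budget the ioprio 0 entity becomes eligible again much sooner
than the ioprio 7 one.
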
diff --git a/block/elevator.c b/block/elevator.c
index 7073a90..c2f07f5 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
 	for (i = 0; i < ELV_HASH_ENTRIES; i++)
 		INIT_HLIST_HEAD(&eq->hash[i]);
 
+	if (elv_init_fq_data(q, eq))
+		goto err;
+
 	return eq;
 err:
 	kfree(eq);
@@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
 void elevator_exit(struct elevator_queue *e)
 {
 	mutex_lock(&e->sysfs_lock);
+	elv_exit_fq_data(e);
 	if (e->ops->elevator_exit_fn)
 		e->ops->elevator_exit_fn(e);
 	e->ops = NULL;
+	elv_exit_fq_data_post(e);
 	mutex_unlock(&e->sysfs_lock);
 
 	kobject_put(&e->kobj);
@@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	elv_fq_activate_rq(q, rq);
+
 	if (e->ops->elevator_activate_req_fn)
 		e->ops->elevator_activate_req_fn(q, rq);
 }
@@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	elv_fq_deactivate_rq(q, rq);
+
 	if (e->ops->elevator_deactivate_req_fn)
 		e->ops->elevator_deactivate_req_fn(q, rq);
 }
@@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 	elv_rqhash_del(q, rq);
 
 	q->nr_sorted--;
+	elv_fq_dispatched_request(q->elevator, rq);
 
 	boundary = q->end_sector;
 	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
@@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
 	elv_rqhash_del(q, rq);
 
 	q->nr_sorted--;
+	elv_fq_dispatched_request(q->elevator, rq);
 
 	q->end_sector = rq_end_sector(rq);
 	q->boundary_rq = rq;
@@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
 	elv_rqhash_del(q, next);
 
 	q->nr_sorted--;
+	elv_ioq_request_removed(e, next);
 	q->last_merge = rq;
 }
 
@@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 				q->last_merge = rq;
 		}
 
-		/*
-		 * Some ioscheds (cfq) run q->request_fn directly, so
-		 * rq cannot be accessed after calling
-		 * elevator_add_req_fn.
-		 */
 		q->elevator->ops->elevator_add_req_fn(q, rq);
+		elv_ioq_request_add(q, rq);
 		break;
 
 	case ELEVATOR_INSERT_REQUEUE:
@@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
 
 int elv_queue_empty(struct request_queue *q)
 {
-	struct elevator_queue *e = q->elevator;
-
 	if (!list_empty(&q->queue_head))
 		return 0;
 
-	if (e->ops->elevator_queue_empty_fn)
-		return e->ops->elevator_queue_empty_fn(q);
+	/* Hopefully nr_sorted works and no need to call queue_empty_fn */
+	if (q->nr_sorted)
+		return 0;
 
 	return 1;
 }
@@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 	 */
 	if (blk_account_rq(rq)) {
 		q->in_flight--;
-		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
-			e->ops->elevator_completed_req_fn(q, rq);
+		if (blk_sorted_rq(rq)) {
+			if (e->ops->elevator_completed_req_fn)
+				e->ops->elevator_completed_req_fn(q, rq);
+			elv_ioq_completed_request(q, rq);
+		}
 	}
 
 	/*
@@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
 	return NULL;
 }
 EXPORT_SYMBOL(elv_rb_latter_request);
+
+/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq */
+void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
+{
+	return ioq_sched_queue(rq_ioq(rq));
+}
+EXPORT_SYMBOL(elv_get_sched_queue);
+
+/* Select an ioscheduler queue to dispatch request from. */
+void *elv_select_sched_queue(struct request_queue *q, int force)
+{
+	return ioq_sched_queue(elv_fq_select_ioq(q, force));
+}
+EXPORT_SYMBOL(elv_select_sched_queue);
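
A sketch of how an io scheduler's dispatch function might use this helper
(the scheduler-side names here are placeholders; the real cfq conversion
appears later in this series):

static int example_dispatch_requests(struct request_queue *q, int force)
{
	/* ask the fair queuing layer which of our queues should run now */
	struct example_queue *eq = elv_select_sched_queue(q, force);

	if (!eq)
		return 0;	/* no io queue is eligible to dispatch */

	/* pick a request off eq's private list and move it to the dispatch
	 * list with elv_dispatch_sort()/elv_dispatch_add_tail() as usual */
	return example_dispatch_one_request(q, eq);
}
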
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b4f71f1..96a94c9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -245,6 +245,11 @@ struct request {
 
 	/* for bidi */
 	struct request *next_rq;
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	/* io queue request belongs to */
+	struct io_queue *ioq;
+#endif
 };
 
 static inline unsigned short req_get_ioprio(struct request *req)
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index c59b769..679c149 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -2,6 +2,7 @@
 #define _LINUX_ELEVATOR_H
 
 #include <linux/percpu.h>
+#include "../../block/elevator-fq.h"
 
 #ifdef CONFIG_BLOCK
 
@@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
+#ifdef CONFIG_ELV_FAIR_QUEUING
+typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
+typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
+typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
+typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
+typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
+						struct request*);
+typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
+						struct request*);
+typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
+						void*, int probe);
+#endif
 
 struct elevator_ops
 {
@@ -56,6 +69,17 @@ struct elevator_ops
 	elevator_init_fn *elevator_init_fn;
 	elevator_exit_fn *elevator_exit_fn;
 	void (*trim)(struct io_context *);
+
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
+	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
+	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
+
+	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
+	elevator_should_preempt_fn *elevator_should_preempt_fn;
+	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
+	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
+#endif
 };
 
 #define ELV_NAME_MAX	(16)
@@ -76,6 +100,9 @@ struct elevator_type
 	struct elv_fs_entry *elevator_attrs;
 	char elevator_name[ELV_NAME_MAX];
 	struct module *elevator_owner;
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	int elevator_features;
+#endif
 };
 
 /*
@@ -89,6 +116,10 @@ struct elevator_queue
 	struct elevator_type *elevator_type;
 	struct mutex sysfs_lock;
 	struct hlist_head *hash;
+#ifdef CONFIG_ELV_FAIR_QUEUING
+	/* fair queuing data */
+	struct elv_fq_data efqd;
+#endif
 };
 
 /*
@@ -209,5 +240,25 @@ enum {
 	__val;							\
 })
 
+/* an iosched can let the elevator know its feature set/capability */
+#ifdef CONFIG_ELV_FAIR_QUEUING
+
+/* the iosched wants to use the fq logic of the elevator layer */
+#define	ELV_IOSCHED_NEED_FQ	1
+
+static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
+{
+	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
+}
+
+#else /* ELV_IOSCHED_FAIR_QUEUING */
+
+static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
+{
+	return 0;
+}
+#endif /* ELV_IOSCHED_FAIR_QUEUING */
+extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
+extern void *elv_select_sched_queue(struct request_queue *q, int force);
 #endif /* CONFIG_BLOCK */
 #endif
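
For illustration, a sketch of how an io scheduler would opt in to the common
fair queuing code through the new elevator_features field (placeholder names;
the actual cfq conversion is done in a later patch of this series):

#ifdef CONFIG_ELV_FAIR_QUEUING
/*
 * The scheduler's elevator_ops would also supply the new fq hooks
 * (elevator_arm_slice_timer_fn, elevator_should_preempt_fn, ...); they are
 * left out here to keep the sketch short.
 */
static struct elevator_type iosched_example_fq = {
	.elevator_name		= "example-fq",
	.elevator_owner		= THIS_MODULE,
	/* ask the elevator layer to drive the fair queuing logic */
	.elevator_features	= ELV_IOSCHED_NEED_FQ,
};
#endif
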
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 03/20] io-controller: Charge for time slice based on average disk rate
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2009-06-19 20:37   ` [PATCH 01/20] io-controller: Documentation Vivek Goyal
  2009-06-19 20:37   ` [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing Vivek Goyal
                     ` (18 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o There are situations where a queue gets expired very soon and it looks
  as if the time slice used by that queue is zero. For example, if an async
  queue dispatches a bunch of requests and the queue is expired before the
  first request completes. Another example is where a queue is expired as
  soon as the first request completes and the queue has no more requests
  (sync queues on SSD).

o Currently we just charge 25% of the slice length in such cases. This patch
  tries to improve on that approximation by keeping track of the average disk
  rate and charging for the time as nr_sectors/disk_rate (see the arithmetic
  sketch after this changelog).

o This is still experimental; I am not very sure yet whether it gives a
  measurable improvement or not.
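
A minimal userspace sketch of the fixed-point rate averaging this patch
introduces (illustration only, not part of the patch; the window lengths and
sector counts below are made up):

/* mirrors the 7/8 decay and 256x fixed-point scale of elv_update_io_rate() */
#include <stdio.h>

static unsigned long rate_sectors, rate_time, mean_rate;

static void update_rate(unsigned long sectors, unsigned long jiffies_elapsed)
{
	if (!jiffies_elapsed)
		jiffies_elapsed = 1;	/* window completed within a jiffy */

	rate_sectors = (7 * rate_sectors + 256 * sectors) / 8;
	rate_time = (7 * rate_time + 256 * jiffies_elapsed) / 8;
	mean_rate = (rate_sectors + rate_time / 2) / rate_time; /* sectors/jiffy */
}

int main(void)
{
	unsigned long charged;

	/* two sampling windows: 8192 sectors in 25 jiffies, then 4096 in 25 */
	update_rate(8192, 25);
	update_rate(4096, 25);

	/* charge a queue that dispatched 1024 sectors but had no usable slice */
	charged = 1024 / mean_rate;
	if (!charged)
		charged = 1;

	printf("mean rate = %lu sectors/jiffy, charge = %lu jiffies\n",
	       mean_rate, charged);
	return 0;
}

With these numbers the average settles around 240 sectors/jiffy and the queue
is charged 4 jiffies for its 1024 sectors, instead of a flat 25% of the slice.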

Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/elevator-fq.c |   85 +++++++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h |   11 ++++++
 2 files changed, 94 insertions(+), 2 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 9357fb0..3e956dc 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -21,6 +21,9 @@ const int elv_slice_async_rq = 2;
 int elv_slice_idle = HZ / 125;
 static struct kmem_cache *elv_ioq_pool;
 
+/* Maximum Window length for updating average disk rate */
+static int elv_rate_sampling_window = HZ / 10;
+
 #define ELV_SLICE_SCALE		(5)
 #define ELV_HW_QUEUE_MIN	(5)
 #define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
@@ -961,6 +964,47 @@ static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
 	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
 }
 
+static void elv_update_io_rate(struct elv_fq_data *efqd, struct request *rq)
+{
+	long elapsed = jiffies - efqd->rate_sampling_start;
+	unsigned long total;
+
+	/* sampling window is off */
+	if (!efqd->rate_sampling_start)
+		return;
+
+	efqd->rate_sectors_current += rq->nr_sectors;
+
+	if (efqd->rq_in_driver && (elapsed < elv_rate_sampling_window))
+		return;
+
+	efqd->rate_sectors = (7*efqd->rate_sectors +
+				256*efqd->rate_sectors_current) / 8;
+
+	if (!elapsed) {
+		/*
+		 * Updating the rate before a jiffy could complete. Could be a
+		 * problem with fast queuing/non-queuing hardware. Should we
+		 * look at a higher resolution time source?
+		 *
+		 * In case of non-queuing hardware we will probably not try to
+		 * dispatch from multiple queues and will be able to account
+		 * for the disk time used, so we will not need this
+		 * approximation anyway.
+		 */
+		elapsed = 1;
+	}
+
+	efqd->rate_time = (7*efqd->rate_time + 256*elapsed) / 8;
+	total = efqd->rate_sectors + (efqd->rate_time/2);
+	efqd->mean_rate = total/efqd->rate_time;
+
+	elv_log(efqd, "mean_rate=%lu, t=%ld s=%lu", efqd->mean_rate,
+			elapsed, efqd->rate_sectors_current);
+	efqd->rate_sampling_start = 0;
+	efqd->rate_sectors_current = 0;
+}
+
 /*
  * Disable idle window if the process thinks too long.
  * This idle flag can also be updated by io scheduler.
@@ -1252,6 +1296,34 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 }
 
 /*
+ * Calculate the effective disk time used by the queue based on how many
+ * sectors the queue has dispatched and what the average disk rate is.
+ * Returns the disk time in jiffies.
+ */
+static inline unsigned long elv_disk_time_used(struct request_queue *q,
+					struct io_queue *ioq)
+{
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_entity *entity = &ioq->entity;
+	unsigned long jiffies_used = 0;
+
+	if (!efqd->mean_rate)
+		return entity->budget/4;
+
+	/* Charge the queue based on average disk rate */
+	jiffies_used = ioq->nr_sectors/efqd->mean_rate;
+
+	if (!jiffies_used)
+		jiffies_used = 1;
+
+	elv_log_ioq(efqd, ioq, "disk time=%ldms sect=%lu rate=%ld",
+				jiffies_to_msecs(jiffies_used),
+				ioq->nr_sectors, efqd->mean_rate);
+
+	return jiffies_used;
+}
+
+/*
  * Do the accounting. Determine how much service (in terms of time slices)
  * current queue used and adjust the start, finish time of queue and vtime
  * of the tree accordingly.
@@ -1303,7 +1375,7 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 	 * the requests to finish. But this will reduce throughput.
 	 */
 	if (!ioq->slice_end)
-		slice_used = entity->budget/4;
+		slice_used = elv_disk_time_used(q, ioq);
 	else {
 		if (time_after(ioq->slice_end, jiffies)) {
 			slice_unused = ioq->slice_end - jiffies;
@@ -1313,7 +1385,7 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 				 * completing first request. Charge 25% of
 				 * slice.
 				 */
-				slice_used = entity->budget/4;
+				slice_used = elv_disk_time_used(q, ioq);
 			} else
 				slice_used = entity->budget - slice_unused;
 		} else {
@@ -1331,6 +1403,8 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 	BUG_ON(ioq != efqd->active_queue);
 	elv_reset_active_ioq(efqd);
 
+	/* Queue is being expired. Reset number of sectors dispatched */
+	ioq->nr_sectors = 0;
 	if (!ioq->nr_queued)
 		elv_del_ioq_busy(q->elevator, ioq, 1);
 	else
@@ -1664,6 +1738,7 @@ void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
 
 	BUG_ON(!ioq);
 	elv_ioq_request_dispatched(ioq);
+	ioq->nr_sectors += rq->nr_sectors;
 	elv_ioq_request_removed(e, rq);
 	elv_clear_ioq_must_dispatch(ioq);
 }
@@ -1676,6 +1751,10 @@ void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
 		return;
 
 	efqd->rq_in_driver++;
+
+	if (!efqd->rate_sampling_start)
+		efqd->rate_sampling_start = jiffies;
+
 	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
 						efqd->rq_in_driver);
 }
@@ -1767,6 +1846,8 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 	efqd->rq_in_driver--;
 	ioq->dispatched--;
 
+	elv_update_io_rate(efqd, rq);
+
 	if (sync)
 		ioq->last_end_request = jiffies;
 
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 5b6c1cc..a0acf32 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -169,6 +169,9 @@ struct io_queue {
 	/* Requests dispatched from this queue */
 	int dispatched;
 
+	/* Number of sectors dispatched in current dispatch round */
+	unsigned long nr_sectors;
+
 	/* Keep a track of think time of processes in this queue */
 	unsigned long last_end_request;
 	unsigned long ttime_total;
@@ -225,6 +228,14 @@ struct elv_fq_data {
 	struct work_struct unplug_work;
 
 	unsigned int elv_slice[2];
+
+	/* Fields for keeping track of average disk rate */
+	unsigned long rate_sectors; /* number of sectors finished */
+	unsigned long rate_time;   /* jiffies elapsed */
+	unsigned long mean_rate; /* sectors per jiffy */
+	unsigned long long rate_sampling_start; /* sampling window start, jiffies */
+	/* number of sectors finished io during current sampling window */
+	unsigned long rate_sectors_current;
 };
 
 extern int elv_slice_idle;
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (2 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 03/20] io-controller: Charge for time slice based on average disk rate Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevaotor layer Vivek Goyal
                     ` (17 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

This patch changes cfq to use the fair queuing code from the elevator layer.

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
Signed-off-by: Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched     |    3 +-
 block/cfq-iosched.c       | 1106 +++++++++------------------------------------
 include/linux/iocontext.h |    5 -
 3 files changed, 222 insertions(+), 892 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 3398134..dd5224d 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -3,7 +3,7 @@ if BLOCK
 menu "IO Schedulers"
 
 config ELV_FAIR_QUEUING
-	bool "Elevator Fair Queuing Support"
+	bool
 	default n
 	---help---
 	  Traditionally only cfq had notion of multiple queues and it did
@@ -46,6 +46,7 @@ config IOSCHED_DEADLINE
 
 config IOSCHED_CFQ
 	tristate "CFQ I/O scheduler"
+	select ELV_FAIR_QUEUING
 	default y
 	---help---
 	  The CFQ I/O scheduler tries to distribute bandwidth equally
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..995c8dd 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -12,7 +12,6 @@
 #include <linux/rbtree.h>
 #include <linux/ioprio.h>
 #include <linux/blktrace_api.h>
-
 /*
  * tunables
  */
@@ -23,15 +22,7 @@ static const int cfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
 static const int cfq_back_max = 16 * 1024;
 /* penalty of a backwards seek */
 static const int cfq_back_penalty = 2;
-static const int cfq_slice_sync = HZ / 10;
-static int cfq_slice_async = HZ / 25;
 static const int cfq_slice_async_rq = 2;
-static int cfq_slice_idle = HZ / 125;
-
-/*
- * offset from end of service tree
- */
-#define CFQ_IDLE_DELAY		(HZ / 5)
 
 /*
  * below this threshold, we consider thinktime immediate
@@ -43,7 +34,7 @@ static int cfq_slice_idle = HZ / 125;
 
 #define RQ_CIC(rq)		\
 	((struct cfq_io_context *) (rq)->elevator_private)
-#define RQ_CFQQ(rq)		(struct cfq_queue *) ((rq)->elevator_private2)
+#define RQ_CFQQ(rq)	(struct cfq_queue *) (ioq_sched_queue((rq)->ioq))
 
 static struct kmem_cache *cfq_pool;
 static struct kmem_cache *cfq_ioc_pool;
@@ -53,8 +44,6 @@ static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
 #define CFQ_PRIO_LISTS		IOPRIO_BE_NR
-#define cfq_class_idle(cfqq)	((cfqq)->ioprio_class == IOPRIO_CLASS_IDLE)
-#define cfq_class_rt(cfqq)	((cfqq)->ioprio_class == IOPRIO_CLASS_RT)
 
 #define sample_valid(samples)	((samples) > 80)
 
@@ -75,12 +64,6 @@ struct cfq_rb_root {
  */
 struct cfq_data {
 	struct request_queue *queue;
-
-	/*
-	 * rr list of queues with requests and the count of them
-	 */
-	struct cfq_rb_root service_tree;
-
 	/*
 	 * Each priority tree is sorted by next_request position.  These
 	 * trees are used when determining if two or more queues are
@@ -88,41 +71,11 @@ struct cfq_data {
 	 */
 	struct rb_root prio_trees[CFQ_PRIO_LISTS];
 
-	unsigned int busy_queues;
-	/*
-	 * Used to track any pending rt requests so we can pre-empt current
-	 * non-RT cfqq in service when this value is non-zero.
-	 */
-	unsigned int busy_rt_queues;
-
-	int rq_in_driver;
 	int sync_flight;
 
-	/*
-	 * queue-depth detection
-	 */
-	int rq_queued;
-	int hw_tag;
-	int hw_tag_samples;
-	int rq_in_driver_peak;
-
-	/*
-	 * idle window management
-	 */
-	struct timer_list idle_slice_timer;
-	struct work_struct unplug_work;
-
-	struct cfq_queue *active_queue;
 	struct cfq_io_context *active_cic;
 
-	/*
-	 * async queue for each priority case
-	 */
-	struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
-	struct cfq_queue *async_idle_cfqq;
-
 	sector_t last_position;
-	unsigned long last_end_request;
 
 	/*
 	 * tunables, see top of file
@@ -131,9 +84,7 @@ struct cfq_data {
 	unsigned int cfq_fifo_expire[2];
 	unsigned int cfq_back_penalty;
 	unsigned int cfq_back_max;
-	unsigned int cfq_slice[2];
 	unsigned int cfq_slice_async_rq;
-	unsigned int cfq_slice_idle;
 
 	struct list_head cic_list;
 };
@@ -142,16 +93,11 @@ struct cfq_data {
  * Per process-grouping structure
  */
 struct cfq_queue {
-	/* reference count */
-	atomic_t ref;
+	struct io_queue *ioq;
 	/* various state flags, see below */
 	unsigned int flags;
 	/* parent cfq_data */
 	struct cfq_data *cfqd;
-	/* service_tree member */
-	struct rb_node rb_node;
-	/* service_tree key */
-	unsigned long rb_key;
 	/* prio tree member */
 	struct rb_node p_node;
 	/* prio tree root we belong to, if any */
@@ -167,33 +113,23 @@ struct cfq_queue {
 	/* fifo list of requests in sort_list */
 	struct list_head fifo;
 
-	unsigned long slice_end;
-	long slice_resid;
 	unsigned int slice_dispatch;
 
 	/* pending metadata requests */
 	int meta_pending;
-	/* number of requests that are on the dispatch list or inside driver */
-	int dispatched;
 
 	/* io prio of this group */
-	unsigned short ioprio, org_ioprio;
-	unsigned short ioprio_class, org_ioprio_class;
+	unsigned short org_ioprio;
+	unsigned short org_ioprio_class;
 
 	pid_t pid;
 };
 
 enum cfqq_state_flags {
-	CFQ_CFQQ_FLAG_on_rr = 0,	/* on round-robin busy list */
-	CFQ_CFQQ_FLAG_wait_request,	/* waiting for a request */
-	CFQ_CFQQ_FLAG_must_dispatch,	/* must be allowed a dispatch */
 	CFQ_CFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
 	CFQ_CFQQ_FLAG_must_alloc_slice,	/* per-slice must_alloc flag */
 	CFQ_CFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
-	CFQ_CFQQ_FLAG_idle_window,	/* slice idling enabled */
 	CFQ_CFQQ_FLAG_prio_changed,	/* task priority has changed */
-	CFQ_CFQQ_FLAG_slice_new,	/* no requests dispatched in slice */
-	CFQ_CFQQ_FLAG_sync,		/* synchronous queue */
 	CFQ_CFQQ_FLAG_coop,		/* has done a coop jump of the queue */
 };
 
@@ -211,16 +147,10 @@ static inline int cfq_cfqq_##name(const struct cfq_queue *cfqq)		\
 	return ((cfqq)->flags & (1 << CFQ_CFQQ_FLAG_##name)) != 0;	\
 }
 
-CFQ_CFQQ_FNS(on_rr);
-CFQ_CFQQ_FNS(wait_request);
-CFQ_CFQQ_FNS(must_dispatch);
 CFQ_CFQQ_FNS(must_alloc);
 CFQ_CFQQ_FNS(must_alloc_slice);
 CFQ_CFQQ_FNS(fifo_expire);
-CFQ_CFQQ_FNS(idle_window);
 CFQ_CFQQ_FNS(prio_changed);
-CFQ_CFQQ_FNS(slice_new);
-CFQ_CFQQ_FNS(sync);
 CFQ_CFQQ_FNS(coop);
 #undef CFQ_CFQQ_FNS
 
@@ -259,66 +189,27 @@ static inline int cfq_bio_sync(struct bio *bio)
 	return 0;
 }
 
-/*
- * scheduler run of queue, if there are requests pending and no one in the
- * driver that will restart queueing
- */
-static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
+static inline struct io_group *cfqq_to_io_group(struct cfq_queue *cfqq)
 {
-	if (cfqd->busy_queues) {
-		cfq_log(cfqd, "schedule dispatch");
-		kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work);
-	}
+	return ioq_to_io_group(cfqq->ioq);
 }
 
-static int cfq_queue_empty(struct request_queue *q)
+static inline int cfq_class_idle(struct cfq_queue *cfqq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
-
-	return !cfqd->busy_queues;
+	return elv_ioq_class_idle(cfqq->ioq);
 }
 
-/*
- * Scale schedule slice based on io priority. Use the sync time slice only
- * if a queue is marked sync and has sync io queued. A sync queue with async
- * io only, should not get full sync slice length.
- */
-static inline int cfq_prio_slice(struct cfq_data *cfqd, int sync,
-				 unsigned short prio)
-{
-	const int base_slice = cfqd->cfq_slice[sync];
-
-	WARN_ON(prio >= IOPRIO_BE_NR);
-
-	return base_slice + (base_slice/CFQ_SLICE_SCALE * (4 - prio));
-}
-
-static inline int
-cfq_prio_to_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	return cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio);
-}
-
-static inline void
-cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+static inline int cfq_cfqq_sync(struct cfq_queue *cfqq)
 {
-	cfqq->slice_end = cfq_prio_to_slice(cfqd, cfqq) + jiffies;
-	cfq_log_cfqq(cfqd, cfqq, "set_slice=%lu", cfqq->slice_end - jiffies);
+	return elv_ioq_sync(cfqq->ioq);
 }
 
-/*
- * We need to wrap this check in cfq_cfqq_slice_new(), since ->slice_end
- * isn't valid until the first request from the dispatch is activated
- * and the slice time set.
- */
-static inline int cfq_slice_used(struct cfq_queue *cfqq)
+static inline int cfqq_is_active_queue(struct cfq_queue *cfqq)
 {
-	if (cfq_cfqq_slice_new(cfqq))
-		return 0;
-	if (time_before(jiffies, cfqq->slice_end))
-		return 0;
+	struct cfq_data *cfqd = cfqq->cfqd;
+	struct elevator_queue *e = cfqd->queue->elevator;
 
-	return 1;
+	return (elv_active_sched_queue(e) == cfqq);
 }
 
 /*
@@ -417,33 +308,6 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2)
 }
 
 /*
- * The below is leftmost cache rbtree addon
- */
-static struct cfq_queue *cfq_rb_first(struct cfq_rb_root *root)
-{
-	if (!root->left)
-		root->left = rb_first(&root->rb);
-
-	if (root->left)
-		return rb_entry(root->left, struct cfq_queue, rb_node);
-
-	return NULL;
-}
-
-static void rb_erase_init(struct rb_node *n, struct rb_root *root)
-{
-	rb_erase(n, root);
-	RB_CLEAR_NODE(n);
-}
-
-static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
-{
-	if (root->left == n)
-		root->left = NULL;
-	rb_erase_init(n, &root->rb);
-}
-
-/*
  * would be nice to take fifo expire time into account as well
  */
 static struct request *
@@ -456,10 +320,10 @@ cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 
 	BUG_ON(RB_EMPTY_NODE(&last->rb_node));
 
-	if (rbprev)
+	if (rbprev != NULL)
 		prev = rb_entry_rq(rbprev);
 
-	if (rbnext)
+	if (rbnext != NULL)
 		next = rb_entry_rq(rbnext);
 	else {
 		rbnext = rb_first(&cfqq->sort_list);
@@ -470,95 +334,6 @@ cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	return cfq_choose_req(cfqd, next, prev);
 }
 
-static unsigned long cfq_slice_offset(struct cfq_data *cfqd,
-				      struct cfq_queue *cfqq)
-{
-	/*
-	 * just an approximation, should be ok.
-	 */
-	return (cfqd->busy_queues - 1) * (cfq_prio_slice(cfqd, 1, 0) -
-		       cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio));
-}
-
-/*
- * The cfqd->service_tree holds all pending cfq_queue's that have
- * requests waiting to be processed. It is sorted in the order that
- * we will service the queues.
- */
-static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-				 int add_front)
-{
-	struct rb_node **p, *parent;
-	struct cfq_queue *__cfqq;
-	unsigned long rb_key;
-	int left;
-
-	if (cfq_class_idle(cfqq)) {
-		rb_key = CFQ_IDLE_DELAY;
-		parent = rb_last(&cfqd->service_tree.rb);
-		if (parent && parent != &cfqq->rb_node) {
-			__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-			rb_key += __cfqq->rb_key;
-		} else
-			rb_key += jiffies;
-	} else if (!add_front) {
-		rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
-		rb_key += cfqq->slice_resid;
-		cfqq->slice_resid = 0;
-	} else
-		rb_key = 0;
-
-	if (!RB_EMPTY_NODE(&cfqq->rb_node)) {
-		/*
-		 * same position, nothing more to do
-		 */
-		if (rb_key == cfqq->rb_key)
-			return;
-
-		cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
-	}
-
-	left = 1;
-	parent = NULL;
-	p = &cfqd->service_tree.rb.rb_node;
-	while (*p) {
-		struct rb_node **n;
-
-		parent = *p;
-		__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-
-		/*
-		 * sort RT queues first, we always want to give
-		 * preference to them. IDLE queues goes to the back.
-		 * after that, sort on the next service time.
-		 */
-		if (cfq_class_rt(cfqq) > cfq_class_rt(__cfqq))
-			n = &(*p)->rb_left;
-		else if (cfq_class_rt(cfqq) < cfq_class_rt(__cfqq))
-			n = &(*p)->rb_right;
-		else if (cfq_class_idle(cfqq) < cfq_class_idle(__cfqq))
-			n = &(*p)->rb_left;
-		else if (cfq_class_idle(cfqq) > cfq_class_idle(__cfqq))
-			n = &(*p)->rb_right;
-		else if (rb_key < __cfqq->rb_key)
-			n = &(*p)->rb_left;
-		else
-			n = &(*p)->rb_right;
-
-		if (n == &(*p)->rb_right)
-			left = 0;
-
-		p = n;
-	}
-
-	if (left)
-		cfqd->service_tree.left = &cfqq->rb_node;
-
-	cfqq->rb_key = rb_key;
-	rb_link_node(&cfqq->rb_node, parent, p);
-	rb_insert_color(&cfqq->rb_node, &cfqd->service_tree.rb);
-}
-
 static struct cfq_queue *
 cfq_prio_tree_lookup(struct cfq_data *cfqd, struct rb_root *root,
 		     sector_t sector, struct rb_node **ret_parent,
@@ -620,57 +395,34 @@ static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 		cfqq->p_root = NULL;
 }
 
-/*
- * Update cfqq's position in the service tree.
- */
-static void cfq_resort_rr_list(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+/* An active ioq is being reset. A chance to do cic related stuff. */
+static void cfq_active_ioq_reset(struct request_queue *q, void *sched_queue)
 {
-	/*
-	 * Resorting requires the cfqq to be on the RR list already.
-	 */
-	if (cfq_cfqq_on_rr(cfqq)) {
-		cfq_service_tree_add(cfqd, cfqq, 0);
-		cfq_prio_tree_add(cfqd, cfqq);
-	}
-}
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = sched_queue;
 
-/*
- * add to busy list of queues for service, trying to be fair in ordering
- * the pending list according to last request service
- */
-static void cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	cfq_log_cfqq(cfqd, cfqq, "add_to_rr");
-	BUG_ON(cfq_cfqq_on_rr(cfqq));
-	cfq_mark_cfqq_on_rr(cfqq);
-	cfqd->busy_queues++;
-	if (cfq_class_rt(cfqq))
-		cfqd->busy_rt_queues++;
+	if (cfqd->active_cic) {
+		put_io_context(cfqd->active_cic->ioc);
+		cfqd->active_cic = NULL;
+	}
 
-	cfq_resort_rr_list(cfqd, cfqq);
+	/* Resort the cfqq in prio tree */
+	if (cfqq)
+		cfq_prio_tree_add(cfqd, cfqq);
 }
 
-/*
- * Called when the cfqq no longer has requests pending, remove it from
- * the service tree.
- */
-static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+/* An ioq has been set as active one. */
+static void cfq_active_ioq_set(struct request_queue *q, void *sched_queue,
+				int coop)
 {
-	cfq_log_cfqq(cfqd, cfqq, "del_from_rr");
-	BUG_ON(!cfq_cfqq_on_rr(cfqq));
-	cfq_clear_cfqq_on_rr(cfqq);
+	struct cfq_queue *cfqq = sched_queue;
 
-	if (!RB_EMPTY_NODE(&cfqq->rb_node))
-		cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
-	if (cfqq->p_root) {
-		rb_erase(&cfqq->p_node, cfqq->p_root);
-		cfqq->p_root = NULL;
-	}
+	cfqq->slice_dispatch = 0;
 
-	BUG_ON(!cfqd->busy_queues);
-	cfqd->busy_queues--;
-	if (cfq_class_rt(cfqq))
-		cfqd->busy_rt_queues--;
+	cfq_clear_cfqq_must_alloc_slice(cfqq);
+	cfq_clear_cfqq_fifo_expire(cfqq);
+	if (!coop)
+		cfq_clear_cfqq_coop(cfqq);
 }
 
 /*
@@ -679,7 +431,6 @@ static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 static void cfq_del_rq_rb(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
-	struct cfq_data *cfqd = cfqq->cfqd;
 	const int sync = rq_is_sync(rq);
 
 	BUG_ON(!cfqq->queued[sync]);
@@ -687,8 +438,17 @@ static void cfq_del_rq_rb(struct request *rq)
 
 	elv_rb_del(&cfqq->sort_list, rq);
 
-	if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list))
-		cfq_del_cfqq_rr(cfqd, cfqq);
+	/*
+	 * If this was the last request in the queue, remove the queue from
+	 * the prio trees. For the last request, nr_queued will still be 1,
+	 * as the elevator fair queuing layer is yet to do the accounting.
+	 */
+	if (elv_ioq_nr_queued(cfqq->ioq) == 1) {
+		if (cfqq->p_root) {
+			rb_erase(&cfqq->p_node, cfqq->p_root);
+			cfqq->p_root = NULL;
+		}
+	}
 }
 
 static void cfq_add_rq_rb(struct request *rq)
@@ -706,9 +466,6 @@ static void cfq_add_rq_rb(struct request *rq)
 	while ((__alias = elv_rb_add(&cfqq->sort_list, rq)) != NULL)
 		cfq_dispatch_insert(cfqd->queue, __alias);
 
-	if (!cfq_cfqq_on_rr(cfqq))
-		cfq_add_cfqq_rr(cfqd, cfqq);
-
 	/*
 	 * check if this request is a better next-serve candidate
 	 */
@@ -756,23 +513,9 @@ static void cfq_activate_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 
-	cfqd->rq_in_driver++;
-	cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "activate rq, drv=%d",
-						cfqd->rq_in_driver);
-
 	cfqd->last_position = rq->hard_sector + rq->hard_nr_sectors;
 }
 
-static void cfq_deactivate_request(struct request_queue *q, struct request *rq)
-{
-	struct cfq_data *cfqd = q->elevator->elevator_data;
-
-	WARN_ON(!cfqd->rq_in_driver);
-	cfqd->rq_in_driver--;
-	cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "deactivate rq, drv=%d",
-						cfqd->rq_in_driver);
-}
-
 static void cfq_remove_request(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
@@ -783,7 +526,6 @@ static void cfq_remove_request(struct request *rq)
 	list_del_init(&rq->queuelist);
 	cfq_del_rq_rb(rq);
 
-	cfqq->cfqd->rq_queued--;
 	if (rq_is_meta(rq)) {
 		WARN_ON(!cfqq->meta_pending);
 		cfqq->meta_pending--;
@@ -857,93 +599,21 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	return 0;
 }
 
-static void __cfq_set_active_queue(struct cfq_data *cfqd,
-				   struct cfq_queue *cfqq)
-{
-	if (cfqq) {
-		cfq_log_cfqq(cfqd, cfqq, "set_active");
-		cfqq->slice_end = 0;
-		cfqq->slice_dispatch = 0;
-
-		cfq_clear_cfqq_wait_request(cfqq);
-		cfq_clear_cfqq_must_dispatch(cfqq);
-		cfq_clear_cfqq_must_alloc_slice(cfqq);
-		cfq_clear_cfqq_fifo_expire(cfqq);
-		cfq_mark_cfqq_slice_new(cfqq);
-
-		del_timer(&cfqd->idle_slice_timer);
-	}
-
-	cfqd->active_queue = cfqq;
-}
-
 /*
  * current cfqq expired its slice (or was too idle), select new one
  */
 static void
-__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		    int timed_out)
+__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	cfq_log_cfqq(cfqd, cfqq, "slice expired t=%d", timed_out);
-
-	if (cfq_cfqq_wait_request(cfqq))
-		del_timer(&cfqd->idle_slice_timer);
-
-	cfq_clear_cfqq_wait_request(cfqq);
-
-	/*
-	 * store what was left of this slice, if the queue idled/timed out
-	 */
-	if (timed_out && !cfq_cfqq_slice_new(cfqq)) {
-		cfqq->slice_resid = cfqq->slice_end - jiffies;
-		cfq_log_cfqq(cfqd, cfqq, "resid=%ld", cfqq->slice_resid);
-	}
-
-	cfq_resort_rr_list(cfqd, cfqq);
-
-	if (cfqq == cfqd->active_queue)
-		cfqd->active_queue = NULL;
-
-	if (cfqd->active_cic) {
-		put_io_context(cfqd->active_cic->ioc);
-		cfqd->active_cic = NULL;
-	}
+	__elv_ioq_slice_expired(cfqd->queue, cfqq->ioq);
 }
 
-static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out)
+static inline void cfq_slice_expired(struct cfq_data *cfqd)
 {
-	struct cfq_queue *cfqq = cfqd->active_queue;
+	struct cfq_queue *cfqq = elv_active_sched_queue(cfqd->queue->elevator);
 
 	if (cfqq)
-		__cfq_slice_expired(cfqd, cfqq, timed_out);
-}
-
-/*
- * Get next queue for service. Unless we have a queue preemption,
- * we'll simply select the first cfqq in the service tree.
- */
-static struct cfq_queue *cfq_get_next_queue(struct cfq_data *cfqd)
-{
-	if (RB_EMPTY_ROOT(&cfqd->service_tree.rb))
-		return NULL;
-
-	return cfq_rb_first(&cfqd->service_tree);
-}
-
-/*
- * Get and set a new active queue for service.
- */
-static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd,
-					      struct cfq_queue *cfqq)
-{
-	if (!cfqq) {
-		cfqq = cfq_get_next_queue(cfqd);
-		if (cfqq)
-			cfq_clear_cfqq_coop(cfqq);
-	}
-
-	__cfq_set_active_queue(cfqd, cfqq);
-	return cfqq;
+		__cfq_slice_expired(cfqd, cfqq);
 }
 
 static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
@@ -1020,11 +690,12 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
  * associated with the I/O issued by cur_cfqq.  I'm not sure this is a valid
  * assumption.
  */
-static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
-					      struct cfq_queue *cur_cfqq,
+static struct io_queue *cfq_close_cooperator(struct request_queue *q,
+					      void *cur_sched_queue,
 					      int probe)
 {
-	struct cfq_queue *cfqq;
+	struct cfq_queue *cur_cfqq = cur_sched_queue, *cfqq;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 
 	/*
 	 * A valid cfq_io_context is necessary to compare requests against
@@ -1047,38 +718,18 @@ static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
 
 	if (!probe)
 		cfq_mark_cfqq_coop(cfqq);
-	return cfqq;
+	return cfqq->ioq;
 }
 
-static void cfq_arm_slice_timer(struct cfq_data *cfqd)
+static void cfq_arm_slice_timer(struct request_queue *q, void *sched_queue)
 {
-	struct cfq_queue *cfqq = cfqd->active_queue;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = sched_queue;
 	struct cfq_io_context *cic;
 	unsigned long sl;
 
-	/*
-	 * SSD device without seek penalty, disable idling. But only do so
-	 * for devices that support queuing, otherwise we still have a problem
-	 * with sync vs async workloads.
-	 */
-	if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)
-		return;
-
 	WARN_ON(!RB_EMPTY_ROOT(&cfqq->sort_list));
-	WARN_ON(cfq_cfqq_slice_new(cfqq));
-
-	/*
-	 * idle is disabled, either manually or by past process history
-	 */
-	if (!cfqd->cfq_slice_idle || !cfq_cfqq_idle_window(cfqq))
-		return;
-
-	/*
-	 * still requests with the driver, don't idle
-	 */
-	if (cfqd->rq_in_driver)
-		return;
-
+	WARN_ON(elv_ioq_slice_new(cfqq->ioq));
 	/*
 	 * task has exited, don't wait
 	 */
@@ -1086,18 +737,18 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 	if (!cic || !atomic_read(&cic->ioc->nr_tasks))
 		return;
 
-	cfq_mark_cfqq_wait_request(cfqq);
 
+	elv_mark_ioq_wait_request(cfqq->ioq);
 	/*
 	 * we don't want to idle for seeks, but we do want to allow
 	 * fair distribution of slice time for a process doing back-to-back
 	 * seeks. so allow a little bit of time for him to submit a new rq
 	 */
-	sl = cfqd->cfq_slice_idle;
+	sl = elv_get_slice_idle(q->elevator);
 	if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic))
 		sl = min(sl, msecs_to_jiffies(CFQ_MIN_TT));
 
-	mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
+	elv_mod_idle_slice_timer(q->elevator, jiffies + sl);
 	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu", sl);
 }
 
@@ -1106,13 +757,12 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
  */
 static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 
-	cfq_log_cfqq(cfqd, cfqq, "dispatch_insert");
+	cfq_log_cfqq(cfqd, cfqq, "dispatch_insert sect=%lu", rq->nr_sectors);
 
 	cfq_remove_request(rq);
-	cfqq->dispatched++;
 	elv_dispatch_sort(q, rq);
 
 	if (cfq_cfqq_sync(cfqq))
@@ -1150,78 +800,11 @@ static inline int
 cfq_prio_to_maxrq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
 	const int base_rq = cfqd->cfq_slice_async_rq;
+	unsigned short ioprio = elv_ioq_ioprio(cfqq->ioq);
 
-	WARN_ON(cfqq->ioprio >= IOPRIO_BE_NR);
+	WARN_ON(ioprio >= IOPRIO_BE_NR);
 
-	return 2 * (base_rq + base_rq * (CFQ_PRIO_LISTS - 1 - cfqq->ioprio));
-}
-
-/*
- * Select a queue for service. If we have a current active queue,
- * check whether to continue servicing it, or retrieve and set a new one.
- */
-static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
-{
-	struct cfq_queue *cfqq, *new_cfqq = NULL;
-
-	cfqq = cfqd->active_queue;
-	if (!cfqq)
-		goto new_queue;
-
-	/*
-	 * The active queue has run out of time, expire it and select new.
-	 */
-	if (cfq_slice_used(cfqq) && !cfq_cfqq_must_dispatch(cfqq))
-		goto expire;
-
-	/*
-	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
-	 * cfqq.
-	 */
-	if (!cfq_class_rt(cfqq) && cfqd->busy_rt_queues) {
-		/*
-		 * We simulate this as cfqq timed out so that it gets to bank
-		 * the remaining of its time slice.
-		 */
-		cfq_log_cfqq(cfqd, cfqq, "preempt");
-		cfq_slice_expired(cfqd, 1);
-		goto new_queue;
-	}
-
-	/*
-	 * The active queue has requests and isn't expired, allow it to
-	 * dispatch.
-	 */
-	if (!RB_EMPTY_ROOT(&cfqq->sort_list))
-		goto keep_queue;
-
-	/*
-	 * If another queue has a request waiting within our mean seek
-	 * distance, let it run.  The expire code will check for close
-	 * cooperators and put the close queue at the front of the service
-	 * tree.
-	 */
-	new_cfqq = cfq_close_cooperator(cfqd, cfqq, 0);
-	if (new_cfqq)
-		goto expire;
-
-	/*
-	 * No requests pending. If the active queue still has requests in
-	 * flight or is idling for a new request, allow either of these
-	 * conditions to happen (or time out) before selecting a new queue.
-	 */
-	if (timer_pending(&cfqd->idle_slice_timer) ||
-	    (cfqq->dispatched && cfq_cfqq_idle_window(cfqq))) {
-		cfqq = NULL;
-		goto keep_queue;
-	}
-
-expire:
-	cfq_slice_expired(cfqd, 0);
-new_queue:
-	cfqq = cfq_set_active_queue(cfqd, new_cfqq);
-keep_queue:
-	return cfqq;
+	return 2 * (base_rq + base_rq * (CFQ_PRIO_LISTS - 1 - ioprio));
 }
 
 static int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
@@ -1246,12 +829,14 @@ static int cfq_forced_dispatch(struct cfq_data *cfqd)
 	struct cfq_queue *cfqq;
 	int dispatched = 0;
 
-	while ((cfqq = cfq_rb_first(&cfqd->service_tree)) != NULL)
+	while ((cfqq = elv_select_sched_queue(cfqd->queue, 1)) != NULL)
 		dispatched += __cfq_forced_dispatch_cfqq(cfqq);
 
-	cfq_slice_expired(cfqd, 0);
+	/* This is probably redundant now. The above loop should make sure
+	 * that all the busy queues have expired. */
+	cfq_slice_expired(cfqd);
 
-	BUG_ON(cfqd->busy_queues);
+	BUG_ON(elv_nr_busy_ioq(cfqd->queue->elevator));
 
 	cfq_log(cfqd, "forced_dispatch=%d\n", dispatched);
 	return dispatched;
@@ -1297,13 +882,10 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	struct cfq_queue *cfqq;
 	unsigned int max_dispatch;
 
-	if (!cfqd->busy_queues)
-		return 0;
-
 	if (unlikely(force))
 		return cfq_forced_dispatch(cfqd);
 
-	cfqq = cfq_select_queue(cfqd);
+	cfqq = elv_select_sched_queue(q, 0);
 	if (!cfqq)
 		return 0;
 
@@ -1320,7 +902,7 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	/*
 	 * Does this cfqq already have too much IO in flight?
 	 */
-	if (cfqq->dispatched >= max_dispatch) {
+	if (elv_ioq_nr_dispatched(cfqq->ioq) >= max_dispatch) {
 		/*
 		 * idle queue must always only have a single IO in flight
 		 */
@@ -1330,13 +912,13 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 		/*
 		 * We have other queues, don't allow more IO from this one
 		 */
-		if (cfqd->busy_queues > 1)
+		if (elv_nr_busy_ioq(q->elevator) > 1)
 			return 0;
 
 		/*
 		 * we are the only queue, allow up to 4 times of 'quantum'
 		 */
-		if (cfqq->dispatched >= 4 * max_dispatch)
+		if (elv_ioq_nr_dispatched(cfqq->ioq) >= 4 * max_dispatch)
 			return 0;
 	}
 
@@ -1345,51 +927,45 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	 */
 	cfq_dispatch_request(cfqd, cfqq);
 	cfqq->slice_dispatch++;
-	cfq_clear_cfqq_must_dispatch(cfqq);
 
 	/*
 	 * expire an async queue immediately if it has used up its slice. idle
 	 * queue always expire after 1 dispatch round.
 	 */
-	if (cfqd->busy_queues > 1 && ((!cfq_cfqq_sync(cfqq) &&
+	if (elv_nr_busy_ioq(q->elevator) > 1 && ((!cfq_cfqq_sync(cfqq) &&
 	    cfqq->slice_dispatch >= cfq_prio_to_maxrq(cfqd, cfqq)) ||
 	    cfq_class_idle(cfqq))) {
-		cfqq->slice_end = jiffies + 1;
-		cfq_slice_expired(cfqd, 0);
+		cfq_slice_expired(cfqd);
 	}
 
 	cfq_log(cfqd, "dispatched a request");
 	return 1;
 }
 
-/*
- * task holds one reference to the queue, dropped when task exits. each rq
- * in-flight on this queue also holds a reference, dropped when rq is freed.
- *
- * queue lock must be held here.
- */
-static void cfq_put_queue(struct cfq_queue *cfqq)
+static void cfq_free_cfq_queue(struct elevator_queue *e, void *sched_queue)
 {
+	struct cfq_queue *cfqq = sched_queue;
 	struct cfq_data *cfqd = cfqq->cfqd;
 
-	BUG_ON(atomic_read(&cfqq->ref) <= 0);
-
-	if (!atomic_dec_and_test(&cfqq->ref))
-		return;
+	BUG_ON(!cfqq);
 
-	cfq_log_cfqq(cfqd, cfqq, "put_queue");
+	cfq_log_cfqq(cfqd, cfqq, "free_queue");
 	BUG_ON(rb_first(&cfqq->sort_list));
 	BUG_ON(cfqq->allocated[READ] + cfqq->allocated[WRITE]);
-	BUG_ON(cfq_cfqq_on_rr(cfqq));
 
-	if (unlikely(cfqd->active_queue == cfqq)) {
-		__cfq_slice_expired(cfqd, cfqq, 0);
-		cfq_schedule_dispatch(cfqd);
+	if (unlikely(cfqq_is_active_queue(cfqq))) {
+		__cfq_slice_expired(cfqd, cfqq);
+		elv_schedule_dispatch(cfqd->queue);
 	}
 
 	kmem_cache_free(cfq_pool, cfqq);
 }
 
+static inline void cfq_put_queue(struct cfq_queue *cfqq)
+{
+	elv_put_ioq(cfqq->ioq);
+}
+
 /*
  * Must always be called with the rcu_read_lock() held
  */
@@ -1477,9 +1053,9 @@ static void cfq_free_io_context(struct io_context *ioc)
 
 static void cfq_exit_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	if (unlikely(cfqq == cfqd->active_queue)) {
-		__cfq_slice_expired(cfqd, cfqq, 0);
-		cfq_schedule_dispatch(cfqd);
+	if (unlikely(cfqq == elv_active_sched_queue(cfqd->queue->elevator))) {
+		__cfq_slice_expired(cfqd, cfqq);
+		elv_schedule_dispatch(cfqd->queue);
 	}
 
 	cfq_put_queue(cfqq);
@@ -1549,11 +1125,11 @@ static struct cfq_io_context *
 cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 {
 	struct cfq_io_context *cic;
+	struct request_queue *q = cfqd->queue;
 
 	cic = kmem_cache_alloc_node(cfq_ioc_pool, gfp_mask | __GFP_ZERO,
-							cfqd->queue->node);
+							q->node);
 	if (cic) {
-		cic->last_end_request = jiffies;
 		INIT_LIST_HEAD(&cic->queue_list);
 		INIT_HLIST_NODE(&cic->cic_list);
 		cic->dtor = cfq_free_io_context;
@@ -1567,7 +1143,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
 {
 	struct task_struct *tsk = current;
-	int ioprio_class;
+	int ioprio_class, ioprio;
 
 	if (!cfq_cfqq_prio_changed(cfqq))
 		return;
@@ -1580,30 +1156,33 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
 		/*
 		 * no prio set, inherit CPU scheduling settings
 		 */
-		cfqq->ioprio = task_nice_ioprio(tsk);
-		cfqq->ioprio_class = task_nice_ioclass(tsk);
+		ioprio = task_nice_ioprio(tsk);
+		ioprio_class = task_nice_ioclass(tsk);
 		break;
 	case IOPRIO_CLASS_RT:
-		cfqq->ioprio = task_ioprio(ioc);
-		cfqq->ioprio_class = IOPRIO_CLASS_RT;
+		ioprio = task_ioprio(ioc);
+		ioprio_class = IOPRIO_CLASS_RT;
 		break;
 	case IOPRIO_CLASS_BE:
-		cfqq->ioprio = task_ioprio(ioc);
-		cfqq->ioprio_class = IOPRIO_CLASS_BE;
+		ioprio = task_ioprio(ioc);
+		ioprio_class = IOPRIO_CLASS_BE;
 		break;
 	case IOPRIO_CLASS_IDLE:
-		cfqq->ioprio_class = IOPRIO_CLASS_IDLE;
-		cfqq->ioprio = 7;
-		cfq_clear_cfqq_idle_window(cfqq);
+		ioprio_class = IOPRIO_CLASS_IDLE;
+		ioprio = 7;
+		elv_clear_ioq_idle_window(cfqq->ioq);
 		break;
 	}
 
+	elv_ioq_set_ioprio_class(cfqq->ioq, ioprio_class);
+	elv_ioq_set_ioprio(cfqq->ioq, ioprio);
+
 	/*
 	 * keep track of original prio settings in case we have to temporarily
 	 * elevate the priority of this queue
 	 */
-	cfqq->org_ioprio = cfqq->ioprio;
-	cfqq->org_ioprio_class = cfqq->ioprio_class;
+	cfqq->org_ioprio = ioprio;
+	cfqq->org_ioprio_class = ioprio_class;
 	cfq_clear_cfqq_prio_changed(cfqq);
 }
 
@@ -1612,11 +1191,12 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	struct cfq_data *cfqd = cic->key;
 	struct cfq_queue *cfqq;
 	unsigned long flags;
+	struct request_queue *q = cfqd->queue;
 
 	if (unlikely(!cfqd))
 		return;
 
-	spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+	spin_lock_irqsave(q->queue_lock, flags);
 
 	cfqq = cic->cfqq[BLK_RW_ASYNC];
 	if (cfqq) {
@@ -1633,7 +1213,7 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	if (cfqq)
 		cfq_mark_cfqq_prio_changed(cfqq);
 
-	spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
 static void cfq_ioc_set_ioprio(struct io_context *ioc)
@@ -1644,11 +1224,12 @@ static void cfq_ioc_set_ioprio(struct io_context *ioc)
 
 static struct cfq_queue *
 cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
-		     struct io_context *ioc, gfp_t gfp_mask)
+				struct io_context *ioc, gfp_t gfp_mask)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
 	struct cfq_io_context *cic;
-
+	struct request_queue *q = cfqd->queue;
+	struct io_queue *ioq = NULL, *new_ioq = NULL;
 retry:
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1656,8 +1237,7 @@ retry:
 
 	if (!cfqq) {
 		if (new_cfqq) {
-			cfqq = new_cfqq;
-			new_cfqq = NULL;
+			goto alloc_ioq;
 		} else if (gfp_mask & __GFP_WAIT) {
 			/*
 			 * Inform the allocator of the fact that we will
@@ -1678,22 +1258,52 @@ retry:
 			if (!cfqq)
 				goto out;
 		}
+alloc_ioq:
+		if (new_ioq) {
+			ioq = new_ioq;
+			new_ioq = NULL;
+			cfqq = new_cfqq;
+			new_cfqq = NULL;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			new_ioq = elv_alloc_ioq(q,
+					gfp_mask | __GFP_NOFAIL | __GFP_ZERO);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			ioq = elv_alloc_ioq(q, gfp_mask | __GFP_ZERO);
+			if (!ioq) {
+				kmem_cache_free(cfq_pool, cfqq);
+				cfqq = NULL;
+				goto out;
+			}
+		}
 
-		RB_CLEAR_NODE(&cfqq->rb_node);
+		/*
+		 * Both cfqq and ioq objects allocated. Do the initializations
+		 * now.
+		 */
 		RB_CLEAR_NODE(&cfqq->p_node);
 		INIT_LIST_HEAD(&cfqq->fifo);
-
-		atomic_set(&cfqq->ref, 0);
 		cfqq->cfqd = cfqd;
 
 		cfq_mark_cfqq_prio_changed(cfqq);
 
+		cfqq->ioq = ioq;
 		cfq_init_prio_data(cfqq, ioc);
+		elv_init_ioq(q->elevator, ioq, cfqq, cfqq->org_ioprio_class,
+				cfqq->org_ioprio, is_sync);
 
 		if (is_sync) {
 			if (!cfq_class_idle(cfqq))
-				cfq_mark_cfqq_idle_window(cfqq);
-			cfq_mark_cfqq_sync(cfqq);
+				elv_mark_ioq_idle_window(cfqq->ioq);
+			elv_mark_ioq_sync(cfqq->ioq);
 		}
 		cfqq->pid = current->pid;
 		cfq_log_cfqq(cfqd, cfqq, "alloced");
@@ -1702,38 +1312,28 @@ retry:
 	if (new_cfqq)
 		kmem_cache_free(cfq_pool, new_cfqq);
 
+	if (new_ioq)
+		elv_free_ioq(new_ioq);
+
 out:
 	WARN_ON((gfp_mask & __GFP_WAIT) && !cfqq);
 	return cfqq;
 }
 
-static struct cfq_queue **
-cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
-{
-	switch (ioprio_class) {
-	case IOPRIO_CLASS_RT:
-		return &cfqd->async_cfqq[0][ioprio];
-	case IOPRIO_CLASS_BE:
-		return &cfqd->async_cfqq[1][ioprio];
-	case IOPRIO_CLASS_IDLE:
-		return &cfqd->async_idle_cfqq;
-	default:
-		BUG();
-	}
-}
-
 static struct cfq_queue *
 cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
-	      gfp_t gfp_mask)
+					gfp_t gfp_mask)
 {
 	const int ioprio = task_ioprio(ioc);
 	const int ioprio_class = task_ioprio_class(ioc);
-	struct cfq_queue **async_cfqq = NULL;
+	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
+	struct io_group *iog = io_lookup_io_group_current(cfqd->queue);
 
 	if (!is_sync) {
-		async_cfqq = cfq_async_queue_prio(cfqd, ioprio_class, ioprio);
-		cfqq = *async_cfqq;
+		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
+								ioprio);
+		cfqq = async_cfqq;
 	}
 
 	if (!cfqq) {
@@ -1742,15 +1342,11 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 			return NULL;
 	}
 
-	/*
-	 * pin the queue now that it's allocated, scheduler exit will prune it
-	 */
-	if (!is_sync && !(*async_cfqq)) {
-		atomic_inc(&cfqq->ref);
-		*async_cfqq = cfqq;
-	}
+	if (!is_sync && !async_cfqq)
+		io_group_set_async_queue(iog, ioprio_class, ioprio, cfqq->ioq);
 
-	atomic_inc(&cfqq->ref);
+	/* ioc reference */
+	elv_get_ioq(cfqq->ioq);
 	return cfqq;
 }
 
@@ -1829,6 +1425,7 @@ static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
 {
 	unsigned long flags;
 	int ret;
+	struct request_queue *q = cfqd->queue;
 
 	ret = radix_tree_preload(gfp_mask);
 	if (!ret) {
@@ -1845,9 +1442,9 @@ static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
 		radix_tree_preload_end();
 
 		if (!ret) {
-			spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+			spin_lock_irqsave(q->queue_lock, flags);
 			list_add(&cic->queue_list, &cfqd->cic_list);
-			spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+			spin_unlock_irqrestore(q->queue_lock, flags);
 		}
 	}
 
@@ -1867,10 +1464,11 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 {
 	struct io_context *ioc = NULL;
 	struct cfq_io_context *cic;
+	struct request_queue *q = cfqd->queue;
 
 	might_sleep_if(gfp_mask & __GFP_WAIT);
 
-	ioc = get_io_context(gfp_mask, cfqd->queue->node);
+	ioc = get_io_context(gfp_mask, q->node);
 	if (!ioc)
 		return NULL;
 
@@ -1889,7 +1487,6 @@ out:
 	smp_read_barrier_depends();
 	if (unlikely(ioc->ioprio_changed))
 		cfq_ioc_set_ioprio(ioc);
-
 	return cic;
 err_free:
 	cfq_cic_free(cic);
@@ -1899,17 +1496,6 @@ err:
 }
 
 static void
-cfq_update_io_thinktime(struct cfq_data *cfqd, struct cfq_io_context *cic)
-{
-	unsigned long elapsed = jiffies - cic->last_end_request;
-	unsigned long ttime = min(elapsed, 2UL * cfqd->cfq_slice_idle);
-
-	cic->ttime_samples = (7*cic->ttime_samples + 256) / 8;
-	cic->ttime_total = (7*cic->ttime_total + 256*ttime) / 8;
-	cic->ttime_mean = (cic->ttime_total + 128) / cic->ttime_samples;
-}
-
-static void
 cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
 		       struct request *rq)
 {
@@ -1940,57 +1526,41 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
 }
 
 /*
- * Disable idle window if the process thinks too long or seeks so much that
- * it doesn't matter
+ * Disable idle window if the process seeks so much that it doesn't matter
  */
-static void
-cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		       struct cfq_io_context *cic)
+static int
+cfq_update_idle_window(struct elevator_queue *eq, void *cfqq,
+					struct request *rq)
 {
-	int old_idle, enable_idle;
+	struct cfq_io_context *cic = RQ_CIC(rq);
 
 	/*
-	 * Don't idle for async or idle io prio class
+	 * Enabling/disabling idling based on thinktime has been moved
+	 * to the common layer.
 	 */
-	if (!cfq_cfqq_sync(cfqq) || cfq_class_idle(cfqq))
-		return;
-
-	enable_idle = old_idle = cfq_cfqq_idle_window(cfqq);
-
-	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
-	    (cfqd->hw_tag && CIC_SEEKY(cic)))
-		enable_idle = 0;
-	else if (sample_valid(cic->ttime_samples)) {
-		if (cic->ttime_mean > cfqd->cfq_slice_idle)
-			enable_idle = 0;
-		else
-			enable_idle = 1;
-	}
+	if (!atomic_read(&cic->ioc->nr_tasks) ||
+	    (elv_hw_tag(eq) && CIC_SEEKY(cic)))
+		return 0;
 
-	if (old_idle != enable_idle) {
-		cfq_log_cfqq(cfqd, cfqq, "idle=%d", enable_idle);
-		if (enable_idle)
-			cfq_mark_cfqq_idle_window(cfqq);
-		else
-			cfq_clear_cfqq_idle_window(cfqq);
-	}
+	return 1;
 }
 
 /*
  * Check if new_cfqq should preempt the currently active queue. Return 0 for
- * no or if we aren't sure, a 1 will cause a preempt.
+ * no or if we aren't sure, a 1 will cause a preemption attempt.
+ * Some of the preemption logic has been moved to the common layer. Only the
+ * cfq-specific parts are left here.
  */
 static int
-cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
-		   struct request *rq)
+cfq_should_preempt(struct request_queue *q, void *new_cfqq, struct request *rq)
 {
-	struct cfq_queue *cfqq;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = elv_active_sched_queue(q->elevator);
 
-	cfqq = cfqd->active_queue;
 	if (!cfqq)
 		return 0;
 
-	if (cfq_slice_used(cfqq))
+	if (elv_ioq_slice_used(cfqq->ioq))
 		return 1;
 
 	if (cfq_class_idle(new_cfqq))
@@ -2013,13 +1583,7 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
 	if (rq_is_meta(rq) && !cfqq->meta_pending)
 		return 1;
 
-	/*
-	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
-	 */
-	if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq))
-		return 1;
-
-	if (!cfqd->active_cic || !cfq_cfqq_wait_request(cfqq))
+	if (!cfqd->active_cic || !elv_ioq_wait_request(cfqq->ioq))
 		return 0;
 
 	/*
@@ -2033,29 +1597,10 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
 }
 
 /*
- * cfqq preempts the active queue. if we allowed preempt with no slice left,
- * let it have half of its nominal slice.
- */
-static void cfq_preempt_queue(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	cfq_log_cfqq(cfqd, cfqq, "preempt");
-	cfq_slice_expired(cfqd, 1);
-
-	/*
-	 * Put the new queue at the front of the of the current list,
-	 * so we know that it will be selected next.
-	 */
-	BUG_ON(!cfq_cfqq_on_rr(cfqq));
-
-	cfq_service_tree_add(cfqd, cfqq, 1);
-
-	cfqq->slice_end = 0;
-	cfq_mark_cfqq_slice_new(cfqq);
-}
-
-/*
  * Called when a new fs request (rq) is added (to cfqq). Check if there's
  * something we should do about it
+ * After enqueuing the request, the decision whether the queue should be
+ * preempted or kicked is taken by the common layer.
  */
 static void
 cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
@@ -2063,45 +1608,12 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 {
 	struct cfq_io_context *cic = RQ_CIC(rq);
 
-	cfqd->rq_queued++;
 	if (rq_is_meta(rq))
 		cfqq->meta_pending++;
 
-	cfq_update_io_thinktime(cfqd, cic);
 	cfq_update_io_seektime(cfqd, cic, rq);
-	cfq_update_idle_window(cfqd, cfqq, cic);
 
 	cic->last_request_pos = rq->sector + rq->nr_sectors;
-
-	if (cfqq == cfqd->active_queue) {
-		/*
-		 * Remember that we saw a request from this process, but
-		 * don't start queuing just yet. Otherwise we risk seeing lots
-		 * of tiny requests, because we disrupt the normal plugging
-		 * and merging. If the request is already larger than a single
-		 * page, let it rip immediately. For that case we assume that
-		 * merging is already done. Ditto for a busy system that
-		 * has other work pending, don't risk delaying until the
-		 * idle timer unplug to continue working.
-		 */
-		if (cfq_cfqq_wait_request(cfqq)) {
-			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
-			    cfqd->busy_queues > 1) {
-				del_timer(&cfqd->idle_slice_timer);
-				blk_start_queueing(cfqd->queue);
-			}
-			cfq_mark_cfqq_must_dispatch(cfqq);
-		}
-	} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
-		/*
-		 * not the active queue - expire current slice if it is
-		 * idle and has expired it's mean thinktime or this new queue
-		 * has some old slice time left and is of higher priority or
-		 * this new queue is RT and the current one is BE
-		 */
-		cfq_preempt_queue(cfqd, cfqq);
-		blk_start_queueing(cfqd->queue);
-	}
 }
 
 static void cfq_insert_request(struct request_queue *q, struct request *rq)
@@ -2119,84 +1631,17 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq)
 	cfq_rq_enqueued(cfqd, cfqq, rq);
 }
 
-/*
- * Update hw_tag based on peak queue depth over 50 samples under
- * sufficient load.
- */
-static void cfq_update_hw_tag(struct cfq_data *cfqd)
-{
-	if (cfqd->rq_in_driver > cfqd->rq_in_driver_peak)
-		cfqd->rq_in_driver_peak = cfqd->rq_in_driver;
-
-	if (cfqd->rq_queued <= CFQ_HW_QUEUE_MIN &&
-	    cfqd->rq_in_driver <= CFQ_HW_QUEUE_MIN)
-		return;
-
-	if (cfqd->hw_tag_samples++ < 50)
-		return;
-
-	if (cfqd->rq_in_driver_peak >= CFQ_HW_QUEUE_MIN)
-		cfqd->hw_tag = 1;
-	else
-		cfqd->hw_tag = 0;
-
-	cfqd->hw_tag_samples = 0;
-	cfqd->rq_in_driver_peak = 0;
-}
-
 static void cfq_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
 	struct cfq_data *cfqd = cfqq->cfqd;
-	const int sync = rq_is_sync(rq);
 	unsigned long now;
 
 	now = jiffies;
 	cfq_log_cfqq(cfqd, cfqq, "complete");
 
-	cfq_update_hw_tag(cfqd);
-
-	WARN_ON(!cfqd->rq_in_driver);
-	WARN_ON(!cfqq->dispatched);
-	cfqd->rq_in_driver--;
-	cfqq->dispatched--;
-
 	if (cfq_cfqq_sync(cfqq))
 		cfqd->sync_flight--;
-
-	if (!cfq_class_idle(cfqq))
-		cfqd->last_end_request = now;
-
-	if (sync)
-		RQ_CIC(rq)->last_end_request = now;
-
-	/*
-	 * If this is the active queue, check if it needs to be expired,
-	 * or if we want to idle in case it has no pending requests.
-	 */
-	if (cfqd->active_queue == cfqq) {
-		const bool cfqq_empty = RB_EMPTY_ROOT(&cfqq->sort_list);
-
-		if (cfq_cfqq_slice_new(cfqq)) {
-			cfq_set_prio_slice(cfqd, cfqq);
-			cfq_clear_cfqq_slice_new(cfqq);
-		}
-		/*
-		 * If there are no requests waiting in this queue, and
-		 * there are other queues ready to issue requests, AND
-		 * those other queues are issuing requests within our
-		 * mean seek distance, give them a chance to run instead
-		 * of idling.
-		 */
-		if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq))
-			cfq_slice_expired(cfqd, 1);
-		else if (cfqq_empty && !cfq_close_cooperator(cfqd, cfqq, 1) &&
-			 sync && !rq_noidle(rq))
-			cfq_arm_slice_timer(cfqd);
-	}
-
-	if (!cfqd->rq_in_driver)
-		cfq_schedule_dispatch(cfqd);
 }
 
 /*
@@ -2205,30 +1650,33 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
  */
 static void cfq_prio_boost(struct cfq_queue *cfqq)
 {
+	struct io_queue *ioq = cfqq->ioq;
+
 	if (has_fs_excl()) {
 		/*
 		 * boost idle prio on transactions that would lock out other
 		 * users of the filesystem
 		 */
 		if (cfq_class_idle(cfqq))
-			cfqq->ioprio_class = IOPRIO_CLASS_BE;
-		if (cfqq->ioprio > IOPRIO_NORM)
-			cfqq->ioprio = IOPRIO_NORM;
+			elv_ioq_set_ioprio_class(ioq, IOPRIO_CLASS_BE);
+		if (elv_ioq_ioprio(ioq) > IOPRIO_NORM)
+			elv_ioq_set_ioprio(ioq, IOPRIO_NORM);
+
 	} else {
 		/*
 		 * check if we need to unboost the queue
 		 */
-		if (cfqq->ioprio_class != cfqq->org_ioprio_class)
-			cfqq->ioprio_class = cfqq->org_ioprio_class;
-		if (cfqq->ioprio != cfqq->org_ioprio)
-			cfqq->ioprio = cfqq->org_ioprio;
+		if (elv_ioq_ioprio_class(ioq) != cfqq->org_ioprio_class)
+			elv_ioq_set_ioprio_class(ioq, cfqq->org_ioprio_class);
+		if (elv_ioq_ioprio(ioq) != cfqq->org_ioprio)
+			elv_ioq_set_ioprio(ioq, cfqq->org_ioprio);
 	}
 }
 
 static inline int __cfq_may_queue(struct cfq_queue *cfqq)
 {
-	if ((cfq_cfqq_wait_request(cfqq) || cfq_cfqq_must_alloc(cfqq)) &&
-	    !cfq_cfqq_must_alloc_slice(cfqq)) {
+	if ((elv_ioq_wait_request(cfqq->ioq) ||
+	   cfq_cfqq_must_alloc(cfqq)) && !cfq_cfqq_must_alloc_slice(cfqq)) {
 		cfq_mark_cfqq_must_alloc_slice(cfqq);
 		return ELV_MQUEUE_MUST;
 	}
@@ -2280,7 +1728,7 @@ static void cfq_put_request(struct request *rq)
 		put_io_context(RQ_CIC(rq)->ioc);
 
 		rq->elevator_private = NULL;
-		rq->elevator_private2 = NULL;
+		rq->ioq = NULL;
 
 		cfq_put_queue(cfqq);
 	}
@@ -2320,119 +1768,31 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 	cfqq->allocated[rw]++;
 	cfq_clear_cfqq_must_alloc(cfqq);
-	atomic_inc(&cfqq->ref);
+	elv_get_ioq(cfqq->ioq);
 
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	rq->elevator_private = cic;
-	rq->elevator_private2 = cfqq;
+	rq->ioq = cfqq->ioq;
 	return 0;
 
 queue_fail:
 	if (cic)
 		put_io_context(cic->ioc);
 
-	cfq_schedule_dispatch(cfqd);
+	elv_schedule_dispatch(cfqd->queue);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 	cfq_log(cfqd, "set_request fail");
 	return 1;
 }
 
-static void cfq_kick_queue(struct work_struct *work)
-{
-	struct cfq_data *cfqd =
-		container_of(work, struct cfq_data, unplug_work);
-	struct request_queue *q = cfqd->queue;
-
-	spin_lock_irq(q->queue_lock);
-	blk_start_queueing(q);
-	spin_unlock_irq(q->queue_lock);
-}
-
-/*
- * Timer running if the active_queue is currently idling inside its time slice
- */
-static void cfq_idle_slice_timer(unsigned long data)
-{
-	struct cfq_data *cfqd = (struct cfq_data *) data;
-	struct cfq_queue *cfqq;
-	unsigned long flags;
-	int timed_out = 1;
-
-	cfq_log(cfqd, "idle timer fired");
-
-	spin_lock_irqsave(cfqd->queue->queue_lock, flags);
-
-	cfqq = cfqd->active_queue;
-	if (cfqq) {
-		timed_out = 0;
-
-		/*
-		 * We saw a request before the queue expired, let it through
-		 */
-		if (cfq_cfqq_must_dispatch(cfqq))
-			goto out_kick;
-
-		/*
-		 * expired
-		 */
-		if (cfq_slice_used(cfqq))
-			goto expire;
-
-		/*
-		 * only expire and reinvoke request handler, if there are
-		 * other queues with pending requests
-		 */
-		if (!cfqd->busy_queues)
-			goto out_cont;
-
-		/*
-		 * not expired and it has a request pending, let it dispatch
-		 */
-		if (!RB_EMPTY_ROOT(&cfqq->sort_list))
-			goto out_kick;
-	}
-expire:
-	cfq_slice_expired(cfqd, timed_out);
-out_kick:
-	cfq_schedule_dispatch(cfqd);
-out_cont:
-	spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
-}
-
-static void cfq_shutdown_timer_wq(struct cfq_data *cfqd)
-{
-	del_timer_sync(&cfqd->idle_slice_timer);
-	cancel_work_sync(&cfqd->unplug_work);
-}
-
-static void cfq_put_async_queues(struct cfq_data *cfqd)
-{
-	int i;
-
-	for (i = 0; i < IOPRIO_BE_NR; i++) {
-		if (cfqd->async_cfqq[0][i])
-			cfq_put_queue(cfqd->async_cfqq[0][i]);
-		if (cfqd->async_cfqq[1][i])
-			cfq_put_queue(cfqd->async_cfqq[1][i]);
-	}
-
-	if (cfqd->async_idle_cfqq)
-		cfq_put_queue(cfqd->async_idle_cfqq);
-}
-
 static void cfq_exit_queue(struct elevator_queue *e)
 {
 	struct cfq_data *cfqd = e->elevator_data;
 	struct request_queue *q = cfqd->queue;
 
-	cfq_shutdown_timer_wq(cfqd);
-
 	spin_lock_irq(q->queue_lock);
 
-	if (cfqd->active_queue)
-		__cfq_slice_expired(cfqd, cfqd->active_queue, 0);
-
 	while (!list_empty(&cfqd->cic_list)) {
 		struct cfq_io_context *cic = list_entry(cfqd->cic_list.next,
 							struct cfq_io_context,
@@ -2441,12 +1801,7 @@ static void cfq_exit_queue(struct elevator_queue *e)
 		__cfq_exit_single_io_context(cfqd, cic);
 	}
 
-	cfq_put_async_queues(cfqd);
-
 	spin_unlock_irq(q->queue_lock);
-
-	cfq_shutdown_timer_wq(cfqd);
-
 	kfree(cfqd);
 }
 
@@ -2459,8 +1814,6 @@ static void *cfq_init_queue(struct request_queue *q)
 	if (!cfqd)
 		return NULL;
 
-	cfqd->service_tree = CFQ_RB_ROOT;
-
 	/*
 	 * Not strictly needed (since RB_ROOT just clears the node and we
 	 * zeroed cfqd on alloc), but better be safe in case someone decides
@@ -2473,23 +1826,12 @@ static void *cfq_init_queue(struct request_queue *q)
 
 	cfqd->queue = q;
 
-	init_timer(&cfqd->idle_slice_timer);
-	cfqd->idle_slice_timer.function = cfq_idle_slice_timer;
-	cfqd->idle_slice_timer.data = (unsigned long) cfqd;
-
-	INIT_WORK(&cfqd->unplug_work, cfq_kick_queue);
-
-	cfqd->last_end_request = jiffies;
 	cfqd->cfq_quantum = cfq_quantum;
 	cfqd->cfq_fifo_expire[0] = cfq_fifo_expire[0];
 	cfqd->cfq_fifo_expire[1] = cfq_fifo_expire[1];
 	cfqd->cfq_back_max = cfq_back_max;
 	cfqd->cfq_back_penalty = cfq_back_penalty;
-	cfqd->cfq_slice[0] = cfq_slice_async;
-	cfqd->cfq_slice[1] = cfq_slice_sync;
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
-	cfqd->cfq_slice_idle = cfq_slice_idle;
-	cfqd->hw_tag = 1;
 
 	return cfqd;
 }
@@ -2554,9 +1896,6 @@ SHOW_FUNCTION(cfq_fifo_expire_sync_show, cfqd->cfq_fifo_expire[1], 1);
 SHOW_FUNCTION(cfq_fifo_expire_async_show, cfqd->cfq_fifo_expire[0], 1);
 SHOW_FUNCTION(cfq_back_seek_max_show, cfqd->cfq_back_max, 0);
 SHOW_FUNCTION(cfq_back_seek_penalty_show, cfqd->cfq_back_penalty, 0);
-SHOW_FUNCTION(cfq_slice_idle_show, cfqd->cfq_slice_idle, 1);
-SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1);
-SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1);
 SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0);
 #undef SHOW_FUNCTION
 
@@ -2584,9 +1923,6 @@ STORE_FUNCTION(cfq_fifo_expire_async_store, &cfqd->cfq_fifo_expire[0], 1,
 STORE_FUNCTION(cfq_back_seek_max_store, &cfqd->cfq_back_max, 0, UINT_MAX, 0);
 STORE_FUNCTION(cfq_back_seek_penalty_store, &cfqd->cfq_back_penalty, 1,
 		UINT_MAX, 0);
-STORE_FUNCTION(cfq_slice_idle_store, &cfqd->cfq_slice_idle, 0, UINT_MAX, 1);
-STORE_FUNCTION(cfq_slice_sync_store, &cfqd->cfq_slice[1], 1, UINT_MAX, 1);
-STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1);
 STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1,
 		UINT_MAX, 0);
 #undef STORE_FUNCTION
@@ -2600,10 +1936,10 @@ static struct elv_fs_entry cfq_attrs[] = {
 	CFQ_ATTR(fifo_expire_async),
 	CFQ_ATTR(back_seek_max),
 	CFQ_ATTR(back_seek_penalty),
-	CFQ_ATTR(slice_sync),
-	CFQ_ATTR(slice_async),
 	CFQ_ATTR(slice_async_rq),
-	CFQ_ATTR(slice_idle),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+	ELV_ATTR(slice_async),
 	__ATTR_NULL
 };
 
@@ -2616,8 +1952,6 @@ static struct elevator_type iosched_cfq = {
 		.elevator_dispatch_fn =		cfq_dispatch_requests,
 		.elevator_add_req_fn =		cfq_insert_request,
 		.elevator_activate_req_fn =	cfq_activate_request,
-		.elevator_deactivate_req_fn =	cfq_deactivate_request,
-		.elevator_queue_empty_fn =	cfq_queue_empty,
 		.elevator_completed_req_fn =	cfq_completed_request,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
@@ -2627,7 +1961,15 @@ static struct elevator_type iosched_cfq = {
 		.elevator_init_fn =		cfq_init_queue,
 		.elevator_exit_fn =		cfq_exit_queue,
 		.trim =				cfq_free_io_context,
+		.elevator_free_sched_queue_fn =	cfq_free_cfq_queue,
+		.elevator_active_ioq_set_fn = 	cfq_active_ioq_set,
+		.elevator_active_ioq_reset_fn =	cfq_active_ioq_reset,
+		.elevator_arm_slice_timer_fn = 	cfq_arm_slice_timer,
+		.elevator_should_preempt_fn = 	cfq_should_preempt,
+		.elevator_update_idle_window_fn = cfq_update_idle_window,
+		.elevator_close_cooperator_fn = cfq_close_cooperator,
 	},
+	.elevator_features =    ELV_IOSCHED_NEED_FQ,
 	.elevator_attrs =	cfq_attrs,
 	.elevator_name =	"cfq",
 	.elevator_owner =	THIS_MODULE,
@@ -2635,14 +1977,6 @@ static struct elevator_type iosched_cfq = {
 
 static int __init cfq_init(void)
 {
-	/*
-	 * could be 0 on HZ < 1000 setups
-	 */
-	if (!cfq_slice_async)
-		cfq_slice_async = 1;
-	if (!cfq_slice_idle)
-		cfq_slice_idle = 1;
-
 	if (cfq_slab_setup())
 		return -ENOMEM;
 
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..5be25b3 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -39,13 +39,8 @@ struct cfq_io_context {
 
 	struct io_context *ioc;
 
-	unsigned long last_end_request;
 	sector_t last_request_pos;
 
-	unsigned long ttime_total;
-	unsigned long ttime_samples;
-	unsigned long ttime_mean;
-
 	unsigned int seek_samples;
 	u64 seek_total;
 	sector_t seek_mean;
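
One piece of per-cic state removed above is the think-time tracking
(last_end_request and the ttime_* fields); the patch relies on the common
elevator layer for that now. For reference, a stand-alone sketch of the
fixed-point decayed average those fields implemented, mirroring the
arithmetic of the removed cfq_update_io_thinktime(); the driver loop and the
constant 4-jiffy think time are made up for the example:

#include <stdio.h>

struct think_time {
	unsigned long samples;	/* decayed sample count, scaled */
	unsigned long total;	/* decayed sum of think times, scaled */
	unsigned long mean;	/* total / samples */
};

/* same 7/8 decay and rounding as the removed cfq code */
static void update_think_time(struct think_time *tt, unsigned long ttime)
{
	tt->samples = (7 * tt->samples + 256) / 8;
	tt->total   = (7 * tt->total + 256 * ttime) / 8;
	tt->mean    = (tt->total + 128) / tt->samples;
}

int main(void)
{
	struct think_time tt = { 0, 0, 0 };
	int i;

	for (i = 0; i < 20; i++)
		update_think_time(&tt, 4);	/* 4 jiffies between requests */

	printf("mean think time ~= %lu jiffies\n", tt.mean);
	return 0;
}

Old samples lose 1/8 of their weight on every update, so the mean converges
on the recent think time (4 here) and can then be compared against the
slice_idle tunable to decide whether idling on a queue is worthwhile.
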
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

This patch changes cfq to use fair queuing code from elevator layer.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched     |    3 +-
 block/cfq-iosched.c       | 1106 +++++++++------------------------------------
 include/linux/iocontext.h |    5 -
 3 files changed, 222 insertions(+), 892 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 3398134..dd5224d 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -3,7 +3,7 @@ if BLOCK
 menu "IO Schedulers"
 
 config ELV_FAIR_QUEUING
-	bool "Elevator Fair Queuing Support"
+	bool
 	default n
 	---help---
 	  Traditionally only cfq had notion of multiple queues and it did
@@ -46,6 +46,7 @@ config IOSCHED_DEADLINE
 
 config IOSCHED_CFQ
 	tristate "CFQ I/O scheduler"
+	select ELV_FAIR_QUEUING
 	default y
 	---help---
 	  The CFQ I/O scheduler tries to distribute bandwidth equally
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..995c8dd 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -12,7 +12,6 @@
 #include <linux/rbtree.h>
 #include <linux/ioprio.h>
 #include <linux/blktrace_api.h>
-
 /*
  * tunables
  */
@@ -23,15 +22,7 @@ static const int cfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
 static const int cfq_back_max = 16 * 1024;
 /* penalty of a backwards seek */
 static const int cfq_back_penalty = 2;
-static const int cfq_slice_sync = HZ / 10;
-static int cfq_slice_async = HZ / 25;
 static const int cfq_slice_async_rq = 2;
-static int cfq_slice_idle = HZ / 125;
-
-/*
- * offset from end of service tree
- */
-#define CFQ_IDLE_DELAY		(HZ / 5)
 
 /*
  * below this threshold, we consider thinktime immediate
@@ -43,7 +34,7 @@ static int cfq_slice_idle = HZ / 125;
 
 #define RQ_CIC(rq)		\
 	((struct cfq_io_context *) (rq)->elevator_private)
-#define RQ_CFQQ(rq)		(struct cfq_queue *) ((rq)->elevator_private2)
+#define RQ_CFQQ(rq)	(struct cfq_queue *) (ioq_sched_queue((rq)->ioq))
 
 static struct kmem_cache *cfq_pool;
 static struct kmem_cache *cfq_ioc_pool;
@@ -53,8 +44,6 @@ static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
 #define CFQ_PRIO_LISTS		IOPRIO_BE_NR
-#define cfq_class_idle(cfqq)	((cfqq)->ioprio_class == IOPRIO_CLASS_IDLE)
-#define cfq_class_rt(cfqq)	((cfqq)->ioprio_class == IOPRIO_CLASS_RT)
 
 #define sample_valid(samples)	((samples) > 80)
 
@@ -75,12 +64,6 @@ struct cfq_rb_root {
  */
 struct cfq_data {
 	struct request_queue *queue;
-
-	/*
-	 * rr list of queues with requests and the count of them
-	 */
-	struct cfq_rb_root service_tree;
-
 	/*
 	 * Each priority tree is sorted by next_request position.  These
 	 * trees are used when determining if two or more queues are
@@ -88,41 +71,11 @@ struct cfq_data {
 	 */
 	struct rb_root prio_trees[CFQ_PRIO_LISTS];
 
-	unsigned int busy_queues;
-	/*
-	 * Used to track any pending rt requests so we can pre-empt current
-	 * non-RT cfqq in service when this value is non-zero.
-	 */
-	unsigned int busy_rt_queues;
-
-	int rq_in_driver;
 	int sync_flight;
 
-	/*
-	 * queue-depth detection
-	 */
-	int rq_queued;
-	int hw_tag;
-	int hw_tag_samples;
-	int rq_in_driver_peak;
-
-	/*
-	 * idle window management
-	 */
-	struct timer_list idle_slice_timer;
-	struct work_struct unplug_work;
-
-	struct cfq_queue *active_queue;
 	struct cfq_io_context *active_cic;
 
-	/*
-	 * async queue for each priority case
-	 */
-	struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
-	struct cfq_queue *async_idle_cfqq;
-
 	sector_t last_position;
-	unsigned long last_end_request;
 
 	/*
 	 * tunables, see top of file
@@ -131,9 +84,7 @@ struct cfq_data {
 	unsigned int cfq_fifo_expire[2];
 	unsigned int cfq_back_penalty;
 	unsigned int cfq_back_max;
-	unsigned int cfq_slice[2];
 	unsigned int cfq_slice_async_rq;
-	unsigned int cfq_slice_idle;
 
 	struct list_head cic_list;
 };
@@ -142,16 +93,11 @@ struct cfq_data {
  * Per process-grouping structure
  */
 struct cfq_queue {
-	/* reference count */
-	atomic_t ref;
+	struct io_queue *ioq;
 	/* various state flags, see below */
 	unsigned int flags;
 	/* parent cfq_data */
 	struct cfq_data *cfqd;
-	/* service_tree member */
-	struct rb_node rb_node;
-	/* service_tree key */
-	unsigned long rb_key;
 	/* prio tree member */
 	struct rb_node p_node;
 	/* prio tree root we belong to, if any */
@@ -167,33 +113,23 @@ struct cfq_queue {
 	/* fifo list of requests in sort_list */
 	struct list_head fifo;
 
-	unsigned long slice_end;
-	long slice_resid;
 	unsigned int slice_dispatch;
 
 	/* pending metadata requests */
 	int meta_pending;
-	/* number of requests that are on the dispatch list or inside driver */
-	int dispatched;
 
 	/* io prio of this group */
-	unsigned short ioprio, org_ioprio;
-	unsigned short ioprio_class, org_ioprio_class;
+	unsigned short org_ioprio;
+	unsigned short org_ioprio_class;
 
 	pid_t pid;
 };
 
 enum cfqq_state_flags {
-	CFQ_CFQQ_FLAG_on_rr = 0,	/* on round-robin busy list */
-	CFQ_CFQQ_FLAG_wait_request,	/* waiting for a request */
-	CFQ_CFQQ_FLAG_must_dispatch,	/* must be allowed a dispatch */
 	CFQ_CFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
 	CFQ_CFQQ_FLAG_must_alloc_slice,	/* per-slice must_alloc flag */
 	CFQ_CFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
-	CFQ_CFQQ_FLAG_idle_window,	/* slice idling enabled */
 	CFQ_CFQQ_FLAG_prio_changed,	/* task priority has changed */
-	CFQ_CFQQ_FLAG_slice_new,	/* no requests dispatched in slice */
-	CFQ_CFQQ_FLAG_sync,		/* synchronous queue */
 	CFQ_CFQQ_FLAG_coop,		/* has done a coop jump of the queue */
 };
 
@@ -211,16 +147,10 @@ static inline int cfq_cfqq_##name(const struct cfq_queue *cfqq)		\
 	return ((cfqq)->flags & (1 << CFQ_CFQQ_FLAG_##name)) != 0;	\
 }
 
-CFQ_CFQQ_FNS(on_rr);
-CFQ_CFQQ_FNS(wait_request);
-CFQ_CFQQ_FNS(must_dispatch);
 CFQ_CFQQ_FNS(must_alloc);
 CFQ_CFQQ_FNS(must_alloc_slice);
 CFQ_CFQQ_FNS(fifo_expire);
-CFQ_CFQQ_FNS(idle_window);
 CFQ_CFQQ_FNS(prio_changed);
-CFQ_CFQQ_FNS(slice_new);
-CFQ_CFQQ_FNS(sync);
 CFQ_CFQQ_FNS(coop);
 #undef CFQ_CFQQ_FNS
 
@@ -259,66 +189,27 @@ static inline int cfq_bio_sync(struct bio *bio)
 	return 0;
 }
 
-/*
- * scheduler run of queue, if there are requests pending and no one in the
- * driver that will restart queueing
- */
-static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
+static inline struct io_group *cfqq_to_io_group(struct cfq_queue *cfqq)
 {
-	if (cfqd->busy_queues) {
-		cfq_log(cfqd, "schedule dispatch");
-		kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work);
-	}
+	return ioq_to_io_group(cfqq->ioq);
 }
 
-static int cfq_queue_empty(struct request_queue *q)
+static inline int cfq_class_idle(struct cfq_queue *cfqq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
-
-	return !cfqd->busy_queues;
+	return elv_ioq_class_idle(cfqq->ioq);
 }
 
-/*
- * Scale schedule slice based on io priority. Use the sync time slice only
- * if a queue is marked sync and has sync io queued. A sync queue with async
- * io only, should not get full sync slice length.
- */
-static inline int cfq_prio_slice(struct cfq_data *cfqd, int sync,
-				 unsigned short prio)
-{
-	const int base_slice = cfqd->cfq_slice[sync];
-
-	WARN_ON(prio >= IOPRIO_BE_NR);
-
-	return base_slice + (base_slice/CFQ_SLICE_SCALE * (4 - prio));
-}
-
-static inline int
-cfq_prio_to_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	return cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio);
-}
-
-static inline void
-cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+static inline int cfq_cfqq_sync(struct cfq_queue *cfqq)
 {
-	cfqq->slice_end = cfq_prio_to_slice(cfqd, cfqq) + jiffies;
-	cfq_log_cfqq(cfqd, cfqq, "set_slice=%lu", cfqq->slice_end - jiffies);
+	return elv_ioq_sync(cfqq->ioq);
 }
 
-/*
- * We need to wrap this check in cfq_cfqq_slice_new(), since ->slice_end
- * isn't valid until the first request from the dispatch is activated
- * and the slice time set.
- */
-static inline int cfq_slice_used(struct cfq_queue *cfqq)
+static inline int cfqq_is_active_queue(struct cfq_queue *cfqq)
 {
-	if (cfq_cfqq_slice_new(cfqq))
-		return 0;
-	if (time_before(jiffies, cfqq->slice_end))
-		return 0;
+	struct cfq_data *cfqd = cfqq->cfqd;
+	struct elevator_queue *e = cfqd->queue->elevator;
 
-	return 1;
+	return (elv_active_sched_queue(e) == cfqq);
 }
 
 /*
@@ -417,33 +308,6 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2)
 }
 
 /*
- * The below is leftmost cache rbtree addon
- */
-static struct cfq_queue *cfq_rb_first(struct cfq_rb_root *root)
-{
-	if (!root->left)
-		root->left = rb_first(&root->rb);
-
-	if (root->left)
-		return rb_entry(root->left, struct cfq_queue, rb_node);
-
-	return NULL;
-}
-
-static void rb_erase_init(struct rb_node *n, struct rb_root *root)
-{
-	rb_erase(n, root);
-	RB_CLEAR_NODE(n);
-}
-
-static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
-{
-	if (root->left == n)
-		root->left = NULL;
-	rb_erase_init(n, &root->rb);
-}
-
-/*
  * would be nice to take fifo expire time into account as well
  */
 static struct request *
@@ -456,10 +320,10 @@ cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 
 	BUG_ON(RB_EMPTY_NODE(&last->rb_node));
 
-	if (rbprev)
+	if (rbprev != NULL)
 		prev = rb_entry_rq(rbprev);
 
-	if (rbnext)
+	if (rbnext != NULL)
 		next = rb_entry_rq(rbnext);
 	else {
 		rbnext = rb_first(&cfqq->sort_list);
@@ -470,95 +334,6 @@ cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	return cfq_choose_req(cfqd, next, prev);
 }
 
-static unsigned long cfq_slice_offset(struct cfq_data *cfqd,
-				      struct cfq_queue *cfqq)
-{
-	/*
-	 * just an approximation, should be ok.
-	 */
-	return (cfqd->busy_queues - 1) * (cfq_prio_slice(cfqd, 1, 0) -
-		       cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio));
-}
-
-/*
- * The cfqd->service_tree holds all pending cfq_queue's that have
- * requests waiting to be processed. It is sorted in the order that
- * we will service the queues.
- */
-static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-				 int add_front)
-{
-	struct rb_node **p, *parent;
-	struct cfq_queue *__cfqq;
-	unsigned long rb_key;
-	int left;
-
-	if (cfq_class_idle(cfqq)) {
-		rb_key = CFQ_IDLE_DELAY;
-		parent = rb_last(&cfqd->service_tree.rb);
-		if (parent && parent != &cfqq->rb_node) {
-			__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-			rb_key += __cfqq->rb_key;
-		} else
-			rb_key += jiffies;
-	} else if (!add_front) {
-		rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
-		rb_key += cfqq->slice_resid;
-		cfqq->slice_resid = 0;
-	} else
-		rb_key = 0;
-
-	if (!RB_EMPTY_NODE(&cfqq->rb_node)) {
-		/*
-		 * same position, nothing more to do
-		 */
-		if (rb_key == cfqq->rb_key)
-			return;
-
-		cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
-	}
-
-	left = 1;
-	parent = NULL;
-	p = &cfqd->service_tree.rb.rb_node;
-	while (*p) {
-		struct rb_node **n;
-
-		parent = *p;
-		__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-
-		/*
-		 * sort RT queues first, we always want to give
-		 * preference to them. IDLE queues goes to the back.
-		 * after that, sort on the next service time.
-		 */
-		if (cfq_class_rt(cfqq) > cfq_class_rt(__cfqq))
-			n = &(*p)->rb_left;
-		else if (cfq_class_rt(cfqq) < cfq_class_rt(__cfqq))
-			n = &(*p)->rb_right;
-		else if (cfq_class_idle(cfqq) < cfq_class_idle(__cfqq))
-			n = &(*p)->rb_left;
-		else if (cfq_class_idle(cfqq) > cfq_class_idle(__cfqq))
-			n = &(*p)->rb_right;
-		else if (rb_key < __cfqq->rb_key)
-			n = &(*p)->rb_left;
-		else
-			n = &(*p)->rb_right;
-
-		if (n == &(*p)->rb_right)
-			left = 0;
-
-		p = n;
-	}
-
-	if (left)
-		cfqd->service_tree.left = &cfqq->rb_node;
-
-	cfqq->rb_key = rb_key;
-	rb_link_node(&cfqq->rb_node, parent, p);
-	rb_insert_color(&cfqq->rb_node, &cfqd->service_tree.rb);
-}
-
 static struct cfq_queue *
 cfq_prio_tree_lookup(struct cfq_data *cfqd, struct rb_root *root,
 		     sector_t sector, struct rb_node **ret_parent,
@@ -620,57 +395,34 @@ static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 		cfqq->p_root = NULL;
 }
 
-/*
- * Update cfqq's position in the service tree.
- */
-static void cfq_resort_rr_list(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+/* An active ioq is being reset. A chance to do cic related stuff. */
+static void cfq_active_ioq_reset(struct request_queue *q, void *sched_queue)
 {
-	/*
-	 * Resorting requires the cfqq to be on the RR list already.
-	 */
-	if (cfq_cfqq_on_rr(cfqq)) {
-		cfq_service_tree_add(cfqd, cfqq, 0);
-		cfq_prio_tree_add(cfqd, cfqq);
-	}
-}
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = sched_queue;
 
-/*
- * add to busy list of queues for service, trying to be fair in ordering
- * the pending list according to last request service
- */
-static void cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	cfq_log_cfqq(cfqd, cfqq, "add_to_rr");
-	BUG_ON(cfq_cfqq_on_rr(cfqq));
-	cfq_mark_cfqq_on_rr(cfqq);
-	cfqd->busy_queues++;
-	if (cfq_class_rt(cfqq))
-		cfqd->busy_rt_queues++;
+	if (cfqd->active_cic) {
+		put_io_context(cfqd->active_cic->ioc);
+		cfqd->active_cic = NULL;
+	}
 
-	cfq_resort_rr_list(cfqd, cfqq);
+	/* Resort the cfqq in prio tree */
+	if (cfqq)
+		cfq_prio_tree_add(cfqd, cfqq);
 }
 
-/*
- * Called when the cfqq no longer has requests pending, remove it from
- * the service tree.
- */
-static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+/* An ioq has been set as active one. */
+static void cfq_active_ioq_set(struct request_queue *q, void *sched_queue,
+				int coop)
 {
-	cfq_log_cfqq(cfqd, cfqq, "del_from_rr");
-	BUG_ON(!cfq_cfqq_on_rr(cfqq));
-	cfq_clear_cfqq_on_rr(cfqq);
+	struct cfq_queue *cfqq = sched_queue;
 
-	if (!RB_EMPTY_NODE(&cfqq->rb_node))
-		cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
-	if (cfqq->p_root) {
-		rb_erase(&cfqq->p_node, cfqq->p_root);
-		cfqq->p_root = NULL;
-	}
+	cfqq->slice_dispatch = 0;
 
-	BUG_ON(!cfqd->busy_queues);
-	cfqd->busy_queues--;
-	if (cfq_class_rt(cfqq))
-		cfqd->busy_rt_queues--;
+	cfq_clear_cfqq_must_alloc_slice(cfqq);
+	cfq_clear_cfqq_fifo_expire(cfqq);
+	if (!coop)
+		cfq_clear_cfqq_coop(cfqq);
 }
 
 /*
@@ -679,7 +431,6 @@ static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 static void cfq_del_rq_rb(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
-	struct cfq_data *cfqd = cfqq->cfqd;
 	const int sync = rq_is_sync(rq);
 
 	BUG_ON(!cfqq->queued[sync]);
@@ -687,8 +438,17 @@ static void cfq_del_rq_rb(struct request *rq)
 
 	elv_rb_del(&cfqq->sort_list, rq);
 
-	if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list))
-		cfq_del_cfqq_rr(cfqd, cfqq);
+	/*
+	 * If this was the last request in the queue, remove this queue from
+	 * the prio trees. For the last request, nr_queued will still be 1,
+	 * as the elevator fair queuing layer is yet to do the accounting.
+	 */
+	if (elv_ioq_nr_queued(cfqq->ioq) == 1) {
+		if (cfqq->p_root) {
+			rb_erase(&cfqq->p_node, cfqq->p_root);
+			cfqq->p_root = NULL;
+		}
+	}
 }
 
 static void cfq_add_rq_rb(struct request *rq)
@@ -706,9 +466,6 @@ static void cfq_add_rq_rb(struct request *rq)
 	while ((__alias = elv_rb_add(&cfqq->sort_list, rq)) != NULL)
 		cfq_dispatch_insert(cfqd->queue, __alias);
 
-	if (!cfq_cfqq_on_rr(cfqq))
-		cfq_add_cfqq_rr(cfqd, cfqq);
-
 	/*
 	 * check if this request is a better next-serve candidate
 	 */
@@ -756,23 +513,9 @@ static void cfq_activate_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 
-	cfqd->rq_in_driver++;
-	cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "activate rq, drv=%d",
-						cfqd->rq_in_driver);
-
 	cfqd->last_position = rq->hard_sector + rq->hard_nr_sectors;
 }
 
-static void cfq_deactivate_request(struct request_queue *q, struct request *rq)
-{
-	struct cfq_data *cfqd = q->elevator->elevator_data;
-
-	WARN_ON(!cfqd->rq_in_driver);
-	cfqd->rq_in_driver--;
-	cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "deactivate rq, drv=%d",
-						cfqd->rq_in_driver);
-}
-
 static void cfq_remove_request(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
@@ -783,7 +526,6 @@ static void cfq_remove_request(struct request *rq)
 	list_del_init(&rq->queuelist);
 	cfq_del_rq_rb(rq);
 
-	cfqq->cfqd->rq_queued--;
 	if (rq_is_meta(rq)) {
 		WARN_ON(!cfqq->meta_pending);
 		cfqq->meta_pending--;
@@ -857,93 +599,21 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	return 0;
 }
 
-static void __cfq_set_active_queue(struct cfq_data *cfqd,
-				   struct cfq_queue *cfqq)
-{
-	if (cfqq) {
-		cfq_log_cfqq(cfqd, cfqq, "set_active");
-		cfqq->slice_end = 0;
-		cfqq->slice_dispatch = 0;
-
-		cfq_clear_cfqq_wait_request(cfqq);
-		cfq_clear_cfqq_must_dispatch(cfqq);
-		cfq_clear_cfqq_must_alloc_slice(cfqq);
-		cfq_clear_cfqq_fifo_expire(cfqq);
-		cfq_mark_cfqq_slice_new(cfqq);
-
-		del_timer(&cfqd->idle_slice_timer);
-	}
-
-	cfqd->active_queue = cfqq;
-}
-
 /*
  * current cfqq expired its slice (or was too idle), select new one
  */
 static void
-__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		    int timed_out)
+__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	cfq_log_cfqq(cfqd, cfqq, "slice expired t=%d", timed_out);
-
-	if (cfq_cfqq_wait_request(cfqq))
-		del_timer(&cfqd->idle_slice_timer);
-
-	cfq_clear_cfqq_wait_request(cfqq);
-
-	/*
-	 * store what was left of this slice, if the queue idled/timed out
-	 */
-	if (timed_out && !cfq_cfqq_slice_new(cfqq)) {
-		cfqq->slice_resid = cfqq->slice_end - jiffies;
-		cfq_log_cfqq(cfqd, cfqq, "resid=%ld", cfqq->slice_resid);
-	}
-
-	cfq_resort_rr_list(cfqd, cfqq);
-
-	if (cfqq == cfqd->active_queue)
-		cfqd->active_queue = NULL;
-
-	if (cfqd->active_cic) {
-		put_io_context(cfqd->active_cic->ioc);
-		cfqd->active_cic = NULL;
-	}
+	__elv_ioq_slice_expired(cfqd->queue, cfqq->ioq);
 }
 
-static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out)
+static inline void cfq_slice_expired(struct cfq_data *cfqd)
 {
-	struct cfq_queue *cfqq = cfqd->active_queue;
+	struct cfq_queue *cfqq = elv_active_sched_queue(cfqd->queue->elevator);
 
 	if (cfqq)
-		__cfq_slice_expired(cfqd, cfqq, timed_out);
-}
-
-/*
- * Get next queue for service. Unless we have a queue preemption,
- * we'll simply select the first cfqq in the service tree.
- */
-static struct cfq_queue *cfq_get_next_queue(struct cfq_data *cfqd)
-{
-	if (RB_EMPTY_ROOT(&cfqd->service_tree.rb))
-		return NULL;
-
-	return cfq_rb_first(&cfqd->service_tree);
-}
-
-/*
- * Get and set a new active queue for service.
- */
-static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd,
-					      struct cfq_queue *cfqq)
-{
-	if (!cfqq) {
-		cfqq = cfq_get_next_queue(cfqd);
-		if (cfqq)
-			cfq_clear_cfqq_coop(cfqq);
-	}
-
-	__cfq_set_active_queue(cfqd, cfqq);
-	return cfqq;
+		__cfq_slice_expired(cfqd, cfqq);
 }
 
 static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
@@ -1020,11 +690,12 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
  * associated with the I/O issued by cur_cfqq.  I'm not sure this is a valid
  * assumption.
  */
-static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
-					      struct cfq_queue *cur_cfqq,
+static struct io_queue *cfq_close_cooperator(struct request_queue *q,
+					      void *cur_sched_queue,
 					      int probe)
 {
-	struct cfq_queue *cfqq;
+	struct cfq_queue *cur_cfqq = cur_sched_queue, *cfqq;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 
 	/*
 	 * A valid cfq_io_context is necessary to compare requests against
@@ -1047,38 +718,18 @@ static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
 
 	if (!probe)
 		cfq_mark_cfqq_coop(cfqq);
-	return cfqq;
+	return cfqq->ioq;
 }
 
-static void cfq_arm_slice_timer(struct cfq_data *cfqd)
+static void cfq_arm_slice_timer(struct request_queue *q, void *sched_queue)
 {
-	struct cfq_queue *cfqq = cfqd->active_queue;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = sched_queue;
 	struct cfq_io_context *cic;
 	unsigned long sl;
 
-	/*
-	 * SSD device without seek penalty, disable idling. But only do so
-	 * for devices that support queuing, otherwise we still have a problem
-	 * with sync vs async workloads.
-	 */
-	if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)
-		return;
-
 	WARN_ON(!RB_EMPTY_ROOT(&cfqq->sort_list));
-	WARN_ON(cfq_cfqq_slice_new(cfqq));
-
-	/*
-	 * idle is disabled, either manually or by past process history
-	 */
-	if (!cfqd->cfq_slice_idle || !cfq_cfqq_idle_window(cfqq))
-		return;
-
-	/*
-	 * still requests with the driver, don't idle
-	 */
-	if (cfqd->rq_in_driver)
-		return;
-
+	WARN_ON(elv_ioq_slice_new(cfqq->ioq));
 	/*
 	 * task has exited, don't wait
 	 */
@@ -1086,18 +737,18 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 	if (!cic || !atomic_read(&cic->ioc->nr_tasks))
 		return;
 
-	cfq_mark_cfqq_wait_request(cfqq);
 
+	elv_mark_ioq_wait_request(cfqq->ioq);
 	/*
 	 * we don't want to idle for seeks, but we do want to allow
 	 * fair distribution of slice time for a process doing back-to-back
 	 * seeks. so allow a little bit of time for him to submit a new rq
 	 */
-	sl = cfqd->cfq_slice_idle;
+	sl = elv_get_slice_idle(q->elevator);
 	if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic))
 		sl = min(sl, msecs_to_jiffies(CFQ_MIN_TT));
 
-	mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
+	elv_mod_idle_slice_timer(q->elevator, jiffies + sl);
 	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu", sl);
 }
 
@@ -1106,13 +757,12 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
  */
 static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 
-	cfq_log_cfqq(cfqd, cfqq, "dispatch_insert");
+	cfq_log_cfqq(cfqd, cfqq, "dispatch_insert sect=%lu", rq->nr_sectors);
 
 	cfq_remove_request(rq);
-	cfqq->dispatched++;
 	elv_dispatch_sort(q, rq);
 
 	if (cfq_cfqq_sync(cfqq))
@@ -1150,78 +800,11 @@ static inline int
 cfq_prio_to_maxrq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
 	const int base_rq = cfqd->cfq_slice_async_rq;
+	unsigned short ioprio = elv_ioq_ioprio(cfqq->ioq);
 
-	WARN_ON(cfqq->ioprio >= IOPRIO_BE_NR);
+	WARN_ON(ioprio >= IOPRIO_BE_NR);
 
-	return 2 * (base_rq + base_rq * (CFQ_PRIO_LISTS - 1 - cfqq->ioprio));
-}
-
-/*
- * Select a queue for service. If we have a current active queue,
- * check whether to continue servicing it, or retrieve and set a new one.
- */
-static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
-{
-	struct cfq_queue *cfqq, *new_cfqq = NULL;
-
-	cfqq = cfqd->active_queue;
-	if (!cfqq)
-		goto new_queue;
-
-	/*
-	 * The active queue has run out of time, expire it and select new.
-	 */
-	if (cfq_slice_used(cfqq) && !cfq_cfqq_must_dispatch(cfqq))
-		goto expire;
-
-	/*
-	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
-	 * cfqq.
-	 */
-	if (!cfq_class_rt(cfqq) && cfqd->busy_rt_queues) {
-		/*
-		 * We simulate this as cfqq timed out so that it gets to bank
-		 * the remaining of its time slice.
-		 */
-		cfq_log_cfqq(cfqd, cfqq, "preempt");
-		cfq_slice_expired(cfqd, 1);
-		goto new_queue;
-	}
-
-	/*
-	 * The active queue has requests and isn't expired, allow it to
-	 * dispatch.
-	 */
-	if (!RB_EMPTY_ROOT(&cfqq->sort_list))
-		goto keep_queue;
-
-	/*
-	 * If another queue has a request waiting within our mean seek
-	 * distance, let it run.  The expire code will check for close
-	 * cooperators and put the close queue at the front of the service
-	 * tree.
-	 */
-	new_cfqq = cfq_close_cooperator(cfqd, cfqq, 0);
-	if (new_cfqq)
-		goto expire;
-
-	/*
-	 * No requests pending. If the active queue still has requests in
-	 * flight or is idling for a new request, allow either of these
-	 * conditions to happen (or time out) before selecting a new queue.
-	 */
-	if (timer_pending(&cfqd->idle_slice_timer) ||
-	    (cfqq->dispatched && cfq_cfqq_idle_window(cfqq))) {
-		cfqq = NULL;
-		goto keep_queue;
-	}
-
-expire:
-	cfq_slice_expired(cfqd, 0);
-new_queue:
-	cfqq = cfq_set_active_queue(cfqd, new_cfqq);
-keep_queue:
-	return cfqq;
+	return 2 * (base_rq + base_rq * (CFQ_PRIO_LISTS - 1 - ioprio));
 }
 
 static int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
@@ -1246,12 +829,14 @@ static int cfq_forced_dispatch(struct cfq_data *cfqd)
 	struct cfq_queue *cfqq;
 	int dispatched = 0;
 
-	while ((cfqq = cfq_rb_first(&cfqd->service_tree)) != NULL)
+	while ((cfqq = elv_select_sched_queue(cfqd->queue, 1)) != NULL)
 		dispatched += __cfq_forced_dispatch_cfqq(cfqq);
 
-	cfq_slice_expired(cfqd, 0);
+	/* This is probably redundant now. The above loop should make sure
+	 * that all the busy queues have expired. */
+	cfq_slice_expired(cfqd);
 
-	BUG_ON(cfqd->busy_queues);
+	BUG_ON(elv_nr_busy_ioq(cfqd->queue->elevator));
 
 	cfq_log(cfqd, "forced_dispatch=%d\n", dispatched);
 	return dispatched;
@@ -1297,13 +882,10 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	struct cfq_queue *cfqq;
 	unsigned int max_dispatch;
 
-	if (!cfqd->busy_queues)
-		return 0;
-
 	if (unlikely(force))
 		return cfq_forced_dispatch(cfqd);
 
-	cfqq = cfq_select_queue(cfqd);
+	cfqq = elv_select_sched_queue(q, 0);
 	if (!cfqq)
 		return 0;
 
@@ -1320,7 +902,7 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	/*
 	 * Does this cfqq already have too much IO in flight?
 	 */
-	if (cfqq->dispatched >= max_dispatch) {
+	if (elv_ioq_nr_dispatched(cfqq->ioq) >= max_dispatch) {
 		/*
 		 * idle queue must always only have a single IO in flight
 		 */
@@ -1330,13 +912,13 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 		/*
 		 * We have other queues, don't allow more IO from this one
 		 */
-		if (cfqd->busy_queues > 1)
+		if (elv_nr_busy_ioq(q->elevator) > 1)
 			return 0;
 
 		/*
 		 * we are the only queue, allow up to 4 times of 'quantum'
 		 */
-		if (cfqq->dispatched >= 4 * max_dispatch)
+		if (elv_ioq_nr_dispatched(cfqq->ioq) >= 4 * max_dispatch)
 			return 0;
 	}
 
@@ -1345,51 +927,45 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	 */
 	cfq_dispatch_request(cfqd, cfqq);
 	cfqq->slice_dispatch++;
-	cfq_clear_cfqq_must_dispatch(cfqq);
 
 	/*
 	 * expire an async queue immediately if it has used up its slice. idle
 	 * queue always expire after 1 dispatch round.
 	 */
-	if (cfqd->busy_queues > 1 && ((!cfq_cfqq_sync(cfqq) &&
+	if (elv_nr_busy_ioq(q->elevator) > 1 && ((!cfq_cfqq_sync(cfqq) &&
 	    cfqq->slice_dispatch >= cfq_prio_to_maxrq(cfqd, cfqq)) ||
 	    cfq_class_idle(cfqq))) {
-		cfqq->slice_end = jiffies + 1;
-		cfq_slice_expired(cfqd, 0);
+		cfq_slice_expired(cfqd);
 	}
 
 	cfq_log(cfqd, "dispatched a request");
 	return 1;
 }
 
-/*
- * task holds one reference to the queue, dropped when task exits. each rq
- * in-flight on this queue also holds a reference, dropped when rq is freed.
- *
- * queue lock must be held here.
- */
-static void cfq_put_queue(struct cfq_queue *cfqq)
+static void cfq_free_cfq_queue(struct elevator_queue *e, void *sched_queue)
 {
+	struct cfq_queue *cfqq = sched_queue;
 	struct cfq_data *cfqd = cfqq->cfqd;
 
-	BUG_ON(atomic_read(&cfqq->ref) <= 0);
-
-	if (!atomic_dec_and_test(&cfqq->ref))
-		return;
+	BUG_ON(!cfqq);
 
-	cfq_log_cfqq(cfqd, cfqq, "put_queue");
+	cfq_log_cfqq(cfqd, cfqq, "free_queue");
 	BUG_ON(rb_first(&cfqq->sort_list));
 	BUG_ON(cfqq->allocated[READ] + cfqq->allocated[WRITE]);
-	BUG_ON(cfq_cfqq_on_rr(cfqq));
 
-	if (unlikely(cfqd->active_queue == cfqq)) {
-		__cfq_slice_expired(cfqd, cfqq, 0);
-		cfq_schedule_dispatch(cfqd);
+	if (unlikely(cfqq_is_active_queue(cfqq))) {
+		__cfq_slice_expired(cfqd, cfqq);
+		elv_schedule_dispatch(cfqd->queue);
 	}
 
 	kmem_cache_free(cfq_pool, cfqq);
 }
 
+static inline void cfq_put_queue(struct cfq_queue *cfqq)
+{
+	elv_put_ioq(cfqq->ioq);
+}
+
 /*
  * Must always be called with the rcu_read_lock() held
  */
@@ -1477,9 +1053,9 @@ static void cfq_free_io_context(struct io_context *ioc)
 
 static void cfq_exit_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	if (unlikely(cfqq == cfqd->active_queue)) {
-		__cfq_slice_expired(cfqd, cfqq, 0);
-		cfq_schedule_dispatch(cfqd);
+	if (unlikely(cfqq == elv_active_sched_queue(cfqd->queue->elevator))) {
+		__cfq_slice_expired(cfqd, cfqq);
+		elv_schedule_dispatch(cfqd->queue);
 	}
 
 	cfq_put_queue(cfqq);
@@ -1549,11 +1125,11 @@ static struct cfq_io_context *
 cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 {
 	struct cfq_io_context *cic;
+	struct request_queue *q = cfqd->queue;
 
 	cic = kmem_cache_alloc_node(cfq_ioc_pool, gfp_mask | __GFP_ZERO,
-							cfqd->queue->node);
+							q->node);
 	if (cic) {
-		cic->last_end_request = jiffies;
 		INIT_LIST_HEAD(&cic->queue_list);
 		INIT_HLIST_NODE(&cic->cic_list);
 		cic->dtor = cfq_free_io_context;
@@ -1567,7 +1143,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
 {
 	struct task_struct *tsk = current;
-	int ioprio_class;
+	int ioprio_class, ioprio;
 
 	if (!cfq_cfqq_prio_changed(cfqq))
 		return;
@@ -1580,30 +1156,33 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
 		/*
 		 * no prio set, inherit CPU scheduling settings
 		 */
-		cfqq->ioprio = task_nice_ioprio(tsk);
-		cfqq->ioprio_class = task_nice_ioclass(tsk);
+		ioprio = task_nice_ioprio(tsk);
+		ioprio_class = task_nice_ioclass(tsk);
 		break;
 	case IOPRIO_CLASS_RT:
-		cfqq->ioprio = task_ioprio(ioc);
-		cfqq->ioprio_class = IOPRIO_CLASS_RT;
+		ioprio = task_ioprio(ioc);
+		ioprio_class = IOPRIO_CLASS_RT;
 		break;
 	case IOPRIO_CLASS_BE:
-		cfqq->ioprio = task_ioprio(ioc);
-		cfqq->ioprio_class = IOPRIO_CLASS_BE;
+		ioprio = task_ioprio(ioc);
+		ioprio_class = IOPRIO_CLASS_BE;
 		break;
 	case IOPRIO_CLASS_IDLE:
-		cfqq->ioprio_class = IOPRIO_CLASS_IDLE;
-		cfqq->ioprio = 7;
-		cfq_clear_cfqq_idle_window(cfqq);
+		ioprio_class = IOPRIO_CLASS_IDLE;
+		ioprio = 7;
+		elv_clear_ioq_idle_window(cfqq->ioq);
 		break;
 	}
 
+	elv_ioq_set_ioprio_class(cfqq->ioq, ioprio_class);
+	elv_ioq_set_ioprio(cfqq->ioq, ioprio);
+
 	/*
 	 * keep track of original prio settings in case we have to temporarily
 	 * elevate the priority of this queue
 	 */
-	cfqq->org_ioprio = cfqq->ioprio;
-	cfqq->org_ioprio_class = cfqq->ioprio_class;
+	cfqq->org_ioprio = ioprio;
+	cfqq->org_ioprio_class = ioprio_class;
 	cfq_clear_cfqq_prio_changed(cfqq);
 }
 
@@ -1612,11 +1191,12 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	struct cfq_data *cfqd = cic->key;
 	struct cfq_queue *cfqq;
 	unsigned long flags;
+	struct request_queue *q = cfqd->queue;
 
 	if (unlikely(!cfqd))
 		return;
 
-	spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+	spin_lock_irqsave(q->queue_lock, flags);
 
 	cfqq = cic->cfqq[BLK_RW_ASYNC];
 	if (cfqq) {
@@ -1633,7 +1213,7 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	if (cfqq)
 		cfq_mark_cfqq_prio_changed(cfqq);
 
-	spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
 static void cfq_ioc_set_ioprio(struct io_context *ioc)
@@ -1644,11 +1224,12 @@ static void cfq_ioc_set_ioprio(struct io_context *ioc)
 
 static struct cfq_queue *
 cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
-		     struct io_context *ioc, gfp_t gfp_mask)
+				struct io_context *ioc, gfp_t gfp_mask)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
 	struct cfq_io_context *cic;
-
+	struct request_queue *q = cfqd->queue;
+	struct io_queue *ioq = NULL, *new_ioq = NULL;
 retry:
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1656,8 +1237,7 @@ retry:
 
 	if (!cfqq) {
 		if (new_cfqq) {
-			cfqq = new_cfqq;
-			new_cfqq = NULL;
+			goto alloc_ioq;
 		} else if (gfp_mask & __GFP_WAIT) {
 			/*
 			 * Inform the allocator of the fact that we will
@@ -1678,22 +1258,52 @@ retry:
 			if (!cfqq)
 				goto out;
 		}
+alloc_ioq:
+		if (new_ioq) {
+			ioq = new_ioq;
+			new_ioq = NULL;
+			cfqq = new_cfqq;
+			new_cfqq = NULL;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			new_ioq = elv_alloc_ioq(q,
+					gfp_mask | __GFP_NOFAIL | __GFP_ZERO);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			ioq = elv_alloc_ioq(q, gfp_mask | __GFP_ZERO);
+			if (!ioq) {
+				kmem_cache_free(cfq_pool, cfqq);
+				cfqq = NULL;
+				goto out;
+			}
+		}
 
-		RB_CLEAR_NODE(&cfqq->rb_node);
+		/*
+		 * Both cfqq and ioq objects allocated. Do the initializations
+		 * now.
+		 */
 		RB_CLEAR_NODE(&cfqq->p_node);
 		INIT_LIST_HEAD(&cfqq->fifo);
-
-		atomic_set(&cfqq->ref, 0);
 		cfqq->cfqd = cfqd;
 
 		cfq_mark_cfqq_prio_changed(cfqq);
 
+		cfqq->ioq = ioq;
 		cfq_init_prio_data(cfqq, ioc);
+		elv_init_ioq(q->elevator, ioq, cfqq, cfqq->org_ioprio_class,
+				cfqq->org_ioprio, is_sync);
 
 		if (is_sync) {
 			if (!cfq_class_idle(cfqq))
-				cfq_mark_cfqq_idle_window(cfqq);
-			cfq_mark_cfqq_sync(cfqq);
+				elv_mark_ioq_idle_window(cfqq->ioq);
+			elv_mark_ioq_sync(cfqq->ioq);
 		}
 		cfqq->pid = current->pid;
 		cfq_log_cfqq(cfqd, cfqq, "alloced");
@@ -1702,38 +1312,28 @@ retry:
 	if (new_cfqq)
 		kmem_cache_free(cfq_pool, new_cfqq);
 
+	if (new_ioq)
+		elv_free_ioq(new_ioq);
+
 out:
 	WARN_ON((gfp_mask & __GFP_WAIT) && !cfqq);
 	return cfqq;
 }
 
-static struct cfq_queue **
-cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
-{
-	switch (ioprio_class) {
-	case IOPRIO_CLASS_RT:
-		return &cfqd->async_cfqq[0][ioprio];
-	case IOPRIO_CLASS_BE:
-		return &cfqd->async_cfqq[1][ioprio];
-	case IOPRIO_CLASS_IDLE:
-		return &cfqd->async_idle_cfqq;
-	default:
-		BUG();
-	}
-}
-
 static struct cfq_queue *
 cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
-	      gfp_t gfp_mask)
+					gfp_t gfp_mask)
 {
 	const int ioprio = task_ioprio(ioc);
 	const int ioprio_class = task_ioprio_class(ioc);
-	struct cfq_queue **async_cfqq = NULL;
+	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
+	struct io_group *iog = io_lookup_io_group_current(cfqd->queue);
 
 	if (!is_sync) {
-		async_cfqq = cfq_async_queue_prio(cfqd, ioprio_class, ioprio);
-		cfqq = *async_cfqq;
+		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
+								ioprio);
+		cfqq = async_cfqq;
 	}
 
 	if (!cfqq) {
@@ -1742,15 +1342,11 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 			return NULL;
 	}
 
-	/*
-	 * pin the queue now that it's allocated, scheduler exit will prune it
-	 */
-	if (!is_sync && !(*async_cfqq)) {
-		atomic_inc(&cfqq->ref);
-		*async_cfqq = cfqq;
-	}
+	if (!is_sync && !async_cfqq)
+		io_group_set_async_queue(iog, ioprio_class, ioprio, cfqq->ioq);
 
-	atomic_inc(&cfqq->ref);
+	/* ioc reference */
+	elv_get_ioq(cfqq->ioq);
 	return cfqq;
 }
 
@@ -1829,6 +1425,7 @@ static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
 {
 	unsigned long flags;
 	int ret;
+	struct request_queue *q = cfqd->queue;
 
 	ret = radix_tree_preload(gfp_mask);
 	if (!ret) {
@@ -1845,9 +1442,9 @@ static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
 		radix_tree_preload_end();
 
 		if (!ret) {
-			spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+			spin_lock_irqsave(q->queue_lock, flags);
 			list_add(&cic->queue_list, &cfqd->cic_list);
-			spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+			spin_unlock_irqrestore(q->queue_lock, flags);
 		}
 	}
 
@@ -1867,10 +1464,11 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 {
 	struct io_context *ioc = NULL;
 	struct cfq_io_context *cic;
+	struct request_queue *q = cfqd->queue;
 
 	might_sleep_if(gfp_mask & __GFP_WAIT);
 
-	ioc = get_io_context(gfp_mask, cfqd->queue->node);
+	ioc = get_io_context(gfp_mask, q->node);
 	if (!ioc)
 		return NULL;
 
@@ -1889,7 +1487,6 @@ out:
 	smp_read_barrier_depends();
 	if (unlikely(ioc->ioprio_changed))
 		cfq_ioc_set_ioprio(ioc);
-
 	return cic;
 err_free:
 	cfq_cic_free(cic);
@@ -1899,17 +1496,6 @@ err:
 }
 
 static void
-cfq_update_io_thinktime(struct cfq_data *cfqd, struct cfq_io_context *cic)
-{
-	unsigned long elapsed = jiffies - cic->last_end_request;
-	unsigned long ttime = min(elapsed, 2UL * cfqd->cfq_slice_idle);
-
-	cic->ttime_samples = (7*cic->ttime_samples + 256) / 8;
-	cic->ttime_total = (7*cic->ttime_total + 256*ttime) / 8;
-	cic->ttime_mean = (cic->ttime_total + 128) / cic->ttime_samples;
-}
-
-static void
 cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
 		       struct request *rq)
 {
@@ -1940,57 +1526,41 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
 }
 
 /*
- * Disable idle window if the process thinks too long or seeks so much that
- * it doesn't matter
+ * Disable idle window if the process seeks so much that it doesn't matter
  */
-static void
-cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		       struct cfq_io_context *cic)
+static int
+cfq_update_idle_window(struct elevator_queue *eq, void *cfqq,
+					struct request *rq)
 {
-	int old_idle, enable_idle;
+	struct cfq_io_context *cic = RQ_CIC(rq);
 
 	/*
-	 * Don't idle for async or idle io prio class
+	 * Enabling/Disabling idling based on thinktime has been moved
+	 * to the common layer.
 	 */
-	if (!cfq_cfqq_sync(cfqq) || cfq_class_idle(cfqq))
-		return;
-
-	enable_idle = old_idle = cfq_cfqq_idle_window(cfqq);
-
-	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
-	    (cfqd->hw_tag && CIC_SEEKY(cic)))
-		enable_idle = 0;
-	else if (sample_valid(cic->ttime_samples)) {
-		if (cic->ttime_mean > cfqd->cfq_slice_idle)
-			enable_idle = 0;
-		else
-			enable_idle = 1;
-	}
+	if (!atomic_read(&cic->ioc->nr_tasks) ||
+	    (elv_hw_tag(eq) && CIC_SEEKY(cic)))
+		return 0;
 
-	if (old_idle != enable_idle) {
-		cfq_log_cfqq(cfqd, cfqq, "idle=%d", enable_idle);
-		if (enable_idle)
-			cfq_mark_cfqq_idle_window(cfqq);
-		else
-			cfq_clear_cfqq_idle_window(cfqq);
-	}
+	return 1;
 }
 
 /*
  * Check if new_cfqq should preempt the currently active queue. Return 0 for
- * no or if we aren't sure, a 1 will cause a preempt.
+ * no or if we aren't sure, a 1 will cause a preemption attempt.
+ * Some of the preemption logic has been moved to the common layer. Only
+ * cfq-specific parts are left here.
  */
 static int
-cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
-		   struct request *rq)
+cfq_should_preempt(struct request_queue *q, void *new_cfqq, struct request *rq)
 {
-	struct cfq_queue *cfqq;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = elv_active_sched_queue(q->elevator);
 
-	cfqq = cfqd->active_queue;
 	if (!cfqq)
 		return 0;
 
-	if (cfq_slice_used(cfqq))
+	if (elv_ioq_slice_used(cfqq->ioq))
 		return 1;
 
 	if (cfq_class_idle(new_cfqq))
@@ -2013,13 +1583,7 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
 	if (rq_is_meta(rq) && !cfqq->meta_pending)
 		return 1;
 
-	/*
-	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
-	 */
-	if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq))
-		return 1;
-
-	if (!cfqd->active_cic || !cfq_cfqq_wait_request(cfqq))
+	if (!cfqd->active_cic || !elv_ioq_wait_request(cfqq->ioq))
 		return 0;
 
 	/*
@@ -2033,29 +1597,10 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
 }
 
 /*
- * cfqq preempts the active queue. if we allowed preempt with no slice left,
- * let it have half of its nominal slice.
- */
-static void cfq_preempt_queue(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	cfq_log_cfqq(cfqd, cfqq, "preempt");
-	cfq_slice_expired(cfqd, 1);
-
-	/*
-	 * Put the new queue at the front of the of the current list,
-	 * so we know that it will be selected next.
-	 */
-	BUG_ON(!cfq_cfqq_on_rr(cfqq));
-
-	cfq_service_tree_add(cfqd, cfqq, 1);
-
-	cfqq->slice_end = 0;
-	cfq_mark_cfqq_slice_new(cfqq);
-}
-
-/*
  * Called when a new fs request (rq) is added (to cfqq). Check if there's
  * something we should do about it
+ * After enqueuing the request, the decision whether the queue should be
+ * preempted or kicked is taken by the common layer.
  */
 static void
 cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
@@ -2063,45 +1608,12 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 {
 	struct cfq_io_context *cic = RQ_CIC(rq);
 
-	cfqd->rq_queued++;
 	if (rq_is_meta(rq))
 		cfqq->meta_pending++;
 
-	cfq_update_io_thinktime(cfqd, cic);
 	cfq_update_io_seektime(cfqd, cic, rq);
-	cfq_update_idle_window(cfqd, cfqq, cic);
 
 	cic->last_request_pos = rq->sector + rq->nr_sectors;
-
-	if (cfqq == cfqd->active_queue) {
-		/*
-		 * Remember that we saw a request from this process, but
-		 * don't start queuing just yet. Otherwise we risk seeing lots
-		 * of tiny requests, because we disrupt the normal plugging
-		 * and merging. If the request is already larger than a single
-		 * page, let it rip immediately. For that case we assume that
-		 * merging is already done. Ditto for a busy system that
-		 * has other work pending, don't risk delaying until the
-		 * idle timer unplug to continue working.
-		 */
-		if (cfq_cfqq_wait_request(cfqq)) {
-			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
-			    cfqd->busy_queues > 1) {
-				del_timer(&cfqd->idle_slice_timer);
-				blk_start_queueing(cfqd->queue);
-			}
-			cfq_mark_cfqq_must_dispatch(cfqq);
-		}
-	} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
-		/*
-		 * not the active queue - expire current slice if it is
-		 * idle and has expired it's mean thinktime or this new queue
-		 * has some old slice time left and is of higher priority or
-		 * this new queue is RT and the current one is BE
-		 */
-		cfq_preempt_queue(cfqd, cfqq);
-		blk_start_queueing(cfqd->queue);
-	}
 }
 
 static void cfq_insert_request(struct request_queue *q, struct request *rq)
@@ -2119,84 +1631,17 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq)
 	cfq_rq_enqueued(cfqd, cfqq, rq);
 }
 
-/*
- * Update hw_tag based on peak queue depth over 50 samples under
- * sufficient load.
- */
-static void cfq_update_hw_tag(struct cfq_data *cfqd)
-{
-	if (cfqd->rq_in_driver > cfqd->rq_in_driver_peak)
-		cfqd->rq_in_driver_peak = cfqd->rq_in_driver;
-
-	if (cfqd->rq_queued <= CFQ_HW_QUEUE_MIN &&
-	    cfqd->rq_in_driver <= CFQ_HW_QUEUE_MIN)
-		return;
-
-	if (cfqd->hw_tag_samples++ < 50)
-		return;
-
-	if (cfqd->rq_in_driver_peak >= CFQ_HW_QUEUE_MIN)
-		cfqd->hw_tag = 1;
-	else
-		cfqd->hw_tag = 0;
-
-	cfqd->hw_tag_samples = 0;
-	cfqd->rq_in_driver_peak = 0;
-}
-
 static void cfq_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
 	struct cfq_data *cfqd = cfqq->cfqd;
-	const int sync = rq_is_sync(rq);
 	unsigned long now;
 
 	now = jiffies;
 	cfq_log_cfqq(cfqd, cfqq, "complete");
 
-	cfq_update_hw_tag(cfqd);
-
-	WARN_ON(!cfqd->rq_in_driver);
-	WARN_ON(!cfqq->dispatched);
-	cfqd->rq_in_driver--;
-	cfqq->dispatched--;
-
 	if (cfq_cfqq_sync(cfqq))
 		cfqd->sync_flight--;
-
-	if (!cfq_class_idle(cfqq))
-		cfqd->last_end_request = now;
-
-	if (sync)
-		RQ_CIC(rq)->last_end_request = now;
-
-	/*
-	 * If this is the active queue, check if it needs to be expired,
-	 * or if we want to idle in case it has no pending requests.
-	 */
-	if (cfqd->active_queue == cfqq) {
-		const bool cfqq_empty = RB_EMPTY_ROOT(&cfqq->sort_list);
-
-		if (cfq_cfqq_slice_new(cfqq)) {
-			cfq_set_prio_slice(cfqd, cfqq);
-			cfq_clear_cfqq_slice_new(cfqq);
-		}
-		/*
-		 * If there are no requests waiting in this queue, and
-		 * there are other queues ready to issue requests, AND
-		 * those other queues are issuing requests within our
-		 * mean seek distance, give them a chance to run instead
-		 * of idling.
-		 */
-		if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq))
-			cfq_slice_expired(cfqd, 1);
-		else if (cfqq_empty && !cfq_close_cooperator(cfqd, cfqq, 1) &&
-			 sync && !rq_noidle(rq))
-			cfq_arm_slice_timer(cfqd);
-	}
-
-	if (!cfqd->rq_in_driver)
-		cfq_schedule_dispatch(cfqd);
 }
 
 /*
@@ -2205,30 +1650,33 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
  */
 static void cfq_prio_boost(struct cfq_queue *cfqq)
 {
+	struct io_queue *ioq = cfqq->ioq;
+
 	if (has_fs_excl()) {
 		/*
 		 * boost idle prio on transactions that would lock out other
 		 * users of the filesystem
 		 */
 		if (cfq_class_idle(cfqq))
-			cfqq->ioprio_class = IOPRIO_CLASS_BE;
-		if (cfqq->ioprio > IOPRIO_NORM)
-			cfqq->ioprio = IOPRIO_NORM;
+			elv_ioq_set_ioprio_class(ioq, IOPRIO_CLASS_BE);
+		if (elv_ioq_ioprio(ioq) > IOPRIO_NORM)
+			elv_ioq_set_ioprio(ioq, IOPRIO_NORM);
+
 	} else {
 		/*
 		 * check if we need to unboost the queue
 		 */
-		if (cfqq->ioprio_class != cfqq->org_ioprio_class)
-			cfqq->ioprio_class = cfqq->org_ioprio_class;
-		if (cfqq->ioprio != cfqq->org_ioprio)
-			cfqq->ioprio = cfqq->org_ioprio;
+		if (elv_ioq_ioprio_class(ioq) != cfqq->org_ioprio_class)
+			elv_ioq_set_ioprio_class(ioq, cfqq->org_ioprio_class);
+		if (elv_ioq_ioprio(ioq) != cfqq->org_ioprio)
+			elv_ioq_set_ioprio(ioq, cfqq->org_ioprio);
 	}
 }
 
 static inline int __cfq_may_queue(struct cfq_queue *cfqq)
 {
-	if ((cfq_cfqq_wait_request(cfqq) || cfq_cfqq_must_alloc(cfqq)) &&
-	    !cfq_cfqq_must_alloc_slice(cfqq)) {
+	if ((elv_ioq_wait_request(cfqq->ioq) ||
+	   cfq_cfqq_must_alloc(cfqq)) && !cfq_cfqq_must_alloc_slice(cfqq)) {
 		cfq_mark_cfqq_must_alloc_slice(cfqq);
 		return ELV_MQUEUE_MUST;
 	}
@@ -2280,7 +1728,7 @@ static void cfq_put_request(struct request *rq)
 		put_io_context(RQ_CIC(rq)->ioc);
 
 		rq->elevator_private = NULL;
-		rq->elevator_private2 = NULL;
+		rq->ioq = NULL;
 
 		cfq_put_queue(cfqq);
 	}
@@ -2320,119 +1768,31 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 	cfqq->allocated[rw]++;
 	cfq_clear_cfqq_must_alloc(cfqq);
-	atomic_inc(&cfqq->ref);
+	elv_get_ioq(cfqq->ioq);
 
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	rq->elevator_private = cic;
-	rq->elevator_private2 = cfqq;
+	rq->ioq = cfqq->ioq;
 	return 0;
 
 queue_fail:
 	if (cic)
 		put_io_context(cic->ioc);
 
-	cfq_schedule_dispatch(cfqd);
+	elv_schedule_dispatch(cfqd->queue);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 	cfq_log(cfqd, "set_request fail");
 	return 1;
 }
 
-static void cfq_kick_queue(struct work_struct *work)
-{
-	struct cfq_data *cfqd =
-		container_of(work, struct cfq_data, unplug_work);
-	struct request_queue *q = cfqd->queue;
-
-	spin_lock_irq(q->queue_lock);
-	blk_start_queueing(q);
-	spin_unlock_irq(q->queue_lock);
-}
-
-/*
- * Timer running if the active_queue is currently idling inside its time slice
- */
-static void cfq_idle_slice_timer(unsigned long data)
-{
-	struct cfq_data *cfqd = (struct cfq_data *) data;
-	struct cfq_queue *cfqq;
-	unsigned long flags;
-	int timed_out = 1;
-
-	cfq_log(cfqd, "idle timer fired");
-
-	spin_lock_irqsave(cfqd->queue->queue_lock, flags);
-
-	cfqq = cfqd->active_queue;
-	if (cfqq) {
-		timed_out = 0;
-
-		/*
-		 * We saw a request before the queue expired, let it through
-		 */
-		if (cfq_cfqq_must_dispatch(cfqq))
-			goto out_kick;
-
-		/*
-		 * expired
-		 */
-		if (cfq_slice_used(cfqq))
-			goto expire;
-
-		/*
-		 * only expire and reinvoke request handler, if there are
-		 * other queues with pending requests
-		 */
-		if (!cfqd->busy_queues)
-			goto out_cont;
-
-		/*
-		 * not expired and it has a request pending, let it dispatch
-		 */
-		if (!RB_EMPTY_ROOT(&cfqq->sort_list))
-			goto out_kick;
-	}
-expire:
-	cfq_slice_expired(cfqd, timed_out);
-out_kick:
-	cfq_schedule_dispatch(cfqd);
-out_cont:
-	spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
-}
-
-static void cfq_shutdown_timer_wq(struct cfq_data *cfqd)
-{
-	del_timer_sync(&cfqd->idle_slice_timer);
-	cancel_work_sync(&cfqd->unplug_work);
-}
-
-static void cfq_put_async_queues(struct cfq_data *cfqd)
-{
-	int i;
-
-	for (i = 0; i < IOPRIO_BE_NR; i++) {
-		if (cfqd->async_cfqq[0][i])
-			cfq_put_queue(cfqd->async_cfqq[0][i]);
-		if (cfqd->async_cfqq[1][i])
-			cfq_put_queue(cfqd->async_cfqq[1][i]);
-	}
-
-	if (cfqd->async_idle_cfqq)
-		cfq_put_queue(cfqd->async_idle_cfqq);
-}
-
 static void cfq_exit_queue(struct elevator_queue *e)
 {
 	struct cfq_data *cfqd = e->elevator_data;
 	struct request_queue *q = cfqd->queue;
 
-	cfq_shutdown_timer_wq(cfqd);
-
 	spin_lock_irq(q->queue_lock);
 
-	if (cfqd->active_queue)
-		__cfq_slice_expired(cfqd, cfqd->active_queue, 0);
-
 	while (!list_empty(&cfqd->cic_list)) {
 		struct cfq_io_context *cic = list_entry(cfqd->cic_list.next,
 							struct cfq_io_context,
@@ -2441,12 +1801,7 @@ static void cfq_exit_queue(struct elevator_queue *e)
 		__cfq_exit_single_io_context(cfqd, cic);
 	}
 
-	cfq_put_async_queues(cfqd);
-
 	spin_unlock_irq(q->queue_lock);
-
-	cfq_shutdown_timer_wq(cfqd);
-
 	kfree(cfqd);
 }
 
@@ -2459,8 +1814,6 @@ static void *cfq_init_queue(struct request_queue *q)
 	if (!cfqd)
 		return NULL;
 
-	cfqd->service_tree = CFQ_RB_ROOT;
-
 	/*
 	 * Not strictly needed (since RB_ROOT just clears the node and we
 	 * zeroed cfqd on alloc), but better be safe in case someone decides
@@ -2473,23 +1826,12 @@ static void *cfq_init_queue(struct request_queue *q)
 
 	cfqd->queue = q;
 
-	init_timer(&cfqd->idle_slice_timer);
-	cfqd->idle_slice_timer.function = cfq_idle_slice_timer;
-	cfqd->idle_slice_timer.data = (unsigned long) cfqd;
-
-	INIT_WORK(&cfqd->unplug_work, cfq_kick_queue);
-
-	cfqd->last_end_request = jiffies;
 	cfqd->cfq_quantum = cfq_quantum;
 	cfqd->cfq_fifo_expire[0] = cfq_fifo_expire[0];
 	cfqd->cfq_fifo_expire[1] = cfq_fifo_expire[1];
 	cfqd->cfq_back_max = cfq_back_max;
 	cfqd->cfq_back_penalty = cfq_back_penalty;
-	cfqd->cfq_slice[0] = cfq_slice_async;
-	cfqd->cfq_slice[1] = cfq_slice_sync;
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
-	cfqd->cfq_slice_idle = cfq_slice_idle;
-	cfqd->hw_tag = 1;
 
 	return cfqd;
 }
@@ -2554,9 +1896,6 @@ SHOW_FUNCTION(cfq_fifo_expire_sync_show, cfqd->cfq_fifo_expire[1], 1);
 SHOW_FUNCTION(cfq_fifo_expire_async_show, cfqd->cfq_fifo_expire[0], 1);
 SHOW_FUNCTION(cfq_back_seek_max_show, cfqd->cfq_back_max, 0);
 SHOW_FUNCTION(cfq_back_seek_penalty_show, cfqd->cfq_back_penalty, 0);
-SHOW_FUNCTION(cfq_slice_idle_show, cfqd->cfq_slice_idle, 1);
-SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1);
-SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1);
 SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0);
 #undef SHOW_FUNCTION
 
@@ -2584,9 +1923,6 @@ STORE_FUNCTION(cfq_fifo_expire_async_store, &cfqd->cfq_fifo_expire[0], 1,
 STORE_FUNCTION(cfq_back_seek_max_store, &cfqd->cfq_back_max, 0, UINT_MAX, 0);
 STORE_FUNCTION(cfq_back_seek_penalty_store, &cfqd->cfq_back_penalty, 1,
 		UINT_MAX, 0);
-STORE_FUNCTION(cfq_slice_idle_store, &cfqd->cfq_slice_idle, 0, UINT_MAX, 1);
-STORE_FUNCTION(cfq_slice_sync_store, &cfqd->cfq_slice[1], 1, UINT_MAX, 1);
-STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1);
 STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1,
 		UINT_MAX, 0);
 #undef STORE_FUNCTION
@@ -2600,10 +1936,10 @@ static struct elv_fs_entry cfq_attrs[] = {
 	CFQ_ATTR(fifo_expire_async),
 	CFQ_ATTR(back_seek_max),
 	CFQ_ATTR(back_seek_penalty),
-	CFQ_ATTR(slice_sync),
-	CFQ_ATTR(slice_async),
 	CFQ_ATTR(slice_async_rq),
-	CFQ_ATTR(slice_idle),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+	ELV_ATTR(slice_async),
 	__ATTR_NULL
 };
 
@@ -2616,8 +1952,6 @@ static struct elevator_type iosched_cfq = {
 		.elevator_dispatch_fn =		cfq_dispatch_requests,
 		.elevator_add_req_fn =		cfq_insert_request,
 		.elevator_activate_req_fn =	cfq_activate_request,
-		.elevator_deactivate_req_fn =	cfq_deactivate_request,
-		.elevator_queue_empty_fn =	cfq_queue_empty,
 		.elevator_completed_req_fn =	cfq_completed_request,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
@@ -2627,7 +1961,15 @@ static struct elevator_type iosched_cfq = {
 		.elevator_init_fn =		cfq_init_queue,
 		.elevator_exit_fn =		cfq_exit_queue,
 		.trim =				cfq_free_io_context,
+		.elevator_free_sched_queue_fn =	cfq_free_cfq_queue,
+		.elevator_active_ioq_set_fn = 	cfq_active_ioq_set,
+		.elevator_active_ioq_reset_fn =	cfq_active_ioq_reset,
+		.elevator_arm_slice_timer_fn = 	cfq_arm_slice_timer,
+		.elevator_should_preempt_fn = 	cfq_should_preempt,
+		.elevator_update_idle_window_fn = cfq_update_idle_window,
+		.elevator_close_cooperator_fn = cfq_close_cooperator,
 	},
+	.elevator_features =    ELV_IOSCHED_NEED_FQ,
 	.elevator_attrs =	cfq_attrs,
 	.elevator_name =	"cfq",
 	.elevator_owner =	THIS_MODULE,
@@ -2635,14 +1977,6 @@ static struct elevator_type iosched_cfq = {
 
 static int __init cfq_init(void)
 {
-	/*
-	 * could be 0 on HZ < 1000 setups
-	 */
-	if (!cfq_slice_async)
-		cfq_slice_async = 1;
-	if (!cfq_slice_idle)
-		cfq_slice_idle = 1;
-
 	if (cfq_slab_setup())
 		return -ENOMEM;
 
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..5be25b3 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -39,13 +39,8 @@ struct cfq_io_context {
 
 	struct io_context *ioc;
 
-	unsigned long last_end_request;
 	sector_t last_request_pos;
 
-	unsigned long ttime_total;
-	unsigned long ttime_samples;
-	unsigned long ttime_mean;
-
 	unsigned int seek_samples;
 	u64 seek_total;
 	sector_t seek_mean;
-- 
1.6.0.6



* [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

This patch changes cfq to use the fair queuing code from the elevator layer.
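
In short, the conversion looks like this. The fragment below is a condensed,
illustrative sketch: every identifier in it is taken from the patch itself,
but it leans on the elv_*/ioq_* interfaces added earlier in the series and is
not meant to build on its own.

	/* Each cfq_queue now wraps an io_queue owned by the elevator layer */
	struct cfq_queue {
		struct io_queue *ioq;	/* replaces ref, rb_node/rb_key, slice_end ... */
		/* prio tree, fifo and allocation counters remain cfq-private */
	};

	/* Requests map back to their queue through the attached io_queue */
	#define RQ_CFQQ(rq)	(struct cfq_queue *) (ioq_sched_queue((rq)->ioq))

	/* cfq no longer walks its own service tree; the common layer picks
	 * the next busy queue, and per-queue state is read via elv_ helpers */
	cfqq = elv_select_sched_queue(q, 0);
	if (cfqq && elv_ioq_nr_dispatched(cfqq->ioq) >= max_dispatch)
		return 0;

	/* cfq-specific policy is called back through the new elevator_ops hooks */
	.elevator_active_ioq_set_fn	= cfq_active_ioq_set,
	.elevator_arm_slice_timer_fn	= cfq_arm_slice_timer,
	.elevator_should_preempt_fn	= cfq_should_preempt,
	.elevator_update_idle_window_fn	= cfq_update_idle_window,

Async queue bookkeeping moves in the same direction: the per-cfqd
async_cfqq[][] arrays are gone, and async queues are now looked up and
stored per io_group via io_group_async_queue_prio() and
io_group_set_async_queue().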

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched     |    3 +-
 block/cfq-iosched.c       | 1106 +++++++++------------------------------------
 include/linux/iocontext.h |    5 -
 3 files changed, 222 insertions(+), 892 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 3398134..dd5224d 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -3,7 +3,7 @@ if BLOCK
 menu "IO Schedulers"
 
 config ELV_FAIR_QUEUING
-	bool "Elevator Fair Queuing Support"
+	bool
 	default n
 	---help---
 	  Traditionally only cfq had notion of multiple queues and it did
@@ -46,6 +46,7 @@ config IOSCHED_DEADLINE
 
 config IOSCHED_CFQ
 	tristate "CFQ I/O scheduler"
+	select ELV_FAIR_QUEUING
 	default y
 	---help---
 	  The CFQ I/O scheduler tries to distribute bandwidth equally
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..995c8dd 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -12,7 +12,6 @@
 #include <linux/rbtree.h>
 #include <linux/ioprio.h>
 #include <linux/blktrace_api.h>
-
 /*
  * tunables
  */
@@ -23,15 +22,7 @@ static const int cfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
 static const int cfq_back_max = 16 * 1024;
 /* penalty of a backwards seek */
 static const int cfq_back_penalty = 2;
-static const int cfq_slice_sync = HZ / 10;
-static int cfq_slice_async = HZ / 25;
 static const int cfq_slice_async_rq = 2;
-static int cfq_slice_idle = HZ / 125;
-
-/*
- * offset from end of service tree
- */
-#define CFQ_IDLE_DELAY		(HZ / 5)
 
 /*
  * below this threshold, we consider thinktime immediate
@@ -43,7 +34,7 @@ static int cfq_slice_idle = HZ / 125;
 
 #define RQ_CIC(rq)		\
 	((struct cfq_io_context *) (rq)->elevator_private)
-#define RQ_CFQQ(rq)		(struct cfq_queue *) ((rq)->elevator_private2)
+#define RQ_CFQQ(rq)	(struct cfq_queue *) (ioq_sched_queue((rq)->ioq))
 
 static struct kmem_cache *cfq_pool;
 static struct kmem_cache *cfq_ioc_pool;
@@ -53,8 +44,6 @@ static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
 #define CFQ_PRIO_LISTS		IOPRIO_BE_NR
-#define cfq_class_idle(cfqq)	((cfqq)->ioprio_class == IOPRIO_CLASS_IDLE)
-#define cfq_class_rt(cfqq)	((cfqq)->ioprio_class == IOPRIO_CLASS_RT)
 
 #define sample_valid(samples)	((samples) > 80)
 
@@ -75,12 +64,6 @@ struct cfq_rb_root {
  */
 struct cfq_data {
 	struct request_queue *queue;
-
-	/*
-	 * rr list of queues with requests and the count of them
-	 */
-	struct cfq_rb_root service_tree;
-
 	/*
 	 * Each priority tree is sorted by next_request position.  These
 	 * trees are used when determining if two or more queues are
@@ -88,41 +71,11 @@ struct cfq_data {
 	 */
 	struct rb_root prio_trees[CFQ_PRIO_LISTS];
 
-	unsigned int busy_queues;
-	/*
-	 * Used to track any pending rt requests so we can pre-empt current
-	 * non-RT cfqq in service when this value is non-zero.
-	 */
-	unsigned int busy_rt_queues;
-
-	int rq_in_driver;
 	int sync_flight;
 
-	/*
-	 * queue-depth detection
-	 */
-	int rq_queued;
-	int hw_tag;
-	int hw_tag_samples;
-	int rq_in_driver_peak;
-
-	/*
-	 * idle window management
-	 */
-	struct timer_list idle_slice_timer;
-	struct work_struct unplug_work;
-
-	struct cfq_queue *active_queue;
 	struct cfq_io_context *active_cic;
 
-	/*
-	 * async queue for each priority case
-	 */
-	struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
-	struct cfq_queue *async_idle_cfqq;
-
 	sector_t last_position;
-	unsigned long last_end_request;
 
 	/*
 	 * tunables, see top of file
@@ -131,9 +84,7 @@ struct cfq_data {
 	unsigned int cfq_fifo_expire[2];
 	unsigned int cfq_back_penalty;
 	unsigned int cfq_back_max;
-	unsigned int cfq_slice[2];
 	unsigned int cfq_slice_async_rq;
-	unsigned int cfq_slice_idle;
 
 	struct list_head cic_list;
 };
@@ -142,16 +93,11 @@ struct cfq_data {
  * Per process-grouping structure
  */
 struct cfq_queue {
-	/* reference count */
-	atomic_t ref;
+	struct io_queue *ioq;
 	/* various state flags, see below */
 	unsigned int flags;
 	/* parent cfq_data */
 	struct cfq_data *cfqd;
-	/* service_tree member */
-	struct rb_node rb_node;
-	/* service_tree key */
-	unsigned long rb_key;
 	/* prio tree member */
 	struct rb_node p_node;
 	/* prio tree root we belong to, if any */
@@ -167,33 +113,23 @@ struct cfq_queue {
 	/* fifo list of requests in sort_list */
 	struct list_head fifo;
 
-	unsigned long slice_end;
-	long slice_resid;
 	unsigned int slice_dispatch;
 
 	/* pending metadata requests */
 	int meta_pending;
-	/* number of requests that are on the dispatch list or inside driver */
-	int dispatched;
 
 	/* io prio of this group */
-	unsigned short ioprio, org_ioprio;
-	unsigned short ioprio_class, org_ioprio_class;
+	unsigned short org_ioprio;
+	unsigned short org_ioprio_class;
 
 	pid_t pid;
 };
 
 enum cfqq_state_flags {
-	CFQ_CFQQ_FLAG_on_rr = 0,	/* on round-robin busy list */
-	CFQ_CFQQ_FLAG_wait_request,	/* waiting for a request */
-	CFQ_CFQQ_FLAG_must_dispatch,	/* must be allowed a dispatch */
 	CFQ_CFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
 	CFQ_CFQQ_FLAG_must_alloc_slice,	/* per-slice must_alloc flag */
 	CFQ_CFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
-	CFQ_CFQQ_FLAG_idle_window,	/* slice idling enabled */
 	CFQ_CFQQ_FLAG_prio_changed,	/* task priority has changed */
-	CFQ_CFQQ_FLAG_slice_new,	/* no requests dispatched in slice */
-	CFQ_CFQQ_FLAG_sync,		/* synchronous queue */
 	CFQ_CFQQ_FLAG_coop,		/* has done a coop jump of the queue */
 };
 
@@ -211,16 +147,10 @@ static inline int cfq_cfqq_##name(const struct cfq_queue *cfqq)		\
 	return ((cfqq)->flags & (1 << CFQ_CFQQ_FLAG_##name)) != 0;	\
 }
 
-CFQ_CFQQ_FNS(on_rr);
-CFQ_CFQQ_FNS(wait_request);
-CFQ_CFQQ_FNS(must_dispatch);
 CFQ_CFQQ_FNS(must_alloc);
 CFQ_CFQQ_FNS(must_alloc_slice);
 CFQ_CFQQ_FNS(fifo_expire);
-CFQ_CFQQ_FNS(idle_window);
 CFQ_CFQQ_FNS(prio_changed);
-CFQ_CFQQ_FNS(slice_new);
-CFQ_CFQQ_FNS(sync);
 CFQ_CFQQ_FNS(coop);
 #undef CFQ_CFQQ_FNS
 
@@ -259,66 +189,27 @@ static inline int cfq_bio_sync(struct bio *bio)
 	return 0;
 }
 
-/*
- * scheduler run of queue, if there are requests pending and no one in the
- * driver that will restart queueing
- */
-static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
+static inline struct io_group *cfqq_to_io_group(struct cfq_queue *cfqq)
 {
-	if (cfqd->busy_queues) {
-		cfq_log(cfqd, "schedule dispatch");
-		kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work);
-	}
+	return ioq_to_io_group(cfqq->ioq);
 }
 
-static int cfq_queue_empty(struct request_queue *q)
+static inline int cfq_class_idle(struct cfq_queue *cfqq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
-
-	return !cfqd->busy_queues;
+	return elv_ioq_class_idle(cfqq->ioq);
 }
 
-/*
- * Scale schedule slice based on io priority. Use the sync time slice only
- * if a queue is marked sync and has sync io queued. A sync queue with async
- * io only, should not get full sync slice length.
- */
-static inline int cfq_prio_slice(struct cfq_data *cfqd, int sync,
-				 unsigned short prio)
-{
-	const int base_slice = cfqd->cfq_slice[sync];
-
-	WARN_ON(prio >= IOPRIO_BE_NR);
-
-	return base_slice + (base_slice/CFQ_SLICE_SCALE * (4 - prio));
-}
-
-static inline int
-cfq_prio_to_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	return cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio);
-}
-
-static inline void
-cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+static inline int cfq_cfqq_sync(struct cfq_queue *cfqq)
 {
-	cfqq->slice_end = cfq_prio_to_slice(cfqd, cfqq) + jiffies;
-	cfq_log_cfqq(cfqd, cfqq, "set_slice=%lu", cfqq->slice_end - jiffies);
+	return elv_ioq_sync(cfqq->ioq);
 }
 
-/*
- * We need to wrap this check in cfq_cfqq_slice_new(), since ->slice_end
- * isn't valid until the first request from the dispatch is activated
- * and the slice time set.
- */
-static inline int cfq_slice_used(struct cfq_queue *cfqq)
+static inline int cfqq_is_active_queue(struct cfq_queue *cfqq)
 {
-	if (cfq_cfqq_slice_new(cfqq))
-		return 0;
-	if (time_before(jiffies, cfqq->slice_end))
-		return 0;
+	struct cfq_data *cfqd = cfqq->cfqd;
+	struct elevator_queue *e = cfqd->queue->elevator;
 
-	return 1;
+	return (elv_active_sched_queue(e) == cfqq);
 }
 
 /*
@@ -417,33 +308,6 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2)
 }
 
 /*
- * The below is leftmost cache rbtree addon
- */
-static struct cfq_queue *cfq_rb_first(struct cfq_rb_root *root)
-{
-	if (!root->left)
-		root->left = rb_first(&root->rb);
-
-	if (root->left)
-		return rb_entry(root->left, struct cfq_queue, rb_node);
-
-	return NULL;
-}
-
-static void rb_erase_init(struct rb_node *n, struct rb_root *root)
-{
-	rb_erase(n, root);
-	RB_CLEAR_NODE(n);
-}
-
-static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
-{
-	if (root->left == n)
-		root->left = NULL;
-	rb_erase_init(n, &root->rb);
-}
-
-/*
  * would be nice to take fifo expire time into account as well
  */
 static struct request *
@@ -456,10 +320,10 @@ cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 
 	BUG_ON(RB_EMPTY_NODE(&last->rb_node));
 
-	if (rbprev)
+	if (rbprev != NULL)
 		prev = rb_entry_rq(rbprev);
 
-	if (rbnext)
+	if (rbnext != NULL)
 		next = rb_entry_rq(rbnext);
 	else {
 		rbnext = rb_first(&cfqq->sort_list);
@@ -470,95 +334,6 @@ cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	return cfq_choose_req(cfqd, next, prev);
 }
 
-static unsigned long cfq_slice_offset(struct cfq_data *cfqd,
-				      struct cfq_queue *cfqq)
-{
-	/*
-	 * just an approximation, should be ok.
-	 */
-	return (cfqd->busy_queues - 1) * (cfq_prio_slice(cfqd, 1, 0) -
-		       cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio));
-}
-
-/*
- * The cfqd->service_tree holds all pending cfq_queue's that have
- * requests waiting to be processed. It is sorted in the order that
- * we will service the queues.
- */
-static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-				 int add_front)
-{
-	struct rb_node **p, *parent;
-	struct cfq_queue *__cfqq;
-	unsigned long rb_key;
-	int left;
-
-	if (cfq_class_idle(cfqq)) {
-		rb_key = CFQ_IDLE_DELAY;
-		parent = rb_last(&cfqd->service_tree.rb);
-		if (parent && parent != &cfqq->rb_node) {
-			__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-			rb_key += __cfqq->rb_key;
-		} else
-			rb_key += jiffies;
-	} else if (!add_front) {
-		rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
-		rb_key += cfqq->slice_resid;
-		cfqq->slice_resid = 0;
-	} else
-		rb_key = 0;
-
-	if (!RB_EMPTY_NODE(&cfqq->rb_node)) {
-		/*
-		 * same position, nothing more to do
-		 */
-		if (rb_key == cfqq->rb_key)
-			return;
-
-		cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
-	}
-
-	left = 1;
-	parent = NULL;
-	p = &cfqd->service_tree.rb.rb_node;
-	while (*p) {
-		struct rb_node **n;
-
-		parent = *p;
-		__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-
-		/*
-		 * sort RT queues first, we always want to give
-		 * preference to them. IDLE queues goes to the back.
-		 * after that, sort on the next service time.
-		 */
-		if (cfq_class_rt(cfqq) > cfq_class_rt(__cfqq))
-			n = &(*p)->rb_left;
-		else if (cfq_class_rt(cfqq) < cfq_class_rt(__cfqq))
-			n = &(*p)->rb_right;
-		else if (cfq_class_idle(cfqq) < cfq_class_idle(__cfqq))
-			n = &(*p)->rb_left;
-		else if (cfq_class_idle(cfqq) > cfq_class_idle(__cfqq))
-			n = &(*p)->rb_right;
-		else if (rb_key < __cfqq->rb_key)
-			n = &(*p)->rb_left;
-		else
-			n = &(*p)->rb_right;
-
-		if (n == &(*p)->rb_right)
-			left = 0;
-
-		p = n;
-	}
-
-	if (left)
-		cfqd->service_tree.left = &cfqq->rb_node;
-
-	cfqq->rb_key = rb_key;
-	rb_link_node(&cfqq->rb_node, parent, p);
-	rb_insert_color(&cfqq->rb_node, &cfqd->service_tree.rb);
-}
-
 static struct cfq_queue *
 cfq_prio_tree_lookup(struct cfq_data *cfqd, struct rb_root *root,
 		     sector_t sector, struct rb_node **ret_parent,
@@ -620,57 +395,34 @@ static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 		cfqq->p_root = NULL;
 }
 
-/*
- * Update cfqq's position in the service tree.
- */
-static void cfq_resort_rr_list(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+/* An active ioq is being reset. A chance to do cic related stuff. */
+static void cfq_active_ioq_reset(struct request_queue *q, void *sched_queue)
 {
-	/*
-	 * Resorting requires the cfqq to be on the RR list already.
-	 */
-	if (cfq_cfqq_on_rr(cfqq)) {
-		cfq_service_tree_add(cfqd, cfqq, 0);
-		cfq_prio_tree_add(cfqd, cfqq);
-	}
-}
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = sched_queue;
 
-/*
- * add to busy list of queues for service, trying to be fair in ordering
- * the pending list according to last request service
- */
-static void cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	cfq_log_cfqq(cfqd, cfqq, "add_to_rr");
-	BUG_ON(cfq_cfqq_on_rr(cfqq));
-	cfq_mark_cfqq_on_rr(cfqq);
-	cfqd->busy_queues++;
-	if (cfq_class_rt(cfqq))
-		cfqd->busy_rt_queues++;
+	if (cfqd->active_cic) {
+		put_io_context(cfqd->active_cic->ioc);
+		cfqd->active_cic = NULL;
+	}
 
-	cfq_resort_rr_list(cfqd, cfqq);
+	/* Resort the cfqq in prio tree */
+	if (cfqq)
+		cfq_prio_tree_add(cfqd, cfqq);
 }
 
-/*
- * Called when the cfqq no longer has requests pending, remove it from
- * the service tree.
- */
-static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+/* An ioq has been set as active one. */
+static void cfq_active_ioq_set(struct request_queue *q, void *sched_queue,
+				int coop)
 {
-	cfq_log_cfqq(cfqd, cfqq, "del_from_rr");
-	BUG_ON(!cfq_cfqq_on_rr(cfqq));
-	cfq_clear_cfqq_on_rr(cfqq);
+	struct cfq_queue *cfqq = sched_queue;
 
-	if (!RB_EMPTY_NODE(&cfqq->rb_node))
-		cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
-	if (cfqq->p_root) {
-		rb_erase(&cfqq->p_node, cfqq->p_root);
-		cfqq->p_root = NULL;
-	}
+	cfqq->slice_dispatch = 0;
 
-	BUG_ON(!cfqd->busy_queues);
-	cfqd->busy_queues--;
-	if (cfq_class_rt(cfqq))
-		cfqd->busy_rt_queues--;
+	cfq_clear_cfqq_must_alloc_slice(cfqq);
+	cfq_clear_cfqq_fifo_expire(cfqq);
+	if (!coop)
+		cfq_clear_cfqq_coop(cfqq);
 }
 
 /*
@@ -679,7 +431,6 @@ static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 static void cfq_del_rq_rb(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
-	struct cfq_data *cfqd = cfqq->cfqd;
 	const int sync = rq_is_sync(rq);
 
 	BUG_ON(!cfqq->queued[sync]);
@@ -687,8 +438,17 @@ static void cfq_del_rq_rb(struct request *rq)
 
 	elv_rb_del(&cfqq->sort_list, rq);
 
-	if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list))
-		cfq_del_cfqq_rr(cfqd, cfqq);
+	/*
+	 * If this was the last request in the queue, remove the queue from
+	 * the prio tree. For the last request, nr_queued is still 1 because
+	 * the elevator fair queuing layer has not done the accounting yet.
+	 */
+	if (elv_ioq_nr_queued(cfqq->ioq) == 1) {
+		if (cfqq->p_root) {
+			rb_erase(&cfqq->p_node, cfqq->p_root);
+			cfqq->p_root = NULL;
+		}
+	}
 }
 
 static void cfq_add_rq_rb(struct request *rq)
@@ -706,9 +466,6 @@ static void cfq_add_rq_rb(struct request *rq)
 	while ((__alias = elv_rb_add(&cfqq->sort_list, rq)) != NULL)
 		cfq_dispatch_insert(cfqd->queue, __alias);
 
-	if (!cfq_cfqq_on_rr(cfqq))
-		cfq_add_cfqq_rr(cfqd, cfqq);
-
 	/*
 	 * check if this request is a better next-serve candidate
 	 */
@@ -756,23 +513,9 @@ static void cfq_activate_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 
-	cfqd->rq_in_driver++;
-	cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "activate rq, drv=%d",
-						cfqd->rq_in_driver);
-
 	cfqd->last_position = rq->hard_sector + rq->hard_nr_sectors;
 }
 
-static void cfq_deactivate_request(struct request_queue *q, struct request *rq)
-{
-	struct cfq_data *cfqd = q->elevator->elevator_data;
-
-	WARN_ON(!cfqd->rq_in_driver);
-	cfqd->rq_in_driver--;
-	cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "deactivate rq, drv=%d",
-						cfqd->rq_in_driver);
-}
-
 static void cfq_remove_request(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
@@ -783,7 +526,6 @@ static void cfq_remove_request(struct request *rq)
 	list_del_init(&rq->queuelist);
 	cfq_del_rq_rb(rq);
 
-	cfqq->cfqd->rq_queued--;
 	if (rq_is_meta(rq)) {
 		WARN_ON(!cfqq->meta_pending);
 		cfqq->meta_pending--;
@@ -857,93 +599,21 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	return 0;
 }
 
-static void __cfq_set_active_queue(struct cfq_data *cfqd,
-				   struct cfq_queue *cfqq)
-{
-	if (cfqq) {
-		cfq_log_cfqq(cfqd, cfqq, "set_active");
-		cfqq->slice_end = 0;
-		cfqq->slice_dispatch = 0;
-
-		cfq_clear_cfqq_wait_request(cfqq);
-		cfq_clear_cfqq_must_dispatch(cfqq);
-		cfq_clear_cfqq_must_alloc_slice(cfqq);
-		cfq_clear_cfqq_fifo_expire(cfqq);
-		cfq_mark_cfqq_slice_new(cfqq);
-
-		del_timer(&cfqd->idle_slice_timer);
-	}
-
-	cfqd->active_queue = cfqq;
-}
-
 /*
  * current cfqq expired its slice (or was too idle), select new one
  */
 static void
-__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		    int timed_out)
+__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	cfq_log_cfqq(cfqd, cfqq, "slice expired t=%d", timed_out);
-
-	if (cfq_cfqq_wait_request(cfqq))
-		del_timer(&cfqd->idle_slice_timer);
-
-	cfq_clear_cfqq_wait_request(cfqq);
-
-	/*
-	 * store what was left of this slice, if the queue idled/timed out
-	 */
-	if (timed_out && !cfq_cfqq_slice_new(cfqq)) {
-		cfqq->slice_resid = cfqq->slice_end - jiffies;
-		cfq_log_cfqq(cfqd, cfqq, "resid=%ld", cfqq->slice_resid);
-	}
-
-	cfq_resort_rr_list(cfqd, cfqq);
-
-	if (cfqq == cfqd->active_queue)
-		cfqd->active_queue = NULL;
-
-	if (cfqd->active_cic) {
-		put_io_context(cfqd->active_cic->ioc);
-		cfqd->active_cic = NULL;
-	}
+	__elv_ioq_slice_expired(cfqd->queue, cfqq->ioq);
 }
 
-static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out)
+static inline void cfq_slice_expired(struct cfq_data *cfqd)
 {
-	struct cfq_queue *cfqq = cfqd->active_queue;
+	struct cfq_queue *cfqq = elv_active_sched_queue(cfqd->queue->elevator);
 
 	if (cfqq)
-		__cfq_slice_expired(cfqd, cfqq, timed_out);
-}
-
-/*
- * Get next queue for service. Unless we have a queue preemption,
- * we'll simply select the first cfqq in the service tree.
- */
-static struct cfq_queue *cfq_get_next_queue(struct cfq_data *cfqd)
-{
-	if (RB_EMPTY_ROOT(&cfqd->service_tree.rb))
-		return NULL;
-
-	return cfq_rb_first(&cfqd->service_tree);
-}
-
-/*
- * Get and set a new active queue for service.
- */
-static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd,
-					      struct cfq_queue *cfqq)
-{
-	if (!cfqq) {
-		cfqq = cfq_get_next_queue(cfqd);
-		if (cfqq)
-			cfq_clear_cfqq_coop(cfqq);
-	}
-
-	__cfq_set_active_queue(cfqd, cfqq);
-	return cfqq;
+		__cfq_slice_expired(cfqd, cfqq);
 }
 
 static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
@@ -1020,11 +690,12 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
  * associated with the I/O issued by cur_cfqq.  I'm not sure this is a valid
  * assumption.
  */
-static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
-					      struct cfq_queue *cur_cfqq,
+static struct io_queue *cfq_close_cooperator(struct request_queue *q,
+					      void *cur_sched_queue,
 					      int probe)
 {
-	struct cfq_queue *cfqq;
+	struct cfq_queue *cur_cfqq = cur_sched_queue, *cfqq;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 
 	/*
 	 * A valid cfq_io_context is necessary to compare requests against
@@ -1047,38 +718,18 @@ static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
 
 	if (!probe)
 		cfq_mark_cfqq_coop(cfqq);
-	return cfqq;
+	return cfqq->ioq;
 }
 
-static void cfq_arm_slice_timer(struct cfq_data *cfqd)
+static void cfq_arm_slice_timer(struct request_queue *q, void *sched_queue)
 {
-	struct cfq_queue *cfqq = cfqd->active_queue;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = sched_queue;
 	struct cfq_io_context *cic;
 	unsigned long sl;
 
-	/*
-	 * SSD device without seek penalty, disable idling. But only do so
-	 * for devices that support queuing, otherwise we still have a problem
-	 * with sync vs async workloads.
-	 */
-	if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)
-		return;
-
 	WARN_ON(!RB_EMPTY_ROOT(&cfqq->sort_list));
-	WARN_ON(cfq_cfqq_slice_new(cfqq));
-
-	/*
-	 * idle is disabled, either manually or by past process history
-	 */
-	if (!cfqd->cfq_slice_idle || !cfq_cfqq_idle_window(cfqq))
-		return;
-
-	/*
-	 * still requests with the driver, don't idle
-	 */
-	if (cfqd->rq_in_driver)
-		return;
-
+	WARN_ON(elv_ioq_slice_new(cfqq->ioq));
 	/*
 	 * task has exited, don't wait
 	 */
@@ -1086,18 +737,18 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 	if (!cic || !atomic_read(&cic->ioc->nr_tasks))
 		return;
 
-	cfq_mark_cfqq_wait_request(cfqq);
 
+	elv_mark_ioq_wait_request(cfqq->ioq);
 	/*
 	 * we don't want to idle for seeks, but we do want to allow
 	 * fair distribution of slice time for a process doing back-to-back
 	 * seeks. so allow a little bit of time for him to submit a new rq
 	 */
-	sl = cfqd->cfq_slice_idle;
+	sl = elv_get_slice_idle(q->elevator);
 	if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic))
 		sl = min(sl, msecs_to_jiffies(CFQ_MIN_TT));
 
-	mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
+	elv_mod_idle_slice_timer(q->elevator, jiffies + sl);
 	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu", sl);
 }
 
@@ -1106,13 +757,12 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
  */
 static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 
-	cfq_log_cfqq(cfqd, cfqq, "dispatch_insert");
+	cfq_log_cfqq(cfqd, cfqq, "dispatch_insert sect=%d", rq->nr_sectors);
 
 	cfq_remove_request(rq);
-	cfqq->dispatched++;
 	elv_dispatch_sort(q, rq);
 
 	if (cfq_cfqq_sync(cfqq))
@@ -1150,78 +800,11 @@ static inline int
 cfq_prio_to_maxrq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
 	const int base_rq = cfqd->cfq_slice_async_rq;
+	unsigned short ioprio = elv_ioq_ioprio(cfqq->ioq);
 
-	WARN_ON(cfqq->ioprio >= IOPRIO_BE_NR);
+	WARN_ON(ioprio >= IOPRIO_BE_NR);
 
-	return 2 * (base_rq + base_rq * (CFQ_PRIO_LISTS - 1 - cfqq->ioprio));
-}
-
-/*
- * Select a queue for service. If we have a current active queue,
- * check whether to continue servicing it, or retrieve and set a new one.
- */
-static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
-{
-	struct cfq_queue *cfqq, *new_cfqq = NULL;
-
-	cfqq = cfqd->active_queue;
-	if (!cfqq)
-		goto new_queue;
-
-	/*
-	 * The active queue has run out of time, expire it and select new.
-	 */
-	if (cfq_slice_used(cfqq) && !cfq_cfqq_must_dispatch(cfqq))
-		goto expire;
-
-	/*
-	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
-	 * cfqq.
-	 */
-	if (!cfq_class_rt(cfqq) && cfqd->busy_rt_queues) {
-		/*
-		 * We simulate this as cfqq timed out so that it gets to bank
-		 * the remaining of its time slice.
-		 */
-		cfq_log_cfqq(cfqd, cfqq, "preempt");
-		cfq_slice_expired(cfqd, 1);
-		goto new_queue;
-	}
-
-	/*
-	 * The active queue has requests and isn't expired, allow it to
-	 * dispatch.
-	 */
-	if (!RB_EMPTY_ROOT(&cfqq->sort_list))
-		goto keep_queue;
-
-	/*
-	 * If another queue has a request waiting within our mean seek
-	 * distance, let it run.  The expire code will check for close
-	 * cooperators and put the close queue at the front of the service
-	 * tree.
-	 */
-	new_cfqq = cfq_close_cooperator(cfqd, cfqq, 0);
-	if (new_cfqq)
-		goto expire;
-
-	/*
-	 * No requests pending. If the active queue still has requests in
-	 * flight or is idling for a new request, allow either of these
-	 * conditions to happen (or time out) before selecting a new queue.
-	 */
-	if (timer_pending(&cfqd->idle_slice_timer) ||
-	    (cfqq->dispatched && cfq_cfqq_idle_window(cfqq))) {
-		cfqq = NULL;
-		goto keep_queue;
-	}
-
-expire:
-	cfq_slice_expired(cfqd, 0);
-new_queue:
-	cfqq = cfq_set_active_queue(cfqd, new_cfqq);
-keep_queue:
-	return cfqq;
+	return 2 * (base_rq + base_rq * (CFQ_PRIO_LISTS - 1 - ioprio));
 }
 
 static int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
@@ -1246,12 +829,14 @@ static int cfq_forced_dispatch(struct cfq_data *cfqd)
 	struct cfq_queue *cfqq;
 	int dispatched = 0;
 
-	while ((cfqq = cfq_rb_first(&cfqd->service_tree)) != NULL)
+	while ((cfqq = elv_select_sched_queue(cfqd->queue, 1)) != NULL)
 		dispatched += __cfq_forced_dispatch_cfqq(cfqq);
 
-	cfq_slice_expired(cfqd, 0);
+	/* This is probably redundant now; the above loop should make sure
+	 * that all the busy queues have expired. */
+	cfq_slice_expired(cfqd);
 
-	BUG_ON(cfqd->busy_queues);
+	BUG_ON(elv_nr_busy_ioq(cfqd->queue->elevator));
 
 	cfq_log(cfqd, "forced_dispatch=%d\n", dispatched);
 	return dispatched;
@@ -1297,13 +882,10 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	struct cfq_queue *cfqq;
 	unsigned int max_dispatch;
 
-	if (!cfqd->busy_queues)
-		return 0;
-
 	if (unlikely(force))
 		return cfq_forced_dispatch(cfqd);
 
-	cfqq = cfq_select_queue(cfqd);
+	cfqq = elv_select_sched_queue(q, 0);
 	if (!cfqq)
 		return 0;
 
@@ -1320,7 +902,7 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	/*
 	 * Does this cfqq already have too much IO in flight?
 	 */
-	if (cfqq->dispatched >= max_dispatch) {
+	if (elv_ioq_nr_dispatched(cfqq->ioq) >= max_dispatch) {
 		/*
 		 * idle queue must always only have a single IO in flight
 		 */
@@ -1330,13 +912,13 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 		/*
 		 * We have other queues, don't allow more IO from this one
 		 */
-		if (cfqd->busy_queues > 1)
+		if (elv_nr_busy_ioq(q->elevator) > 1)
 			return 0;
 
 		/*
 		 * we are the only queue, allow up to 4 times of 'quantum'
 		 */
-		if (cfqq->dispatched >= 4 * max_dispatch)
+		if (elv_ioq_nr_dispatched(cfqq->ioq) >= 4 * max_dispatch)
 			return 0;
 	}
 
@@ -1345,51 +927,45 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 	 */
 	cfq_dispatch_request(cfqd, cfqq);
 	cfqq->slice_dispatch++;
-	cfq_clear_cfqq_must_dispatch(cfqq);
 
 	/*
 	 * expire an async queue immediately if it has used up its slice. idle
 	 * queue always expire after 1 dispatch round.
 	 */
-	if (cfqd->busy_queues > 1 && ((!cfq_cfqq_sync(cfqq) &&
+	if (elv_nr_busy_ioq(q->elevator) > 1 && ((!cfq_cfqq_sync(cfqq) &&
 	    cfqq->slice_dispatch >= cfq_prio_to_maxrq(cfqd, cfqq)) ||
 	    cfq_class_idle(cfqq))) {
-		cfqq->slice_end = jiffies + 1;
-		cfq_slice_expired(cfqd, 0);
+		cfq_slice_expired(cfqd);
 	}
 
 	cfq_log(cfqd, "dispatched a request");
 	return 1;
 }
 
-/*
- * task holds one reference to the queue, dropped when task exits. each rq
- * in-flight on this queue also holds a reference, dropped when rq is freed.
- *
- * queue lock must be held here.
- */
-static void cfq_put_queue(struct cfq_queue *cfqq)
+static void cfq_free_cfq_queue(struct elevator_queue *e, void *sched_queue)
 {
+	struct cfq_queue *cfqq = sched_queue;
 	struct cfq_data *cfqd = cfqq->cfqd;
 
-	BUG_ON(atomic_read(&cfqq->ref) <= 0);
-
-	if (!atomic_dec_and_test(&cfqq->ref))
-		return;
+	BUG_ON(!cfqq);
 
-	cfq_log_cfqq(cfqd, cfqq, "put_queue");
+	cfq_log_cfqq(cfqd, cfqq, "free_queue");
 	BUG_ON(rb_first(&cfqq->sort_list));
 	BUG_ON(cfqq->allocated[READ] + cfqq->allocated[WRITE]);
-	BUG_ON(cfq_cfqq_on_rr(cfqq));
 
-	if (unlikely(cfqd->active_queue == cfqq)) {
-		__cfq_slice_expired(cfqd, cfqq, 0);
-		cfq_schedule_dispatch(cfqd);
+	if (unlikely(cfqq_is_active_queue(cfqq))) {
+		__cfq_slice_expired(cfqd, cfqq);
+		elv_schedule_dispatch(cfqd->queue);
 	}
 
 	kmem_cache_free(cfq_pool, cfqq);
 }
 
+static inline void cfq_put_queue(struct cfq_queue *cfqq)
+{
+	elv_put_ioq(cfqq->ioq);
+}
+
 /*
  * Must always be called with the rcu_read_lock() held
  */
@@ -1477,9 +1053,9 @@ static void cfq_free_io_context(struct io_context *ioc)
 
 static void cfq_exit_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	if (unlikely(cfqq == cfqd->active_queue)) {
-		__cfq_slice_expired(cfqd, cfqq, 0);
-		cfq_schedule_dispatch(cfqd);
+	if (unlikely(cfqq == elv_active_sched_queue(cfqd->queue->elevator))) {
+		__cfq_slice_expired(cfqd, cfqq);
+		elv_schedule_dispatch(cfqd->queue);
 	}
 
 	cfq_put_queue(cfqq);
@@ -1549,11 +1125,11 @@ static struct cfq_io_context *
 cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 {
 	struct cfq_io_context *cic;
+	struct request_queue *q = cfqd->queue;
 
 	cic = kmem_cache_alloc_node(cfq_ioc_pool, gfp_mask | __GFP_ZERO,
-							cfqd->queue->node);
+							q->node);
 	if (cic) {
-		cic->last_end_request = jiffies;
 		INIT_LIST_HEAD(&cic->queue_list);
 		INIT_HLIST_NODE(&cic->cic_list);
 		cic->dtor = cfq_free_io_context;
@@ -1567,7 +1143,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
 {
 	struct task_struct *tsk = current;
-	int ioprio_class;
+	int ioprio_class, ioprio;
 
 	if (!cfq_cfqq_prio_changed(cfqq))
 		return;
@@ -1580,30 +1156,33 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
 		/*
 		 * no prio set, inherit CPU scheduling settings
 		 */
-		cfqq->ioprio = task_nice_ioprio(tsk);
-		cfqq->ioprio_class = task_nice_ioclass(tsk);
+		ioprio = task_nice_ioprio(tsk);
+		ioprio_class = task_nice_ioclass(tsk);
 		break;
 	case IOPRIO_CLASS_RT:
-		cfqq->ioprio = task_ioprio(ioc);
-		cfqq->ioprio_class = IOPRIO_CLASS_RT;
+		ioprio = task_ioprio(ioc);
+		ioprio_class = IOPRIO_CLASS_RT;
 		break;
 	case IOPRIO_CLASS_BE:
-		cfqq->ioprio = task_ioprio(ioc);
-		cfqq->ioprio_class = IOPRIO_CLASS_BE;
+		ioprio = task_ioprio(ioc);
+		ioprio_class = IOPRIO_CLASS_BE;
 		break;
 	case IOPRIO_CLASS_IDLE:
-		cfqq->ioprio_class = IOPRIO_CLASS_IDLE;
-		cfqq->ioprio = 7;
-		cfq_clear_cfqq_idle_window(cfqq);
+		ioprio_class = IOPRIO_CLASS_IDLE;
+		ioprio = 7;
+		elv_clear_ioq_idle_window(cfqq->ioq);
 		break;
 	}
 
+	elv_ioq_set_ioprio_class(cfqq->ioq, ioprio_class);
+	elv_ioq_set_ioprio(cfqq->ioq, ioprio);
+
 	/*
 	 * keep track of original prio settings in case we have to temporarily
 	 * elevate the priority of this queue
 	 */
-	cfqq->org_ioprio = cfqq->ioprio;
-	cfqq->org_ioprio_class = cfqq->ioprio_class;
+	cfqq->org_ioprio = ioprio;
+	cfqq->org_ioprio_class = ioprio_class;
 	cfq_clear_cfqq_prio_changed(cfqq);
 }
 
@@ -1612,11 +1191,12 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	struct cfq_data *cfqd = cic->key;
 	struct cfq_queue *cfqq;
 	unsigned long flags;
+	struct request_queue *q = cfqd->queue;
 
 	if (unlikely(!cfqd))
 		return;
 
-	spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+	spin_lock_irqsave(q->queue_lock, flags);
 
 	cfqq = cic->cfqq[BLK_RW_ASYNC];
 	if (cfqq) {
@@ -1633,7 +1213,7 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	if (cfqq)
 		cfq_mark_cfqq_prio_changed(cfqq);
 
-	spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
 static void cfq_ioc_set_ioprio(struct io_context *ioc)
@@ -1644,11 +1224,12 @@ static void cfq_ioc_set_ioprio(struct io_context *ioc)
 
 static struct cfq_queue *
 cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
-		     struct io_context *ioc, gfp_t gfp_mask)
+				struct io_context *ioc, gfp_t gfp_mask)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
 	struct cfq_io_context *cic;
-
+	struct request_queue *q = cfqd->queue;
+	struct io_queue *ioq = NULL, *new_ioq = NULL;
 retry:
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1656,8 +1237,7 @@ retry:
 
 	if (!cfqq) {
 		if (new_cfqq) {
-			cfqq = new_cfqq;
-			new_cfqq = NULL;
+			goto alloc_ioq;
 		} else if (gfp_mask & __GFP_WAIT) {
 			/*
 			 * Inform the allocator of the fact that we will
@@ -1678,22 +1258,52 @@ retry:
 			if (!cfqq)
 				goto out;
 		}
+alloc_ioq:
+		if (new_ioq) {
+			ioq = new_ioq;
+			new_ioq = NULL;
+			cfqq = new_cfqq;
+			new_cfqq = NULL;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			new_ioq = elv_alloc_ioq(q,
+					gfp_mask | __GFP_NOFAIL | __GFP_ZERO);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			ioq = elv_alloc_ioq(q, gfp_mask | __GFP_ZERO);
+			if (!ioq) {
+				kmem_cache_free(cfq_pool, cfqq);
+				cfqq = NULL;
+				goto out;
+			}
+		}
 
-		RB_CLEAR_NODE(&cfqq->rb_node);
+		/*
+		 * Both the cfqq and ioq objects have been allocated. Do the
+		 * initializations now.
+		 */
 		RB_CLEAR_NODE(&cfqq->p_node);
 		INIT_LIST_HEAD(&cfqq->fifo);
-
-		atomic_set(&cfqq->ref, 0);
 		cfqq->cfqd = cfqd;
 
 		cfq_mark_cfqq_prio_changed(cfqq);
 
+		cfqq->ioq = ioq;
 		cfq_init_prio_data(cfqq, ioc);
+		elv_init_ioq(q->elevator, ioq, cfqq, cfqq->org_ioprio_class,
+				cfqq->org_ioprio, is_sync);
 
 		if (is_sync) {
 			if (!cfq_class_idle(cfqq))
-				cfq_mark_cfqq_idle_window(cfqq);
-			cfq_mark_cfqq_sync(cfqq);
+				elv_mark_ioq_idle_window(cfqq->ioq);
+			elv_mark_ioq_sync(cfqq->ioq);
 		}
 		cfqq->pid = current->pid;
 		cfq_log_cfqq(cfqd, cfqq, "alloced");
@@ -1702,38 +1312,28 @@ retry:
 	if (new_cfqq)
 		kmem_cache_free(cfq_pool, new_cfqq);
 
+	if (new_ioq)
+		elv_free_ioq(new_ioq);
+
 out:
 	WARN_ON((gfp_mask & __GFP_WAIT) && !cfqq);
 	return cfqq;
 }
 
-static struct cfq_queue **
-cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
-{
-	switch (ioprio_class) {
-	case IOPRIO_CLASS_RT:
-		return &cfqd->async_cfqq[0][ioprio];
-	case IOPRIO_CLASS_BE:
-		return &cfqd->async_cfqq[1][ioprio];
-	case IOPRIO_CLASS_IDLE:
-		return &cfqd->async_idle_cfqq;
-	default:
-		BUG();
-	}
-}
-
 static struct cfq_queue *
 cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
-	      gfp_t gfp_mask)
+					gfp_t gfp_mask)
 {
 	const int ioprio = task_ioprio(ioc);
 	const int ioprio_class = task_ioprio_class(ioc);
-	struct cfq_queue **async_cfqq = NULL;
+	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
+	struct io_group *iog = io_lookup_io_group_current(cfqd->queue);
 
 	if (!is_sync) {
-		async_cfqq = cfq_async_queue_prio(cfqd, ioprio_class, ioprio);
-		cfqq = *async_cfqq;
+		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
+								ioprio);
+		cfqq = async_cfqq;
 	}
 
 	if (!cfqq) {
@@ -1742,15 +1342,11 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 			return NULL;
 	}
 
-	/*
-	 * pin the queue now that it's allocated, scheduler exit will prune it
-	 */
-	if (!is_sync && !(*async_cfqq)) {
-		atomic_inc(&cfqq->ref);
-		*async_cfqq = cfqq;
-	}
+	if (!is_sync && !async_cfqq)
+		io_group_set_async_queue(iog, ioprio_class, ioprio, cfqq->ioq);
 
-	atomic_inc(&cfqq->ref);
+	/* ioc reference */
+	elv_get_ioq(cfqq->ioq);
 	return cfqq;
 }
 
@@ -1829,6 +1425,7 @@ static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
 {
 	unsigned long flags;
 	int ret;
+	struct request_queue *q = cfqd->queue;
 
 	ret = radix_tree_preload(gfp_mask);
 	if (!ret) {
@@ -1845,9 +1442,9 @@ static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
 		radix_tree_preload_end();
 
 		if (!ret) {
-			spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+			spin_lock_irqsave(q->queue_lock, flags);
 			list_add(&cic->queue_list, &cfqd->cic_list);
-			spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+			spin_unlock_irqrestore(q->queue_lock, flags);
 		}
 	}
 
@@ -1867,10 +1464,11 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 {
 	struct io_context *ioc = NULL;
 	struct cfq_io_context *cic;
+	struct request_queue *q = cfqd->queue;
 
 	might_sleep_if(gfp_mask & __GFP_WAIT);
 
-	ioc = get_io_context(gfp_mask, cfqd->queue->node);
+	ioc = get_io_context(gfp_mask, q->node);
 	if (!ioc)
 		return NULL;
 
@@ -1889,7 +1487,6 @@ out:
 	smp_read_barrier_depends();
 	if (unlikely(ioc->ioprio_changed))
 		cfq_ioc_set_ioprio(ioc);
-
 	return cic;
 err_free:
 	cfq_cic_free(cic);
@@ -1899,17 +1496,6 @@ err:
 }
 
 static void
-cfq_update_io_thinktime(struct cfq_data *cfqd, struct cfq_io_context *cic)
-{
-	unsigned long elapsed = jiffies - cic->last_end_request;
-	unsigned long ttime = min(elapsed, 2UL * cfqd->cfq_slice_idle);
-
-	cic->ttime_samples = (7*cic->ttime_samples + 256) / 8;
-	cic->ttime_total = (7*cic->ttime_total + 256*ttime) / 8;
-	cic->ttime_mean = (cic->ttime_total + 128) / cic->ttime_samples;
-}
-
-static void
 cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
 		       struct request *rq)
 {
@@ -1940,57 +1526,41 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
 }
 
 /*
- * Disable idle window if the process thinks too long or seeks so much that
- * it doesn't matter
+ * Disable idle window if the process seeks so much that it doesn't matter
  */
-static void
-cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		       struct cfq_io_context *cic)
+static int
+cfq_update_idle_window(struct elevator_queue *eq, void *cfqq,
+					struct request *rq)
 {
-	int old_idle, enable_idle;
+	struct cfq_io_context *cic = RQ_CIC(rq);
 
 	/*
-	 * Don't idle for async or idle io prio class
+	 * Enabling/disabling idling based on thinktime has been moved
+	 * to the common layer.
 	 */
-	if (!cfq_cfqq_sync(cfqq) || cfq_class_idle(cfqq))
-		return;
-
-	enable_idle = old_idle = cfq_cfqq_idle_window(cfqq);
-
-	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
-	    (cfqd->hw_tag && CIC_SEEKY(cic)))
-		enable_idle = 0;
-	else if (sample_valid(cic->ttime_samples)) {
-		if (cic->ttime_mean > cfqd->cfq_slice_idle)
-			enable_idle = 0;
-		else
-			enable_idle = 1;
-	}
+	if (!atomic_read(&cic->ioc->nr_tasks) ||
+	    (elv_hw_tag(eq) && CIC_SEEKY(cic)))
+		return 0;
 
-	if (old_idle != enable_idle) {
-		cfq_log_cfqq(cfqd, cfqq, "idle=%d", enable_idle);
-		if (enable_idle)
-			cfq_mark_cfqq_idle_window(cfqq);
-		else
-			cfq_clear_cfqq_idle_window(cfqq);
-	}
+	return 1;
 }
 
 /*
  * Check if new_cfqq should preempt the currently active queue. Return 0 for
- * no or if we aren't sure, a 1 will cause a preempt.
+ * no or if we aren't sure, a 1 will cause a preemption attempt.
+ * Some of the preemption logic has been moved to common layer. Only cfq
+ * specific parts are left here.
  */
 static int
-cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
-		   struct request *rq)
+cfq_should_preempt(struct request_queue *q, void *new_cfqq, struct request *rq)
 {
-	struct cfq_queue *cfqq;
+	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_queue *cfqq = elv_active_sched_queue(q->elevator);
 
-	cfqq = cfqd->active_queue;
 	if (!cfqq)
 		return 0;
 
-	if (cfq_slice_used(cfqq))
+	if (elv_ioq_slice_used(cfqq->ioq))
 		return 1;
 
 	if (cfq_class_idle(new_cfqq))
@@ -2013,13 +1583,7 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
 	if (rq_is_meta(rq) && !cfqq->meta_pending)
 		return 1;
 
-	/*
-	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
-	 */
-	if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq))
-		return 1;
-
-	if (!cfqd->active_cic || !cfq_cfqq_wait_request(cfqq))
+	if (!cfqd->active_cic || !elv_ioq_wait_request(cfqq->ioq))
 		return 0;
 
 	/*
@@ -2033,29 +1597,10 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
 }
 
 /*
- * cfqq preempts the active queue. if we allowed preempt with no slice left,
- * let it have half of its nominal slice.
- */
-static void cfq_preempt_queue(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-{
-	cfq_log_cfqq(cfqd, cfqq, "preempt");
-	cfq_slice_expired(cfqd, 1);
-
-	/*
-	 * Put the new queue at the front of the of the current list,
-	 * so we know that it will be selected next.
-	 */
-	BUG_ON(!cfq_cfqq_on_rr(cfqq));
-
-	cfq_service_tree_add(cfqd, cfqq, 1);
-
-	cfqq->slice_end = 0;
-	cfq_mark_cfqq_slice_new(cfqq);
-}
-
-/*
  * Called when a new fs request (rq) is added (to cfqq). Check if there's
  * something we should do about it
+ * After enqueuing the request, the decision whether the queue should be
+ * preempted or kicked is taken by the common layer.
  */
 static void
 cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
@@ -2063,45 +1608,12 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 {
 	struct cfq_io_context *cic = RQ_CIC(rq);
 
-	cfqd->rq_queued++;
 	if (rq_is_meta(rq))
 		cfqq->meta_pending++;
 
-	cfq_update_io_thinktime(cfqd, cic);
 	cfq_update_io_seektime(cfqd, cic, rq);
-	cfq_update_idle_window(cfqd, cfqq, cic);
 
 	cic->last_request_pos = rq->sector + rq->nr_sectors;
-
-	if (cfqq == cfqd->active_queue) {
-		/*
-		 * Remember that we saw a request from this process, but
-		 * don't start queuing just yet. Otherwise we risk seeing lots
-		 * of tiny requests, because we disrupt the normal plugging
-		 * and merging. If the request is already larger than a single
-		 * page, let it rip immediately. For that case we assume that
-		 * merging is already done. Ditto for a busy system that
-		 * has other work pending, don't risk delaying until the
-		 * idle timer unplug to continue working.
-		 */
-		if (cfq_cfqq_wait_request(cfqq)) {
-			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
-			    cfqd->busy_queues > 1) {
-				del_timer(&cfqd->idle_slice_timer);
-				blk_start_queueing(cfqd->queue);
-			}
-			cfq_mark_cfqq_must_dispatch(cfqq);
-		}
-	} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
-		/*
-		 * not the active queue - expire current slice if it is
-		 * idle and has expired it's mean thinktime or this new queue
-		 * has some old slice time left and is of higher priority or
-		 * this new queue is RT and the current one is BE
-		 */
-		cfq_preempt_queue(cfqd, cfqq);
-		blk_start_queueing(cfqd->queue);
-	}
 }
 
 static void cfq_insert_request(struct request_queue *q, struct request *rq)
@@ -2119,84 +1631,17 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq)
 	cfq_rq_enqueued(cfqd, cfqq, rq);
 }
 
-/*
- * Update hw_tag based on peak queue depth over 50 samples under
- * sufficient load.
- */
-static void cfq_update_hw_tag(struct cfq_data *cfqd)
-{
-	if (cfqd->rq_in_driver > cfqd->rq_in_driver_peak)
-		cfqd->rq_in_driver_peak = cfqd->rq_in_driver;
-
-	if (cfqd->rq_queued <= CFQ_HW_QUEUE_MIN &&
-	    cfqd->rq_in_driver <= CFQ_HW_QUEUE_MIN)
-		return;
-
-	if (cfqd->hw_tag_samples++ < 50)
-		return;
-
-	if (cfqd->rq_in_driver_peak >= CFQ_HW_QUEUE_MIN)
-		cfqd->hw_tag = 1;
-	else
-		cfqd->hw_tag = 0;
-
-	cfqd->hw_tag_samples = 0;
-	cfqd->rq_in_driver_peak = 0;
-}
-
 static void cfq_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
 	struct cfq_data *cfqd = cfqq->cfqd;
-	const int sync = rq_is_sync(rq);
 	unsigned long now;
 
 	now = jiffies;
 	cfq_log_cfqq(cfqd, cfqq, "complete");
 
-	cfq_update_hw_tag(cfqd);
-
-	WARN_ON(!cfqd->rq_in_driver);
-	WARN_ON(!cfqq->dispatched);
-	cfqd->rq_in_driver--;
-	cfqq->dispatched--;
-
 	if (cfq_cfqq_sync(cfqq))
 		cfqd->sync_flight--;
-
-	if (!cfq_class_idle(cfqq))
-		cfqd->last_end_request = now;
-
-	if (sync)
-		RQ_CIC(rq)->last_end_request = now;
-
-	/*
-	 * If this is the active queue, check if it needs to be expired,
-	 * or if we want to idle in case it has no pending requests.
-	 */
-	if (cfqd->active_queue == cfqq) {
-		const bool cfqq_empty = RB_EMPTY_ROOT(&cfqq->sort_list);
-
-		if (cfq_cfqq_slice_new(cfqq)) {
-			cfq_set_prio_slice(cfqd, cfqq);
-			cfq_clear_cfqq_slice_new(cfqq);
-		}
-		/*
-		 * If there are no requests waiting in this queue, and
-		 * there are other queues ready to issue requests, AND
-		 * those other queues are issuing requests within our
-		 * mean seek distance, give them a chance to run instead
-		 * of idling.
-		 */
-		if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq))
-			cfq_slice_expired(cfqd, 1);
-		else if (cfqq_empty && !cfq_close_cooperator(cfqd, cfqq, 1) &&
-			 sync && !rq_noidle(rq))
-			cfq_arm_slice_timer(cfqd);
-	}
-
-	if (!cfqd->rq_in_driver)
-		cfq_schedule_dispatch(cfqd);
 }
 
 /*
@@ -2205,30 +1650,33 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
  */
 static void cfq_prio_boost(struct cfq_queue *cfqq)
 {
+	struct io_queue *ioq = cfqq->ioq;
+
 	if (has_fs_excl()) {
 		/*
 		 * boost idle prio on transactions that would lock out other
 		 * users of the filesystem
 		 */
 		if (cfq_class_idle(cfqq))
-			cfqq->ioprio_class = IOPRIO_CLASS_BE;
-		if (cfqq->ioprio > IOPRIO_NORM)
-			cfqq->ioprio = IOPRIO_NORM;
+			elv_ioq_set_ioprio_class(ioq, IOPRIO_CLASS_BE);
+		if (elv_ioq_ioprio(ioq) > IOPRIO_NORM)
+			elv_ioq_set_ioprio(ioq, IOPRIO_NORM);
+
 	} else {
 		/*
 		 * check if we need to unboost the queue
 		 */
-		if (cfqq->ioprio_class != cfqq->org_ioprio_class)
-			cfqq->ioprio_class = cfqq->org_ioprio_class;
-		if (cfqq->ioprio != cfqq->org_ioprio)
-			cfqq->ioprio = cfqq->org_ioprio;
+		if (elv_ioq_ioprio_class(ioq) != cfqq->org_ioprio_class)
+			elv_ioq_set_ioprio_class(ioq, cfqq->org_ioprio_class);
+		if (elv_ioq_ioprio(ioq) != cfqq->org_ioprio)
+			elv_ioq_set_ioprio(ioq, cfqq->org_ioprio);
 	}
 }
 
 static inline int __cfq_may_queue(struct cfq_queue *cfqq)
 {
-	if ((cfq_cfqq_wait_request(cfqq) || cfq_cfqq_must_alloc(cfqq)) &&
-	    !cfq_cfqq_must_alloc_slice(cfqq)) {
+	if ((elv_ioq_wait_request(cfqq->ioq) ||
+	   cfq_cfqq_must_alloc(cfqq)) && !cfq_cfqq_must_alloc_slice(cfqq)) {
 		cfq_mark_cfqq_must_alloc_slice(cfqq);
 		return ELV_MQUEUE_MUST;
 	}
@@ -2280,7 +1728,7 @@ static void cfq_put_request(struct request *rq)
 		put_io_context(RQ_CIC(rq)->ioc);
 
 		rq->elevator_private = NULL;
-		rq->elevator_private2 = NULL;
+		rq->ioq = NULL;
 
 		cfq_put_queue(cfqq);
 	}
@@ -2320,119 +1768,31 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 	cfqq->allocated[rw]++;
 	cfq_clear_cfqq_must_alloc(cfqq);
-	atomic_inc(&cfqq->ref);
+	elv_get_ioq(cfqq->ioq);
 
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	rq->elevator_private = cic;
-	rq->elevator_private2 = cfqq;
+	rq->ioq = cfqq->ioq;
 	return 0;
 
 queue_fail:
 	if (cic)
 		put_io_context(cic->ioc);
 
-	cfq_schedule_dispatch(cfqd);
+	elv_schedule_dispatch(cfqd->queue);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 	cfq_log(cfqd, "set_request fail");
 	return 1;
 }
 
-static void cfq_kick_queue(struct work_struct *work)
-{
-	struct cfq_data *cfqd =
-		container_of(work, struct cfq_data, unplug_work);
-	struct request_queue *q = cfqd->queue;
-
-	spin_lock_irq(q->queue_lock);
-	blk_start_queueing(q);
-	spin_unlock_irq(q->queue_lock);
-}
-
-/*
- * Timer running if the active_queue is currently idling inside its time slice
- */
-static void cfq_idle_slice_timer(unsigned long data)
-{
-	struct cfq_data *cfqd = (struct cfq_data *) data;
-	struct cfq_queue *cfqq;
-	unsigned long flags;
-	int timed_out = 1;
-
-	cfq_log(cfqd, "idle timer fired");
-
-	spin_lock_irqsave(cfqd->queue->queue_lock, flags);
-
-	cfqq = cfqd->active_queue;
-	if (cfqq) {
-		timed_out = 0;
-
-		/*
-		 * We saw a request before the queue expired, let it through
-		 */
-		if (cfq_cfqq_must_dispatch(cfqq))
-			goto out_kick;
-
-		/*
-		 * expired
-		 */
-		if (cfq_slice_used(cfqq))
-			goto expire;
-
-		/*
-		 * only expire and reinvoke request handler, if there are
-		 * other queues with pending requests
-		 */
-		if (!cfqd->busy_queues)
-			goto out_cont;
-
-		/*
-		 * not expired and it has a request pending, let it dispatch
-		 */
-		if (!RB_EMPTY_ROOT(&cfqq->sort_list))
-			goto out_kick;
-	}
-expire:
-	cfq_slice_expired(cfqd, timed_out);
-out_kick:
-	cfq_schedule_dispatch(cfqd);
-out_cont:
-	spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
-}
-
-static void cfq_shutdown_timer_wq(struct cfq_data *cfqd)
-{
-	del_timer_sync(&cfqd->idle_slice_timer);
-	cancel_work_sync(&cfqd->unplug_work);
-}
-
-static void cfq_put_async_queues(struct cfq_data *cfqd)
-{
-	int i;
-
-	for (i = 0; i < IOPRIO_BE_NR; i++) {
-		if (cfqd->async_cfqq[0][i])
-			cfq_put_queue(cfqd->async_cfqq[0][i]);
-		if (cfqd->async_cfqq[1][i])
-			cfq_put_queue(cfqd->async_cfqq[1][i]);
-	}
-
-	if (cfqd->async_idle_cfqq)
-		cfq_put_queue(cfqd->async_idle_cfqq);
-}
-
 static void cfq_exit_queue(struct elevator_queue *e)
 {
 	struct cfq_data *cfqd = e->elevator_data;
 	struct request_queue *q = cfqd->queue;
 
-	cfq_shutdown_timer_wq(cfqd);
-
 	spin_lock_irq(q->queue_lock);
 
-	if (cfqd->active_queue)
-		__cfq_slice_expired(cfqd, cfqd->active_queue, 0);
-
 	while (!list_empty(&cfqd->cic_list)) {
 		struct cfq_io_context *cic = list_entry(cfqd->cic_list.next,
 							struct cfq_io_context,
@@ -2441,12 +1801,7 @@ static void cfq_exit_queue(struct elevator_queue *e)
 		__cfq_exit_single_io_context(cfqd, cic);
 	}
 
-	cfq_put_async_queues(cfqd);
-
 	spin_unlock_irq(q->queue_lock);
-
-	cfq_shutdown_timer_wq(cfqd);
-
 	kfree(cfqd);
 }
 
@@ -2459,8 +1814,6 @@ static void *cfq_init_queue(struct request_queue *q)
 	if (!cfqd)
 		return NULL;
 
-	cfqd->service_tree = CFQ_RB_ROOT;
-
 	/*
 	 * Not strictly needed (since RB_ROOT just clears the node and we
 	 * zeroed cfqd on alloc), but better be safe in case someone decides
@@ -2473,23 +1826,12 @@ static void *cfq_init_queue(struct request_queue *q)
 
 	cfqd->queue = q;
 
-	init_timer(&cfqd->idle_slice_timer);
-	cfqd->idle_slice_timer.function = cfq_idle_slice_timer;
-	cfqd->idle_slice_timer.data = (unsigned long) cfqd;
-
-	INIT_WORK(&cfqd->unplug_work, cfq_kick_queue);
-
-	cfqd->last_end_request = jiffies;
 	cfqd->cfq_quantum = cfq_quantum;
 	cfqd->cfq_fifo_expire[0] = cfq_fifo_expire[0];
 	cfqd->cfq_fifo_expire[1] = cfq_fifo_expire[1];
 	cfqd->cfq_back_max = cfq_back_max;
 	cfqd->cfq_back_penalty = cfq_back_penalty;
-	cfqd->cfq_slice[0] = cfq_slice_async;
-	cfqd->cfq_slice[1] = cfq_slice_sync;
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
-	cfqd->cfq_slice_idle = cfq_slice_idle;
-	cfqd->hw_tag = 1;
 
 	return cfqd;
 }
@@ -2554,9 +1896,6 @@ SHOW_FUNCTION(cfq_fifo_expire_sync_show, cfqd->cfq_fifo_expire[1], 1);
 SHOW_FUNCTION(cfq_fifo_expire_async_show, cfqd->cfq_fifo_expire[0], 1);
 SHOW_FUNCTION(cfq_back_seek_max_show, cfqd->cfq_back_max, 0);
 SHOW_FUNCTION(cfq_back_seek_penalty_show, cfqd->cfq_back_penalty, 0);
-SHOW_FUNCTION(cfq_slice_idle_show, cfqd->cfq_slice_idle, 1);
-SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1);
-SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1);
 SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0);
 #undef SHOW_FUNCTION
 
@@ -2584,9 +1923,6 @@ STORE_FUNCTION(cfq_fifo_expire_async_store, &cfqd->cfq_fifo_expire[0], 1,
 STORE_FUNCTION(cfq_back_seek_max_store, &cfqd->cfq_back_max, 0, UINT_MAX, 0);
 STORE_FUNCTION(cfq_back_seek_penalty_store, &cfqd->cfq_back_penalty, 1,
 		UINT_MAX, 0);
-STORE_FUNCTION(cfq_slice_idle_store, &cfqd->cfq_slice_idle, 0, UINT_MAX, 1);
-STORE_FUNCTION(cfq_slice_sync_store, &cfqd->cfq_slice[1], 1, UINT_MAX, 1);
-STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1);
 STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1,
 		UINT_MAX, 0);
 #undef STORE_FUNCTION
@@ -2600,10 +1936,10 @@ static struct elv_fs_entry cfq_attrs[] = {
 	CFQ_ATTR(fifo_expire_async),
 	CFQ_ATTR(back_seek_max),
 	CFQ_ATTR(back_seek_penalty),
-	CFQ_ATTR(slice_sync),
-	CFQ_ATTR(slice_async),
 	CFQ_ATTR(slice_async_rq),
-	CFQ_ATTR(slice_idle),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+	ELV_ATTR(slice_async),
 	__ATTR_NULL
 };
 
@@ -2616,8 +1952,6 @@ static struct elevator_type iosched_cfq = {
 		.elevator_dispatch_fn =		cfq_dispatch_requests,
 		.elevator_add_req_fn =		cfq_insert_request,
 		.elevator_activate_req_fn =	cfq_activate_request,
-		.elevator_deactivate_req_fn =	cfq_deactivate_request,
-		.elevator_queue_empty_fn =	cfq_queue_empty,
 		.elevator_completed_req_fn =	cfq_completed_request,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
@@ -2627,7 +1961,15 @@ static struct elevator_type iosched_cfq = {
 		.elevator_init_fn =		cfq_init_queue,
 		.elevator_exit_fn =		cfq_exit_queue,
 		.trim =				cfq_free_io_context,
+		.elevator_free_sched_queue_fn =	cfq_free_cfq_queue,
+		.elevator_active_ioq_set_fn = 	cfq_active_ioq_set,
+		.elevator_active_ioq_reset_fn =	cfq_active_ioq_reset,
+		.elevator_arm_slice_timer_fn = 	cfq_arm_slice_timer,
+		.elevator_should_preempt_fn = 	cfq_should_preempt,
+		.elevator_update_idle_window_fn = cfq_update_idle_window,
+		.elevator_close_cooperator_fn = cfq_close_cooperator,
 	},
+	.elevator_features =    ELV_IOSCHED_NEED_FQ,
 	.elevator_attrs =	cfq_attrs,
 	.elevator_name =	"cfq",
 	.elevator_owner =	THIS_MODULE,
@@ -2635,14 +1977,6 @@ static struct elevator_type iosched_cfq = {
 
 static int __init cfq_init(void)
 {
-	/*
-	 * could be 0 on HZ < 1000 setups
-	 */
-	if (!cfq_slice_async)
-		cfq_slice_async = 1;
-	if (!cfq_slice_idle)
-		cfq_slice_idle = 1;
-
 	if (cfq_slab_setup())
 		return -ENOMEM;
 
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..5be25b3 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -39,13 +39,8 @@ struct cfq_io_context {
 
 	struct io_context *ioc;
 
-	unsigned long last_end_request;
 	sector_t last_request_pos;
 
-	unsigned long ttime_total;
-	unsigned long ttime_samples;
-	unsigned long ttime_mean;
-
 	unsigned int seek_samples;
 	u64 seek_total;
 	sector_t seek_mean;
-- 
1.6.0.6


* [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevator layer
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (3 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 06/20] io-controller: cfq changes to use " Vivek Goyal
                     ` (16 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o This patch enables hierarchical fair queuing in the common layer. It is
  controlled by the config option CONFIG_GROUP_IOSCHED.
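
  For reference, turning this on in a test build should just be a matter of
  the .config fragment below (a sketch only; the exact symbol comes from the
  init/Kconfig hunk in this patch, and the CONFIG_CGROUPS dependency is
  assumed here because the controller is a cgroup subsystem):

	CONFIG_CGROUPS=y
	CONFIG_GROUP_IOSCHED=y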

o Requests keep a reference on the ioq, and the ioq keeps a reference on
  its group. For async queues in CFQ, and for the single ioq in other
  schedulers, the io_group also keeps a reference on the io_queue. This
  reference on the ioq is dropped when the queue is released
  (elv_release_ioq), so the queue can be freed.

  When a queue is released, it drops its reference to the io_group, and
  the io_group is released only after all its queues have been released.
  Child groups also take a reference on their parent group and release it
  when they are destroyed.
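
  To make the ownership chain concrete, below is a rough sketch of the
  release path being described (illustrative only, not code from this
  patch; the sketch_* types and helpers are invented for the example and
  locking/RCU details are omitted):

	#include <linux/slab.h>
	#include <asm/atomic.h>

	struct sketch_iog {
		atomic_t ref;			/* held by child groups and by each ioq in the group */
		struct sketch_iog *parent;	/* reference taken when the child group is created */
	};

	struct sketch_ioq {
		atomic_t ref;		/* held by requests, the io context and (for async) the group */
		struct sketch_iog *iog;	/* reference taken when the ioq is initialized */
	};

	static void sketch_put_iog(struct sketch_iog *iog)
	{
		/* freeing a group drops the reference it holds on its parent */
		while (iog && atomic_dec_and_test(&iog->ref)) {
			struct sketch_iog *parent = iog->parent;

			kfree(iog);
			iog = parent;
		}
	}

	static void sketch_put_ioq(struct sketch_ioq *ioq)
	{
		if (atomic_dec_and_test(&ioq->ref)) {
			/* last reference gone: drop the queue's reference on its group */
			sketch_put_iog(ioq->iog);
			kfree(ioq);	/* now the queue itself can be freed */
		}
	}

  The real patch routes these drops through elv_put_ioq()/elv_release_ioq()
  and the io_group reference helpers; the sketch only shows the intended
  order of the drops.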

o Reads of iocg->group_data are not always done under iocg->lock, so all
  operations on that list are still protected by RCU. All modifications
  to iocg->group_data must be done under iocg->lock.

  Whenever both iocg->lock and queue_lock need to be held, queue_lock
  should be taken first; this avoids deadlocks. To avoid a race between
  cgroup deletion and elevator switch, the following algorithm is used:

	- The cgroup deletion path holds iocg->lock and removes the iog
	  entry from the iocg->group_data list. Then it drops iocg->lock,
	  takes queue_lock and destroys the iog. So this path never holds
	  iocg->lock and queue_lock at the same time. Also, since the iog
	  is removed from iocg->group_data under iocg->lock, this path
	  cannot race with an elevator switch.

	- The elevator switch path does not remove the iog from the
	  iocg->group_data list directly. It first takes iocg->lock and
	  scans iocg->group_data again to see if the iog is still there;
	  it removes the iog only if it finds it. Otherwise, cgroup
	  deletion must already have removed it from the list, and the
	  cgroup deletion path is then responsible for destroying the iog.

  So whichever path removes the iog from the iocg->group_data list also
  does the final destruction of the iog, by calling __io_destroy_group().
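
  A condensed sketch of the two paths (pseudo-kernel C, not code from this
  patch: the sketch_* names are invented, the deletion path really walks
  iocg->group_data under the lock rather than being handed an iog, and the
  RCU grace period before the final free is omitted):

	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/rculist.h>

	struct sketch_iog {
		struct hlist_node group_node;	/* entry on iocg->group_data */
	};

	struct sketch_iocg {
		spinlock_t lock;
		struct hlist_head group_data;
	};

	/* Cgroup deletion: never holds iocg->lock and queue_lock together. */
	static void sketch_cgroup_delete(struct sketch_iocg *iocg,
					 struct sketch_iog *iog,
					 spinlock_t *queue_lock)
	{
		spin_lock_irq(&iocg->lock);
		hlist_del_init_rcu(&iog->group_node);	/* unlink from group_data */
		spin_unlock_irq(&iocg->lock);

		spin_lock_irq(queue_lock);
		kfree(iog);		/* stands in for __io_destroy_group() */
		spin_unlock_irq(queue_lock);
	}

	/* Elevator switch: queue_lock first, iocg->lock nested inside it. */
	static void sketch_elevator_switch(struct sketch_iocg *iocg,
					   struct sketch_iog *iog,
					   spinlock_t *queue_lock)
	{
		spin_lock_irq(queue_lock);
		spin_lock(&iocg->lock);
		if (!hlist_unhashed(&iog->group_node)) {
			/* deletion has not unlinked it yet; do the final removal */
			hlist_del_init_rcu(&iog->group_node);
			kfree(iog);	/* again stands in for __io_destroy_group() */
		}
		spin_unlock(&iocg->lock);
		spin_unlock_irq(queue_lock);
	}

  The point is only the ordering: whichever path wins the unlink under
  iocg->lock is the one that does the final destruction of the iog.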

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
Signed-off-by: Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
Signed-off-by: Aristeu Rozanski <aris-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/blk-ioc.c               |    3 +
 block/cfq-iosched.c           |    2 +
 block/elevator-fq.c           | 1221 +++++++++++++++++++++++++++++++++++++----
 block/elevator-fq.h           |  169 ++++++-
 block/elevator.c              |    4 +
 include/linux/blkdev.h        |    2 +-
 include/linux/cgroup_subsys.h |    7 +
 include/linux/iocontext.h     |    5 +
 init/Kconfig                  |    8 +
 9 files changed, 1313 insertions(+), 108 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..8f0f6cf 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -95,6 +95,9 @@ struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
 		spin_lock_init(&ret->lock);
 		ret->ioprio_changed = 0;
 		ret->ioprio = 0;
+#ifdef CONFIG_GROUP_IOSCHED
+		ret->cgroup_changed = 0;
+#endif
 		ret->last_waited = jiffies; /* doesn't matter... */
 		ret->nr_batch_requests = 0; /* because this is 0 */
 		ret->aic = NULL;
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 995c8dd..1b67303 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1306,6 +1306,8 @@ alloc_ioq:
 			elv_mark_ioq_sync(cfqq->ioq);
 		}
 		cfqq->pid = current->pid;
+		/* ioq reference on iog */
+		elv_get_iog(iog);
 		cfq_log_cfqq(cfqd, cfqq, "alloced");
 	}
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 3e956dc..e52ace7 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -26,6 +26,10 @@ static int elv_rate_sampling_window = HZ / 10;
 
 #define ELV_SLICE_SCALE		(5)
 #define ELV_HW_QUEUE_MIN	(5)
+
+#define IO_DEFAULT_GRP_WEIGHT  500
+#define IO_DEFAULT_GRP_CLASS   IOPRIO_CLASS_BE
+
 #define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
 				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
 
@@ -33,6 +37,7 @@ static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
 					struct io_queue *ioq, int probe);
 struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 						 int extract);
+void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
 
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
@@ -51,6 +56,148 @@ elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
 }
 
 /* Mainly the BFQ scheduling code Follows */
+#ifdef CONFIG_GROUP_IOSCHED
+#define for_each_entity(entity)	\
+	for (; entity != NULL; entity = entity->parent)
+
+#define for_each_entity_safe(entity, parent) \
+	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
+
+
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract);
+void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
+					int requeue);
+void elv_activate_ioq(struct io_queue *ioq, int add_front);
+void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int requeue);
+
+static int bfq_update_next_active(struct io_sched_data *sd)
+{
+	struct io_group *iog;
+	struct io_entity *entity, *next_active;
+
+	if (sd->active_entity != NULL)
+		/* will update/requeue at the end of service */
+		return 0;
+
+	/*
+	 * NOTE: this can be improved in many ways, such as returning
+	 * 1 (and thus propagating upwards the update) only when the
+	 * budget changes, or caching the bfqq that will be scheduled
+	 * next from this subtree.  For now we worry more about
+	 * correctness than about performance...
+	 */
+	next_active = bfq_lookup_next_entity(sd, 0);
+	sd->next_active = next_active;
+
+	if (next_active != NULL) {
+		iog = container_of(sd, struct io_group, sched_data);
+		entity = iog->my_entity;
+		if (entity != NULL)
+			entity->budget = next_active->budget;
+	}
+
+	return 1;
+}
+
+static inline void bfq_check_next_active(struct io_sched_data *sd,
+					 struct io_entity *entity)
+{
+	BUG_ON(sd->next_active != entity);
+}
+
+static inline int iog_deleting(struct io_group *iog)
+{
+	return iog->deleting;
+}
+
+/* Do the two (enqueued) entities belong to the same group ? */
+static inline int
+is_same_group(struct io_entity *entity, struct io_entity *new_entity)
+{
+	if (entity->sched_data == new_entity->sched_data)
+		return 1;
+
+	return 0;
+}
+
+static inline struct io_entity *parent_entity(struct io_entity *entity)
+{
+	return entity->parent;
+}
+
+/* return depth at which a io entity is present in the hierarchy */
+static inline int depth_entity(struct io_entity *entity)
+{
+	int depth = 0;
+
+	for_each_entity(entity)
+		depth++;
+
+	return depth;
+}
+
+static void bfq_find_matching_entity(struct io_entity **entity,
+			struct io_entity **new_entity)
+{
+	int entity_depth, new_entity_depth;
+
+	/*
+	 * A preemption test can only be made between sibling entities, i.e.
+	 * entities that share a common parent. Walk up the hierarchy of both
+	 * entities until we find ancestors that are siblings under a common
+	 * parent.
+	 */
+
+	/* First walk up until both entities are at same depth */
+	entity_depth = depth_entity(*entity);
+	new_entity_depth = depth_entity(*new_entity);
+
+	while (entity_depth > new_entity_depth) {
+		entity_depth--;
+		*entity = parent_entity(*entity);
+	}
+
+	while (new_entity_depth > entity_depth) {
+		new_entity_depth--;
+		*new_entity = parent_entity(*new_entity);
+	}
+
+	while (!is_same_group(*entity, *new_entity)) {
+		*entity = parent_entity(*entity);
+		*new_entity = parent_entity(*new_entity);
+	}
+}
+
+#else /* GROUP_IOSCHED */
+#define for_each_entity(entity)	\
+	for (; entity != NULL; entity = NULL)
+
+#define for_each_entity_safe(entity, parent) \
+	for (parent = NULL; entity != NULL; entity = parent)
+
+static inline int bfq_update_next_active(struct io_sched_data *sd)
+{
+	return 0;
+}
+
+static inline void bfq_check_next_active(struct io_sched_data *sd,
+					 struct io_entity *entity)
+{
+}
+
+static inline int iog_deleting(struct io_group *iog)
+{
+	/* In flat mode, root cgroup can't be deleted. */
+	return 0;
+}
+
+static void bfq_find_matching_entity(struct io_entity **entity,
+					struct io_entity **new_entity)
+{
+}
+#endif /* GROUP_IOSCHED */
 
 /*
  * Shift for timestamp calculations.  This actually limits the maximum
@@ -283,7 +430,6 @@ static void bfq_active_insert(struct io_service_tree *st,
 	struct rb_node *node = &entity->rb_node;
 
 	bfq_insert(&st->active, entity);
-
 	if (node->rb_left != NULL)
 		node = node->rb_left;
 	else if (node->rb_right != NULL)
@@ -292,16 +438,6 @@ static void bfq_active_insert(struct io_service_tree *st,
 	bfq_update_active_tree(node);
 }
 
-/**
- * bfq_ioprio_to_weight - calc a weight from an ioprio.
- * @ioprio: the ioprio value to convert.
- */
-static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
-{
-	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
-	return IOPRIO_BE_NR - ioprio;
-}
-
 void bfq_get_entity(struct io_entity *entity)
 {
 	struct io_queue *ioq = io_entity_to_ioq(entity);
@@ -310,13 +446,6 @@ void bfq_get_entity(struct io_entity *entity)
 		elv_get_ioq(ioq);
 }
 
-void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
-{
-	entity->ioprio = entity->new_ioprio;
-	entity->ioprio_class = entity->new_ioprio_class;
-	entity->sched_data = &iog->sched_data;
-}
-
 /**
  * bfq_find_deepest - find the deepest node that an extraction can modify.
  * @node: the node being removed.
@@ -359,7 +488,6 @@ static void bfq_active_extract(struct io_service_tree *st,
 
 	node = bfq_find_deepest(&entity->rb_node);
 	bfq_extract(&st->active, entity);
-
 	if (node != NULL)
 		bfq_update_active_tree(node);
 }
@@ -454,8 +582,10 @@ __bfq_entity_update_prio(struct io_service_tree *old_st,
 	struct io_queue *ioq = io_entity_to_ioq(entity);
 
 	if (entity->ioprio_changed) {
+		old_st->wsum -= entity->weight;
 		entity->ioprio = entity->new_ioprio;
 		entity->ioprio_class = entity->new_ioprio_class;
+		entity->weight = entity->new_weight;
 		entity->ioprio_changed = 0;
 
 		/*
@@ -467,9 +597,6 @@ __bfq_entity_update_prio(struct io_service_tree *old_st,
 			entity->budget = elv_prio_to_slice(efqd, ioq);
 		}
 
-		old_st->wsum -= entity->weight;
-		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
-
 		/*
 		 * NOTE: here we may be changing the weight too early,
 		 * this will cause unfairness.  The correct approach
@@ -551,11 +678,8 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 	if (add_front) {
 		struct io_entity *next_entity;
 
-		/*
-		 * Determine the entity which will be dispatched next
-		 * Use sd->next_active once hierarchical patch is applied
-		 */
-		next_entity = bfq_lookup_next_entity(sd, 0);
+		/* Determine the entity which will be dispatched next */
+		next_entity = sd->next_active;
 
 		if (next_entity && next_entity != entity) {
 			struct io_service_tree *new_st;
@@ -582,12 +706,27 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 }
 
 /**
- * bfq_activate_entity - activate an entity.
+ * bfq_activate_entity - activate an entity and its ancestors if necessary.
  * @entity: the entity to activate.
+ * Activate @entity and all the entities on the path from it to the root.
  */
 void bfq_activate_entity(struct io_entity *entity, int add_front)
 {
-	__bfq_activate_entity(entity, add_front);
+	struct io_sched_data *sd;
+
+	for_each_entity(entity) {
+		__bfq_activate_entity(entity, add_front);
+
+		add_front = 0;
+		sd = entity->sched_data;
+		if (!bfq_update_next_active(sd))
+			/*
+			 * No need to propagate the activation to the
+			 * upper entities, as they will be updated when
+			 * the active entity is rescheduled.
+			 */
+			break;
+	}
 }
 
 /**
@@ -623,12 +762,16 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
 	else if (entity->tree != NULL)
 		BUG();
 
+	if (was_active || sd->next_active == entity)
+		ret = bfq_update_next_active(sd);
+
 	if (!requeue || !bfq_gt(entity->finish, st->vtime))
 		bfq_forget_entity(st, entity);
 	else
 		bfq_idle_insert(st, entity);
 
 	BUG_ON(sd->active_entity == entity);
+	BUG_ON(sd->next_active == entity);
 
 	return ret;
 }
@@ -640,7 +783,74 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
  */
 void bfq_deactivate_entity(struct io_entity *entity, int requeue)
 {
-	__bfq_deactivate_entity(entity, requeue);
+	struct io_sched_data *sd;
+	struct io_group *iog, *__iog;
+	struct io_entity *parent;
+
+	iog = container_of(entity->sched_data, struct io_group, sched_data);
+
+	/*
+	 * Hold a reference to the entity's iog until we are done. This
+	 * function traverses the hierarchy and we don't want the group to be
+	 * freed while we are still walking it. It is possible that this
+	 * group's cgroup has been removed, hence the cgroup reference is
+	 * gone. If this entity was the active entity, its group will not be
+	 * on any of the trees and it will be freed the moment the queue is
+	 * freed in __bfq_deactivate_entity().
+	 *
+	 * Hence, hold a reference, deactivate the hierarchy of entities and
+	 * then drop the reference, which should free up the whole chain of
+	 * groups.
+	 */
+	elv_get_iog(iog);
+
+	for_each_entity_safe(entity, parent) {
+		sd = entity->sched_data;
+
+		if (!__bfq_deactivate_entity(entity, requeue))
+			/*
+			 * The parent entity is still backlogged, and
+			 * we don't need to update it as it is still
+			 * under service.
+			 */
+			break;
+
+		if (sd->next_active != NULL) {
+			/*
+			 * The parent entity is still backlogged and
+			 * the budgets on the path towards the root
+			 * need to be updated.
+			 */
+			elv_put_iog(iog);
+			goto update;
+		}
+
+		/*
+		 * If we reach here the parent is no longer backlogged and
+		 * we want to propagate the dequeue upwards.
+		 *
+		 * If the entity's group has been marked for deletion, don't
+		 * requeue the group on the idle tree so that it can be freed.
+		 */
+
+		__iog = container_of(entity->sched_data, struct io_group,
+						sched_data);
+		if (!iog_deleting(__iog))
+			requeue = 1;
+	}
+
+	elv_put_iog(iog);
+	return;
+
+update:
+	entity = parent;
+	for_each_entity(entity) {
+		__bfq_activate_entity(entity, 0);
+
+		sd = entity->sched_data;
+		if (!bfq_update_next_active(sd))
+			break;
+	}
 }
 
 /**
@@ -757,8 +967,10 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 		entity = __bfq_lookup_next_entity(st);
 		if (entity != NULL) {
 			if (extract) {
+				bfq_check_next_active(sd, entity);
 				bfq_active_extract(st, entity);
 				sd->active_entity = entity;
+				sd->next_active = NULL;
 			}
 			break;
 		}
@@ -770,12 +982,13 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 void entity_served(struct io_entity *entity, bfq_service_t served)
 {
 	struct io_service_tree *st;
-
-	st = io_entity_service_tree(entity);
-	entity->service += served;
-	BUG_ON(st->wsum == 0);
-	st->vtime += bfq_delta(served, st->wsum);
-	bfq_forget_idle(st);
+	for_each_entity(entity) {
+		st = io_entity_service_tree(entity);
+		entity->service += served;
+		BUG_ON(st->wsum == 0);
+		st->vtime += bfq_delta(served, st->wsum);
+		bfq_forget_idle(st);
+	}
 }
 
 /**
@@ -790,6 +1003,817 @@ void io_flush_idle_tree(struct io_service_tree *st)
 		__bfq_deactivate_entity(entity, 0);
 }
 
+/*
+ * Release all the io group references to its async queues.
+ */
+void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
+{
+	int i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < IOPRIO_BE_NR; j++)
+			elv_release_ioq(e, &iog->async_queue[i][j]);
+
+	/* Free up async idle queue */
+	elv_release_ioq(e, &iog->async_idle_queue);
+}
+
+
+/* Mainly hierarchical grouping code */
+#ifdef CONFIG_GROUP_IOSCHED
+
+struct io_cgroup io_root_cgroup = {
+	.weight = IO_DEFAULT_GRP_WEIGHT,
+	.ioprio_class = IO_DEFAULT_GRP_CLASS,
+};
+
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->weight = entity->new_weight;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->parent = iog->my_entity;
+	entity->sched_data = &iog->sched_data;
+}
+
+struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
+{
+	return container_of(cgroup_subsys_state(cgroup, io_subsys_id),
+			    struct io_cgroup, css);
+}
+
+/*
+ * Search for the bfq_group of bfqd in the hash table (for now only a list)
+ * of bgrp.  Must be called under rcu_read_lock().
+ */
+struct io_group *io_cgroup_lookup_group(struct io_cgroup *iocg, void *key)
+{
+	struct io_group *iog;
+	struct hlist_node *n;
+	void *__key;
+
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		__key = rcu_dereference(iog->key);
+		if (__key == key)
+			return iog;
+	}
+
+	return NULL;
+}
+
+void io_group_init_entity(struct io_cgroup *iocg, struct io_group *iog)
+{
+	struct io_entity *entity = &iog->entity;
+
+	entity->weight = entity->new_weight = iocg->weight;
+	entity->ioprio_class = entity->new_ioprio_class = iocg->ioprio_class;
+	entity->ioprio_changed = 1;
+	entity->my_sched_data = &iog->sched_data;
+}
+
+void io_group_set_parent(struct io_group *iog, struct io_group *parent)
+{
+	struct io_entity *entity;
+
+	BUG_ON(parent == NULL);
+	BUG_ON(iog == NULL);
+
+	entity = &iog->entity;
+	entity->parent = parent->my_entity;
+	entity->sched_data = &parent->sched_data;
+	if (entity->parent)
+		/* Child group reference on parent group. */
+		elv_get_iog(parent);
+}
+
+#define SHOW_FUNCTION(__VAR)						\
+static u64 io_cgroup_##__VAR##_read(struct cgroup *cgroup,		\
+				       struct cftype *cftype)		\
+{									\
+	struct io_cgroup *iocg;					\
+	u64 ret;							\
+									\
+	if (!cgroup_lock_live_group(cgroup))				\
+		return -ENODEV;						\
+									\
+	iocg = cgroup_to_io_cgroup(cgroup);				\
+	spin_lock_irq(&iocg->lock);					\
+	ret = iocg->__VAR;						\
+	spin_unlock_irq(&iocg->lock);					\
+									\
+	cgroup_unlock();						\
+									\
+	return ret;							\
+}
+
+SHOW_FUNCTION(weight);
+SHOW_FUNCTION(ioprio_class);
+#undef SHOW_FUNCTION
+
+#define STORE_FUNCTION(__VAR, __MIN, __MAX)				\
+static int io_cgroup_##__VAR##_write(struct cgroup *cgroup,		\
+					struct cftype *cftype,		\
+					u64 val)			\
+{									\
+	struct io_cgroup *iocg;					\
+	struct io_group *iog;						\
+	struct hlist_node *n;						\
+									\
+	if (val < (__MIN) || val > (__MAX))				\
+		return -EINVAL;						\
+									\
+	if (!cgroup_lock_live_group(cgroup))				\
+		return -ENODEV;						\
+									\
+	iocg = cgroup_to_io_cgroup(cgroup);				\
+									\
+	spin_lock_irq(&iocg->lock);					\
+	iocg->__VAR = (unsigned long)val;				\
+	hlist_for_each_entry(iog, n, &iocg->group_data, group_node) {	\
+		iog->entity.new_##__VAR = (unsigned long)val;		\
+		smp_wmb();						\
+		iog->entity.ioprio_changed = 1;				\
+	}								\
+	spin_unlock_irq(&iocg->lock);					\
+									\
+	cgroup_unlock();						\
+									\
+	return 0;							\
+}
+
+STORE_FUNCTION(weight, 1, WEIGHT_MAX);
+STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
+#undef STORE_FUNCTION
+
+/**
+ * bfq_group_chain_alloc - allocate a chain of groups.
+ * @bfqd: queue descriptor.
+ * @cgroup: the leaf cgroup this chain starts from.
+ *
+ * Allocate a chain of groups starting from the one belonging to
+ * @cgroup up to the root cgroup.  Stop if a cgroup on the chain
+ * to the root has already an allocated group on @bfqd.
+ */
+struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
+					struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog, *leaf = NULL, *prev = NULL;
+	gfp_t flags = GFP_ATOMIC |  __GFP_ZERO;
+
+	for (; cgroup != NULL; cgroup = cgroup->parent) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+
+		iog = io_cgroup_lookup_group(iocg, key);
+		if (iog != NULL) {
+			/*
+			 * All the cgroups in the path from there to the
+			 * root must have a bfq_group for bfqd, so we don't
+			 * need any more allocations.
+			 */
+			break;
+		}
+
+		iog = kzalloc_node(sizeof(*iog), flags, q->node);
+		if (!iog)
+			goto cleanup;
+
+		iog->iocg_id = css_id(&iocg->css);
+
+		io_group_init_entity(iocg, iog);
+		iog->my_entity = &iog->entity;
+
+		atomic_set(&iog->ref, 0);
+		iog->deleting = 0;
+
+		/*
+		 * Take the initial reference that will be released on destroy.
+		 * This can be thought of as a joint reference by cgroup and
+		 * elevator which will be dropped by either the elevator exit
+		 * or the cgroup deletion path, depending on which exits first.
+		 */
+		elv_get_iog(iog);
+
+		if (leaf == NULL) {
+			leaf = iog;
+			prev = leaf;
+		} else {
+			io_group_set_parent(prev, iog);
+			/*
+			 * Build a list of allocated nodes using the key
+			 * field, which is still unused and will be
+			 * initialized only after the node is connected.
+			 */
+			prev->key = iog;
+			prev = iog;
+		}
+	}
+
+	return leaf;
+
+cleanup:
+	while (leaf != NULL) {
+		prev = leaf;
+		leaf = leaf->key;
+		kfree(prev);
+	}
+
+	return NULL;
+}
+
+/**
+ * bfq_group_chain_link - link an allocated group chain to a cgroup hierarchy.
+ * @bfqd: the queue descriptor.
+ * @cgroup: the leaf cgroup to start from.
+ * @leaf: the leaf group (to be associated to @cgroup).
+ *
+ * Try to link a chain of groups to a cgroup hierarchy, connecting the
+ * nodes bottom-up, so we can be sure that when we find a cgroup in the
+ * hierarchy that already has a group associated to @bfqd all the nodes
+ * in the path to the root cgroup have one too.
+ *
+ * On locking: the queue lock protects the hierarchy (there is a hierarchy
+ * per device) while the bfqio_cgroup lock protects the list of groups
+ * belonging to the same cgroup.
+ */
+void io_group_chain_link(struct request_queue *q, void *key,
+				struct cgroup *cgroup,
+				struct io_group *leaf,
+				struct elv_fq_data *efqd)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog, *next, *prev = NULL;
+	unsigned long flags;
+
+	assert_spin_locked(q->queue_lock);
+
+	for (; cgroup != NULL && leaf != NULL; cgroup = cgroup->parent) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+		next = leaf->key;
+
+		iog = io_cgroup_lookup_group(iocg, key);
+		BUG_ON(iog != NULL);
+
+		spin_lock_irqsave(&iocg->lock, flags);
+
+		rcu_assign_pointer(leaf->key, key);
+		hlist_add_head_rcu(&leaf->group_node, &iocg->group_data);
+		hlist_add_head(&leaf->elv_data_node, &efqd->group_list);
+
+		spin_unlock_irqrestore(&iocg->lock, flags);
+
+		prev = leaf;
+		leaf = next;
+	}
+
+	BUG_ON(cgroup == NULL && leaf != NULL);
+
+	if (cgroup != NULL && prev != NULL) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+		iog = io_cgroup_lookup_group(iocg, key);
+		io_group_set_parent(prev, iog);
+	}
+}
+
+/**
+ * bfq_find_alloc_group - return the group associated to @bfqd in @cgroup.
+ * @bfqd: queue descriptor.
+ * @cgroup: cgroup being searched for.
+ * @create: if set to 1, create the io group if it has not been created yet.
+ *
+ * Return a group associated to @bfqd in @cgroup, allocating one if
+ * necessary.  When a group is returned all the cgroups in the path
+ * to the root have a group associated to @bfqd.
+ *
+ * If the allocation fails, return the root group: this breaks guarantees
+ * but is a safe fallback.  If this loss becomes a problem it can be
+ * mitigated using the equivalent weight (given by the product of the
+ * weights of the groups in the path from @group to the root) in the
+ * root scheduler.
+ *
+ * We allocate all the missing nodes in the path from the leaf cgroup
+ * to the root and we connect the nodes only after all the allocations
+ * have been successful.
+ */
+struct io_group *io_find_alloc_group(struct request_queue *q,
+			struct cgroup *cgroup, struct elv_fq_data *efqd,
+			int create)
+{
+	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
+	struct io_group *iog = NULL;
+	/* Note: Use efqd as key */
+	void *key = efqd;
+
+	/*
+	 * Take a reference to the css object. We don't want to map a bio to
+	 * a group if it has been marked for deletion.
+	 */
+
+	if (!css_tryget(&iocg->css))
+		return iog;
+
+	iog = io_cgroup_lookup_group(iocg, key);
+	if (iog != NULL || !create)
+		goto end;
+
+	iog = io_group_chain_alloc(q, key, cgroup);
+	if (iog != NULL)
+		io_group_chain_link(q, key, cgroup, iog, efqd);
+
+end:
+	css_put(&iocg->css);
+	return iog;
+}
+
+/*
+ * Search for the io group current task belongs to. If create=1, then also
+ * create the io group if it is not already there.
+ *
+ * Note: This function should be called with queue lock held. It returns
+ * a pointer to io group without taking any reference. That group will
+ * be around as long as queue lock is not dropped (as group reclaim code
+ * needs to get hold of queue lock). So if somebody needs to use group
+ * pointer even after dropping queue lock, take a reference to the group
+ * before dropping queue lock.
+ */
+struct io_group *io_get_io_group(struct request_queue *q, int create)
+{
+	struct cgroup *cgroup;
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	assert_spin_locked(q->queue_lock);
+
+	rcu_read_lock();
+	cgroup = task_cgroup(current, io_subsys_id);
+	iog = io_find_alloc_group(q, cgroup, efqd, create);
+	if (!iog) {
+		if (create)
+			iog = efqd->root_group;
+		else
+			/*
+			 * bio merge functions doing lookup don't want to
+			 * map bio to root group by default
+			 */
+			iog = NULL;
+	}
+	rcu_read_unlock();
+	return iog;
+}
+EXPORT_SYMBOL(io_get_io_group);
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_cgroup *iocg = &io_root_cgroup;
+	struct elv_fq_data *efqd = &e->efqd;
+	struct io_group *iog = efqd->root_group;
+	struct io_service_tree *st;
+	int i;
+
+	BUG_ON(!iog);
+	spin_lock_irq(&iocg->lock);
+	hlist_del_rcu(&iog->group_node);
+	spin_unlock_irq(&iocg->lock);
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	elv_put_iog(iog);
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	struct io_cgroup *iocg;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	elv_get_iog(iog);
+	iog->entity.parent = NULL;
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	iocg = &io_root_cgroup;
+	spin_lock_irq(&iocg->lock);
+	rcu_assign_pointer(iog->key, key);
+	hlist_add_head_rcu(&iog->group_node, &iocg->group_data);
+	iog->iocg_id = css_id(&iocg->css);
+	spin_unlock_irq(&iocg->lock);
+
+	return iog;
+}
+
+struct cftype bfqio_files[] = {
+	{
+		.name = "weight",
+		.read_u64 = io_cgroup_weight_read,
+		.write_u64 = io_cgroup_weight_write,
+	},
+	{
+		.name = "ioprio_class",
+		.read_u64 = io_cgroup_ioprio_class_read,
+		.write_u64 = io_cgroup_ioprio_class_write,
+	},
+};
+
+int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
+{
+	return cgroup_add_files(cgroup, subsys, bfqio_files,
+				ARRAY_SIZE(bfqio_files));
+}
+
+struct cgroup_subsys_state *iocg_create(struct cgroup_subsys *subsys,
+						struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg;
+
+	if (cgroup->parent != NULL) {
+		iocg = kzalloc(sizeof(*iocg), GFP_KERNEL);
+		if (iocg == NULL)
+			return ERR_PTR(-ENOMEM);
+	} else
+		iocg = &io_root_cgroup;
+
+	spin_lock_init(&iocg->lock);
+	INIT_HLIST_HEAD(&iocg->group_data);
+	iocg->weight = IO_DEFAULT_GRP_WEIGHT;
+	iocg->ioprio_class = IO_DEFAULT_GRP_CLASS;
+
+	return &iocg->css;
+}
+
+/*
+ * We cannot support shared io contexts, as we have no means to support
+ * two tasks with the same ioc in two different groups without major rework
+ * of the main cic/bfqq data structures.  For now we allow a task to change
+ * its cgroup only if it's the only owner of its ioc; the drawback of this
+ * behavior is that a group containing a task that forked using CLONE_IO
+ * will not be destroyed until the tasks sharing the ioc die.
+ */
+int iocg_can_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
+			    struct task_struct *tsk)
+{
+	struct io_context *ioc;
+	int ret = 0;
+
+	/* task_lock() is needed to avoid races with exit_io_context() */
+	task_lock(tsk);
+	ioc = tsk->io_context;
+	if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
+		/*
+		 * ioc == NULL means that the task is either too young or
+		 * exiting: if it still has no ioc the ioc can't be shared;
+		 * if the task is exiting the attach will fail anyway, no
+		 * matter what we return here.
+		 */
+		ret = -EINVAL;
+	task_unlock(tsk);
+
+	return ret;
+}
+
+void iocg_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
+			 struct cgroup *prev, struct task_struct *tsk)
+{
+	struct io_context *ioc;
+
+	task_lock(tsk);
+	ioc = tsk->io_context;
+	if (ioc != NULL)
+		ioc->cgroup_changed = 1;
+	task_unlock(tsk);
+}
+
+/*
+ * This cleanup function does the last bit of things to destroy the cgroup.
+ * It should only get called after io_destroy_group has been invoked.
+ */
+void io_group_cleanup(struct io_group *iog)
+{
+	struct io_service_tree *st;
+	struct io_entity *entity = iog->my_entity;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+
+		BUG_ON(!RB_EMPTY_ROOT(&st->active));
+		BUG_ON(!RB_EMPTY_ROOT(&st->idle));
+		BUG_ON(st->wsum != 0);
+	}
+
+	BUG_ON(iog->sched_data.next_active != NULL);
+	BUG_ON(iog->sched_data.active_entity != NULL);
+	BUG_ON(entity != NULL && entity->tree != NULL);
+
+	iog->iocg_id = 0;
+	kfree(iog);
+}
+
+void elv_put_iog(struct io_group *iog)
+{
+	struct io_group *parent = NULL;
+	struct io_entity *entity;
+
+	BUG_ON(!iog);
+
+	entity = iog->my_entity;
+
+	BUG_ON(atomic_read(&iog->ref) <= 0);
+	if (!atomic_dec_and_test(&iog->ref))
+		return;
+
+	if (entity)
+		parent = container_of(iog->my_entity->parent,
+					struct io_group, entity);
+
+	io_group_cleanup(iog);
+
+	if (parent)
+		elv_put_iog(parent);
+}
+EXPORT_SYMBOL(elv_put_iog);
+
+/*
+ * Check whether a given group has any active entities on any of the
+ * service trees.
+ */
+static inline int io_group_has_active_entities(struct io_group *iog)
+{
+	int i;
+	struct io_service_tree *st;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		if (!RB_EMPTY_ROOT(&st->active))
+			return 1;
+	}
+
+	/*
+	 * Also check that there are no active entities being served which
+	 * are not on the active tree.
+	 */
+
+	if (iog->sched_data.active_entity)
+		return 1;
+
+	return 0;
+}
+
+/*
+ * After the group is destroyed, no new sync IO should come to the group.
+ * It might still have pending IOs in some busy queues. It should be able to
+ * send those IOs down to the disk. The async IOs (due to dirty page writeback)
+ * would go into the root group queues after this, as the group does not exist
+ * anymore.
+ */
+static void __io_destroy_group(struct elv_fq_data *efqd, struct io_group *iog)
+{
+	struct elevator_queue *eq;
+	struct io_service_tree *st;
+	int i;
+
+	BUG_ON(iog->my_entity == NULL);
+
+	/*
+	 * Mark the io group for deletion so that no new entry goes onto
+	 * the idle tree. Any active queue will be removed from the active
+	 * tree and not put onto the idle tree.
+	 */
+	iog->deleting = 1;
+
+	/* We flush idle tree now, and don't put things in there any more. */
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+
+		io_flush_idle_tree(st);
+	}
+
+	eq = container_of(efqd, struct elevator_queue, efqd);
+	hlist_del(&iog->elv_data_node);
+	io_put_io_group_queues(eq, iog);
+
+	/*
+	 * We can come here either through the cgroup deletion path or
+	 * through the elevator exit path. If we come here through the cgroup
+	 * deletion path, check whether the io group has any active entities.
+	 * If not, deactivate this io group to make sure it is removed from
+	 * any idle tree it might have been on. If this group was on an idle
+	 * tree, this will probably be the last reference and the group will
+	 * be freed upon putting the reference down.
+	 */
+
+	if (!io_group_has_active_entities(iog)) {
+		/*
+		 * The io group does not have any active entities. Because
+		 * this group has been decoupled from the io_cgroup list and
+		 * its cgroup is being deleted, this group should not receive
+		 * any new IO. Hence it should be safe to deactivate this
+		 * io group and remove it from the scheduling tree.
+		 */
+		__bfq_deactivate_entity(iog->my_entity, 0);
+	}
+
+	/*
+	 * Put the reference taken at the time of creation so that when all
+	 * queues are gone, cgroup can be destroyed.
+	 */
+	elv_put_iog(iog);
+}
+
+void iocg_destroy(struct cgroup_subsys *subsys, struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
+	struct io_group *iog;
+	struct elv_fq_data *efqd;
+	unsigned long uninitialized_var(flags);
+
+	/*
+	 * io groups are linked in two lists. One list is maintained
+	 * in the elevator (efqd->group_list) and the other is maintained
+	 * per cgroup structure (iocg->group_data).
+	 *
+	 * While a cgroup is being deleted, the elevator might also be
+	 * exiting and both might try to clean up the same io group,
+	 * so we need to be a little careful.
+	 *
+	 * (iocg->group_data) is protected by iocg->lock. To avoid deadlock,
+	 * we can't hold the queue lock while holding iocg->lock. So we first
+	 * remove iog from iocg->group_data under iocg->lock. Whoever removes
+	 * iog from iocg->group_data should call __io_destroy_group to remove
+	 * iog.
+	 */
+
+	rcu_read_lock();
+
+remove_entry:
+	spin_lock_irqsave(&iocg->lock, flags);
+
+	if (hlist_empty(&iocg->group_data)) {
+		spin_unlock_irqrestore(&iocg->lock, flags);
+		goto done;
+	}
+	iog = hlist_entry(iocg->group_data.first, struct io_group,
+			  group_node);
+	efqd = rcu_dereference(iog->key);
+	hlist_del_rcu(&iog->group_node);
+	spin_unlock_irqrestore(&iocg->lock, flags);
+
+	spin_lock_irqsave(efqd->queue->queue_lock, flags);
+	__io_destroy_group(efqd, iog);
+	spin_unlock_irqrestore(efqd->queue->queue_lock, flags);
+	goto remove_entry;
+
+done:
+	free_css_id(&io_subsys, &iocg->css);
+	rcu_read_unlock();
+	BUG_ON(!hlist_empty(&iocg->group_data));
+	kfree(iocg);
+}
+
+/*
+ * This function checks if iog is still in iocg->group_data, and removes it.
+ * If iog is not in that list, then the cgroup destroy path has removed it, and
+ * we do not need to remove it.
+ */
+void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
+{
+	struct io_cgroup *iocg;
+	unsigned short id = iog->iocg_id;
+	struct hlist_node *n;
+	struct io_group *__iog;
+	unsigned long flags;
+	struct cgroup_subsys_state *css;
+
+	rcu_read_lock();
+
+	BUG_ON(!id);
+	css = css_lookup(&io_subsys, id);
+
+	/* css can't go away as associated io group is still around */
+	BUG_ON(!css);
+
+	iocg = container_of(css, struct io_cgroup, css);
+
+	spin_lock_irqsave(&iocg->lock, flags);
+	hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
+		/*
+		 * Remove iog only if it is still in iocg list. Cgroup
+		 * deletion could have deleted it already.
+		 */
+		if (__iog == iog) {
+			hlist_del_rcu(&iog->group_node);
+			__io_destroy_group(efqd, iog);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&iocg->lock, flags);
+	rcu_read_unlock();
+}
+
+void io_disconnect_groups(struct elevator_queue *e)
+{
+	struct hlist_node *pos, *n;
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &e->efqd;
+
+	hlist_for_each_entry_safe(iog, pos, n, &efqd->group_list,
+					elv_data_node) {
+		io_group_check_and_destroy(efqd, iog);
+	}
+}
+
+struct cgroup_subsys io_subsys = {
+	.name = "io",
+	.create = iocg_create,
+	.can_attach = iocg_can_attach,
+	.attach = iocg_attach,
+	.destroy = iocg_destroy,
+	.populate = iocg_populate,
+	.subsys_id = io_subsys_id,
+};
+
+/*
+ * If the bio submitting task and rq don't belong to the same io_group, they
+ * can't be merged.
+ */
+int io_group_allow_merge(struct request *rq, struct bio *bio)
+{
+	struct request_queue *q = rq->q;
+	struct io_queue *ioq = rq->ioq;
+	struct io_group *iog, *__iog;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return 1;
+
+	/* Determine the io group of the bio submitting task */
+	iog = io_get_io_group(q, 0);
+	if (!iog) {
+		/* Maybe the task belongs to a different cgroup for which the
+		 * io group has not been set up yet. */
+		return 0;
+	}
+
+	/* Determine the io group of the ioq that rq belongs to */
+	__iog = ioq_to_io_group(ioq);
+
+	return (iog == __iog);
+}
+
+#else /* GROUP_IOSCHED */
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->weight = entity->new_weight;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->sched_data = &iog->sched_data;
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	return iog;
+}
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_group *iog = e->efqd.root_group;
+	struct io_service_tree *st;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	kfree(iog);
+}
+
+struct io_group *io_get_io_group(struct request_queue *q, int create)
+{
+	return q->elevator->efqd.root_group;
+}
+EXPORT_SYMBOL(io_get_io_group);
+#endif /* CONFIG_GROUP_IOSCHED*/
+
 /* Elevator fair queuing function */
 struct io_queue *rq_ioq(struct request *rq)
 {
@@ -1070,11 +2094,10 @@ void elv_free_ioq(struct io_queue *ioq)
 EXPORT_SYMBOL(elv_free_ioq);
 
 int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
-			void *sched_queue, int ioprio_class, int ioprio,
-			int is_sync)
+		struct io_group *iog, void *sched_queue, int ioprio_class,
+		int ioprio, int is_sync)
 {
 	struct elv_fq_data *efqd = &eq->efqd;
-	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
 
 	RB_CLEAR_NODE(&ioq->entity.rb_node);
 	atomic_set(&ioq->ref, 0);
@@ -1099,10 +2122,14 @@ void elv_put_ioq(struct io_queue *ioq)
 	struct elv_fq_data *efqd = ioq->efqd;
 	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
 						efqd);
+	struct io_group *iog;
 
 	BUG_ON(atomic_read(&ioq->ref) <= 0);
 	if (!atomic_dec_and_test(&ioq->ref))
 		return;
+
+	iog = ioq_to_io_group(ioq);
+
 	BUG_ON(ioq->nr_queued);
 	BUG_ON(ioq->entity.tree != NULL);
 	BUG_ON(elv_ioq_busy(ioq));
@@ -1114,6 +2141,7 @@ void elv_put_ioq(struct io_queue *ioq)
 	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
 	elv_log_ioq(efqd, ioq, "put_queue");
 	elv_free_ioq(ioq);
+	elv_put_iog(iog);
 }
 EXPORT_SYMBOL(elv_put_ioq);
 
@@ -1175,11 +2203,23 @@ struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
 		return NULL;
 
 	sd = &efqd->root_group->sched_data;
-	entity = bfq_lookup_next_entity(sd, 1);
+	for (; sd != NULL; sd = entity->my_sched_data) {
+		entity = bfq_lookup_next_entity(sd, 1);
+		/*
+		 * entity can be NULL despite the fact that there are busy
+		 * queues, if all the busy queues are under a group which is
+		 * currently under service.
+		 * So if we are just looking for the next ioq while something
+		 * is being served, a NULL entity is not an error.
+		 */
+		BUG_ON(!entity && extract);
 
-	BUG_ON(!entity);
-	if (extract)
-		entity->service = 0;
+		if (extract)
+			entity->service = 0;
+
+		if (!entity)
+			return NULL;
+	}
 	ioq = io_entity_to_ioq(entity);
 
 	return ioq;
@@ -1195,8 +2235,12 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 	struct request_queue *q = efqd->queue;
 
 	if (ioq) {
-		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
-							efqd->busy_queues);
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "set_active, busy=%d ioprio=%d"
+				" weight=%ld group_weight=%ld",
+				efqd->busy_queues,
+				ioq->entity.ioprio, ioq->entity.weight,
+				iog_weight(iog));
 		ioq->slice_end = 0;
 
 		elv_clear_ioq_wait_request(ioq);
@@ -1258,6 +2302,7 @@ void elv_activate_ioq(struct io_queue *ioq, int add_front)
 void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 					int requeue)
 {
+	requeue = update_requeue(ioq, requeue);
 	bfq_deactivate_entity(&ioq->entity, requeue);
 }
 
@@ -1433,6 +2478,7 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	struct io_queue *ioq;
 	struct elevator_queue *eq = q->elevator;
 	struct io_entity *entity, *new_entity;
+	struct io_group *iog = NULL, *new_iog = NULL;
 
 	ioq = elv_active_ioq(eq);
 
@@ -1443,6 +2489,13 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	new_entity = &new_ioq->entity;
 
 	/*
+	 * In a hierarchical setup, one needs to traverse up the hierarchy
+	 * until both queues are children of the same parent to make a
+	 * decision whether to do the preemption or not.
+	 */
+	bfq_find_matching_entity(&entity, &new_entity);
+
+	/*
 	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
 	 */
 
@@ -1458,9 +2511,17 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 		return 1;
 
 	/*
-	 * Check with io scheduler if it has additional criterion based on
-	 * which it wants to preempt existing queue.
+	 * If both the queues belong to the same group, check with the io
+	 * scheduler if it has additional criteria based on which it wants
+	 * to preempt the existing queue.
 	 */
+	iog = ioq_to_io_group(ioq);
+	new_iog = ioq_to_io_group(new_ioq);
+
+	if (iog != new_iog)
+		return 0;
+
+
 	if (eq->ops->elevator_should_preempt_fn)
 		return eq->ops->elevator_should_preempt_fn(q,
 						ioq_sched_queue(new_ioq), rq);
@@ -1879,14 +2940,6 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		elv_schedule_dispatch(q);
 }
 
-struct io_group *io_lookup_io_group_current(struct request_queue *q)
-{
-	struct elv_fq_data *efqd = &q->elevator->efqd;
-
-	return efqd->root_group;
-}
-EXPORT_SYMBOL(io_lookup_io_group_current);
-
 void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio)
 {
@@ -1937,52 +2990,6 @@ void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 }
 EXPORT_SYMBOL(io_group_set_async_queue);
 
-/*
- * Release all the io group references to its async queues.
- */
-void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
-{
-	int i, j;
-
-	for (i = 0; i < 2; i++)
-		for (j = 0; j < IOPRIO_BE_NR; j++)
-			elv_release_ioq(e, &iog->async_queue[i][j]);
-
-	/* Free up async idle queue */
-	elv_release_ioq(e, &iog->async_idle_queue);
-}
-
-struct io_group *io_alloc_root_group(struct request_queue *q,
-					struct elevator_queue *e, void *key)
-{
-	struct io_group *iog;
-	int i;
-
-	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
-	if (iog == NULL)
-		return NULL;
-
-	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
-		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
-
-	return iog;
-}
-
-void io_free_root_group(struct elevator_queue *e)
-{
-	struct io_group *iog = e->efqd.root_group;
-	struct io_service_tree *st;
-	int i;
-
-	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
-		st = iog->sched_data.service_tree + i;
-		io_flush_idle_tree(st);
-	}
-
-	io_put_io_group_queues(e, iog);
-	kfree(iog);
-}
-
 static void elv_slab_kill(void)
 {
 	/*
@@ -2026,6 +3033,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->idle_slice_timer.data = (unsigned long) efqd;
 
 	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
+	INIT_HLIST_HEAD(&efqd->group_list);
 
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
@@ -2045,12 +3053,23 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 void elv_exit_fq_data(struct elevator_queue *e)
 {
 	struct elv_fq_data *efqd = &e->efqd;
+	struct request_queue *q = efqd->queue;
 
 	if (!elv_iosched_fair_queuing_enabled(e))
 		return;
 
 	elv_shutdown_timer_wq(e);
 
+	spin_lock_irq(q->queue_lock);
+	/* This should drop all the io group references to async queues */
+	io_disconnect_groups(e);
+	spin_unlock_irq(q->queue_lock);
+
+	elv_shutdown_timer_wq(e);
+
+	/* Wait for iog->key accessors to exit their grace periods. */
+	synchronize_rcu();
+
 	BUG_ON(timer_pending(&efqd->idle_slice_timer));
 	io_free_root_group(e);
 }
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index a0acf32..d9a643a 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -11,11 +11,13 @@
  */
 
 #include <linux/blkdev.h>
+#include <linux/cgroup.h>
 
 #ifndef _BFQ_SCHED_H
 #define _BFQ_SCHED_H
 
 #define IO_IOPRIO_CLASSES	3
+#define WEIGHT_MAX 		1000
 
 typedef u64 bfq_timestamp_t;
 typedef unsigned long bfq_weight_t;
@@ -74,6 +76,7 @@ struct io_service_tree {
  */
 struct io_sched_data {
 	struct io_entity *active_entity;
+	struct io_entity *next_active;
 	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
 };
 
@@ -89,13 +92,12 @@ struct io_sched_data {
  *             this entity; used for O(log N) lookups into active trees.
  * @service: service received during the last round of service.
  * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
- * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
  * @parent: parent entity, for hierarchical scheduling.
  * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
  *                 associated scheduler queue, %NULL on leaf nodes.
  * @sched_data: the scheduler queue this entity belongs to.
- * @ioprio: the ioprio in use.
- * @new_ioprio: when an ioprio change is requested, the new ioprio value
+ * @weight: the weight in use.
+ * @new_weight: when a weight change is requested, the new weight value
  * @ioprio_class: the ioprio_class in use.
  * @new_ioprio_class: when an ioprio_class change is requested, the new
  *                    ioprio_class value.
@@ -137,13 +139,13 @@ struct io_entity {
 	bfq_timestamp_t min_start;
 
 	bfq_service_t service, budget;
-	bfq_weight_t weight;
 
 	struct io_entity *parent;
 
 	struct io_sched_data *my_sched_data;
 	struct io_sched_data *sched_data;
 
+	bfq_weight_t weight, new_weight;
 	unsigned short ioprio, new_ioprio;
 	unsigned short ioprio_class, new_ioprio_class;
 
@@ -184,8 +186,50 @@ struct io_queue {
 	void *sched_queue;
 };
 
+#ifdef CONFIG_GROUP_IOSCHED
+/**
+ * struct io_group - per (device, cgroup) data structure.
+ * @entity: schedulable entity to insert into the parent group sched_data.
+ * @sched_data: own sched_data, to contain child entities (they may be
+ *              both io_queues and io_groups).
+ * @group_node: node to be inserted into the io_cgroup->group_data
+ *              list of the containing cgroup's io_cgroup.
+ * @elv_data_node: node to be inserted into the elv_fq_data->group_list list
+ *             of the groups active on the same device; used for cleanup.
+ * @key: the elv_fq_data of the device this group acts upon.
+ * @async_queue: array of async queues for all the tasks belonging to
+ *              the group, one queue per ioprio value per ioprio_class,
+ *              except for the idle class that has only one queue.
+ * @async_idle_queue: async queue for the idle class (ioprio is ignored).
+ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
+ *             to avoid too many special cases during group creation/migration.
+ *
+ * Each (device, cgroup) pair has its own io_group, i.e., for each cgroup
+ * there is a set of io_groups, each one collecting the lower-level
+ * entities belonging to the group that are acting on the same device.
+ *
+ * Locking works as follows:
+ *    o @group_node is protected by the io_cgroup lock, and is accessed
+ *      via RCU from its readers.
+ *    o @key is protected by the queue lock, RCU is used to access it
+ *      from the readers.
+ *    o All the other fields are protected by the queue lock.
+ */
 struct io_group {
+	struct io_entity entity;
+	struct hlist_node elv_data_node;
+	struct hlist_node group_node;
 	struct io_sched_data sched_data;
+	atomic_t ref;
+
+	struct io_entity *my_entity;
+
+	/*
+	 * A cgroup has multiple io_groups, one for each request queue.
+	 * To find the io group belonging to a particular queue, the
+	 * elv_fq_data pointer is stored as a key.
+	 */
+	void *key;
 
 	/* async_queue and idle_queue are used only for cfq */
 	struct io_queue *async_queue[2][IOPRIO_BE_NR];
@@ -196,11 +240,52 @@ struct io_group {
 	 * non-RT cfqq in service when this value is non-zero.
 	 */
 	unsigned int busy_rt_queues;
+
+	int deleting;
+	unsigned short iocg_id;
 };
 
+/**
+ * struct io_cgroup - per cgroup io controller data structure.
+ * @css: subsystem state for the io controller in the containing cgroup.
+ * @weight: cgroup weight.
+ * @ioprio_class: cgroup ioprio_class.
+ * @lock: spinlock that protects @weight, @ioprio_class and @group_data.
+ * @group_data: list containing the io_groups belonging to this cgroup.
+ *
+ * @group_data is accessed using RCU, with @lock protecting the updates,
+ * @weight and @ioprio_class are protected by @lock.
+ */
+struct io_cgroup {
+	struct cgroup_subsys_state css;
+
+	unsigned long weight, ioprio_class;
+
+	spinlock_t lock;
+	struct hlist_head group_data;
+};
+#else
+struct io_group {
+	struct io_sched_data sched_data;
+
+	/* async_queue and idle_queue are used only for cfq */
+	struct io_queue *async_queue[2][IOPRIO_BE_NR];
+	struct io_queue *async_idle_queue;
+
+	/*
+	 * Used to track any pending rt requests so we can pre-empt current
+	 * non-RT cfqq in service when this value is non-zero.
+	 */
+	unsigned int busy_rt_queues;
+};
+#endif
+
 struct elv_fq_data {
 	struct io_group *root_group;
 
+	/* List of io groups hanging on this elevator */
+	struct hlist_head group_list;
+
 	struct request_queue *queue;
 	unsigned int busy_queues;
 
@@ -362,9 +447,20 @@ static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
 	ioq->entity.ioprio_changed = 1;
 }
 
+/**
+ * bfq_ioprio_to_weight - calc a weight from an ioprio.
+ * @ioprio: the ioprio value to convert.
+ */
+static inline bfq_weight_t bfq_ioprio_to_weight(int ioprio)
+{
+	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+	return ((IOPRIO_BE_NR - ioprio) * WEIGHT_MAX)/IOPRIO_BE_NR;
+}
+
 static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
 {
 	ioq->entity.new_ioprio = ioprio;
+	ioq->entity.new_weight = bfq_ioprio_to_weight(ioprio);
 	ioq->entity.ioprio_changed = 1;
 }
 
@@ -381,6 +477,60 @@ static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
 						sched_data);
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int io_group_allow_merge(struct request *rq, struct bio *bio);
+extern void elv_put_iog(struct io_group *iog);
+static inline bfq_weight_t iog_weight(struct io_group *iog)
+{
+	return iog->entity.weight;
+}
+
+static inline void elv_get_iog(struct io_group *iog)
+{
+	atomic_inc(&iog->ref);
+}
+
+static inline int update_requeue(struct io_queue *ioq, int requeue)
+{
+	struct io_group *iog = ioq_to_io_group(ioq);
+
+	if (iog->deleting == 1)
+		return 0;
+
+	return requeue;
+}
+
+#else /* !GROUP_IOSCHED */
+static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
+{
+	return 1;
+}
+/*
+ * Currently the root group is not part of the elevator group list and is
+ * freed separately. Hence, in case of a non-hierarchical setup, nothing to do.
+ */
+static inline void io_disconnect_groups(struct elevator_queue *e) {}
+static inline bfq_weight_t iog_weight(struct io_group *iog)
+{
+	/* Only the root group is present and its weight is immaterial. */
+	return 0;
+}
+
+static inline void elv_get_iog(struct io_group *iog)
+{
+}
+
+static inline void elv_put_iog(struct io_group *iog)
+{
+}
+
+static inline int update_requeue(struct io_queue *ioq, int requeue)
+{
+	return requeue;
+}
+
+#endif /* GROUP_IOSCHED */
+
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
 						size_t count);
@@ -416,7 +566,8 @@ extern void elv_put_ioq(struct io_queue *ioq);
 extern void __elv_ioq_slice_expired(struct request_queue *q,
 					struct io_queue *ioq);
 extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
-		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
+		struct io_group *iog, void *sched_queue, int ioprio_class,
+		int ioprio, int is_sync);
 extern void elv_schedule_dispatch(struct request_queue *q);
 extern int elv_hw_tag(struct elevator_queue *e);
 extern void *elv_active_sched_queue(struct elevator_queue *e);
@@ -428,7 +579,7 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio);
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
-extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
+extern struct io_group *io_get_io_group(struct request_queue *q, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
 extern void elv_free_ioq(struct io_queue *ioq);
@@ -480,5 +631,11 @@ static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	return NULL;
 }
+
+static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
+
+{
+	return 1;
+}
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index c2f07f5..3944385 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -105,6 +105,10 @@ int elv_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (bio_integrity(bio) != blk_integrity_rq(rq))
 		return 0;
 
+	/* If rq and bio belong to different groups, don't allow merging */
+	if (!io_group_allow_merge(rq, bio))
+		return 0;
+
 	if (!elv_iosched_allow_merge(rq, bio))
 		return 0;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 96a94c9..539cb9d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -249,7 +249,7 @@ struct request {
 #ifdef CONFIG_ELV_FAIR_QUEUING
 	/* io queue request belongs to */
 	struct io_queue *ioq;
-#endif
+#endif /* ELV_FAIR_QUEUING */
 };
 
 static inline unsigned short req_get_ioprio(struct request *req)
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 9c8d31b..68ea6bd 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -60,3 +60,10 @@ SUBSYS(net_cls)
 #endif
 
 /* */
+
+#ifdef CONFIG_GROUP_IOSCHED
+SUBSYS(io)
+#endif
+
+/* */
+
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 5be25b3..73027b6 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -68,6 +68,11 @@ struct io_context {
 	unsigned short ioprio;
 	unsigned short ioprio_changed;
 
+#ifdef CONFIG_GROUP_IOSCHED
+	/* If task changes the cgroup, elevator processes it asynchronously */
+	unsigned short cgroup_changed;
+#endif
+
 	/*
 	 * For request batching
 	 */
diff --git a/init/Kconfig b/init/Kconfig
index 7be4d38..ab76477 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -606,6 +606,14 @@ config CGROUP_MEM_RES_CTLR_SWAP
 	  Now, memory usage of swap_cgroup is 2 bytes per entry. If swap page
 	  size is 4096bytes, 512k per 1Gbytes of swap.
 
+config GROUP_IOSCHED
+	bool "Group IO Scheduler"
+	depends on CGROUPS && ELV_FAIR_QUEUING
+	default n
+	---help---
+	  This feature lets the IO scheduler recognize task groups and control
+	  disk bandwidth allocation to such task groups.
+
 endif # CGROUPS
 
 config MM_OWNER
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevator layer
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o This patch enables hierarchical fair queuing in the common layer. It is
  controlled by config option CONFIG_GROUP_IOSCHED.

o Requests keep a reference on the ioq and the ioq keeps a reference
  on its group. For async queues in CFQ, and the single ioq in other
  schedulers, the io_group also keeps a reference on the io_queue. This
  reference on the ioq is dropped when the queue is released
  (elv_release_ioq), so the queue can be freed.

  When a queue is released, it puts its reference to the io_group, and the
  io_group is released after all its queues have been released. Child groups
  also take a reference on their parent groups, and release it when they are
  destroyed.
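
  To illustrate the intended lifetime rules, here is a small userspace
  model of that reference chain. This is a sketch only, not the kernel
  code: model_iog, iog_alloc and iog_put are made-up names; the real
  counterparts in this patch are elv_get_iog()/elv_put_iog().

	/*
	 * Userspace sketch: a queue pins its group, a child group pins
	 * its parent, and dropping the last reference frees the group
	 * and then releases the reference it held on its parent,
	 * mirroring what elv_put_iog() is meant to do.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	struct model_iog {
		int ref;
		struct model_iog *parent;
	};

	static struct model_iog *iog_alloc(struct model_iog *parent)
	{
		struct model_iog *iog = calloc(1, sizeof(*iog));

		iog->ref = 1;		/* initial "joint" reference */
		iog->parent = parent;
		if (parent)
			parent->ref++;	/* child group pins its parent */
		return iog;
	}

	static void iog_put(struct model_iog *iog)
	{
		while (iog && --iog->ref == 0) {
			struct model_iog *parent = iog->parent;

			printf("freeing group %p\n", (void *)iog);
			free(iog);
			iog = parent;	/* drop the ref held on the parent */
		}
	}

	int main(void)
	{
		struct model_iog *root = iog_alloc(NULL);
		struct model_iog *child = iog_alloc(root);

		iog_put(child);		/* child freed, root still pinned */
		iog_put(root);		/* whole chain now freed */
		return 0;
	}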

o Reads of iocg->group_data do not always happen under iocg->lock, so all
  the operations on that list are still protected by RCU. All modifications
  to iocg->group_data should always be done under iocg->lock.

  Whenever iocg->lock and queue_lock both need to be held, queue_lock should
  be taken first; this avoids deadlocks. In order to avoid a race
  between cgroup deletion and elevator switch, the following algorithm is
  used:

	- The cgroup deletion path holds iocg->lock and removes the iog entry
	  from the iocg->group_data list. Then it drops iocg->lock, takes
	  queue_lock and destroys the iog. So in this path, we never hold
	  iocg->lock and queue_lock at the same time. Also, since we
	  remove iog from iocg->group_data under iocg->lock, we can't
	  race with elevator switch.

	- The elevator switch path does not remove the iog from the
	  iocg->group_data list directly. It first holds iocg->lock,
	  scans iocg->group_data again to see if the iog is still there;
	  it removes the iog only if it finds it there. Otherwise, cgroup
	  deletion must have removed it from the list, and cgroup
	  deletion is then responsible for destroying the iog.

  So the path which removes the iog from the iocg->group_data list does
  the final teardown of the iog by calling the __io_destroy_group()
  function.
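
  A rough single-threaded sketch of that rule follows. Again this is only
  an illustration with made-up names (try_destroy, model_iog,
  on_cgroup_list); the real code paths are iocg_destroy() and
  io_group_check_and_destroy() in this patch, both of which end in
  __io_destroy_group().

	#include <stdbool.h>
	#include <stdio.h>

	struct model_iog {
		bool on_cgroup_list;	/* stands in for iocg->group_data linkage */
	};

	/* conceptually called with the (modeled) iocg->lock held */
	static bool try_destroy(struct model_iog *iog, const char *path)
	{
		if (!iog->on_cgroup_list) {
			printf("%s: already unlinked, nothing to do\n", path);
			return false;
		}
		iog->on_cgroup_list = false;	/* hlist_del_rcu() in the patch */
		printf("%s: unlinked iog, would call __io_destroy_group()\n", path);
		return true;
	}

	int main(void)
	{
		struct model_iog iog = { .on_cgroup_list = true };

		try_destroy(&iog, "cgroup deletion");	/* finds it, tears it down */
		try_destroy(&iog, "elevator exit");	/* sees it gone, backs off */
		return 0;
	}

  Either path may get there first; the point is only that exactly one of
  them ends up unlinking the iog and calling __io_destroy_group().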

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-ioc.c               |    3 +
 block/cfq-iosched.c           |    2 +
 block/elevator-fq.c           | 1221 +++++++++++++++++++++++++++++++++++++----
 block/elevator-fq.h           |  169 ++++++-
 block/elevator.c              |    4 +
 include/linux/blkdev.h        |    2 +-
 include/linux/cgroup_subsys.h |    7 +
 include/linux/iocontext.h     |    5 +
 init/Kconfig                  |    8 +
 9 files changed, 1313 insertions(+), 108 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..8f0f6cf 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -95,6 +95,9 @@ struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
 		spin_lock_init(&ret->lock);
 		ret->ioprio_changed = 0;
 		ret->ioprio = 0;
+#ifdef CONFIG_GROUP_IOSCHED
+		ret->cgroup_changed = 0;
+#endif
 		ret->last_waited = jiffies; /* doesn't matter... */
 		ret->nr_batch_requests = 0; /* because this is 0 */
 		ret->aic = NULL;
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 995c8dd..1b67303 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1306,6 +1306,8 @@ alloc_ioq:
 			elv_mark_ioq_sync(cfqq->ioq);
 		}
 		cfqq->pid = current->pid;
+		/* ioq reference on iog */
+		elv_get_iog(iog);
 		cfq_log_cfqq(cfqd, cfqq, "alloced");
 	}
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 3e956dc..e52ace7 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -26,6 +26,10 @@ static int elv_rate_sampling_window = HZ / 10;
 
 #define ELV_SLICE_SCALE		(5)
 #define ELV_HW_QUEUE_MIN	(5)
+
+#define IO_DEFAULT_GRP_WEIGHT  500
+#define IO_DEFAULT_GRP_CLASS   IOPRIO_CLASS_BE
+
 #define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
 				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
 
@@ -33,6 +37,7 @@ static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
 					struct io_queue *ioq, int probe);
 struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 						 int extract);
+void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
 
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
@@ -51,6 +56,148 @@ elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
 }
 
 /* Mainly the BFQ scheduling code Follows */
+#ifdef CONFIG_GROUP_IOSCHED
+#define for_each_entity(entity)	\
+	for (; entity != NULL; entity = entity->parent)
+
+#define for_each_entity_safe(entity, parent) \
+	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
+
+
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract);
+void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
+					int requeue);
+void elv_activate_ioq(struct io_queue *ioq, int add_front);
+void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int requeue);
+
+static int bfq_update_next_active(struct io_sched_data *sd)
+{
+	struct io_group *iog;
+	struct io_entity *entity, *next_active;
+
+	if (sd->active_entity != NULL)
+		/* will update/requeue at the end of service */
+		return 0;
+
+	/*
+	 * NOTE: this can be improved in many ways, such as returning
+	 * 1 (and thus propagating upwards the update) only when the
+	 * budget changes, or caching the bfqq that will be scheduled
+	 * next from this subtree.  By now we worry more about
+	 * correctness than about performance...
+	 */
+	next_active = bfq_lookup_next_entity(sd, 0);
+	sd->next_active = next_active;
+
+	if (next_active != NULL) {
+		iog = container_of(sd, struct io_group, sched_data);
+		entity = iog->my_entity;
+		if (entity != NULL)
+			entity->budget = next_active->budget;
+	}
+
+	return 1;
+}
+
+static inline void bfq_check_next_active(struct io_sched_data *sd,
+					 struct io_entity *entity)
+{
+	BUG_ON(sd->next_active != entity);
+}
+
+static inline int iog_deleting(struct io_group *iog)
+{
+	return iog->deleting;
+}
+
+/* Do the two (enqueued) entities belong to the same group ? */
+static inline int
+is_same_group(struct io_entity *entity, struct io_entity *new_entity)
+{
+	if (entity->sched_data == new_entity->sched_data)
+		return 1;
+
+	return 0;
+}
+
+static inline struct io_entity *parent_entity(struct io_entity *entity)
+{
+	return entity->parent;
+}
+
+/* return depth at which an io entity is present in the hierarchy */
+static inline int depth_entity(struct io_entity *entity)
+{
+	int depth = 0;
+
+	for_each_entity(entity)
+		depth++;
+
+	return depth;
+}
+
+static void bfq_find_matching_entity(struct io_entity **entity,
+			struct io_entity **new_entity)
+{
+	int entity_depth, new_entity_depth;
+
+	/*
+	 * The preemption test can only be made between sibling entities that
+	 * are in the same group, i.e. that have a common parent. Walk up the
+	 * hierarchy of both entities until we find their ancestors that are
+	 * siblings under a common parent.
+	 */
+
+	/* First walk up until both entities are at same depth */
+	entity_depth = depth_entity(*entity);
+	new_entity_depth = depth_entity(*new_entity);
+
+	while (entity_depth > new_entity_depth) {
+		entity_depth--;
+		*entity = parent_entity(*entity);
+	}
+
+	while (new_entity_depth > entity_depth) {
+		new_entity_depth--;
+		*new_entity = parent_entity(*new_entity);
+	}
+
+	while (!is_same_group(*entity, *new_entity)) {
+		*entity = parent_entity(*entity);
+		*new_entity = parent_entity(*new_entity);
+	}
+}
+
+#else /* GROUP_IOSCHED */
+#define for_each_entity(entity)	\
+	for (; entity != NULL; entity = NULL)
+
+#define for_each_entity_safe(entity, parent) \
+	for (parent = NULL; entity != NULL; entity = parent)
+
+static inline int bfq_update_next_active(struct io_sched_data *sd)
+{
+	return 0;
+}
+
+static inline void bfq_check_next_active(struct io_sched_data *sd,
+					 struct io_entity *entity)
+{
+}
+
+static inline int iog_deleting(struct io_group *iog)
+{
+	/* In flat mode, root cgroup can't be deleted. */
+	return 0;
+}
+
+static void bfq_find_matching_entity(struct io_entity **entity,
+					struct io_entity **new_entity)
+{
+}
+#endif /* GROUP_IOSCHED */
 
 /*
  * Shift for timestamp calculations.  This actually limits the maximum
@@ -283,7 +430,6 @@ static void bfq_active_insert(struct io_service_tree *st,
 	struct rb_node *node = &entity->rb_node;
 
 	bfq_insert(&st->active, entity);
-
 	if (node->rb_left != NULL)
 		node = node->rb_left;
 	else if (node->rb_right != NULL)
@@ -292,16 +438,6 @@ static void bfq_active_insert(struct io_service_tree *st,
 	bfq_update_active_tree(node);
 }
 
-/**
- * bfq_ioprio_to_weight - calc a weight from an ioprio.
- * @ioprio: the ioprio value to convert.
- */
-static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
-{
-	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
-	return IOPRIO_BE_NR - ioprio;
-}
-
 void bfq_get_entity(struct io_entity *entity)
 {
 	struct io_queue *ioq = io_entity_to_ioq(entity);
@@ -310,13 +446,6 @@ void bfq_get_entity(struct io_entity *entity)
 		elv_get_ioq(ioq);
 }
 
-void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
-{
-	entity->ioprio = entity->new_ioprio;
-	entity->ioprio_class = entity->new_ioprio_class;
-	entity->sched_data = &iog->sched_data;
-}
-
 /**
  * bfq_find_deepest - find the deepest node that an extraction can modify.
  * @node: the node being removed.
@@ -359,7 +488,6 @@ static void bfq_active_extract(struct io_service_tree *st,
 
 	node = bfq_find_deepest(&entity->rb_node);
 	bfq_extract(&st->active, entity);
-
 	if (node != NULL)
 		bfq_update_active_tree(node);
 }
@@ -454,8 +582,10 @@ __bfq_entity_update_prio(struct io_service_tree *old_st,
 	struct io_queue *ioq = io_entity_to_ioq(entity);
 
 	if (entity->ioprio_changed) {
+		old_st->wsum -= entity->weight;
 		entity->ioprio = entity->new_ioprio;
 		entity->ioprio_class = entity->new_ioprio_class;
+		entity->weight = entity->new_weight;
 		entity->ioprio_changed = 0;
 
 		/*
@@ -467,9 +597,6 @@ __bfq_entity_update_prio(struct io_service_tree *old_st,
 			entity->budget = elv_prio_to_slice(efqd, ioq);
 		}
 
-		old_st->wsum -= entity->weight;
-		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
-
 		/*
 		 * NOTE: here we may be changing the weight too early,
 		 * this will cause unfairness.  The correct approach
@@ -551,11 +678,8 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 	if (add_front) {
 		struct io_entity *next_entity;
 
-		/*
-		 * Determine the entity which will be dispatched next
-		 * Use sd->next_active once hierarchical patch is applied
-		 */
-		next_entity = bfq_lookup_next_entity(sd, 0);
+		/* Determine the entity which will be dispatched next */
+		next_entity = sd->next_active;
 
 		if (next_entity && next_entity != entity) {
 			struct io_service_tree *new_st;
@@ -582,12 +706,27 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 }
 
 /**
- * bfq_activate_entity - activate an entity.
+ * bfq_activate_entity - activate an entity and its ancestors if necessary.
  * @entity: the entity to activate.
+ * Activate @entity and all the entities on the path from it to the root.
  */
 void bfq_activate_entity(struct io_entity *entity, int add_front)
 {
-	__bfq_activate_entity(entity, add_front);
+	struct io_sched_data *sd;
+
+	for_each_entity(entity) {
+		__bfq_activate_entity(entity, add_front);
+
+		add_front = 0;
+		sd = entity->sched_data;
+		if (!bfq_update_next_active(sd))
+			/*
+			 * No need to propagate the activation to the
+			 * upper entities, as they will be updated when
+			 * the active entity is rescheduled.
+			 */
+			break;
+	}
 }
 
 /**
@@ -623,12 +762,16 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
 	else if (entity->tree != NULL)
 		BUG();
 
+	if (was_active || sd->next_active == entity)
+		ret = bfq_update_next_active(sd);
+
 	if (!requeue || !bfq_gt(entity->finish, st->vtime))
 		bfq_forget_entity(st, entity);
 	else
 		bfq_idle_insert(st, entity);
 
 	BUG_ON(sd->active_entity == entity);
+	BUG_ON(sd->next_active == entity);
 
 	return ret;
 }
@@ -640,7 +783,74 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
  */
 void bfq_deactivate_entity(struct io_entity *entity, int requeue)
 {
-	__bfq_deactivate_entity(entity, requeue);
+	struct io_sched_data *sd;
+	struct io_group *iog, *__iog;
+	struct io_entity *parent;
+
+	iog = container_of(entity->sched_data, struct io_group, sched_data);
+
+	/*
+	 * Hold a reference to the entity's iog until we are done. This
+	 * function walks the hierarchy and we don't want the group to be
+	 * freed while we are still traversing it. It is possible that this
+	 * group's cgroup has been removed, hence the cgroup reference is
+	 * gone. If this entity was the active entity, then its group will
+	 * not be on any of the trees and it will be freed the moment the
+	 * queue is freed up in __bfq_deactivate_entity().
+	 *
+	 * Hence, hold a reference, deactivate the hierarchy of entities and
+	 * then drop the reference, which should free up the whole chain of
+	 * groups.
+	 */
+	elv_get_iog(iog);
+
+	for_each_entity_safe(entity, parent) {
+		sd = entity->sched_data;
+
+		if (!__bfq_deactivate_entity(entity, requeue))
+			/*
+			 * The parent entity is still backlogged, and
+			 * we don't need to update it as it is still
+			 * under service.
+			 */
+			break;
+
+		if (sd->next_active != NULL) {
+			/*
+			 * The parent entity is still backlogged and
+			 * the budgets on the path towards the root
+			 * need to be updated.
+			 */
+			elv_put_iog(iog);
+			goto update;
+		}
+
+		/*
+	 * If we reach here, the parent is no longer backlogged and
+		 * we want to propagate the dequeue upwards.
+		 *
+		 * If entity's group has been marked for deletion, don't
+		 * requeue the group in idle tree so that it can be freed.
+		 */
+
+		__iog = container_of(entity->sched_data, struct io_group,
+						sched_data);
+		if (!iog_deleting(__iog))
+			requeue = 1;
+	}
+
+	elv_put_iog(iog);
+	return;
+
+update:
+	entity = parent;
+	for_each_entity(entity) {
+		__bfq_activate_entity(entity, 0);
+
+		sd = entity->sched_data;
+		if (!bfq_update_next_active(sd))
+			break;
+	}
 }
 
 /**
@@ -757,8 +967,10 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 		entity = __bfq_lookup_next_entity(st);
 		if (entity != NULL) {
 			if (extract) {
+				bfq_check_next_active(sd, entity);
 				bfq_active_extract(st, entity);
 				sd->active_entity = entity;
+				sd->next_active = NULL;
 			}
 			break;
 		}
@@ -770,12 +982,13 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 void entity_served(struct io_entity *entity, bfq_service_t served)
 {
 	struct io_service_tree *st;
-
-	st = io_entity_service_tree(entity);
-	entity->service += served;
-	BUG_ON(st->wsum == 0);
-	st->vtime += bfq_delta(served, st->wsum);
-	bfq_forget_idle(st);
+	for_each_entity(entity) {
+		st = io_entity_service_tree(entity);
+		entity->service += served;
+		BUG_ON(st->wsum == 0);
+		st->vtime += bfq_delta(served, st->wsum);
+		bfq_forget_idle(st);
+	}
 }
 
 /**
@@ -790,6 +1003,817 @@ void io_flush_idle_tree(struct io_service_tree *st)
 		__bfq_deactivate_entity(entity, 0);
 }
 
+/*
+ * Release all the io group references to its async queues.
+ */
+void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
+{
+	int i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < IOPRIO_BE_NR; j++)
+			elv_release_ioq(e, &iog->async_queue[i][j]);
+
+	/* Free up async idle queue */
+	elv_release_ioq(e, &iog->async_idle_queue);
+}
+
+
+/* Mainly hierarchical grouping code */
+#ifdef CONFIG_GROUP_IOSCHED
+
+struct io_cgroup io_root_cgroup = {
+	.weight = IO_DEFAULT_GRP_WEIGHT,
+	.ioprio_class = IO_DEFAULT_GRP_CLASS,
+};
+
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->weight = entity->new_weight;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->parent = iog->my_entity;
+	entity->sched_data = &iog->sched_data;
+}
+
+struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
+{
+	return container_of(cgroup_subsys_state(cgroup, io_subsys_id),
+			    struct io_cgroup, css);
+}
+
+/*
+ * Search the bfq_group for bfqd into the hash table (by now only a list)
+ * of bgrp.  Must be called under rcu_read_lock().
+ */
+struct io_group *io_cgroup_lookup_group(struct io_cgroup *iocg, void *key)
+{
+	struct io_group *iog;
+	struct hlist_node *n;
+	void *__key;
+
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		__key = rcu_dereference(iog->key);
+		if (__key == key)
+			return iog;
+	}
+
+	return NULL;
+}
+
+void io_group_init_entity(struct io_cgroup *iocg, struct io_group *iog)
+{
+	struct io_entity *entity = &iog->entity;
+
+	entity->weight = entity->new_weight = iocg->weight;
+	entity->ioprio_class = entity->new_ioprio_class = iocg->ioprio_class;
+	entity->ioprio_changed = 1;
+	entity->my_sched_data = &iog->sched_data;
+}
+
+void io_group_set_parent(struct io_group *iog, struct io_group *parent)
+{
+	struct io_entity *entity;
+
+	BUG_ON(parent == NULL);
+	BUG_ON(iog == NULL);
+
+	entity = &iog->entity;
+	entity->parent = parent->my_entity;
+	entity->sched_data = &parent->sched_data;
+	if (entity->parent)
+		/* Child group reference on parent group. */
+		elv_get_iog(parent);
+}
+
+#define SHOW_FUNCTION(__VAR)						\
+static u64 io_cgroup_##__VAR##_read(struct cgroup *cgroup,		\
+				       struct cftype *cftype)		\
+{									\
+	struct io_cgroup *iocg;					\
+	u64 ret;							\
+									\
+	if (!cgroup_lock_live_group(cgroup))				\
+		return -ENODEV;						\
+									\
+	iocg = cgroup_to_io_cgroup(cgroup);				\
+	spin_lock_irq(&iocg->lock);					\
+	ret = iocg->__VAR;						\
+	spin_unlock_irq(&iocg->lock);					\
+									\
+	cgroup_unlock();						\
+									\
+	return ret;							\
+}
+
+SHOW_FUNCTION(weight);
+SHOW_FUNCTION(ioprio_class);
+#undef SHOW_FUNCTION
+
+#define STORE_FUNCTION(__VAR, __MIN, __MAX)				\
+static int io_cgroup_##__VAR##_write(struct cgroup *cgroup,		\
+					struct cftype *cftype,		\
+					u64 val)			\
+{									\
+	struct io_cgroup *iocg;					\
+	struct io_group *iog;						\
+	struct hlist_node *n;						\
+									\
+	if (val < (__MIN) || val > (__MAX))				\
+		return -EINVAL;						\
+									\
+	if (!cgroup_lock_live_group(cgroup))				\
+		return -ENODEV;						\
+									\
+	iocg = cgroup_to_io_cgroup(cgroup);				\
+									\
+	spin_lock_irq(&iocg->lock);					\
+	iocg->__VAR = (unsigned long)val;				\
+	hlist_for_each_entry(iog, n, &iocg->group_data, group_node) {	\
+		iog->entity.new_##__VAR = (unsigned long)val;		\
+		smp_wmb();						\
+		iog->entity.ioprio_changed = 1;				\
+	}								\
+	spin_unlock_irq(&iocg->lock);					\
+									\
+	cgroup_unlock();						\
+									\
+	return 0;							\
+}
+
+STORE_FUNCTION(weight, 1, WEIGHT_MAX);
+STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
+#undef STORE_FUNCTION
+
+/**
+ * bfq_group_chain_alloc - allocate a chain of groups.
+ * @bfqd: queue descriptor.
+ * @cgroup: the leaf cgroup this chain starts from.
+ *
+ * Allocate a chain of groups starting from the one belonging to
+ * @cgroup up to the root cgroup.  Stop if a cgroup on the chain
+ * to the root already has an allocated group on @bfqd.
+ */
+struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
+					struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog, *leaf = NULL, *prev = NULL;
+	gfp_t flags = GFP_ATOMIC |  __GFP_ZERO;
+
+	for (; cgroup != NULL; cgroup = cgroup->parent) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+
+		iog = io_cgroup_lookup_group(iocg, key);
+		if (iog != NULL) {
+			/*
+			 * All the cgroups in the path from there to the
+			 * root must have a bfq_group for bfqd, so we don't
+			 * need any more allocations.
+			 */
+			break;
+		}
+
+		iog = kzalloc_node(sizeof(*iog), flags, q->node);
+		if (!iog)
+			goto cleanup;
+
+		iog->iocg_id = css_id(&iocg->css);
+
+		io_group_init_entity(iocg, iog);
+		iog->my_entity = &iog->entity;
+
+		atomic_set(&iog->ref, 0);
+		iog->deleting = 0;
+
+		/*
+		 * Take the initial reference that will be released on destroy
+		 * This can be thought of a joint reference by cgroup and
+		 * elevator which will be dropped by either elevator exit
+		 * or cgroup deletion path depending on who is exiting first.
+		 */
+		elv_get_iog(iog);
+
+		if (leaf == NULL) {
+			leaf = iog;
+			prev = leaf;
+		} else {
+			io_group_set_parent(prev, iog);
+			/*
+			 * Build a list of allocated nodes using the key
+			 * field, which is still unused and will be initialized
+			 * only after the node is connected.
+			 */
+			prev->key = iog;
+			prev = iog;
+		}
+	}
+
+	return leaf;
+
+cleanup:
+	while (leaf != NULL) {
+		prev = leaf;
+		leaf = leaf->key;
+		kfree(prev);
+	}
+
+	return NULL;
+}
+
+/**
+ * bfq_group_chain_link - link an allocated group chain to a cgroup hierarchy.
+ * @bfqd: the queue descriptor.
+ * @cgroup: the leaf cgroup to start from.
+ * @leaf: the leaf group (to be associated to @cgroup).
+ *
+ * Try to link a chain of groups to a cgroup hierarchy, connecting the
+ * nodes bottom-up, so we can be sure that when we find a cgroup in the
+ * hierarchy that already has a group associated to @bfqd, all the nodes
+ * in the path to the root cgroup have one too.
+ *
+ * On locking: the queue lock protects the hierarchy (there is a hierarchy
+ * per device) while the bfqio_cgroup lock protects the list of groups
+ * belonging to the same cgroup.
+ */
+void io_group_chain_link(struct request_queue *q, void *key,
+				struct cgroup *cgroup,
+				struct io_group *leaf,
+				struct elv_fq_data *efqd)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog, *next, *prev = NULL;
+	unsigned long flags;
+
+	assert_spin_locked(q->queue_lock);
+
+	for (; cgroup != NULL && leaf != NULL; cgroup = cgroup->parent) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+		next = leaf->key;
+
+		iog = io_cgroup_lookup_group(iocg, key);
+		BUG_ON(iog != NULL);
+
+		spin_lock_irqsave(&iocg->lock, flags);
+
+		rcu_assign_pointer(leaf->key, key);
+		hlist_add_head_rcu(&leaf->group_node, &iocg->group_data);
+		hlist_add_head(&leaf->elv_data_node, &efqd->group_list);
+
+		spin_unlock_irqrestore(&iocg->lock, flags);
+
+		prev = leaf;
+		leaf = next;
+	}
+
+	BUG_ON(cgroup == NULL && leaf != NULL);
+
+	if (cgroup != NULL && prev != NULL) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+		iog = io_cgroup_lookup_group(iocg, key);
+		io_group_set_parent(prev, iog);
+	}
+}
+
+/**
+ * bfq_find_alloc_group - return the group associated to @bfqd in @cgroup.
+ * @bfqd: queue descriptor.
+ * @cgroup: cgroup being searched for.
+ * @create: if set to 1, create the io group if it has not been created yet.
+ *
+ * Return a group associated to @bfqd in @cgroup, allocating one if
+ * necessary.  When a group is returned all the cgroups in the path
+ * to the root have a group associated to @bfqd.
+ *
+ * If the allocation fails, return the root group: this breaks guarantees
+ * but is a safe fallback.  If this loss becomes a problem it can be
+ * mitigated using the equivalent weight (given by the product of the
+ * weights of the groups in the path from @group to the root) in the
+ * root scheduler.
+ *
+ * We allocate all the missing nodes in the path from the leaf cgroup
+ * to the root and we connect the nodes only after all the allocations
+ * have been successful.
+ */
+struct io_group *io_find_alloc_group(struct request_queue *q,
+			struct cgroup *cgroup, struct elv_fq_data *efqd,
+			int create)
+{
+	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
+	struct io_group *iog = NULL;
+	/* Note: Use efqd as key */
+	void *key = efqd;
+
+	/*
+	 * Take a reference to the css object. We don't want to map a bio to
+	 * a group if it has been marked for deletion.
+	 */
+
+	if (!css_tryget(&iocg->css))
+		return iog;
+
+	iog = io_cgroup_lookup_group(iocg, key);
+	if (iog != NULL || !create)
+		goto end;
+
+	iog = io_group_chain_alloc(q, key, cgroup);
+	if (iog != NULL)
+		io_group_chain_link(q, key, cgroup, iog, efqd);
+
+end:
+	css_put(&iocg->css);
+	return iog;
+}
+
+/*
+ * Search for the io group current task belongs to. If create=1, then also
+ * create the io group if it is not already there.
+ *
+ * Note: This function should be called with queue lock held. It returns
+ * a pointer to io group without taking any reference. That group will
+ * be around as long as queue lock is not dropped (as group reclaim code
+ * needs to get hold of queue lock). So if somebody needs to use group
+ * pointer even after dropping queue lock, take a reference to the group
+ * before dropping queue lock.
+ */
+struct io_group *io_get_io_group(struct request_queue *q, int create)
+{
+	struct cgroup *cgroup;
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	assert_spin_locked(q->queue_lock);
+
+	rcu_read_lock();
+	cgroup = task_cgroup(current, io_subsys_id);
+	iog = io_find_alloc_group(q, cgroup, efqd, create);
+	if (!iog) {
+		if (create)
+			iog = efqd->root_group;
+		else
+			/*
+			 * Bio merge functions doing a lookup don't want to
+			 * map the bio to the root group by default.
+			 */
+			iog = NULL;
+	}
+	rcu_read_unlock();
+	return iog;
+}
+EXPORT_SYMBOL(io_get_io_group);
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_cgroup *iocg = &io_root_cgroup;
+	struct elv_fq_data *efqd = &e->efqd;
+	struct io_group *iog = efqd->root_group;
+	struct io_service_tree *st;
+	int i;
+
+	BUG_ON(!iog);
+	spin_lock_irq(&iocg->lock);
+	hlist_del_rcu(&iog->group_node);
+	spin_unlock_irq(&iocg->lock);
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	elv_put_iog(iog);
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	struct io_cgroup *iocg;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	elv_get_iog(iog);
+	iog->entity.parent = NULL;
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	iocg = &io_root_cgroup;
+	spin_lock_irq(&iocg->lock);
+	rcu_assign_pointer(iog->key, key);
+	hlist_add_head_rcu(&iog->group_node, &iocg->group_data);
+	iog->iocg_id = css_id(&iocg->css);
+	spin_unlock_irq(&iocg->lock);
+
+	return iog;
+}
+
+struct cftype bfqio_files[] = {
+	{
+		.name = "weight",
+		.read_u64 = io_cgroup_weight_read,
+		.write_u64 = io_cgroup_weight_write,
+	},
+	{
+		.name = "ioprio_class",
+		.read_u64 = io_cgroup_ioprio_class_read,
+		.write_u64 = io_cgroup_ioprio_class_write,
+	},
+};
+
+int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
+{
+	return cgroup_add_files(cgroup, subsys, bfqio_files,
+				ARRAY_SIZE(bfqio_files));
+}
+
+struct cgroup_subsys_state *iocg_create(struct cgroup_subsys *subsys,
+						struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg;
+
+	if (cgroup->parent != NULL) {
+		iocg = kzalloc(sizeof(*iocg), GFP_KERNEL);
+		if (iocg == NULL)
+			return ERR_PTR(-ENOMEM);
+	} else
+		iocg = &io_root_cgroup;
+
+	spin_lock_init(&iocg->lock);
+	INIT_HLIST_HEAD(&iocg->group_data);
+	iocg->weight = IO_DEFAULT_GRP_WEIGHT;
+	iocg->ioprio_class = IO_DEFAULT_GRP_CLASS;
+
+	return &iocg->css;
+}
+
+/*
+ * We cannot support shared io contexts, as we have no means to support
+ * two tasks with the same ioc in two different groups without major rework
+ * of the main cic/bfqq data structures.  For now we allow a task to change
+ * its cgroup only if it's the only owner of its ioc; the drawback of this
+ * behavior is that a group containing a task that forked using CLONE_IO
+ * will not be destroyed until the tasks sharing the ioc die.
+ */
+int iocg_can_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
+			    struct task_struct *tsk)
+{
+	struct io_context *ioc;
+	int ret = 0;
+
+	/* task_lock() is needed to avoid races with exit_io_context() */
+	task_lock(tsk);
+	ioc = tsk->io_context;
+	if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
+		/*
+		 * ioc == NULL means that the task is either too young or
+		 * exiting: if it still has no ioc the ioc can't be shared,
+		 * if the task is exiting the attach will fail anyway, no
+		 * matter what we return here.
+		 */
+		ret = -EINVAL;
+	task_unlock(tsk);
+
+	return ret;
+}
+
+void iocg_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
+			 struct cgroup *prev, struct task_struct *tsk)
+{
+	struct io_context *ioc;
+
+	task_lock(tsk);
+	ioc = tsk->io_context;
+	if (ioc != NULL)
+		ioc->cgroup_changed = 1;
+	task_unlock(tsk);
+}
+
+/*
+ * This cleanup function does the last bit of things to destroy the io group.
+ * It should only get called after io_destroy_group has been invoked.
+ */
+void io_group_cleanup(struct io_group *iog)
+{
+	struct io_service_tree *st;
+	struct io_entity *entity = iog->my_entity;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+
+		BUG_ON(!RB_EMPTY_ROOT(&st->active));
+		BUG_ON(!RB_EMPTY_ROOT(&st->idle));
+		BUG_ON(st->wsum != 0);
+	}
+
+	BUG_ON(iog->sched_data.next_active != NULL);
+	BUG_ON(iog->sched_data.active_entity != NULL);
+	BUG_ON(entity != NULL && entity->tree != NULL);
+
+	iog->iocg_id = 0;
+	kfree(iog);
+}
+
+void elv_put_iog(struct io_group *iog)
+{
+	struct io_group *parent = NULL;
+	struct io_entity *entity;
+
+	BUG_ON(!iog);
+
+	entity = iog->my_entity;
+
+	BUG_ON(atomic_read(&iog->ref) <= 0);
+	if (!atomic_dec_and_test(&iog->ref))
+		return;
+
+	if (entity)
+		parent = container_of(iog->my_entity->parent,
+					struct io_group, entity);
+
+	io_group_cleanup(iog);
+
+	if (parent)
+		elv_put_iog(parent);
+}
+EXPORT_SYMBOL(elv_put_iog);
+
+/*
+ * Check whether a given group has any active entities on any of its
+ * service trees.
+ */
+static inline int io_group_has_active_entities(struct io_group *iog)
+{
+	int i;
+	struct io_service_tree *st;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		if (!RB_EMPTY_ROOT(&st->active))
+			return 1;
+	}
+
+	/*
+	 * Also check that there are no active entities being served which
+	 * are not on the active tree.
+	 */
+
+	if (iog->sched_data.active_entity)
+		return 1;
+
+	return 0;
+}
+
+/*
+ * After the group is destroyed, no new sync IO should come to the group.
+ * It might still have pending IOs in some busy queues. It should be able to
+ * send those IOs down to the disk. The async IOs (due to dirty page writeback)
+ * would go in the root group queues after this, as the group does not exist
+ * anymore.
+ */
+static void __io_destroy_group(struct elv_fq_data *efqd, struct io_group *iog)
+{
+	struct elevator_queue *eq;
+	struct io_service_tree *st;
+	int i;
+
+	BUG_ON(iog->my_entity == NULL);
+
+	/*
+	 * Mark the io group for deletion so that no new entry goes into the
+	 * idle tree. Any active queue will be removed from the active
+	 * tree and not put into the idle tree.
+	 */
+	iog->deleting = 1;
+
+	/* We flush idle tree now, and don't put things in there any more. */
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+
+		io_flush_idle_tree(st);
+	}
+
+	eq = container_of(efqd, struct elevator_queue, efqd);
+	hlist_del(&iog->elv_data_node);
+	io_put_io_group_queues(eq, iog);
+
+	/*
+	 * We can come here either through the cgroup deletion path or through
+	 * the elevator exit path. If we come here through the cgroup deletion
+	 * path, check whether the io group has any active entities. If not,
+	 * deactivate this io group to make sure it is removed from any idle
+	 * tree it might have been on. If this group was on an idle tree, this
+	 * will probably be the last reference and the group will be freed
+	 * upon putting the reference down.
+	 */
+
+	if (!io_group_has_active_entities(iog)) {
+		/*
+		 * io group does not have any active entities. Because this
+		 * group has been decoupled from io_cgroup list and this
+		 * cgroup is being deleted, this group should not receive
+		 * any new IO. Hence it should be safe to deactivate this
+		 * io group and remove from the scheduling tree.
+		 */
+		__bfq_deactivate_entity(iog->my_entity, 0);
+	}
+
+	/*
+	 * Put the reference taken at the time of creation so that when all
+	 * queues are gone, cgroup can be destroyed.
+	 */
+	elv_put_iog(iog);
+}
+
+void iocg_destroy(struct cgroup_subsys *subsys, struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
+	struct io_group *iog;
+	struct elv_fq_data *efqd;
+	unsigned long uninitialized_var(flags);
+
+	/*
+	 * io groups are linked in two lists. One list is maintained
+	 * in elevator (efqd->group_list) and other is maintained
+	 * per cgroup structure (iocg->group_data).
+	 *
+	 * While a cgroup is being deleted, the elevator might also be
+	 * exiting and both might try to clean up the same io group,
+	 * so we need to be a little careful.
+	 *
+	 * (iocg->group_data) is protected by iocg->lock. To avoid deadlock,
+	 * we can't hold the queue lock while holding iocg->lock. So we first
+	 * remove iog from iocg->group_data under iocg->lock. Whoever removes
+	 * iog from iocg->group_data should call __io_destroy_group to remove
+	 * iog.
+	 */
+
+	rcu_read_lock();
+
+remove_entry:
+	spin_lock_irqsave(&iocg->lock, flags);
+
+	if (hlist_empty(&iocg->group_data)) {
+		spin_unlock_irqrestore(&iocg->lock, flags);
+		goto done;
+	}
+	iog = hlist_entry(iocg->group_data.first, struct io_group,
+			  group_node);
+	efqd = rcu_dereference(iog->key);
+	hlist_del_rcu(&iog->group_node);
+	spin_unlock_irqrestore(&iocg->lock, flags);
+
+	spin_lock_irqsave(efqd->queue->queue_lock, flags);
+	__io_destroy_group(efqd, iog);
+	spin_unlock_irqrestore(efqd->queue->queue_lock, flags);
+	goto remove_entry;
+
+done:
+	free_css_id(&io_subsys, &iocg->css);
+	rcu_read_unlock();
+	BUG_ON(!hlist_empty(&iocg->group_data));
+	kfree(iocg);
+}
+
+/*
+ * This function checks if iog is still in iocg->group_data, and removes it.
+ * If iog is not in that list, then cgroup destroy path has removed it, and
+ * we do not need to remove it.
+ */
+void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
+{
+	struct io_cgroup *iocg;
+	unsigned short id = iog->iocg_id;
+	struct hlist_node *n;
+	struct io_group *__iog;
+	unsigned long flags;
+	struct cgroup_subsys_state *css;
+
+	rcu_read_lock();
+
+	BUG_ON(!id);
+	css = css_lookup(&io_subsys, id);
+
+	/* css can't go away as associated io group is still around */
+	BUG_ON(!css);
+
+	iocg = container_of(css, struct io_cgroup, css);
+
+	spin_lock_irqsave(&iocg->lock, flags);
+	hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
+		/*
+		 * Remove iog only if it is still in iocg list. Cgroup
+		 * deletion could have deleted it already.
+		 */
+		if (__iog == iog) {
+			hlist_del_rcu(&iog->group_node);
+			__io_destroy_group(efqd, iog);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&iocg->lock, flags);
+	rcu_read_unlock();
+}
+
+void io_disconnect_groups(struct elevator_queue *e)
+{
+	struct hlist_node *pos, *n;
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &e->efqd;
+
+	hlist_for_each_entry_safe(iog, pos, n, &efqd->group_list,
+					elv_data_node) {
+		io_group_check_and_destroy(efqd, iog);
+	}
+}
+
+struct cgroup_subsys io_subsys = {
+	.name = "io",
+	.create = iocg_create,
+	.can_attach = iocg_can_attach,
+	.attach = iocg_attach,
+	.destroy = iocg_destroy,
+	.populate = iocg_populate,
+	.subsys_id = io_subsys_id,
+};
+
+/*
+ * If the bio submitting task and the rq don't belong to the same io_group,
+ * they can't be merged.
+ */
+int io_group_allow_merge(struct request *rq, struct bio *bio)
+{
+	struct request_queue *q = rq->q;
+	struct io_queue *ioq = rq->ioq;
+	struct io_group *iog, *__iog;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return 1;
+
+	/* Determine the io group of the bio submitting task */
+	iog = io_get_io_group(q, 0);
+	if (!iog) {
+		/* Maybe the task belongs to a different cgroup for which the
+		 * io group has not been set up yet. */
+		return 0;
+	}
+
+	/* Determine the io group of the ioq that rq belongs to */
+	__iog = ioq_to_io_group(ioq);
+
+	return (iog == __iog);
+}
+
+#else /* GROUP_IOSCHED */
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->weight = entity->new_weight;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->sched_data = &iog->sched_data;
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	return iog;
+}
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_group *iog = e->efqd.root_group;
+	struct io_service_tree *st;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	kfree(iog);
+}
+
+struct io_group *io_get_io_group(struct request_queue *q, int create)
+{
+	return q->elevator->efqd.root_group;
+}
+EXPORT_SYMBOL(io_get_io_group);
+#endif /* CONFIG_GROUP_IOSCHED*/
+
 /* Elevator fair queuing function */
 struct io_queue *rq_ioq(struct request *rq)
 {
@@ -1070,11 +2094,10 @@ void elv_free_ioq(struct io_queue *ioq)
 EXPORT_SYMBOL(elv_free_ioq);
 
 int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
-			void *sched_queue, int ioprio_class, int ioprio,
-			int is_sync)
+		struct io_group *iog, void *sched_queue, int ioprio_class,
+		int ioprio, int is_sync)
 {
 	struct elv_fq_data *efqd = &eq->efqd;
-	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
 
 	RB_CLEAR_NODE(&ioq->entity.rb_node);
 	atomic_set(&ioq->ref, 0);
@@ -1099,10 +2122,14 @@ void elv_put_ioq(struct io_queue *ioq)
 	struct elv_fq_data *efqd = ioq->efqd;
 	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
 						efqd);
+	struct io_group *iog;
 
 	BUG_ON(atomic_read(&ioq->ref) <= 0);
 	if (!atomic_dec_and_test(&ioq->ref))
 		return;
+
+	iog = ioq_to_io_group(ioq);
+
 	BUG_ON(ioq->nr_queued);
 	BUG_ON(ioq->entity.tree != NULL);
 	BUG_ON(elv_ioq_busy(ioq));
@@ -1114,6 +2141,7 @@ void elv_put_ioq(struct io_queue *ioq)
 	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
 	elv_log_ioq(efqd, ioq, "put_queue");
 	elv_free_ioq(ioq);
+	elv_put_iog(iog);
 }
 EXPORT_SYMBOL(elv_put_ioq);
 
@@ -1175,11 +2203,23 @@ struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
 		return NULL;
 
 	sd = &efqd->root_group->sched_data;
-	entity = bfq_lookup_next_entity(sd, 1);
+	for (; sd != NULL; sd = entity->my_sched_data) {
+		entity = bfq_lookup_next_entity(sd, 1);
+		/*
+		 * entity can be NULL despite the fact that there are busy
+		 * queues, if all the busy queues are under a group which is
+		 * currently under service.
+		 * So if we are just looking for the next ioq while something
+		 * is being served, a NULL entity is not an error.
+		 */
+		BUG_ON(!entity && extract);
 
-	BUG_ON(!entity);
-	if (extract)
-		entity->service = 0;
+		if (extract)
+			entity->service = 0;
+
+		if (!entity)
+			return NULL;
+	}
 	ioq = io_entity_to_ioq(entity);
 
 	return ioq;
@@ -1195,8 +2235,12 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 	struct request_queue *q = efqd->queue;
 
 	if (ioq) {
-		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
-							efqd->busy_queues);
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "set_active, busy=%d ioprio=%d"
+				" weight=%ld group_weight=%ld",
+				efqd->busy_queues,
+				ioq->entity.ioprio, ioq->entity.weight,
+				iog_weight(iog));
 		ioq->slice_end = 0;
 
 		elv_clear_ioq_wait_request(ioq);
@@ -1258,6 +2302,7 @@ void elv_activate_ioq(struct io_queue *ioq, int add_front)
 void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 					int requeue)
 {
+	requeue = update_requeue(ioq, requeue);
 	bfq_deactivate_entity(&ioq->entity, requeue);
 }
 
@@ -1433,6 +2478,7 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	struct io_queue *ioq;
 	struct elevator_queue *eq = q->elevator;
 	struct io_entity *entity, *new_entity;
+	struct io_group *iog = NULL, *new_iog = NULL;
 
 	ioq = elv_active_ioq(eq);
 
@@ -1443,6 +2489,13 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	new_entity = &new_ioq->entity;
 
 	/*
+	 * In a hierarchical setup, one needs to traverse up the hierarchy
+	 * until both the queues are children of the same parent to make a
+	 * decision on whether to do the preemption or not.
+	 */
+	bfq_find_matching_entity(&entity, &new_entity);
+
+	/*
 	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
 	 */
 
@@ -1458,9 +2511,17 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 		return 1;
 
 	/*
-	 * Check with io scheduler if it has additional criterion based on
-	 * which it wants to preempt existing queue.
+	 * If both the queues belong to same group, check with io scheduler
+	 * if it has additional criterion based on which it wants to
+	 * preempt existing queue.
 	 */
+	iog = ioq_to_io_group(ioq);
+	new_iog = ioq_to_io_group(new_ioq);
+
+	if (iog != new_iog)
+		return 0;
+
+
 	if (eq->ops->elevator_should_preempt_fn)
 		return eq->ops->elevator_should_preempt_fn(q,
 						ioq_sched_queue(new_ioq), rq);
@@ -1879,14 +2940,6 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		elv_schedule_dispatch(q);
 }
 
-struct io_group *io_lookup_io_group_current(struct request_queue *q)
-{
-	struct elv_fq_data *efqd = &q->elevator->efqd;
-
-	return efqd->root_group;
-}
-EXPORT_SYMBOL(io_lookup_io_group_current);
-
 void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio)
 {
@@ -1937,52 +2990,6 @@ void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 }
 EXPORT_SYMBOL(io_group_set_async_queue);
 
-/*
- * Release all the io group references to its async queues.
- */
-void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
-{
-	int i, j;
-
-	for (i = 0; i < 2; i++)
-		for (j = 0; j < IOPRIO_BE_NR; j++)
-			elv_release_ioq(e, &iog->async_queue[i][j]);
-
-	/* Free up async idle queue */
-	elv_release_ioq(e, &iog->async_idle_queue);
-}
-
-struct io_group *io_alloc_root_group(struct request_queue *q,
-					struct elevator_queue *e, void *key)
-{
-	struct io_group *iog;
-	int i;
-
-	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
-	if (iog == NULL)
-		return NULL;
-
-	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
-		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
-
-	return iog;
-}
-
-void io_free_root_group(struct elevator_queue *e)
-{
-	struct io_group *iog = e->efqd.root_group;
-	struct io_service_tree *st;
-	int i;
-
-	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
-		st = iog->sched_data.service_tree + i;
-		io_flush_idle_tree(st);
-	}
-
-	io_put_io_group_queues(e, iog);
-	kfree(iog);
-}
-
 static void elv_slab_kill(void)
 {
 	/*
@@ -2026,6 +3033,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->idle_slice_timer.data = (unsigned long) efqd;
 
 	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
+	INIT_HLIST_HEAD(&efqd->group_list);
 
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
@@ -2045,12 +3053,23 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 void elv_exit_fq_data(struct elevator_queue *e)
 {
 	struct elv_fq_data *efqd = &e->efqd;
+	struct request_queue *q = efqd->queue;
 
 	if (!elv_iosched_fair_queuing_enabled(e))
 		return;
 
 	elv_shutdown_timer_wq(e);
 
+	spin_lock_irq(q->queue_lock);
+	/* This should drop all the io group references held by async queues */
+	io_disconnect_groups(e);
+	spin_unlock_irq(q->queue_lock);
+
+	elv_shutdown_timer_wq(e);
+
+	/* Wait for iog->key accessors to exit their grace periods. */
+	synchronize_rcu();
+
 	BUG_ON(timer_pending(&efqd->idle_slice_timer));
 	io_free_root_group(e);
 }
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index a0acf32..d9a643a 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -11,11 +11,13 @@
  */
 
 #include <linux/blkdev.h>
+#include <linux/cgroup.h>
 
 #ifndef _BFQ_SCHED_H
 #define _BFQ_SCHED_H
 
 #define IO_IOPRIO_CLASSES	3
+#define WEIGHT_MAX 		1000
 
 typedef u64 bfq_timestamp_t;
 typedef unsigned long bfq_weight_t;
@@ -74,6 +76,7 @@ struct io_service_tree {
  */
 struct io_sched_data {
 	struct io_entity *active_entity;
+	struct io_entity *next_active;
 	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
 };
 
@@ -89,13 +92,12 @@ struct io_sched_data {
  *             this entity; used for O(log N) lookups into active trees.
  * @service: service received during the last round of service.
  * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
- * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
  * @parent: parent entity, for hierarchical scheduling.
  * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
  *                 associated scheduler queue, %NULL on leaf nodes.
  * @sched_data: the scheduler queue this entity belongs to.
- * @ioprio: the ioprio in use.
- * @new_ioprio: when an ioprio change is requested, the new ioprio value
+ * @weight: the weight in use.
+ * @new_weight: when a weight change is requested, the new weight value
  * @ioprio_class: the ioprio_class in use.
  * @new_ioprio_class: when an ioprio_class change is requested, the new
  *                    ioprio_class value.
@@ -137,13 +139,13 @@ struct io_entity {
 	bfq_timestamp_t min_start;
 
 	bfq_service_t service, budget;
-	bfq_weight_t weight;
 
 	struct io_entity *parent;
 
 	struct io_sched_data *my_sched_data;
 	struct io_sched_data *sched_data;
 
+	bfq_weight_t weight, new_weight;
 	unsigned short ioprio, new_ioprio;
 	unsigned short ioprio_class, new_ioprio_class;
 
@@ -184,8 +186,50 @@ struct io_queue {
 	void *sched_queue;
 };
 
+#ifdef CONFIG_GROUP_IOSCHED
+/**
+ * struct bfq_group - per (device, cgroup) data structure.
+ * @entity: schedulable entity to insert into the parent group sched_data.
+ * @sched_data: own sched_data, to contain child entities (they may be
+ *              both bfq_queues and bfq_groups).
+ * @group_node: node to be inserted into the bfqio_cgroup->group_data
+ *              list of the containing cgroup's bfqio_cgroup.
+ * @bfqd_node: node to be inserted into the @bfqd->group_list list
+ *             of the groups active on the same device; used for cleanup.
+ * @bfqd: the bfq_data for the device this group acts upon.
+ * @async_bfqq: array of async queues for all the tasks belonging to
+ *              the group, one queue per ioprio value per ioprio_class,
+ *              except for the idle class that has only one queue.
+ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
+ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
+ *             to avoid too many special cases during group creation/migration.
+ *
+ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
+ * there is a set of bfq_groups, each one collecting the lower-level
+ * entities belonging to the group that are acting on the same device.
+ *
+ * Locking works as follows:
+ *    o @group_node is protected by the bfqio_cgroup lock, and is accessed
+ *      via RCU from its readers.
+ *    o @bfqd is protected by the queue lock, RCU is used to access it
+ *      from the readers.
+ *    o All the other fields are protected by the @bfqd queue lock.
+ */
 struct io_group {
+	struct io_entity entity;
+	struct hlist_node elv_data_node;
+	struct hlist_node group_node;
 	struct io_sched_data sched_data;
+	atomic_t ref;
+
+	struct io_entity *my_entity;
+
+	/*
+	 * A cgroup has multiple io_groups, one for each request queue.
+	 * to find io group belonging to a particular queue, elv_fq_data
+	 * pointer is stored as a key.
+	 */
+	void *key;
 
 	/* async_queue and idle_queue are used only for cfq */
 	struct io_queue *async_queue[2][IOPRIO_BE_NR];
@@ -196,11 +240,52 @@ struct io_group {
 	 * non-RT cfqq in service when this value is non-zero.
 	 */
 	unsigned int busy_rt_queues;
+
+	int deleting;
+	unsigned short iocg_id;
 };
 
+/**
+ * struct bfqio_cgroup - bfq cgroup data structure.
+ * @css: subsystem state for bfq in the containing cgroup.
+ * @weight: cgroup weight.
+ * @ioprio_class: cgroup ioprio_class.
+ * @lock: spinlock that protects @weight, @ioprio_class and @group_data.
+ * @group_data: list containing the bfq_group belonging to this cgroup.
+ *
+ * @group_data is accessed using RCU, with @lock protecting the updates,
+ * @weight and @ioprio_class are protected by @lock.
+ */
+struct io_cgroup {
+	struct cgroup_subsys_state css;
+
+	unsigned long weight, ioprio_class;
+
+	spinlock_t lock;
+	struct hlist_head group_data;
+};
+#else
+struct io_group {
+	struct io_sched_data sched_data;
+
+	/* async_queue and idle_queue are used only for cfq */
+	struct io_queue *async_queue[2][IOPRIO_BE_NR];
+	struct io_queue *async_idle_queue;
+
+	/*
+	 * Used to track any pending rt requests so we can pre-empt current
+	 * non-RT cfqq in service when this value is non-zero.
+	 */
+	unsigned int busy_rt_queues;
+};
+#endif
+
 struct elv_fq_data {
 	struct io_group *root_group;
 
+	/* List of io groups hanging on this elevator */
+	struct hlist_head group_list;
+
 	struct request_queue *queue;
 	unsigned int busy_queues;
 
@@ -362,9 +447,20 @@ static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
 	ioq->entity.ioprio_changed = 1;
 }
 
+/**
+ * bfq_ioprio_to_weight - calc a weight from an ioprio.
+ * @ioprio: the ioprio value to convert.
+ */
+static inline bfq_weight_t bfq_ioprio_to_weight(int ioprio)
+{
+	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+	return ((IOPRIO_BE_NR - ioprio) * WEIGHT_MAX)/IOPRIO_BE_NR;
+}
+
 static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
 {
 	ioq->entity.new_ioprio = ioprio;
+	ioq->entity.new_weight = bfq_ioprio_to_weight(ioprio);
 	ioq->entity.ioprio_changed = 1;
 }
 
@@ -381,6 +477,60 @@ static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
 						sched_data);
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int io_group_allow_merge(struct request *rq, struct bio *bio);
+extern void elv_put_iog(struct io_group *iog);
+static inline bfq_weight_t iog_weight(struct io_group *iog)
+{
+	return iog->entity.weight;
+}
+
+static inline void elv_get_iog(struct io_group *iog)
+{
+	atomic_inc(&iog->ref);
+}
+
+static inline int update_requeue(struct io_queue *ioq, int requeue)
+{
+	struct io_group *iog = ioq_to_io_group(ioq);
+
+	if (iog->deleting == 1)
+		return 0;
+
+	return requeue;
+}
+
+#else /* !GROUP_IOSCHED */
+static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
+{
+	return 1;
+}
+/*
+ * Currently the root group is not part of the elevator group list and is
+ * freed separately. Hence, in the non-hierarchical setup, nothing to do.
+ */
+static inline void io_disconnect_groups(struct elevator_queue *e) {}
+static inline bfq_weight_t iog_weight(struct io_group *iog)
+{
+	/* Only the root group is present and its weight is immaterial. */
+	return 0;
+}
+
+static inline void elv_get_iog(struct io_group *iog)
+{
+}
+
+static inline void elv_put_iog(struct io_group *iog)
+{
+}
+
+static inline int update_requeue(struct io_queue *ioq, int requeue)
+{
+	return requeue;
+}
+
+#endif /* GROUP_IOSCHED */
+
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
 						size_t count);
@@ -416,7 +566,8 @@ extern void elv_put_ioq(struct io_queue *ioq);
 extern void __elv_ioq_slice_expired(struct request_queue *q,
 					struct io_queue *ioq);
 extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
-		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
+		struct io_group *iog, void *sched_queue, int ioprio_class,
+		int ioprio, int is_sync);
 extern void elv_schedule_dispatch(struct request_queue *q);
 extern int elv_hw_tag(struct elevator_queue *e);
 extern void *elv_active_sched_queue(struct elevator_queue *e);
@@ -428,7 +579,7 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio);
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
-extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
+extern struct io_group *io_get_io_group(struct request_queue *q, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
 extern void elv_free_ioq(struct io_queue *ioq);
@@ -480,5 +631,11 @@ static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	return NULL;
 }
+
+static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
+
+{
+	return 1;
+}
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index c2f07f5..3944385 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -105,6 +105,10 @@ int elv_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (bio_integrity(bio) != blk_integrity_rq(rq))
 		return 0;
 
+	/* If rq and bio belong to different groups, don't allow merging */
+	if (!io_group_allow_merge(rq, bio))
+		return 0;
+
 	if (!elv_iosched_allow_merge(rq, bio))
 		return 0;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 96a94c9..539cb9d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -249,7 +249,7 @@ struct request {
 #ifdef CONFIG_ELV_FAIR_QUEUING
 	/* io queue request belongs to */
 	struct io_queue *ioq;
-#endif
+#endif /* ELV_FAIR_QUEUING */
 };
 
 static inline unsigned short req_get_ioprio(struct request *req)
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 9c8d31b..68ea6bd 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -60,3 +60,10 @@ SUBSYS(net_cls)
 #endif
 
 /* */
+
+#ifdef CONFIG_GROUP_IOSCHED
+SUBSYS(io)
+#endif
+
+/* */
+
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 5be25b3..73027b6 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -68,6 +68,11 @@ struct io_context {
 	unsigned short ioprio;
 	unsigned short ioprio_changed;
 
+#ifdef CONFIG_GROUP_IOSCHED
+	/* If a task changes cgroup, the elevator processes it asynchronously */
+	unsigned short cgroup_changed;
+#endif
+
 	/*
 	 * For request batching
 	 */
diff --git a/init/Kconfig b/init/Kconfig
index 7be4d38..ab76477 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -606,6 +606,14 @@ config CGROUP_MEM_RES_CTLR_SWAP
 	  Now, memory usage of swap_cgroup is 2 bytes per entry. If swap page
 	  size is 4096bytes, 512k per 1Gbytes of swap.
 
+config GROUP_IOSCHED
+	bool "Group IO Scheduler"
+	depends on CGROUPS && ELV_FAIR_QUEUING
+	default n
+	---help---
+	  This feature lets the IO scheduler recognize task groups and control
+	  disk bandwidth allocation to such task groups.
+
 endif # CGROUPS
 
 config MM_OWNER
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevator layer
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o This patch enables hierarchical fair queuing in the common layer. It is
  controlled by the config option CONFIG_GROUP_IOSCHED.

o Requests keep a reference on the ioq and the ioq keeps a reference
  on its group. For async queues in CFQ, and the single ioq in other
  schedulers, the io_group also keeps a reference on the io_queue. This
  reference on the ioq is dropped when the queue is released
  (elv_release_ioq), so the queue can be freed.

  When a queue is released, it puts its reference to the io_group, and
  the io_group is released after all the queues are released. Child
  groups also take a reference on their parent groups, and release it
  when they are destroyed.
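
  A minimal sketch of the get/put pairing described above (illustration
  only, not part of the patch; ioq_ref_example() is a made-up name, the
  helpers it calls are the ones used in this series):

	static void ioq_ref_example(struct io_queue *ioq, struct io_group *iog)
	{
		elv_get_iog(iog);	/* the ioq pins its io group */
		elv_get_ioq(ioq);	/* e.g. a request pins the ioq */

		elv_put_ioq(ioq);	/* drop the request's reference; the  */
					/* last put frees the ioq and calls   */
					/* elv_put_iog(), which walks up and  */
					/* releases parent groups as their    */
					/* last references are dropped.       */
	}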

o Reads of iocg->group_data are not always done under iocg->lock; so all
  the operations on that list are still protected by RCU. All modifications
  to iocg->group_data should always be done under iocg->lock.

  Whenever iocg->lock and queue_lock can both be held, queue_lock should
  be held first. This avoids all deadlocks. In order to avoid races
  between cgroup deletion and elevator switch, the following algorithm is
  used (sketched in code after this list):

	- Cgroup deletion path holds iocg->lock and removes the iog entry
	  from the iocg->group_data list. Then it drops iocg->lock, holds
	  queue_lock and destroys iog. So in this path, we never hold
	  iocg->lock and queue_lock at the same time. Also, since we
	  remove iog from iocg->group_data under iocg->lock, we can't
	  race with elevator switch.

	- Elevator switch path does not remove iog from
	  iocg->group_data list directly. It first holds iocg->lock,
	  scans iocg->group_data again to see if iog is still there;
	  it removes iog only if it finds iog there. Otherwise, cgroup
	  deletion must have removed it from the list, and cgroup
	  deletion is responsible for removing iog.

  So whichever path removes the iog from the iocg->group_data list does
  the final removal of the iog by calling the __io_destroy_group()
  function.
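
  As a rough illustration (not part of the patch), the two destruction
  paths can be condensed as below; cgroup_deletion_path(),
  elevator_exit_path() and iog_still_linked() are made-up names,
  iog_still_linked() stands in for the re-scan of iocg->group_data, and
  the RCU details are left out:

	static void cgroup_deletion_path(struct io_cgroup *iocg,
					 struct io_group *iog,
					 struct elv_fq_data *efqd)
	{
		unsigned long flags;

		/* never hold iocg->lock and queue_lock at the same time */
		spin_lock_irqsave(&iocg->lock, flags);
		hlist_del_rcu(&iog->group_node);	/* unlink under iocg->lock */
		spin_unlock_irqrestore(&iocg->lock, flags);

		spin_lock_irqsave(efqd->queue->queue_lock, flags);
		__io_destroy_group(efqd, iog);		/* destroy under queue_lock */
		spin_unlock_irqrestore(efqd->queue->queue_lock, flags);
	}

	static void elevator_exit_path(struct io_cgroup *iocg,
				       struct io_group *iog,
				       struct elv_fq_data *efqd)
	{
		unsigned long flags;

		/* queue_lock is already held here: queue_lock first, iocg->lock second */
		spin_lock_irqsave(&iocg->lock, flags);
		if (iog_still_linked(iocg, iog)) {	/* hypothetical re-scan helper */
			hlist_del_rcu(&iog->group_node);
			__io_destroy_group(efqd, iog);
		}
		spin_unlock_irqrestore(&iocg->lock, flags);
	}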

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-ioc.c               |    3 +
 block/cfq-iosched.c           |    2 +
 block/elevator-fq.c           | 1221 +++++++++++++++++++++++++++++++++++++----
 block/elevator-fq.h           |  169 ++++++-
 block/elevator.c              |    4 +
 include/linux/blkdev.h        |    2 +-
 include/linux/cgroup_subsys.h |    7 +
 include/linux/iocontext.h     |    5 +
 init/Kconfig                  |    8 +
 9 files changed, 1313 insertions(+), 108 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..8f0f6cf 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -95,6 +95,9 @@ struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
 		spin_lock_init(&ret->lock);
 		ret->ioprio_changed = 0;
 		ret->ioprio = 0;
+#ifdef CONFIG_GROUP_IOSCHED
+		ret->cgroup_changed = 0;
+#endif
 		ret->last_waited = jiffies; /* doesn't matter... */
 		ret->nr_batch_requests = 0; /* because this is 0 */
 		ret->aic = NULL;
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 995c8dd..1b67303 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1306,6 +1306,8 @@ alloc_ioq:
 			elv_mark_ioq_sync(cfqq->ioq);
 		}
 		cfqq->pid = current->pid;
+		/* ioq reference on iog */
+		elv_get_iog(iog);
 		cfq_log_cfqq(cfqd, cfqq, "alloced");
 	}
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 3e956dc..e52ace7 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -26,6 +26,10 @@ static int elv_rate_sampling_window = HZ / 10;
 
 #define ELV_SLICE_SCALE		(5)
 #define ELV_HW_QUEUE_MIN	(5)
+
+#define IO_DEFAULT_GRP_WEIGHT  500
+#define IO_DEFAULT_GRP_CLASS   IOPRIO_CLASS_BE
+
 #define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
 				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
 
@@ -33,6 +37,7 @@ static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
 					struct io_queue *ioq, int probe);
 struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 						 int extract);
+void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
 
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
@@ -51,6 +56,148 @@ elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
 }
 
 /* Mainly the BFQ scheduling code Follows */
+#ifdef CONFIG_GROUP_IOSCHED
+#define for_each_entity(entity)	\
+	for (; entity != NULL; entity = entity->parent)
+
+#define for_each_entity_safe(entity, parent) \
+	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
+
+
+struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
+						 int extract);
+void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
+					int requeue);
+void elv_activate_ioq(struct io_queue *ioq, int add_front);
+void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
+					int requeue);
+
+static int bfq_update_next_active(struct io_sched_data *sd)
+{
+	struct io_group *iog;
+	struct io_entity *entity, *next_active;
+
+	if (sd->active_entity != NULL)
+		/* will update/requeue at the end of service */
+		return 0;
+
+	/*
+	 * NOTE: this can be improved in many ways, such as returning
+	 * 1 (and thus propagating upwards the update) only when the
+	 * budget changes, or caching the bfqq that will be scheduled
+	 * next from this subtree.  For now we worry more about
+	 * correctness than about performance...
+	 */
+	next_active = bfq_lookup_next_entity(sd, 0);
+	sd->next_active = next_active;
+
+	if (next_active != NULL) {
+		iog = container_of(sd, struct io_group, sched_data);
+		entity = iog->my_entity;
+		if (entity != NULL)
+			entity->budget = next_active->budget;
+	}
+
+	return 1;
+}
+
+static inline void bfq_check_next_active(struct io_sched_data *sd,
+					 struct io_entity *entity)
+{
+	BUG_ON(sd->next_active != entity);
+}
+
+static inline int iog_deleting(struct io_group *iog)
+{
+	return iog->deleting;
+}
+
+/* Do the two (enqueued) entities belong to the same group ? */
+static inline int
+is_same_group(struct io_entity *entity, struct io_entity *new_entity)
+{
+	if (entity->sched_data == new_entity->sched_data)
+		return 1;
+
+	return 0;
+}
+
+static inline struct io_entity *parent_entity(struct io_entity *entity)
+{
+	return entity->parent;
+}
+
+/* return depth at which an io entity is present in the hierarchy */
+static inline int depth_entity(struct io_entity *entity)
+{
+	int depth = 0;
+
+	for_each_entity(entity)
+		depth++;
+
+	return depth;
+}
+
+static void bfq_find_matching_entity(struct io_entity **entity,
+			struct io_entity **new_entity)
+{
+	int entity_depth, new_entity_depth;
+
+	/*
+	 * A preemption test can only be made between sibling entities which
+	 * are in the same group, i.e. which have a common parent. Walk up the
+	 * hierarchy of both entities until we find their ancestors which are
+	 * siblings under a common parent.
+	 */
+
+	/* First walk up until both entities are at same depth */
+	entity_depth = depth_entity(*entity);
+	new_entity_depth = depth_entity(*new_entity);
+
+	while (entity_depth > new_entity_depth) {
+		entity_depth--;
+		*entity = parent_entity(*entity);
+	}
+
+	while (new_entity_depth > entity_depth) {
+		new_entity_depth--;
+		*new_entity = parent_entity(*new_entity);
+	}
+
+	while (!is_same_group(*entity, *new_entity)) {
+		*entity = parent_entity(*entity);
+		*new_entity = parent_entity(*new_entity);
+	}
+}
+
+#else /* GROUP_IOSCHED */
+#define for_each_entity(entity)	\
+	for (; entity != NULL; entity = NULL)
+
+#define for_each_entity_safe(entity, parent) \
+	for (parent = NULL; entity != NULL; entity = parent)
+
+static inline int bfq_update_next_active(struct io_sched_data *sd)
+{
+	return 0;
+}
+
+static inline void bfq_check_next_active(struct io_sched_data *sd,
+					 struct io_entity *entity)
+{
+}
+
+static inline int iog_deleting(struct io_group *iog)
+{
+	/* In flat mode, root cgroup can't be deleted. */
+	return 0;
+}
+
+static void bfq_find_matching_entity(struct io_entity **entity,
+					struct io_entity **new_entity)
+{
+}
+#endif /* GROUP_IOSCHED */
 
 /*
  * Shift for timestamp calculations.  This actually limits the maximum
@@ -283,7 +430,6 @@ static void bfq_active_insert(struct io_service_tree *st,
 	struct rb_node *node = &entity->rb_node;
 
 	bfq_insert(&st->active, entity);
-
 	if (node->rb_left != NULL)
 		node = node->rb_left;
 	else if (node->rb_right != NULL)
@@ -292,16 +438,6 @@ static void bfq_active_insert(struct io_service_tree *st,
 	bfq_update_active_tree(node);
 }
 
-/**
- * bfq_ioprio_to_weight - calc a weight from an ioprio.
- * @ioprio: the ioprio value to convert.
- */
-static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
-{
-	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
-	return IOPRIO_BE_NR - ioprio;
-}
-
 void bfq_get_entity(struct io_entity *entity)
 {
 	struct io_queue *ioq = io_entity_to_ioq(entity);
@@ -310,13 +446,6 @@ void bfq_get_entity(struct io_entity *entity)
 		elv_get_ioq(ioq);
 }
 
-void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
-{
-	entity->ioprio = entity->new_ioprio;
-	entity->ioprio_class = entity->new_ioprio_class;
-	entity->sched_data = &iog->sched_data;
-}
-
 /**
  * bfq_find_deepest - find the deepest node that an extraction can modify.
  * @node: the node being removed.
@@ -359,7 +488,6 @@ static void bfq_active_extract(struct io_service_tree *st,
 
 	node = bfq_find_deepest(&entity->rb_node);
 	bfq_extract(&st->active, entity);
-
 	if (node != NULL)
 		bfq_update_active_tree(node);
 }
@@ -454,8 +582,10 @@ __bfq_entity_update_prio(struct io_service_tree *old_st,
 	struct io_queue *ioq = io_entity_to_ioq(entity);
 
 	if (entity->ioprio_changed) {
+		old_st->wsum -= entity->weight;
 		entity->ioprio = entity->new_ioprio;
 		entity->ioprio_class = entity->new_ioprio_class;
+		entity->weight = entity->new_weight;
 		entity->ioprio_changed = 0;
 
 		/*
@@ -467,9 +597,6 @@ __bfq_entity_update_prio(struct io_service_tree *old_st,
 			entity->budget = elv_prio_to_slice(efqd, ioq);
 		}
 
-		old_st->wsum -= entity->weight;
-		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
-
 		/*
 		 * NOTE: here we may be changing the weight too early,
 		 * this will cause unfairness.  The correct approach
@@ -551,11 +678,8 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 	if (add_front) {
 		struct io_entity *next_entity;
 
-		/*
-		 * Determine the entity which will be dispatched next
-		 * Use sd->next_active once hierarchical patch is applied
-		 */
-		next_entity = bfq_lookup_next_entity(sd, 0);
+		/* Determine the entity which will be dispatched next */
+		next_entity = sd->next_active;
 
 		if (next_entity && next_entity != entity) {
 			struct io_service_tree *new_st;
@@ -582,12 +706,27 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 }
 
 /**
- * bfq_activate_entity - activate an entity.
+ * bfq_activate_entity - activate an entity and its ancestors if necessary.
  * @entity: the entity to activate.
+ * Activate @entity and all the entities on the path from it to the root.
  */
 void bfq_activate_entity(struct io_entity *entity, int add_front)
 {
-	__bfq_activate_entity(entity, add_front);
+	struct io_sched_data *sd;
+
+	for_each_entity(entity) {
+		__bfq_activate_entity(entity, add_front);
+
+		add_front = 0;
+		sd = entity->sched_data;
+		if (!bfq_update_next_active(sd))
+			/*
+			 * No need to propagate the activation to the
+			 * upper entities, as they will be updated when
+			 * the active entity is rescheduled.
+			 */
+			break;
+	}
 }
 
 /**
@@ -623,12 +762,16 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
 	else if (entity->tree != NULL)
 		BUG();
 
+	if (was_active || sd->next_active == entity)
+		ret = bfq_update_next_active(sd);
+
 	if (!requeue || !bfq_gt(entity->finish, st->vtime))
 		bfq_forget_entity(st, entity);
 	else
 		bfq_idle_insert(st, entity);
 
 	BUG_ON(sd->active_entity == entity);
+	BUG_ON(sd->next_active == entity);
 
 	return ret;
 }
@@ -640,7 +783,74 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
  */
 void bfq_deactivate_entity(struct io_entity *entity, int requeue)
 {
-	__bfq_deactivate_entity(entity, requeue);
+	struct io_sched_data *sd;
+	struct io_group *iog, *__iog;
+	struct io_entity *parent;
+
+	iog = container_of(entity->sched_data, struct io_group, sched_data);
+
+	/*
+	 * Hold a reference to the entity's iog until we are done. This
+	 * function traverses the hierarchy and we don't want to free up the
+	 * group while we are still walking it. It is possible that this
+	 * group's cgroup has been removed, hence the cgroup reference is gone.
+	 * If this entity was the active entity, then its group will not be on
+	 * any of the trees and it will be freed up the moment the queue is
+	 * freed up in __bfq_deactivate_entity().
+	 *
+	 * Hence, hold a reference, deactivate the hierarchy of entities and
+	 * then drop the reference, which should free up the whole chain of
+	 * groups.
+	 */
+	elv_get_iog(iog);
+
+	for_each_entity_safe(entity, parent) {
+		sd = entity->sched_data;
+
+		if (!__bfq_deactivate_entity(entity, requeue))
+			/*
+			 * The parent entity is still backlogged, and
+			 * we don't need to update it as it is still
+			 * under service.
+			 */
+			break;
+
+		if (sd->next_active != NULL) {
+			/*
+			 * The parent entity is still backlogged and
+			 * the budgets on the path towards the root
+			 * need to be updated.
+			 */
+			elv_put_iog(iog);
+			goto update;
+		}
+
+		/*
+		 * If we reach here, the parent is no longer backlogged and
+		 * we want to propagate the dequeue upwards.
+		 *
+		 * If entity's group has been marked for deletion, don't
+		 * requeue the group in idle tree so that it can be freed.
+		 */
+
+		__iog = container_of(entity->sched_data, struct io_group,
+						sched_data);
+		if (!iog_deleting(__iog))
+			requeue = 1;
+	}
+
+	elv_put_iog(iog);
+	return;
+
+update:
+	entity = parent;
+	for_each_entity(entity) {
+		__bfq_activate_entity(entity, 0);
+
+		sd = entity->sched_data;
+		if (!bfq_update_next_active(sd))
+			break;
+	}
 }
 
 /**
@@ -757,8 +967,10 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 		entity = __bfq_lookup_next_entity(st);
 		if (entity != NULL) {
 			if (extract) {
+				bfq_check_next_active(sd, entity);
 				bfq_active_extract(st, entity);
 				sd->active_entity = entity;
+				sd->next_active = NULL;
 			}
 			break;
 		}
@@ -770,12 +982,13 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 void entity_served(struct io_entity *entity, bfq_service_t served)
 {
 	struct io_service_tree *st;
-
-	st = io_entity_service_tree(entity);
-	entity->service += served;
-	BUG_ON(st->wsum == 0);
-	st->vtime += bfq_delta(served, st->wsum);
-	bfq_forget_idle(st);
+	for_each_entity(entity) {
+		st = io_entity_service_tree(entity);
+		entity->service += served;
+		BUG_ON(st->wsum == 0);
+		st->vtime += bfq_delta(served, st->wsum);
+		bfq_forget_idle(st);
+	}
 }
 
 /**
@@ -790,6 +1003,817 @@ void io_flush_idle_tree(struct io_service_tree *st)
 		__bfq_deactivate_entity(entity, 0);
 }
 
+/*
+ * Release all the io group references to its async queues.
+ */
+void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
+{
+	int i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < IOPRIO_BE_NR; j++)
+			elv_release_ioq(e, &iog->async_queue[i][j]);
+
+	/* Free up async idle queue */
+	elv_release_ioq(e, &iog->async_idle_queue);
+}
+
+
+/* Mainly hierarchical grouping code */
+#ifdef CONFIG_GROUP_IOSCHED
+
+struct io_cgroup io_root_cgroup = {
+	.weight = IO_DEFAULT_GRP_WEIGHT,
+	.ioprio_class = IO_DEFAULT_GRP_CLASS,
+};
+
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->weight = entity->new_weight;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->parent = iog->my_entity;
+	entity->sched_data = &iog->sched_data;
+}
+
+struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
+{
+	return container_of(cgroup_subsys_state(cgroup, io_subsys_id),
+			    struct io_cgroup, css);
+}
+
+/*
+ * Search the hash table (for now only a list) of the io_cgroup for the
+ * io_group matching @key.  Must be called under rcu_read_lock().
+ */
+struct io_group *io_cgroup_lookup_group(struct io_cgroup *iocg, void *key)
+{
+	struct io_group *iog;
+	struct hlist_node *n;
+	void *__key;
+
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		__key = rcu_dereference(iog->key);
+		if (__key == key)
+			return iog;
+	}
+
+	return NULL;
+}
+
+void io_group_init_entity(struct io_cgroup *iocg, struct io_group *iog)
+{
+	struct io_entity *entity = &iog->entity;
+
+	entity->weight = entity->new_weight = iocg->weight;
+	entity->ioprio_class = entity->new_ioprio_class = iocg->ioprio_class;
+	entity->ioprio_changed = 1;
+	entity->my_sched_data = &iog->sched_data;
+}
+
+void io_group_set_parent(struct io_group *iog, struct io_group *parent)
+{
+	struct io_entity *entity;
+
+	BUG_ON(parent == NULL);
+	BUG_ON(iog == NULL);
+
+	entity = &iog->entity;
+	entity->parent = parent->my_entity;
+	entity->sched_data = &parent->sched_data;
+	if (entity->parent)
+		/* Child group reference on parent group. */
+		elv_get_iog(parent);
+}
+
+#define SHOW_FUNCTION(__VAR)						\
+static u64 io_cgroup_##__VAR##_read(struct cgroup *cgroup,		\
+				       struct cftype *cftype)		\
+{									\
+	struct io_cgroup *iocg;					\
+	u64 ret;							\
+									\
+	if (!cgroup_lock_live_group(cgroup))				\
+		return -ENODEV;						\
+									\
+	iocg = cgroup_to_io_cgroup(cgroup);				\
+	spin_lock_irq(&iocg->lock);					\
+	ret = iocg->__VAR;						\
+	spin_unlock_irq(&iocg->lock);					\
+									\
+	cgroup_unlock();						\
+									\
+	return ret;							\
+}
+
+SHOW_FUNCTION(weight);
+SHOW_FUNCTION(ioprio_class);
+#undef SHOW_FUNCTION
+
+#define STORE_FUNCTION(__VAR, __MIN, __MAX)				\
+static int io_cgroup_##__VAR##_write(struct cgroup *cgroup,		\
+					struct cftype *cftype,		\
+					u64 val)			\
+{									\
+	struct io_cgroup *iocg;					\
+	struct io_group *iog;						\
+	struct hlist_node *n;						\
+									\
+	if (val < (__MIN) || val > (__MAX))				\
+		return -EINVAL;						\
+									\
+	if (!cgroup_lock_live_group(cgroup))				\
+		return -ENODEV;						\
+									\
+	iocg = cgroup_to_io_cgroup(cgroup);				\
+									\
+	spin_lock_irq(&iocg->lock);					\
+	iocg->__VAR = (unsigned long)val;				\
+	hlist_for_each_entry(iog, n, &iocg->group_data, group_node) {	\
+		iog->entity.new_##__VAR = (unsigned long)val;		\
+		smp_wmb();						\
+		iog->entity.ioprio_changed = 1;				\
+	}								\
+	spin_unlock_irq(&iocg->lock);					\
+									\
+	cgroup_unlock();						\
+									\
+	return 0;							\
+}
+
+STORE_FUNCTION(weight, 1, WEIGHT_MAX);
+STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
+#undef STORE_FUNCTION
+
+/**
+ * bfq_group_chain_alloc - allocate a chain of groups.
+ * @bfqd: queue descriptor.
+ * @cgroup: the leaf cgroup this chain starts from.
+ *
+ * Allocate a chain of groups starting from the one belonging to
+ * @cgroup up to the root cgroup.  Stop if a cgroup on the chain
+ * to the root has already an allocated group on @bfqd.
+ */
+struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
+					struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog, *leaf = NULL, *prev = NULL;
+	gfp_t flags = GFP_ATOMIC |  __GFP_ZERO;
+
+	for (; cgroup != NULL; cgroup = cgroup->parent) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+
+		iog = io_cgroup_lookup_group(iocg, key);
+		if (iog != NULL) {
+			/*
+			 * All the cgroups in the path from there to the
+			 * root must have a bfq_group for bfqd, so we don't
+			 * need any more allocations.
+			 */
+			break;
+		}
+
+		iog = kzalloc_node(sizeof(*iog), flags, q->node);
+		if (!iog)
+			goto cleanup;
+
+		iog->iocg_id = css_id(&iocg->css);
+
+		io_group_init_entity(iocg, iog);
+		iog->my_entity = &iog->entity;
+
+		atomic_set(&iog->ref, 0);
+		iog->deleting = 0;
+
+		/*
+		 * Take the initial reference that will be released on destroy.
+		 * This can be thought of as a joint reference by cgroup and
+		 * elevator which will be dropped by either the elevator exit
+		 * or the cgroup deletion path, depending on who exits first.
+		 */
+		elv_get_iog(iog);
+
+		if (leaf == NULL) {
+			leaf = iog;
+			prev = leaf;
+		} else {
+			io_group_set_parent(prev, iog);
+			/*
+			 * Build a list of allocated nodes using the key
+			 * field, which is still unused here and will be
+			 * initialized only after the node is connected.
+			 */
+			prev->key = iog;
+			prev = iog;
+		}
+	}
+
+	return leaf;
+
+cleanup:
+	while (leaf != NULL) {
+		prev = leaf;
+		leaf = leaf->key;
+		kfree(prev);
+	}
+
+	return NULL;
+}
+
+/**
+ * bfq_group_chain_link - link an allocated group chain to a cgroup hierarchy.
+ * @bfqd: the queue descriptor.
+ * @cgroup: the leaf cgroup to start from.
+ * @leaf: the leaf group (to be associated to @cgroup).
+ *
+ * Try to link a chain of groups to a cgroup hierarchy, connecting the
+ * nodes bottom-up, so we can be sure that when we find a cgroup in the
+ * hierarchy that already has a group associated to @bfqd, all the nodes
+ * in the path to the root cgroup have one too.
+ *
+ * On locking: the queue lock protects the hierarchy (there is a hierarchy
+ * per device) while the bfqio_cgroup lock protects the list of groups
+ * belonging to the same cgroup.
+ */
+void io_group_chain_link(struct request_queue *q, void *key,
+				struct cgroup *cgroup,
+				struct io_group *leaf,
+				struct elv_fq_data *efqd)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog, *next, *prev = NULL;
+	unsigned long flags;
+
+	assert_spin_locked(q->queue_lock);
+
+	for (; cgroup != NULL && leaf != NULL; cgroup = cgroup->parent) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+		next = leaf->key;
+
+		iog = io_cgroup_lookup_group(iocg, key);
+		BUG_ON(iog != NULL);
+
+		spin_lock_irqsave(&iocg->lock, flags);
+
+		rcu_assign_pointer(leaf->key, key);
+		hlist_add_head_rcu(&leaf->group_node, &iocg->group_data);
+		hlist_add_head(&leaf->elv_data_node, &efqd->group_list);
+
+		spin_unlock_irqrestore(&iocg->lock, flags);
+
+		prev = leaf;
+		leaf = next;
+	}
+
+	BUG_ON(cgroup == NULL && leaf != NULL);
+
+	if (cgroup != NULL && prev != NULL) {
+		iocg = cgroup_to_io_cgroup(cgroup);
+		iog = io_cgroup_lookup_group(iocg, key);
+		io_group_set_parent(prev, iog);
+	}
+}
+
+/**
+ * bfq_find_alloc_group - return the group associated to @bfqd in @cgroup.
+ * @bfqd: queue descriptor.
+ * @cgroup: cgroup being searched for.
+ * @create: if set to 1, create the io group if it has not been created yet.
+ *
+ * Return a group associated to @bfqd in @cgroup, allocating one if
+ * necessary.  When a group is returned all the cgroups in the path
+ * to the root have a group associated to @bfqd.
+ *
+ * If the allocation fails, return the root group: this breaks guarantees
+ * but is a safe fallback.  If this loss becomes a problem it can be
+ * mitigated using the equivalent weight (given by the product of the
+ * weights of the groups in the path from @group to the root) in the
+ * root scheduler.
+ *
+ * We allocate all the missing nodes in the path from the leaf cgroup
+ * to the root and we connect the nodes only after all the allocations
+ * have been successful.
+ */
+struct io_group *io_find_alloc_group(struct request_queue *q,
+			struct cgroup *cgroup, struct elv_fq_data *efqd,
+			int create)
+{
+	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
+	struct io_group *iog = NULL;
+	/* Note: Use efqd as key */
+	void *key = efqd;
+
+	/*
+	 * Take a reference to the css object. We don't want to map a bio to
+	 * a group if it has been marked for deletion.
+	 */
+
+	if (!css_tryget(&iocg->css))
+		return iog;
+
+	iog = io_cgroup_lookup_group(iocg, key);
+	if (iog != NULL || !create)
+		goto end;
+
+	iog = io_group_chain_alloc(q, key, cgroup);
+	if (iog != NULL)
+		io_group_chain_link(q, key, cgroup, iog, efqd);
+
+end:
+	css_put(&iocg->css);
+	return iog;
+}
+
+/*
+ * Search for the io group current task belongs to. If create=1, then also
+ * create the io group if it is not already there.
+ *
+ * Note: This function should be called with queue lock held. It returns
+ * a pointer to io group without taking any reference. That group will
+ * be around as long as queue lock is not dropped (as group reclaim code
+ * needs to get hold of queue lock). So if somebody needs to use group
+ * pointer even after dropping queue lock, take a reference to the group
+ * before dropping queue lock.
+ */
+struct io_group *io_get_io_group(struct request_queue *q, int create)
+{
+	struct cgroup *cgroup;
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &q->elevator->efqd;
+
+	assert_spin_locked(q->queue_lock);
+
+	rcu_read_lock();
+	cgroup = task_cgroup(current, io_subsys_id);
+	iog = io_find_alloc_group(q, cgroup, efqd, create);
+	if (!iog) {
+		if (create)
+			iog = efqd->root_group;
+		else
+			/*
+			 * bio merge functions doing a lookup don't want to
+			 * map the bio to the root group by default.
+			 */
+			iog = NULL;
+	}
+	rcu_read_unlock();
+	return iog;
+}
+EXPORT_SYMBOL(io_get_io_group);
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_cgroup *iocg = &io_root_cgroup;
+	struct elv_fq_data *efqd = &e->efqd;
+	struct io_group *iog = efqd->root_group;
+	struct io_service_tree *st;
+	int i;
+
+	BUG_ON(!iog);
+	spin_lock_irq(&iocg->lock);
+	hlist_del_rcu(&iog->group_node);
+	spin_unlock_irq(&iocg->lock);
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	elv_put_iog(iog);
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	struct io_cgroup *iocg;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	elv_get_iog(iog);
+	iog->entity.parent = NULL;
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	iocg = &io_root_cgroup;
+	spin_lock_irq(&iocg->lock);
+	rcu_assign_pointer(iog->key, key);
+	hlist_add_head_rcu(&iog->group_node, &iocg->group_data);
+	iog->iocg_id = css_id(&iocg->css);
+	spin_unlock_irq(&iocg->lock);
+
+	return iog;
+}
+
+struct cftype bfqio_files[] = {
+	{
+		.name = "weight",
+		.read_u64 = io_cgroup_weight_read,
+		.write_u64 = io_cgroup_weight_write,
+	},
+	{
+		.name = "ioprio_class",
+		.read_u64 = io_cgroup_ioprio_class_read,
+		.write_u64 = io_cgroup_ioprio_class_write,
+	},
+};
+
+int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
+{
+	return cgroup_add_files(cgroup, subsys, bfqio_files,
+				ARRAY_SIZE(bfqio_files));
+}
+
+struct cgroup_subsys_state *iocg_create(struct cgroup_subsys *subsys,
+						struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg;
+
+	if (cgroup->parent != NULL) {
+		iocg = kzalloc(sizeof(*iocg), GFP_KERNEL);
+		if (iocg == NULL)
+			return ERR_PTR(-ENOMEM);
+	} else
+		iocg = &io_root_cgroup;
+
+	spin_lock_init(&iocg->lock);
+	INIT_HLIST_HEAD(&iocg->group_data);
+	iocg->weight = IO_DEFAULT_GRP_WEIGHT;
+	iocg->ioprio_class = IO_DEFAULT_GRP_CLASS;
+
+	return &iocg->css;
+}
+
+/*
+ * We cannot support shared io contexts, as we have no means to support
+ * two tasks with the same ioc in two different groups without major rework
+ * of the main cic/bfqq data structures.  For now we allow a task to change
+ * its cgroup only if it's the only owner of its ioc; the drawback of this
+ * behavior is that a group containing a task that forked using CLONE_IO
+ * will not be destroyed until the tasks sharing the ioc die.
+ */
+int iocg_can_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
+			    struct task_struct *tsk)
+{
+	struct io_context *ioc;
+	int ret = 0;
+
+	/* task_lock() is needed to avoid races with exit_io_context() */
+	task_lock(tsk);
+	ioc = tsk->io_context;
+	if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
+		/*
+		 * ioc == NULL means that the task is either too young or
+		 * exiting: if it has still no ioc the ioc can't be shared,
+		 * if the task is exiting the attach will fail anyway, no
+		 * matter what we return here.
+		 */
+		ret = -EINVAL;
+	task_unlock(tsk);
+
+	return ret;
+}
+
+void iocg_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
+			 struct cgroup *prev, struct task_struct *tsk)
+{
+	struct io_context *ioc;
+
+	task_lock(tsk);
+	ioc = tsk->io_context;
+	if (ioc != NULL)
+		ioc->cgroup_changed = 1;
+	task_unlock(tsk);
+}
+
+/*
+ * This cleanup function does the last bit of work needed to destroy the
+ * io group. It should only get called after io_destroy_group has been invoked.
+ */
+void io_group_cleanup(struct io_group *iog)
+{
+	struct io_service_tree *st;
+	struct io_entity *entity = iog->my_entity;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+
+		BUG_ON(!RB_EMPTY_ROOT(&st->active));
+		BUG_ON(!RB_EMPTY_ROOT(&st->idle));
+		BUG_ON(st->wsum != 0);
+	}
+
+	BUG_ON(iog->sched_data.next_active != NULL);
+	BUG_ON(iog->sched_data.active_entity != NULL);
+	BUG_ON(entity != NULL && entity->tree != NULL);
+
+	iog->iocg_id = 0;
+	kfree(iog);
+}
+
+void elv_put_iog(struct io_group *iog)
+{
+	struct io_group *parent = NULL;
+	struct io_entity *entity;
+
+	BUG_ON(!iog);
+
+	entity = iog->my_entity;
+
+	BUG_ON(atomic_read(&iog->ref) <= 0);
+	if (!atomic_dec_and_test(&iog->ref))
+		return;
+
+	if (entity)
+		parent = container_of(iog->my_entity->parent,
+					struct io_group, entity);
+
+	io_group_cleanup(iog);
+
+	if (parent)
+		elv_put_iog(parent);
+}
+EXPORT_SYMBOL(elv_put_iog);
+
+/*
+ * Check whether a given group has got any active entities on any of its
+ * service trees.
+ */
+static inline int io_group_has_active_entities(struct io_group *iog)
+{
+	int i;
+	struct io_service_tree *st;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		if (!RB_EMPTY_ROOT(&st->active))
+			return 1;
+	}
+
+	/*
+	 * Also check that there is no active entity being served which is
+	 * not on the active tree.
+	 */
+
+	if (iog->sched_data.active_entity)
+		return 1;
+
+	return 0;
+}
+
+/*
+ * After the group is destroyed, no new sync IO should come to the group.
+ * It might still have pending IOs in some busy queues. It should be able to
+ * send those IOs down to the disk. The async IOs (due to dirty page writeback)
+ * would go in the root group queues after this, as the group does not exist
+ * anymore.
+ */
+static void __io_destroy_group(struct elv_fq_data *efqd, struct io_group *iog)
+{
+	struct elevator_queue *eq;
+	struct io_service_tree *st;
+	int i;
+
+	BUG_ON(iog->my_entity == NULL);
+
+	/*
+	 * Mark the io group for deletion so that no new entry goes into
+	 * the idle tree. Any active queue will be removed from the active
+	 * tree and not put into the idle tree.
+	 */
+	iog->deleting = 1;
+
+	/* We flush idle tree now, and don't put things in there any more. */
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+
+		io_flush_idle_tree(st);
+	}
+
+	eq = container_of(efqd, struct elevator_queue, efqd);
+	hlist_del(&iog->elv_data_node);
+	io_put_io_group_queues(eq, iog);
+
+	/*
+	 * We can come here either through the cgroup deletion path or the
+	 * elevator exit path. If we come here through the cgroup deletion
+	 * path, check whether the io group has any active entities. If not,
+	 * deactivate this io group to make sure it is removed from any idle
+	 * tree it might have been on. If this group was on an idle tree, this
+	 * is probably the last reference and the group will be freed upon
+	 * putting the reference down.
+	 */
+
+	if (!io_group_has_active_entities(iog)) {
+		/*
+		 * The io group does not have any active entities. Because this
+		 * group has been decoupled from the io_cgroup list and this
+		 * cgroup is being deleted, this group should not receive
+		 * any new IO. Hence it should be safe to deactivate this
+		 * io group and remove it from the scheduling tree.
+		 */
+		__bfq_deactivate_entity(iog->my_entity, 0);
+	}
+
+	/*
+	 * Put the reference taken at the time of creation so that when all
+	 * queues are gone, cgroup can be destroyed.
+	 */
+	elv_put_iog(iog);
+}
+
+void iocg_destroy(struct cgroup_subsys *subsys, struct cgroup *cgroup)
+{
+	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
+	struct io_group *iog;
+	struct elv_fq_data *efqd;
+	unsigned long uninitialized_var(flags);
+
+	/*
+	 * io groups are linked in two lists. One list is maintained
+	 * in elevator (efqd->group_list) and other is maintained
+	 * per cgroup structure (iocg->group_data).
+	 *
+	 * While a cgroup is being deleted, the elevator also might be
+	 * exiting and both might try to clean up the same io group,
+	 * so we need to be a little careful.
+	 *
+	 * (iocg->group_data) is protected by iocg->lock. To avoid deadlock,
+	 * we can't hold the queue lock while holding iocg->lock. So we first
+	 * remove iog from iocg->group_data under iocg->lock. Whoever removes
+	 * iog from iocg->group_data should call __io_destroy_group to remove
+	 * iog.
+	 */
+
+	rcu_read_lock();
+
+remove_entry:
+	spin_lock_irqsave(&iocg->lock, flags);
+
+	if (hlist_empty(&iocg->group_data)) {
+		spin_unlock_irqrestore(&iocg->lock, flags);
+		goto done;
+	}
+	iog = hlist_entry(iocg->group_data.first, struct io_group,
+			  group_node);
+	efqd = rcu_dereference(iog->key);
+	hlist_del_rcu(&iog->group_node);
+	spin_unlock_irqrestore(&iocg->lock, flags);
+
+	spin_lock_irqsave(efqd->queue->queue_lock, flags);
+	__io_destroy_group(efqd, iog);
+	spin_unlock_irqrestore(efqd->queue->queue_lock, flags);
+	goto remove_entry;
+
+done:
+	free_css_id(&io_subsys, &iocg->css);
+	rcu_read_unlock();
+	BUG_ON(!hlist_empty(&iocg->group_data));
+	kfree(iocg);
+}
+
+/*
+ * This function checks if iog is still in iocg->group_data, and removes it.
+ * If iog is not in that list, then cgroup destroy path has removed it, and
+ * we do not need to remove it.
+ */
+void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
+{
+	struct io_cgroup *iocg;
+	unsigned short id = iog->iocg_id;
+	struct hlist_node *n;
+	struct io_group *__iog;
+	unsigned long flags;
+	struct cgroup_subsys_state *css;
+
+	rcu_read_lock();
+
+	BUG_ON(!id);
+	css = css_lookup(&io_subsys, id);
+
+	/* css can't go away as associated io group is still around */
+	BUG_ON(!css);
+
+	iocg = container_of(css, struct io_cgroup, css);
+
+	spin_lock_irqsave(&iocg->lock, flags);
+	hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
+		/*
+		 * Remove iog only if it is still in iocg list. Cgroup
+		 * deletion could have deleted it already.
+		 */
+		if (__iog == iog) {
+			hlist_del_rcu(&iog->group_node);
+			__io_destroy_group(efqd, iog);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&iocg->lock, flags);
+	rcu_read_unlock();
+}
+
+void io_disconnect_groups(struct elevator_queue *e)
+{
+	struct hlist_node *pos, *n;
+	struct io_group *iog;
+	struct elv_fq_data *efqd = &e->efqd;
+
+	hlist_for_each_entry_safe(iog, pos, n, &efqd->group_list,
+					elv_data_node) {
+		io_group_check_and_destroy(efqd, iog);
+	}
+}
+
+struct cgroup_subsys io_subsys = {
+	.name = "io",
+	.create = iocg_create,
+	.can_attach = iocg_can_attach,
+	.attach = iocg_attach,
+	.destroy = iocg_destroy,
+	.populate = iocg_populate,
+	.subsys_id = io_subsys_id,
+};
+
+/*
+ * If the bio-submitting task and the rq don't belong to the same io_group,
+ * they can't be merged.
+ */
+int io_group_allow_merge(struct request *rq, struct bio *bio)
+{
+	struct request_queue *q = rq->q;
+	struct io_queue *ioq = rq->ioq;
+	struct io_group *iog, *__iog;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return 1;
+
+	/* Determine the io group of the bio submitting task */
+	iog = io_get_io_group(q, 0);
+	if (!iog) {
+		/* Maybe the task belongs to a different cgroup for which the
+		 * io group has not been set up yet. */
+		return 0;
+	}
+
+	/* Determine the io group of the ioq that rq belongs to */
+	__iog = ioq_to_io_group(ioq);
+
+	return (iog == __iog);
+}
+
+#else /* GROUP_IOSCHED */
+void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
+{
+	entity->ioprio = entity->new_ioprio;
+	entity->weight = entity->new_weight;
+	entity->ioprio_class = entity->new_ioprio_class;
+	entity->sched_data = &iog->sched_data;
+}
+
+struct io_group *io_alloc_root_group(struct request_queue *q,
+					struct elevator_queue *e, void *key)
+{
+	struct io_group *iog;
+	int i;
+
+	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
+	if (iog == NULL)
+		return NULL;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
+		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
+
+	return iog;
+}
+
+void io_free_root_group(struct elevator_queue *e)
+{
+	struct io_group *iog = e->efqd.root_group;
+	struct io_service_tree *st;
+	int i;
+
+	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
+		st = iog->sched_data.service_tree + i;
+		io_flush_idle_tree(st);
+	}
+
+	io_put_io_group_queues(e, iog);
+	kfree(iog);
+}
+
+struct io_group *io_get_io_group(struct request_queue *q, int create)
+{
+	return q->elevator->efqd.root_group;
+}
+EXPORT_SYMBOL(io_get_io_group);
+#endif /* CONFIG_GROUP_IOSCHED*/
+
 /* Elevator fair queuing function */
 struct io_queue *rq_ioq(struct request *rq)
 {
@@ -1070,11 +2094,10 @@ void elv_free_ioq(struct io_queue *ioq)
 EXPORT_SYMBOL(elv_free_ioq);
 
 int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
-			void *sched_queue, int ioprio_class, int ioprio,
-			int is_sync)
+		struct io_group *iog, void *sched_queue, int ioprio_class,
+		int ioprio, int is_sync)
 {
 	struct elv_fq_data *efqd = &eq->efqd;
-	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
 
 	RB_CLEAR_NODE(&ioq->entity.rb_node);
 	atomic_set(&ioq->ref, 0);
@@ -1099,10 +2122,14 @@ void elv_put_ioq(struct io_queue *ioq)
 	struct elv_fq_data *efqd = ioq->efqd;
 	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
 						efqd);
+	struct io_group *iog;
 
 	BUG_ON(atomic_read(&ioq->ref) <= 0);
 	if (!atomic_dec_and_test(&ioq->ref))
 		return;
+
+	iog = ioq_to_io_group(ioq);
+
 	BUG_ON(ioq->nr_queued);
 	BUG_ON(ioq->entity.tree != NULL);
 	BUG_ON(elv_ioq_busy(ioq));
@@ -1114,6 +2141,7 @@ void elv_put_ioq(struct io_queue *ioq)
 	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
 	elv_log_ioq(efqd, ioq, "put_queue");
 	elv_free_ioq(ioq);
+	elv_put_iog(iog);
 }
 EXPORT_SYMBOL(elv_put_ioq);
 
@@ -1175,11 +2203,23 @@ struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
 		return NULL;
 
 	sd = &efqd->root_group->sched_data;
-	entity = bfq_lookup_next_entity(sd, 1);
+	for (; sd != NULL; sd = entity->my_sched_data) {
+		entity = bfq_lookup_next_entity(sd, 1);
+		/*
+		 * entity can be NULL despite the fact that there are busy
+		 * queues, if all the busy queues are under a group which is
+		 * currently under service.
+		 * So if we are just looking for the next ioq while something
+		 * is being served, a NULL entity is not an error.
+		 */
+		BUG_ON(!entity && extract);
 
-	BUG_ON(!entity);
-	if (extract)
-		entity->service = 0;
+		if (extract)
+			entity->service = 0;
+
+		if (!entity)
+			return NULL;
+	}
 	ioq = io_entity_to_ioq(entity);
 
 	return ioq;
@@ -1195,8 +2235,12 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 	struct request_queue *q = efqd->queue;
 
 	if (ioq) {
-		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
-							efqd->busy_queues);
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "set_active, busy=%d ioprio=%d"
+				" weight=%ld group_weight=%ld",
+				efqd->busy_queues,
+				ioq->entity.ioprio, ioq->entity.weight,
+				iog_weight(iog));
 		ioq->slice_end = 0;
 
 		elv_clear_ioq_wait_request(ioq);
@@ -1258,6 +2302,7 @@ void elv_activate_ioq(struct io_queue *ioq, int add_front)
 void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 					int requeue)
 {
+	requeue = update_requeue(ioq, requeue);
 	bfq_deactivate_entity(&ioq->entity, requeue);
 }
 
@@ -1433,6 +2478,7 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	struct io_queue *ioq;
 	struct elevator_queue *eq = q->elevator;
 	struct io_entity *entity, *new_entity;
+	struct io_group *iog = NULL, *new_iog = NULL;
 
 	ioq = elv_active_ioq(eq);
 
@@ -1443,6 +2489,13 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	new_entity = &new_ioq->entity;
 
 	/*
+	 * In a hierarchical setup, one needs to traverse up the hierarchy
+	 * until both the queues are children of the same parent to make a
+	 * decision on whether to do the preemption or not.
+	 */
+	bfq_find_matching_entity(&entity, &new_entity);
+
+	/*
 	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
 	 */
 
@@ -1458,9 +2511,17 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 		return 1;
 
 	/*
-	 * Check with io scheduler if it has additional criterion based on
-	 * which it wants to preempt existing queue.
+	 * If both the queues belong to the same group, check with the io
+	 * scheduler if it has an additional criterion based on which it
+	 * wants to preempt the existing queue.
 	 */
+	iog = ioq_to_io_group(ioq);
+	new_iog = ioq_to_io_group(new_ioq);
+
+	if (iog != new_iog)
+		return 0;
+
+
 	if (eq->ops->elevator_should_preempt_fn)
 		return eq->ops->elevator_should_preempt_fn(q,
 						ioq_sched_queue(new_ioq), rq);
@@ -1879,14 +2940,6 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		elv_schedule_dispatch(q);
 }
 
-struct io_group *io_lookup_io_group_current(struct request_queue *q)
-{
-	struct elv_fq_data *efqd = &q->elevator->efqd;
-
-	return efqd->root_group;
-}
-EXPORT_SYMBOL(io_lookup_io_group_current);
-
 void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio)
 {
@@ -1937,52 +2990,6 @@ void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 }
 EXPORT_SYMBOL(io_group_set_async_queue);
 
-/*
- * Release all the io group references to its async queues.
- */
-void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
-{
-	int i, j;
-
-	for (i = 0; i < 2; i++)
-		for (j = 0; j < IOPRIO_BE_NR; j++)
-			elv_release_ioq(e, &iog->async_queue[i][j]);
-
-	/* Free up async idle queue */
-	elv_release_ioq(e, &iog->async_idle_queue);
-}
-
-struct io_group *io_alloc_root_group(struct request_queue *q,
-					struct elevator_queue *e, void *key)
-{
-	struct io_group *iog;
-	int i;
-
-	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
-	if (iog == NULL)
-		return NULL;
-
-	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
-		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
-
-	return iog;
-}
-
-void io_free_root_group(struct elevator_queue *e)
-{
-	struct io_group *iog = e->efqd.root_group;
-	struct io_service_tree *st;
-	int i;
-
-	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
-		st = iog->sched_data.service_tree + i;
-		io_flush_idle_tree(st);
-	}
-
-	io_put_io_group_queues(e, iog);
-	kfree(iog);
-}
-
 static void elv_slab_kill(void)
 {
 	/*
@@ -2026,6 +3033,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->idle_slice_timer.data = (unsigned long) efqd;
 
 	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
+	INIT_HLIST_HEAD(&efqd->group_list);
 
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
@@ -2045,12 +3053,23 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 void elv_exit_fq_data(struct elevator_queue *e)
 {
 	struct elv_fq_data *efqd = &e->efqd;
+	struct request_queue *q = efqd->queue;
 
 	if (!elv_iosched_fair_queuing_enabled(e))
 		return;
 
 	elv_shutdown_timer_wq(e);
 
+	spin_lock_irq(q->queue_lock);
+	/* This should drop all the io group references of async queues */
+	io_disconnect_groups(e);
+	spin_unlock_irq(q->queue_lock);
+
+	elv_shutdown_timer_wq(e);
+
+	/* Wait for iog->key accessors to exit their grace periods. */
+	synchronize_rcu();
+
 	BUG_ON(timer_pending(&efqd->idle_slice_timer));
 	io_free_root_group(e);
 }
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index a0acf32..d9a643a 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -11,11 +11,13 @@
  */
 
 #include <linux/blkdev.h>
+#include <linux/cgroup.h>
 
 #ifndef _BFQ_SCHED_H
 #define _BFQ_SCHED_H
 
 #define IO_IOPRIO_CLASSES	3
+#define WEIGHT_MAX 		1000
 
 typedef u64 bfq_timestamp_t;
 typedef unsigned long bfq_weight_t;
@@ -74,6 +76,7 @@ struct io_service_tree {
  */
 struct io_sched_data {
 	struct io_entity *active_entity;
+	struct io_entity *next_active;
 	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
 };
 
@@ -89,13 +92,12 @@ struct io_sched_data {
  *             this entity; used for O(log N) lookups into active trees.
  * @service: service received during the last round of service.
  * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
- * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
  * @parent: parent entity, for hierarchical scheduling.
  * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
  *                 associated scheduler queue, %NULL on leaf nodes.
  * @sched_data: the scheduler queue this entity belongs to.
- * @ioprio: the ioprio in use.
- * @new_ioprio: when an ioprio change is requested, the new ioprio value
+ * @weight: the weight in use.
+ * @new_weight: when a weight change is requested, the new weight value
  * @ioprio_class: the ioprio_class in use.
  * @new_ioprio_class: when an ioprio_class change is requested, the new
  *                    ioprio_class value.
@@ -137,13 +139,13 @@ struct io_entity {
 	bfq_timestamp_t min_start;
 
 	bfq_service_t service, budget;
-	bfq_weight_t weight;
 
 	struct io_entity *parent;
 
 	struct io_sched_data *my_sched_data;
 	struct io_sched_data *sched_data;
 
+	bfq_weight_t weight, new_weight;
 	unsigned short ioprio, new_ioprio;
 	unsigned short ioprio_class, new_ioprio_class;
 
@@ -184,8 +186,50 @@ struct io_queue {
 	void *sched_queue;
 };
 
+#ifdef CONFIG_GROUP_IOSCHED
+/**
+ * struct bfq_group - per (device, cgroup) data structure.
+ * @entity: schedulable entity to insert into the parent group sched_data.
+ * @sched_data: own sched_data, to contain child entities (they may be
+ *              both bfq_queues and bfq_groups).
+ * @group_node: node to be inserted into the bfqio_cgroup->group_data
+ *              list of the containing cgroup's bfqio_cgroup.
+ * @bfqd_node: node to be inserted into the @bfqd->group_list list
+ *             of the groups active on the same device; used for cleanup.
+ * @bfqd: the bfq_data for the device this group acts upon.
+ * @async_bfqq: array of async queues for all the tasks belonging to
+ *              the group, one queue per ioprio value per ioprio_class,
+ *              except for the idle class that has only one queue.
+ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
+ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
+ *             to avoid too many special cases during group creation/migration.
+ *
+ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
+ * there is a set of bfq_groups, each one collecting the lower-level
+ * entities belonging to the group that are acting on the same device.
+ *
+ * Locking works as follows:
+ *    o @group_node is protected by the bfqio_cgroup lock, and is accessed
+ *      via RCU from its readers.
+ *    o @bfqd is protected by the queue lock, RCU is used to access it
+ *      from the readers.
+ *    o All the other fields are protected by the @bfqd queue lock.
+ */
 struct io_group {
+	struct io_entity entity;
+	struct hlist_node elv_data_node;
+	struct hlist_node group_node;
 	struct io_sched_data sched_data;
+	atomic_t ref;
+
+	struct io_entity *my_entity;
+
+	/*
+	 * A cgroup has multiple io_groups, one for each request queue.
+	 * To find the io group belonging to a particular queue, the
+	 * elv_fq_data pointer is stored as a key.
+	 */
+	void *key;
 
 	/* async_queue and idle_queue are used only for cfq */
 	struct io_queue *async_queue[2][IOPRIO_BE_NR];
@@ -196,11 +240,52 @@ struct io_group {
 	 * non-RT cfqq in service when this value is non-zero.
 	 */
 	unsigned int busy_rt_queues;
+
+	int deleting;
+	unsigned short iocg_id;
 };
 
+/**
+ * struct bfqio_cgroup - bfq cgroup data structure.
+ * @css: subsystem state for bfq in the containing cgroup.
+ * @weight: cgroup weight.
+ * @ioprio_class: cgroup ioprio_class.
+ * @lock: spinlock that protects @weight, @ioprio_class and @group_data.
+ * @group_data: list containing the bfq_group belonging to this cgroup.
+ *
+ * @group_data is accessed using RCU, with @lock protecting the updates,
+ * @weight and @ioprio_class are protected by @lock.
+ */
+struct io_cgroup {
+	struct cgroup_subsys_state css;
+
+	unsigned long weight, ioprio_class;
+
+	spinlock_t lock;
+	struct hlist_head group_data;
+};
+#else
+struct io_group {
+	struct io_sched_data sched_data;
+
+	/* async_queue and idle_queue are used only for cfq */
+	struct io_queue *async_queue[2][IOPRIO_BE_NR];
+	struct io_queue *async_idle_queue;
+
+	/*
+	 * Used to track any pending rt requests so we can pre-empt current
+	 * non-RT cfqq in service when this value is non-zero.
+	 */
+	unsigned int busy_rt_queues;
+};
+#endif
+
 struct elv_fq_data {
 	struct io_group *root_group;
 
+	/* List of io groups hanging on this elevator */
+	struct hlist_head group_list;
+
 	struct request_queue *queue;
 	unsigned int busy_queues;
 
@@ -362,9 +447,20 @@ static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
 	ioq->entity.ioprio_changed = 1;
 }
 
+/**
+ * bfq_ioprio_to_weight - calc a weight from an ioprio.
+ * @ioprio: the ioprio value to convert.
+ */
+static inline bfq_weight_t bfq_ioprio_to_weight(int ioprio)
+{
+	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+	return ((IOPRIO_BE_NR - ioprio) * WEIGHT_MAX)/IOPRIO_BE_NR;
+}
+
 static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
 {
 	ioq->entity.new_ioprio = ioprio;
+	ioq->entity.new_weight = bfq_ioprio_to_weight(ioprio);
 	ioq->entity.ioprio_changed = 1;
 }
 
@@ -381,6 +477,60 @@ static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
 						sched_data);
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int io_group_allow_merge(struct request *rq, struct bio *bio);
+extern void elv_put_iog(struct io_group *iog);
+static inline bfq_weight_t iog_weight(struct io_group *iog)
+{
+	return iog->entity.weight;
+}
+
+static inline void elv_get_iog(struct io_group *iog)
+{
+	atomic_inc(&iog->ref);
+}
+
+static inline int update_requeue(struct io_queue *ioq, int requeue)
+{
+	struct io_group *iog = ioq_to_io_group(ioq);
+
+	if (iog->deleting == 1)
+		return 0;
+
+	return requeue;
+}
+
+#else /* !GROUP_IOSCHED */
+static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
+{
+	return 1;
+}
+/*
+ * Currently the root group is not part of the elevator group list and is
+ * freed separately. Hence there is nothing to do in the non-hierarchical case.
+ */
+static inline void io_disconnect_groups(struct elevator_queue *e) {}
+static inline bfq_weight_t iog_weight(struct io_group *iog)
+{
+	/* Just root group is present and weight is immaterial. */
+	return 0;
+}
+
+static inline void elv_get_iog(struct io_group *iog)
+{
+}
+
+static inline void elv_put_iog(struct io_group *iog)
+{
+}
+
+static inline int update_requeue(struct io_queue *ioq, int requeue)
+{
+	return requeue;
+}
+
+#endif /* GROUP_IOSCHED */
+
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
 						size_t count);
@@ -416,7 +566,8 @@ extern void elv_put_ioq(struct io_queue *ioq);
 extern void __elv_ioq_slice_expired(struct request_queue *q,
 					struct io_queue *ioq);
 extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
-		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
+		struct io_group *iog, void *sched_queue, int ioprio_class,
+		int ioprio, int is_sync);
 extern void elv_schedule_dispatch(struct request_queue *q);
 extern int elv_hw_tag(struct elevator_queue *e);
 extern void *elv_active_sched_queue(struct elevator_queue *e);
@@ -428,7 +579,7 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio);
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
-extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
+extern struct io_group *io_get_io_group(struct request_queue *q, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
 extern void elv_free_ioq(struct io_queue *ioq);
@@ -480,5 +631,11 @@ static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	return NULL;
 }
+
+static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
+
+{
+	return 1;
+}
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index c2f07f5..3944385 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -105,6 +105,10 @@ int elv_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (bio_integrity(bio) != blk_integrity_rq(rq))
 		return 0;
 
+	/* If rq and bio belong to different groups, don't allow merging */
+	if (!io_group_allow_merge(rq, bio))
+		return 0;
+
 	if (!elv_iosched_allow_merge(rq, bio))
 		return 0;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 96a94c9..539cb9d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -249,7 +249,7 @@ struct request {
 #ifdef CONFIG_ELV_FAIR_QUEUING
 	/* io queue request belongs to */
 	struct io_queue *ioq;
-#endif
+#endif /* ELV_FAIR_QUEUING */
 };
 
 static inline unsigned short req_get_ioprio(struct request *req)
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 9c8d31b..68ea6bd 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -60,3 +60,10 @@ SUBSYS(net_cls)
 #endif
 
 /* */
+
+#ifdef CONFIG_GROUP_IOSCHED
+SUBSYS(io)
+#endif
+
+/* */
+
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 5be25b3..73027b6 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -68,6 +68,11 @@ struct io_context {
 	unsigned short ioprio;
 	unsigned short ioprio_changed;
 
+#ifdef CONFIG_GROUP_IOSCHED
+	/* If the task changes its cgroup, the elevator processes it asynchronously */
+	unsigned short cgroup_changed;
+#endif
+
 	/*
 	 * For request batching
 	 */
diff --git a/init/Kconfig b/init/Kconfig
index 7be4d38..ab76477 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -606,6 +606,14 @@ config CGROUP_MEM_RES_CTLR_SWAP
 	  Now, memory usage of swap_cgroup is 2 bytes per entry. If swap page
 	  size is 4096bytes, 512k per 1Gbytes of swap.
 
+config GROUP_IOSCHED
+	bool "Group IO Scheduler"
+	depends on CGROUPS && ELV_FAIR_QUEUING
+	default n
+	---help---
+	  This feature lets the IO scheduler recognize task groups and control
+	  disk bandwidth allocation to such task groups.
+
 endif # CGROUPS
 
 config MM_OWNER
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread
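
To make the preemption logic easier to follow: bfq_find_matching_entity()
above implements the classic "bring both nodes to the same depth, then climb
in lockstep until the ancestors are siblings" walk. A small stand-alone model
of the same idea is below; the struct, names and test harness are mine and
only illustrative, they are not part of the patch.

#include <stdio.h>

struct node {
	struct node *parent;
	const char *name;
};

static int depth(struct node *n)
{
	int d = 0;

	for (; n != NULL; n = n->parent)
		d++;
	return d;
}

/* Bring both nodes to the same depth, then climb in lockstep. */
static void find_matching(struct node **a, struct node **b)
{
	int da = depth(*a), db = depth(*b);

	while (da > db) {
		da--;
		*a = (*a)->parent;
	}
	while (db > da) {
		db--;
		*b = (*b)->parent;
	}
	while ((*a)->parent != (*b)->parent) {
		*a = (*a)->parent;
		*b = (*b)->parent;
	}
}

int main(void)
{
	/* root <- g1 <- q1 and root <- g2 <- g3 <- q2 */
	struct node root = { NULL, "root" };
	struct node g1 = { &root, "g1" }, q1 = { &g1, "q1" };
	struct node g2 = { &root, "g2" }, g3 = { &g2, "g3" }, q2 = { &g3, "q2" };
	struct node *a = &q1, *b = &q2;

	find_matching(&a, &b);
	printf("%s vs %s\n", a->name, b->name);	/* prints "g1 vs g2" */
	return 0;
}

Once both pointers are siblings in the same group, the RT-vs-BE class check
and the ioscheduler callback in elv_should_preempt() compare like with like,
which is the point of the walk.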

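The new bfq_ioprio_to_weight() above now scales the BE ioprio into the
1..WEIGHT_MAX range rather than returning IOPRIO_BE_NR - ioprio directly, so
queue weights and the cgroup weights (1..1000) live on the same scale. A
quick stand-alone check of the resulting table, assuming IOPRIO_BE_NR is 8 as
in current mainline (illustration only, not part of the patch):

#include <stdio.h>

#define IOPRIO_BE_NR	8
#define WEIGHT_MAX	1000

/* Same arithmetic as the patched bfq_ioprio_to_weight(). */
static unsigned long ioprio_to_weight(int ioprio)
{
	return ((IOPRIO_BE_NR - ioprio) * WEIGHT_MAX) / IOPRIO_BE_NR;
}

int main(void)
{
	int ioprio;

	/* Expected: 1000 875 750 625 500 375 250 125 */
	for (ioprio = 0; ioprio < IOPRIO_BE_NR; ioprio++)
		printf("ioprio %d -> weight %lu\n",
		       ioprio, ioprio_to_weight(ioprio));
	return 0;
}

So, for example, two BE queues at ioprio 0 and ioprio 4 in the same group
should see roughly a 2:1 split of disk time.
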
* [PATCH 06/20] io-controller: cfq changes to use hierarchical fair queuing code in elevator layer
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (4 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevator layer Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 07/20] io-controller: Export disk time used and nr sectors dispatched through cgroups Vivek Goyal
                     ` (15 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

Make cfq hierarchical.

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
Signed-off-by: Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
Signed-off-by: Aristeu Rozanski <aris-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched |    8 ++++++
 block/cfq-iosched.c   |   68 ++++++++++++++++++++++++++++++++++++++++++++++--
 init/Kconfig          |    2 +-
 3 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index dd5224d..a91a807 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -54,6 +54,14 @@ config IOSCHED_CFQ
 	  working environment, suitable for desktop systems.
 	  This is the default I/O scheduler.
 
+config IOSCHED_CFQ_HIER
+	bool "CFQ Hierarchical Scheduling support"
+	depends on IOSCHED_CFQ && CGROUPS
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in cfq.
+
 choice
 	prompt "Default I/O scheduler"
 	default DEFAULT_CFQ
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 1b67303..b64c8fd 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1222,6 +1222,60 @@ static void cfq_ioc_set_ioprio(struct io_context *ioc)
 	ioc->ioprio_changed = 0;
 }
 
+#ifdef CONFIG_IOSCHED_CFQ_HIER
+static void changed_cgroup(struct io_context *ioc, struct cfq_io_context *cic)
+{
+	struct cfq_queue *async_cfqq = cic_to_cfqq(cic, 0);
+	struct cfq_queue *sync_cfqq = cic_to_cfqq(cic, 1);
+	struct cfq_data *cfqd = cic->key;
+	struct io_group *iog, *__iog;
+	unsigned long flags;
+	struct request_queue *q;
+
+	if (unlikely(!cfqd))
+		return;
+
+	q = cfqd->queue;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	iog = io_get_io_group(q, 0);
+
+	if (async_cfqq != NULL) {
+		__iog = cfqq_to_io_group(async_cfqq);
+		if (iog != __iog) {
+			/* cgroup changed, drop the reference to async queue */
+			cic_set_cfqq(cic, NULL, 0);
+			cfq_put_queue(async_cfqq);
+		}
+	}
+
+	if (sync_cfqq != NULL) {
+		__iog = cfqq_to_io_group(sync_cfqq);
+
+		/*
+		 * Drop the reference to the sync queue. A new sync queue will
+		 * be assigned in the new group upon arrival of a fresh request.
+		 * If the old queue has got requests, those requests will be
+		 * dispatched over a period of time and the queue will be freed
+		 * automatically.
+		 */
+		if (iog != __iog) {
+			cic_set_cfqq(cic, NULL, 1);
+			cfq_put_queue(sync_cfqq);
+		}
+	}
+
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+static void cfq_ioc_set_cgroup(struct io_context *ioc)
+{
+	call_for_each_cic(ioc, changed_cgroup);
+	ioc->cgroup_changed = 0;
+}
+#endif  /* CONFIG_IOSCHED_CFQ_HIER */
+
 static struct cfq_queue *
 cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 				struct io_context *ioc, gfp_t gfp_mask)
@@ -1230,7 +1284,10 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 	struct cfq_io_context *cic;
 	struct request_queue *q = cfqd->queue;
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
+	struct io_group *iog = NULL;
 retry:
+	iog = io_get_io_group(q, 1);
+
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
 	cfqq = cic_to_cfqq(cic, is_sync);
@@ -1297,8 +1354,9 @@ alloc_ioq:
 
 		cfqq->ioq = ioq;
 		cfq_init_prio_data(cfqq, ioc);
-		elv_init_ioq(q->elevator, ioq, cfqq, cfqq->org_ioprio_class,
-				cfqq->org_ioprio, is_sync);
+		elv_init_ioq(q->elevator, ioq, iog, cfqq,
+				cfqq->org_ioprio_class, cfqq->org_ioprio,
+				is_sync);
 
 		if (is_sync) {
 			if (!cfq_class_idle(cfqq))
@@ -1330,7 +1388,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_lookup_io_group_current(cfqd->queue);
+	struct io_group *iog = io_get_io_group(cfqd->queue, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
@@ -1489,6 +1547,10 @@ out:
 	smp_read_barrier_depends();
 	if (unlikely(ioc->ioprio_changed))
 		cfq_ioc_set_ioprio(ioc);
+#ifdef CONFIG_IOSCHED_CFQ_HIER
+	if (unlikely(ioc->cgroup_changed))
+		cfq_ioc_set_cgroup(ioc);
+#endif
 	return cic;
 err_free:
 	cfq_cic_free(cic);
diff --git a/init/Kconfig b/init/Kconfig
index ab76477..1a4686d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -607,7 +607,7 @@ config CGROUP_MEM_RES_CTLR_SWAP
 	  size is 4096bytes, 512k per 1Gbytes of swap.
 
 config GROUP_IOSCHED
-	bool "Group IO Scheduler"
+	bool
 	depends on CGROUPS && ELV_FAIR_QUEUING
 	default n
 	---help---
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread
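
With CONFIG_IOSCHED_CFQ_HIER selected, the series exposes the group knobs
through the "io" cgroup subsystem added earlier. A minimal user-space sketch
of setting a group weight is below; the mount point, the group name and the
"io."-prefixed file names are my assumptions (cgroup v1 prefixes control
files with the subsystem name), so treat it as an illustration rather than
documentation of the final interface.

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	char pid[32];

	/* Assumes the controller was mounted with:
	 *   mount -t cgroup -o io none /cgroup/io
	 */
	if (mkdir("/cgroup/io/grp1", 0755) && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	/* The store function rejects values outside 1..WEIGHT_MAX (1000). */
	write_str("/cgroup/io/grp1/io.weight", "500");

	/* Move the current task into the group; cfq picks the change up
	 * lazily through ioc->cgroup_changed on the next request. */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_str("/cgroup/io/grp1/tasks", pid);

	return 0;
}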

* [PATCH 06/20] io-controller: cfq changes to use hierarchical fair queuing code in elevator layer
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

Make cfq hierarchical.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched |    8 ++++++
 block/cfq-iosched.c   |   68 ++++++++++++++++++++++++++++++++++++++++++++++--
 init/Kconfig          |    2 +-
 3 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index dd5224d..a91a807 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -54,6 +54,14 @@ config IOSCHED_CFQ
 	  working environment, suitable for desktop systems.
 	  This is the default I/O scheduler.
 
+config IOSCHED_CFQ_HIER
+	bool "CFQ Hierarchical Scheduling support"
+	depends on IOSCHED_CFQ && CGROUPS
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarhical scheduling in cfq.
+
 choice
 	prompt "Default I/O scheduler"
 	default DEFAULT_CFQ
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 1b67303..b64c8fd 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1222,6 +1222,60 @@ static void cfq_ioc_set_ioprio(struct io_context *ioc)
 	ioc->ioprio_changed = 0;
 }
 
+#ifdef CONFIG_IOSCHED_CFQ_HIER
+static void changed_cgroup(struct io_context *ioc, struct cfq_io_context *cic)
+{
+	struct cfq_queue *async_cfqq = cic_to_cfqq(cic, 0);
+	struct cfq_queue *sync_cfqq = cic_to_cfqq(cic, 1);
+	struct cfq_data *cfqd = cic->key;
+	struct io_group *iog, *__iog;
+	unsigned long flags;
+	struct request_queue *q;
+
+	if (unlikely(!cfqd))
+		return;
+
+	q = cfqd->queue;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+
+	iog = io_get_io_group(q, 0);
+
+	if (async_cfqq != NULL) {
+		__iog = cfqq_to_io_group(async_cfqq);
+		if (iog != __iog) {
+			/* cgroup changed, drop the reference to async queue */
+			cic_set_cfqq(cic, NULL, 0);
+			cfq_put_queue(async_cfqq);
+		}
+	}
+
+	if (sync_cfqq != NULL) {
+		__iog = cfqq_to_io_group(sync_cfqq);
+
+		/*
+		 * Drop reference to sync queue. A new sync queue will
+		 * be assigned in new group upon arrival of a fresh request.
+		 * If old queue has got requests, those reuests will be
+		 * dispatched over a period of time and queue will be freed
+		 * automatically.
+		 */
+		if (iog != __iog) {
+			cic_set_cfqq(cic, NULL, 1);
+			cfq_put_queue(sync_cfqq);
+		}
+	}
+
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+static void cfq_ioc_set_cgroup(struct io_context *ioc)
+{
+	call_for_each_cic(ioc, changed_cgroup);
+	ioc->cgroup_changed = 0;
+}
+#endif  /* CONFIG_IOSCHED_CFQ_HIER */
+
 static struct cfq_queue *
 cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 				struct io_context *ioc, gfp_t gfp_mask)
@@ -1230,7 +1284,10 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 	struct cfq_io_context *cic;
 	struct request_queue *q = cfqd->queue;
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
+	struct io_group *iog = NULL;
 retry:
+	iog = io_get_io_group(q, 1);
+
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
 	cfqq = cic_to_cfqq(cic, is_sync);
@@ -1297,8 +1354,9 @@ alloc_ioq:
 
 		cfqq->ioq = ioq;
 		cfq_init_prio_data(cfqq, ioc);
-		elv_init_ioq(q->elevator, ioq, cfqq, cfqq->org_ioprio_class,
-				cfqq->org_ioprio, is_sync);
+		elv_init_ioq(q->elevator, ioq, iog, cfqq,
+				cfqq->org_ioprio_class, cfqq->org_ioprio,
+				is_sync);
 
 		if (is_sync) {
 			if (!cfq_class_idle(cfqq))
@@ -1330,7 +1388,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_lookup_io_group_current(cfqd->queue);
+	struct io_group *iog = io_get_io_group(cfqd->queue, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
@@ -1489,6 +1547,10 @@ out:
 	smp_read_barrier_depends();
 	if (unlikely(ioc->ioprio_changed))
 		cfq_ioc_set_ioprio(ioc);
+#ifdef CONFIG_IOSCHED_CFQ_HIER
+	if (unlikely(ioc->cgroup_changed))
+		cfq_ioc_set_cgroup(ioc);
+#endif
 	return cic;
 err_free:
 	cfq_cic_free(cic);
diff --git a/init/Kconfig b/init/Kconfig
index ab76477..1a4686d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -607,7 +607,7 @@ config CGROUP_MEM_RES_CTLR_SWAP
 	  size is 4096bytes, 512k per 1Gbytes of swap.
 
 config GROUP_IOSCHED
-	bool "Group IO Scheduler"
+	bool
 	depends on CGROUPS && ELV_FAIR_QUEUING
 	default n
 	---help---
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 07/20] io-controller: Export disk time used and nr sectors dispatched through cgroups
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o This patch exports some statistics through the cgroup interface. The two
  statistics currently exported are the actual disk time assigned to the
  cgroup and the actual number of sectors dispatched to disk on behalf of
  this cgroup.
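
o As a usage illustration, each of the two files prints one "major minor
  value" line per device the group has done IO on (see the seq_printf()
  calls below). A minimal user-space sketch that reads one of them back;
  the mount point /cgroup/io, the group name grp1 and the file name
  io.disk_time are assumptions and depend on how the controller is mounted:

	#include <stdio.h>

	int main(void)
	{
		unsigned int major, minor;
		unsigned long val, total = 0;
		FILE *f = fopen("/cgroup/io/grp1/io.disk_time", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* Each line is "major minor value", matching the kernel's seq_printf(). */
		while (fscanf(f, "%u %u %lu", &major, &minor, &val) == 3) {
			printf("dev %u:%u disk time %lu\n", major, minor, val);
			total += val;
		}
		fclose(f);
		printf("total disk time: %lu\n", total);
		return 0;
	}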

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/elevator-fq.c |   87 +++++++++++++++++++++++++++++++++++++++++++++++---
 block/elevator-fq.h |   10 ++++++
 2 files changed, 91 insertions(+), 6 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index e52ace7..11a7fca 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -13,6 +13,7 @@
 #include <linux/blkdev.h>
 #include "elevator-fq.h"
 #include <linux/blktrace_api.h>
+#include <linux/seq_file.h>
 
 /* Values taken from cfq */
 const int elv_slice_sync = HZ / 10;
@@ -979,12 +980,15 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 	return entity;
 }
 
-void entity_served(struct io_entity *entity, bfq_service_t served)
+void entity_served(struct io_entity *entity, bfq_service_t served,
+					bfq_service_t nr_sectors)
 {
 	struct io_service_tree *st;
 	for_each_entity(entity) {
 		st = io_entity_service_tree(entity);
 		entity->service += served;
+		entity->total_service += served;
+		entity->total_sector_service += nr_sectors;
 		BUG_ON(st->wsum == 0);
 		st->vtime += bfq_delta(served, st->wsum);
 		bfq_forget_idle(st);
@@ -1145,6 +1149,66 @@ STORE_FUNCTION(weight, 1, WEIGHT_MAX);
 STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
 #undef STORE_FUNCTION
 
+static int io_cgroup_disk_time_read(struct cgroup *cgroup,
+				struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+
+	spin_lock_irq(&iocg->lock);
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and are
+		 * waiting to be reclaimed upon cgroup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev),
+					iog->entity.total_service);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+
+static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
+				struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg;
+	struct io_group *iog;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+
+	spin_lock_irq(&iocg->lock);
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and are
+		 * waiting to be reclaimed upon cgroup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev),
+					iog->entity.total_sector_service);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+
 /**
  * bfq_group_chain_alloc - allocate a chain of groups.
  * @bfqd: queue descriptor.
@@ -1155,7 +1219,7 @@ STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
  * to the root has already an allocated group on @bfqd.
  */
 struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
-					struct cgroup *cgroup)
+					struct cgroup *cgroup, struct bio *bio)
 {
 	struct io_cgroup *iocg;
 	struct io_group *iog, *leaf = NULL, *prev = NULL;
@@ -1180,6 +1244,9 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 
 		iog->iocg_id = css_id(&iocg->css);
 
+		sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
+		iog->dev = MKDEV(major, minor);
+
 		io_group_init_entity(iocg, iog);
 		iog->my_entity = &iog->entity;
 
@@ -1297,7 +1364,7 @@ void io_group_chain_link(struct request_queue *q, void *key,
  */
 struct io_group *io_find_alloc_group(struct request_queue *q,
 			struct cgroup *cgroup, struct elv_fq_data *efqd,
-			int create)
+			int create, struct bio *bio)
 {
 	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
 	struct io_group *iog = NULL;
@@ -1316,7 +1383,7 @@ struct io_group *io_find_alloc_group(struct request_queue *q,
 	if (iog != NULL || !create)
 		goto end;
 
-	iog = io_group_chain_alloc(q, key, cgroup);
+	iog = io_group_chain_alloc(q, key, cgroup, bio);
 	if (iog != NULL)
 		io_group_chain_link(q, key, cgroup, iog, efqd);
 
@@ -1346,7 +1413,7 @@ struct io_group *io_get_io_group(struct request_queue *q, int create)
 
 	rcu_read_lock();
 	cgroup = task_cgroup(current, io_subsys_id);
-	iog = io_find_alloc_group(q, cgroup, efqd, create);
+	iog = io_find_alloc_group(q, cgroup, efqd, create, NULL);
 	if (!iog) {
 		if (create)
 			iog = efqd->root_group;
@@ -1421,6 +1488,14 @@ struct cftype bfqio_files[] = {
 		.read_u64 = io_cgroup_ioprio_class_read,
 		.write_u64 = io_cgroup_ioprio_class_write,
 	},
+	{
+		.name = "disk_time",
+		.read_seq_string = io_cgroup_disk_time_read,
+	},
+	{
+		.name = "disk_sectors",
+		.read_seq_string = io_cgroup_disk_sectors_read,
+	},
 };
 
 int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
@@ -1868,7 +1943,7 @@ EXPORT_SYMBOL(elv_get_slice_idle);
 
 void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
 {
-	entity_served(&ioq->entity, served);
+	entity_served(&ioq->entity, served, ioq->nr_sectors);
 }
 
 /* Tells whether ioq is queued in root group or not */
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index d9a643a..9f0c9a0 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -150,6 +150,13 @@ struct io_entity {
 	unsigned short ioprio_class, new_ioprio_class;
 
 	int ioprio_changed;
+
+	/*
+	 * Keep track of total service received by this entity. Keep the
+	 * stats both for time slices and number of sectors dispatched
+	 */
+	unsigned long total_service;
+	unsigned long total_sector_service;
 };
 
 /*
@@ -243,6 +250,9 @@ struct io_group {
 
 	int deleting;
 	unsigned short iocg_id;
+
+	/* The device MKDEV(major, minor) this group has been created for */
+	dev_t	dev;
 };
 
 /**
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 08/20] io-controller: idle for some time on a sync queue before expiring it
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o When a sync queue expires, in many cases it might be empty and will then
  be deleted from the active tree. This leads to a scenario where, out of
  two competing queues, only one is on the tree; when a new queue is
  selected, a vtime jump takes place and we don't see service provided in
  proportion to weight.

o In general this is a fundamental problem with fairness for sync queues
  that are not continuously backlogged. Idling looks like the only solution
  to make sure such queues can get a decent amount of disk bandwidth in the
  face of competition from continuously backlogged queues. But excessive
  idling has the potential to reduce performance on SSDs and on disks with
  command queuing.

o This patch experiments with waiting for the next request to arrive before
  a queue that has consumed its time slice is expired. This can ensure more
  accurate fairness numbers in some cases.

o Introduced a tunable "fairness". If set, the io-controller will put more
  focus on getting fairness right than on getting throughput right.
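
o The core of the new expiry decision can be condensed into the stand-alone
  sketch below (the structure and function names are illustrative, not the
  kernel's): a sync queue that has used up its slice is kept for one extra
  idle period only if it has nothing queued, still has requests in flight,
  has not already been waited on, and its io group has no other active
  entity.

	#include <stdio.h>

	/* Illustrative snapshot of a queue's state at slice-expiry time. */
	struct ioq_state {
		int sync;		/* backed by a sync io context          */
		int nr_queued;		/* requests still sitting in the queue  */
		int nr_dispatched;	/* dispatched but not yet completed     */
		int wait_busy_done;	/* we already idled once on this queue  */
		int group_nr_active;	/* active entities in the queue's group */
	};

	static int should_wait_busy(const struct ioq_state *q)
	{
		return q->sync && !q->nr_queued && q->nr_dispatched &&
		       q->group_nr_active <= 1 && !q->wait_busy_done;
	}

	int main(void)
	{
		struct ioq_state q = { 1, 0, 2, 0, 1 };

		printf("wait busy: %s\n", should_wait_busy(&q) ? "yes" : "no");
		return 0;
	}

  If the extra idle period passes without a new request arriving, the queue
  is expired as before. The "fairness" attribute itself shows up alongside
  the other cfq tunables in the hunk below.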

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/cfq-iosched.c |    1 +
 block/elevator-fq.c |  131 ++++++++++++++++++++++++++++++++++++++++++++------
 block/elevator-fq.h |   15 ++++++
 3 files changed, 131 insertions(+), 16 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index b64c8fd..bba85b1 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -2004,6 +2004,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	ELV_ATTR(slice_idle),
 	ELV_ATTR(slice_sync),
 	ELV_ATTR(slice_async),
+	ELV_ATTR(fairness),
 	__ATTR_NULL
 };
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 11a7fca..cde2155 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -431,6 +431,7 @@ static void bfq_active_insert(struct io_service_tree *st,
 	struct rb_node *node = &entity->rb_node;
 
 	bfq_insert(&st->active, entity);
+	entity->sched_data->nr_active++;
 	if (node->rb_left != NULL)
 		node = node->rb_left;
 	else if (node->rb_right != NULL)
@@ -489,6 +490,7 @@ static void bfq_active_extract(struct io_service_tree *st,
 
 	node = bfq_find_deepest(&entity->rb_node);
 	bfq_extract(&st->active, entity);
+	entity->sched_data->nr_active--;
 	if (node != NULL)
 		bfq_update_active_tree(node);
 }
@@ -1022,6 +1024,21 @@ void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
 	elv_release_ioq(e, &iog->async_idle_queue);
 }
 
+/*
+ * Returns the number of active entities a particular io group has. This
+ * includes the entities on the service tree as well as the entity which
+ * is being served currently, if any.
+ */
+
+static inline int elv_iog_nr_active(struct io_group *iog)
+{
+	struct io_sched_data *sd = &iog->sched_data;
+
+	if (sd->active_entity)
+		return sd->nr_active + 1;
+	else
+		return sd->nr_active;
+}
 
 /* Mainly hierarchical grouping code */
 #ifdef CONFIG_GROUP_IOSCHED
@@ -1988,6 +2005,8 @@ SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
 EXPORT_SYMBOL(elv_slice_sync_show);
 SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
 EXPORT_SYMBOL(elv_slice_async_show);
+SHOW_FUNCTION(elv_fairness_show, efqd->fairness, 0);
+EXPORT_SYMBOL(elv_fairness_show);
 #undef SHOW_FUNCTION
 
 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
@@ -2012,6 +2031,8 @@ STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_sync_store);
 STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_async_store);
+STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 1, 0);
+EXPORT_SYMBOL(elv_fairness_store);
 #undef STORE_FUNCTION
 
 void elv_schedule_dispatch(struct request_queue *q)
@@ -2136,7 +2157,7 @@ static void elv_ioq_update_idle_window(struct elevator_queue *eq,
 	 * io scheduler if it wants to disable idling based on additional
 	 * considrations like seek pattern.
 	 */
-	if (enable_idle) {
+	if (enable_idle && !efqd->fairness) {
 		if (eq->ops->elevator_update_idle_window_fn)
 			enable_idle = eq->ops->elevator_update_idle_window_fn(
 						eq, ioq->sched_queue, rq);
@@ -2320,6 +2341,7 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 
 		elv_clear_ioq_wait_request(ioq);
 		elv_clear_ioq_must_dispatch(ioq);
+		elv_clear_ioq_wait_busy_done(ioq);
 		elv_mark_ioq_slice_new(ioq);
 
 		del_timer(&efqd->idle_slice_timer);
@@ -2473,10 +2495,12 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 	assert_spin_locked(q->queue_lock);
 	elv_log_ioq(efqd, ioq, "slice expired");
 
-	if (elv_ioq_wait_request(ioq))
+	if (elv_ioq_wait_request(ioq) || elv_ioq_wait_busy(ioq))
 		del_timer(&efqd->idle_slice_timer);
 
 	elv_clear_ioq_wait_request(ioq);
+	elv_clear_ioq_wait_busy(ioq);
+	elv_clear_ioq_wait_busy_done(ioq);
 
 	/*
 	 * if ioq->slice_end = 0, that means a queue was expired before first
@@ -2649,7 +2673,7 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 		 * has other work pending, don't risk delaying until the
 		 * idle timer unplug to continue working.
 		 */
-		if (elv_ioq_wait_request(ioq)) {
+		if (elv_ioq_wait_request(ioq) && !elv_ioq_wait_busy(ioq)) {
 			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
 			    efqd->busy_queues > 1) {
 				del_timer(&efqd->idle_slice_timer);
@@ -2657,6 +2681,18 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 			}
 			elv_mark_ioq_must_dispatch(ioq);
 		}
+
+		/*
+		 * If we were waiting for a request on this queue, wait is
+		 * done. Schedule the next dispatch
+		 */
+		if (elv_ioq_wait_busy(ioq)) {
+			del_timer(&efqd->idle_slice_timer);
+			elv_clear_ioq_wait_busy(ioq);
+			elv_mark_ioq_wait_busy_done(ioq);
+			elv_clear_ioq_must_dispatch(ioq);
+			elv_schedule_dispatch(q);
+		}
 	} else if (elv_should_preempt(q, ioq, rq)) {
 		/*
 		 * not the active queue - expire current slice if it is
@@ -2684,6 +2720,9 @@ void elv_idle_slice_timer(unsigned long data)
 
 	if (ioq) {
 
+		if (elv_ioq_wait_busy(ioq))
+			goto expire;
+
 		/*
 		 * We saw a request before the queue expired, let it through
 		 */
@@ -2717,7 +2756,7 @@ out_cont:
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
-void elv_ioq_arm_slice_timer(struct request_queue *q)
+void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 {
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 	struct io_queue *ioq = elv_active_ioq(q->elevator);
@@ -2730,26 +2769,38 @@ void elv_ioq_arm_slice_timer(struct request_queue *q)
 	 * for devices that support queuing, otherwise we still have a problem
 	 * with sync vs async workloads.
 	 */
-	if (blk_queue_nonrot(q) && efqd->hw_tag)
+	if (blk_queue_nonrot(q) && efqd->hw_tag && !efqd->fairness)
 		return;
 
 	/*
-	 * still requests with the driver, don't idle
+	 * idle is disabled, either manually or by past process history
 	 */
-	if (efqd->rq_in_driver)
+	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
 		return;
 
 	/*
-	 * idle is disabled, either manually or by past process history
+	 * This queue has consumed its time slice. We are waiting only for
+	 * it to become busy before we select next queue for dispatch.
 	 */
-	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+	if (wait_for_busy) {
+		elv_mark_ioq_wait_busy(ioq);
+		sl = efqd->elv_slice_idle;
+		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
+		elv_log_ioq(efqd, ioq, "arm idle: %lu wait busy=1", sl);
+		return;
+	}
+
+	/*
+	 * still requests with the driver, don't idle
+	 */
+	if (efqd->rq_in_driver && !efqd->fairness)
 		return;
 
 	/*
 	 * may be iosched got its own idling logic. In that case io
 	 * schduler will take care of arming the timer, if need be.
 	 */
-	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
+	if (q->elevator->ops->elevator_arm_slice_timer_fn && !efqd->fairness) {
 		q->elevator->ops->elevator_arm_slice_timer_fn(q,
 						ioq->sched_queue);
 	} else {
@@ -2784,11 +2835,38 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 			goto expire;
 	}
 
+	/* We are waiting for this queue to become busy before it expires.*/
+	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
 	/*
 	 * The active queue has run out of time, expire it and select new.
 	 */
-	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
-		goto expire;
+	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq)) {
+		/*
+		 * Queue has used up its slice. Wait busy is not on otherwise
+		 * we wouldn't have been here. There is a chance that after
+		 * slice expiry no request from the queue completed hence
+		 * wait busy timer could not be turned on. If that's the case
+		 * don't expire the queue yet. Next request completion from
+		 * the queue will arm the wait busy timer.
+		 *
+		 * Don't wait if this group has other active queues. This
+		 * will make sure that we don't lose fairness at group level
+		 * at the same time in root group we will not see cfq
+		 * regressions.
+		 */
+		if (elv_ioq_sync(ioq) && !ioq->nr_queued
+		    && elv_ioq_nr_dispatched(ioq)
+		    && (elv_iog_nr_active(ioq_to_io_group(ioq)) <= 1)
+		    && !elv_ioq_wait_busy_done(ioq)) {
+			ioq = NULL;
+			goto keep_queue;
+		} else
+			goto expire;
+	}
 
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
@@ -2967,11 +3045,13 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 	const int sync = rq_is_sync(rq);
 	struct io_queue *ioq;
 	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_group *iog;
 
 	if (!elv_iosched_fair_queuing_enabled(q->elevator))
 		return;
 
 	ioq = rq->ioq;
+	iog = ioq_to_io_group(ioq);
 
 	elv_log_ioq(efqd, ioq, "complete");
 
@@ -2997,6 +3077,12 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			elv_ioq_set_prio_slice(q, ioq);
 			elv_clear_ioq_slice_new(ioq);
 		}
+
+		if (elv_ioq_class_idle(ioq)) {
+			elv_ioq_slice_expired(q);
+			goto done;
+		}
+
 		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
@@ -3004,13 +3090,24 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		 * mean seek distance, give them a chance to run instead
 		 * of idling.
 		 */
-		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
-			elv_ioq_slice_expired(q);
-		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
+		if (elv_ioq_slice_used(ioq)) {
+			if (sync && !ioq->nr_queued
+			    && (elv_iog_nr_active(iog) <= 1)) {
+				/*
+				 * Idle for one extra period in hierarchical
+				 * setup
+				 */
+				elv_ioq_arm_slice_timer(q, 1);
+			} else {
+				/* Expire the queue */
+				elv_ioq_slice_expired(q);
+			}
+		} else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
 			 && sync && !rq_noidle(rq))
-			elv_ioq_arm_slice_timer(q);
+			elv_ioq_arm_slice_timer(q, 0);
 	}
 
+done:
 	if (!efqd->rq_in_driver)
 		elv_schedule_dispatch(q);
 }
@@ -3115,6 +3212,8 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->elv_slice_idle = elv_slice_idle;
 	efqd->hw_tag = 1;
 
+	/* For the time being keep fairness enabled by default */
+	efqd->fairness = 1;
 	return 0;
 }
 
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 9f0c9a0..e13999e 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -77,6 +77,7 @@ struct io_service_tree {
 struct io_sched_data {
 	struct io_entity *active_entity;
 	struct io_entity *next_active;
+	int nr_active;
 	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
 };
 
@@ -331,6 +332,13 @@ struct elv_fq_data {
 	unsigned long long rate_sampling_start; /*sampling window start jifies*/
 	/* number of sectors finished io during current sampling window */
 	unsigned long rate_sectors_current;
+
+	/*
+	 * If set to 1, this will disable many optimizations done to boost
+	 * throughput and focus more on providing fairness for sync
+	 * queues.
+	 */
+	unsigned int fairness;
 };
 
 extern int elv_slice_idle;
@@ -355,6 +363,8 @@ enum elv_queue_state_flags {
 	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
 	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
 	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
+	ELV_QUEUE_FLAG_wait_busy,	  /* wait for this queue to get busy */
+	ELV_QUEUE_FLAG_wait_busy_done,	  /* Have already waited on this queue*/
 	ELV_QUEUE_FLAG_NR,
 };
 
@@ -378,6 +388,8 @@ ELV_IO_QUEUE_FLAG_FNS(wait_request)
 ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
 ELV_IO_QUEUE_FLAG_FNS(idle_window)
 ELV_IO_QUEUE_FLAG_FNS(slice_new)
+ELV_IO_QUEUE_FLAG_FNS(wait_busy)
+ELV_IO_QUEUE_FLAG_FNS(wait_busy_done)
 
 static inline struct io_service_tree *
 io_entity_service_tree(struct io_entity *entity)
@@ -550,6 +562,9 @@ extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
 extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
 						size_t count);
+extern ssize_t elv_fairness_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_fairness_store(struct elevator_queue *q, const char *name,
+						size_t count);
 
 /* Functions used by elevator.c */
 extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 09/20] io-controller: Separate out queue and data
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (7 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 08/20] io-controller: idle for sometime on sync queue before expiring it Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 10/20] io-conroller: Prepare elevator layer for single queue schedulers Vivek Goyal
                     ` (12 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o So far noop, deadline and AS had one common structure called *_data which
  contained both the queue where requests are queued and the common data
  used for scheduling. This patch breaks that common structure down into
  two parts, *_queue and *_data. This is along the lines of cfq, where all
  the requests are queued in the queue while the common data and tunables
  are part of the data (a small sketch of the pattern follows after this
  list).

o It does not change any functionality, but this re-organization helps
  once noop, deadline and AS are changed to use hierarchical fair queuing.

o It looks like the queue_empty function is not required; we can check
  q->nr_sorted in the elevator layer to see whether the ioscheduler
  queues are empty or not.

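Illustrative sketch only, using a hypothetical "foo" scheduler (none of
these names exist in the patch): after the split, per-queue state lives in
a *_queue structure allocated and freed through the new
elevator_alloc_sched_queue_fn/elevator_free_sched_queue_fn hooks, while
shared data and tunables stay in *_data and keep coming from
elevator_init_fn. This mirrors the noop changes further down.

struct foo_queue {				/* per-queue state */
	struct list_head fifo_list;
};

struct foo_data {				/* shared data and tunables */
	struct request_queue *q;
	unsigned int some_tunable;		/* hypothetical */
};

static void *foo_alloc_foo_queue(struct request_queue *q,
				 struct elevator_queue *eq, gfp_t gfp_mask)
{
	struct foo_queue *fooq;

	fooq = kmalloc_node(sizeof(*fooq), gfp_mask | __GFP_ZERO, q->node);
	if (fooq)
		INIT_LIST_HEAD(&fooq->fifo_list);
	return fooq;
}

static void foo_free_foo_queue(struct elevator_queue *e, void *sched_queue)
{
	kfree(sched_queue);
}
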
Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/as-iosched.c       |  208 ++++++++++++++++++++++++++--------------------
 block/deadline-iosched.c |  117 ++++++++++++++++----------
 block/elevator.c         |  111 +++++++++++++++++++++----
 block/noop-iosched.c     |   59 ++++++-------
 include/linux/elevator.h |    8 ++-
 5 files changed, 319 insertions(+), 184 deletions(-)

diff --git a/block/as-iosched.c b/block/as-iosched.c
index c48fa67..7158e13 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -76,13 +76,7 @@ enum anticipation_status {
 				 * or timed out */
 };
 
-struct as_data {
-	/*
-	 * run time data
-	 */
-
-	struct request_queue *q;	/* the "owner" queue */
-
+struct as_queue {
 	/*
 	 * requests (as_rq s) are present on both sort_list and fifo_list
 	 */
@@ -90,6 +84,14 @@ struct as_data {
 	struct list_head fifo_list[2];
 
 	struct request *next_rq[2];	/* next in sort order */
+	unsigned long last_check_fifo[2];
+	int write_batch_count;		/* max # of reqs in a write batch */
+	int current_write_count;	/* how many requests left this batch */
+	int write_batch_idled;		/* has the write batch gone idle? */
+};
+
+struct as_data {
+	struct request_queue *q;	/* the "owner" queue */
 	sector_t last_sector[2];	/* last SYNC & ASYNC sectors */
 
 	unsigned long exit_prob;	/* probability a task will exit while
@@ -103,21 +105,17 @@ struct as_data {
 	sector_t new_seek_mean;
 
 	unsigned long current_batch_expires;
-	unsigned long last_check_fifo[2];
 	int changed_batch;		/* 1: waiting for old batch to end */
 	int new_batch;			/* 1: waiting on first read complete */
-	int batch_data_dir;		/* current batch SYNC / ASYNC */
-	int write_batch_count;		/* max # of reqs in a write batch */
-	int current_write_count;	/* how many requests left this batch */
-	int write_batch_idled;		/* has the write batch gone idle? */
 
 	enum anticipation_status antic_status;
 	unsigned long antic_start;	/* jiffies: when it started */
 	struct timer_list antic_timer;	/* anticipatory scheduling timer */
-	struct work_struct antic_work;	/* Deferred unplugging */
+	struct work_struct antic_work;  /* Deferred unplugging */
 	struct io_context *io_context;	/* Identify the expected process */
 	int ioc_finished; /* IO associated with io_context is finished */
 	int nr_dispatched;
+	int batch_data_dir;		/* current batch SYNC / ASYNC */
 
 	/*
 	 * settings that change how the i/o scheduler behaves
@@ -258,13 +256,14 @@ static void as_put_io_context(struct request *rq)
 /*
  * rb tree support functions
  */
-#define RQ_RB_ROOT(ad, rq)	(&(ad)->sort_list[rq_is_sync((rq))])
+#define RQ_RB_ROOT(asq, rq)	(&(asq)->sort_list[rq_is_sync((rq))])
 
 static void as_add_rq_rb(struct as_data *ad, struct request *rq)
 {
 	struct request *alias;
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
-	while ((unlikely(alias = elv_rb_add(RQ_RB_ROOT(ad, rq), rq)))) {
+	while ((unlikely(alias = elv_rb_add(RQ_RB_ROOT(asq, rq), rq)))) {
 		as_move_to_dispatch(ad, alias);
 		as_antic_stop(ad);
 	}
@@ -272,7 +271,9 @@ static void as_add_rq_rb(struct as_data *ad, struct request *rq)
 
 static inline void as_del_rq_rb(struct as_data *ad, struct request *rq)
 {
-	elv_rb_del(RQ_RB_ROOT(ad, rq), rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
+
+	elv_rb_del(RQ_RB_ROOT(asq, rq), rq);
 }
 
 /*
@@ -366,7 +367,7 @@ as_choose_req(struct as_data *ad, struct request *rq1, struct request *rq2)
  * what request to process next. Anticipation works on top of this.
  */
 static struct request *
-as_find_next_rq(struct as_data *ad, struct request *last)
+as_find_next_rq(struct as_data *ad, struct as_queue *asq, struct request *last)
 {
 	struct rb_node *rbnext = rb_next(&last->rb_node);
 	struct rb_node *rbprev = rb_prev(&last->rb_node);
@@ -382,7 +383,7 @@ as_find_next_rq(struct as_data *ad, struct request *last)
 	else {
 		const int data_dir = rq_is_sync(last);
 
-		rbnext = rb_first(&ad->sort_list[data_dir]);
+		rbnext = rb_first(&asq->sort_list[data_dir]);
 		if (rbnext && rbnext != &last->rb_node)
 			next = rb_entry_rq(rbnext);
 	}
@@ -787,9 +788,10 @@ static int as_can_anticipate(struct as_data *ad, struct request *rq)
 static void as_update_rq(struct as_data *ad, struct request *rq)
 {
 	const int data_dir = rq_is_sync(rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	/* keep the next_rq cache up to date */
-	ad->next_rq[data_dir] = as_choose_req(ad, rq, ad->next_rq[data_dir]);
+	asq->next_rq[data_dir] = as_choose_req(ad, rq, asq->next_rq[data_dir]);
 
 	/*
 	 * have we been anticipating this request?
@@ -810,25 +812,26 @@ static void update_write_batch(struct as_data *ad)
 {
 	unsigned long batch = ad->batch_expire[BLK_RW_ASYNC];
 	long write_time;
+	struct as_queue *asq = elv_get_sched_queue(ad->q, NULL);
 
 	write_time = (jiffies - ad->current_batch_expires) + batch;
 	if (write_time < 0)
 		write_time = 0;
 
-	if (write_time > batch && !ad->write_batch_idled) {
+	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
-			ad->write_batch_count /= 2;
+			asq->write_batch_count /= 2;
 		else
-			ad->write_batch_count--;
-	} else if (write_time < batch && ad->current_write_count == 0) {
+			asq->write_batch_count--;
+	} else if (write_time < batch && asq->current_write_count == 0) {
 		if (batch > write_time * 3)
-			ad->write_batch_count *= 2;
+			asq->write_batch_count *= 2;
 		else
-			ad->write_batch_count++;
+			asq->write_batch_count++;
 	}
 
-	if (ad->write_batch_count < 1)
-		ad->write_batch_count = 1;
+	if (asq->write_batch_count < 1)
+		asq->write_batch_count = 1;
 }
 
 /*
@@ -899,6 +902,7 @@ static void as_remove_queued_request(struct request_queue *q,
 	const int data_dir = rq_is_sync(rq);
 	struct as_data *ad = q->elevator->elevator_data;
 	struct io_context *ioc;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	WARN_ON(RQ_STATE(rq) != AS_RQ_QUEUED);
 
@@ -912,8 +916,8 @@ static void as_remove_queued_request(struct request_queue *q,
 	 * Update the "next_rq" cache if we are about to remove its
 	 * entry
 	 */
-	if (ad->next_rq[data_dir] == rq)
-		ad->next_rq[data_dir] = as_find_next_rq(ad, rq);
+	if (asq->next_rq[data_dir] == rq)
+		asq->next_rq[data_dir] = as_find_next_rq(ad, asq, rq);
 
 	rq_fifo_clear(rq);
 	as_del_rq_rb(ad, rq);
@@ -927,23 +931,23 @@ static void as_remove_queued_request(struct request_queue *q,
  *
  * See as_antic_expired comment.
  */
-static int as_fifo_expired(struct as_data *ad, int adir)
+static int as_fifo_expired(struct as_data *ad, struct as_queue *asq, int adir)
 {
 	struct request *rq;
 	long delta_jif;
 
-	delta_jif = jiffies - ad->last_check_fifo[adir];
+	delta_jif = jiffies - asq->last_check_fifo[adir];
 	if (unlikely(delta_jif < 0))
 		delta_jif = -delta_jif;
 	if (delta_jif < ad->fifo_expire[adir])
 		return 0;
 
-	ad->last_check_fifo[adir] = jiffies;
+	asq->last_check_fifo[adir] = jiffies;
 
-	if (list_empty(&ad->fifo_list[adir]))
+	if (list_empty(&asq->fifo_list[adir]))
 		return 0;
 
-	rq = rq_entry_fifo(ad->fifo_list[adir].next);
+	rq = rq_entry_fifo(asq->fifo_list[adir].next);
 
 	return time_after(jiffies, rq_fifo_time(rq));
 }
@@ -952,7 +956,7 @@ static int as_fifo_expired(struct as_data *ad, int adir)
  * as_batch_expired returns true if the current batch has expired. A batch
  * is a set of reads or a set of writes.
  */
-static inline int as_batch_expired(struct as_data *ad)
+static inline int as_batch_expired(struct as_data *ad, struct as_queue *asq)
 {
 	if (ad->changed_batch || ad->new_batch)
 		return 0;
@@ -962,7 +966,7 @@ static inline int as_batch_expired(struct as_data *ad)
 		return time_after(jiffies, ad->current_batch_expires);
 
 	return time_after(jiffies, ad->current_batch_expires)
-		|| ad->current_write_count == 0;
+		|| asq->current_write_count == 0;
 }
 
 /*
@@ -971,6 +975,7 @@ static inline int as_batch_expired(struct as_data *ad)
 static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 {
 	const int data_dir = rq_is_sync(rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	BUG_ON(RB_EMPTY_NODE(&rq->rb_node));
 
@@ -993,12 +998,12 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 			ad->io_context = NULL;
 		}
 
-		if (ad->current_write_count != 0)
-			ad->current_write_count--;
+		if (asq->current_write_count != 0)
+			asq->current_write_count--;
 	}
 	ad->ioc_finished = 0;
 
-	ad->next_rq[data_dir] = as_find_next_rq(ad, rq);
+	asq->next_rq[data_dir] = as_find_next_rq(ad, asq, rq);
 
 	/*
 	 * take it off the sort and fifo list, add to dispatch queue
@@ -1022,9 +1027,16 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 static int as_dispatch_request(struct request_queue *q, int force)
 {
 	struct as_data *ad = q->elevator->elevator_data;
-	const int reads = !list_empty(&ad->fifo_list[BLK_RW_SYNC]);
-	const int writes = !list_empty(&ad->fifo_list[BLK_RW_ASYNC]);
 	struct request *rq;
+	struct as_queue *asq = elv_select_sched_queue(q, force);
+	int reads, writes;
+
+	if (!asq)
+		return 0;
+
+	reads = !list_empty(&asq->fifo_list[BLK_RW_SYNC]);
+	writes = !list_empty(&asq->fifo_list[BLK_RW_ASYNC]);
+
 
 	if (unlikely(force)) {
 		/*
@@ -1040,25 +1052,25 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		ad->changed_batch = 0;
 		ad->new_batch = 0;
 
-		while (ad->next_rq[BLK_RW_SYNC]) {
-			as_move_to_dispatch(ad, ad->next_rq[BLK_RW_SYNC]);
+		while (asq->next_rq[BLK_RW_SYNC]) {
+			as_move_to_dispatch(ad, asq->next_rq[BLK_RW_SYNC]);
 			dispatched++;
 		}
-		ad->last_check_fifo[BLK_RW_SYNC] = jiffies;
+		asq->last_check_fifo[BLK_RW_SYNC] = jiffies;
 
-		while (ad->next_rq[BLK_RW_ASYNC]) {
-			as_move_to_dispatch(ad, ad->next_rq[BLK_RW_ASYNC]);
+		while (asq->next_rq[BLK_RW_ASYNC]) {
+			as_move_to_dispatch(ad, asq->next_rq[BLK_RW_ASYNC]);
 			dispatched++;
 		}
-		ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
+		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
 		return dispatched;
 	}
 
 	/* Signal that the write batch was uncontended, so we can't time it */
 	if (ad->batch_data_dir == BLK_RW_ASYNC && !reads) {
-		if (ad->current_write_count == 0 || !writes)
-			ad->write_batch_idled = 1;
+		if (asq->current_write_count == 0 || !writes)
+			asq->write_batch_idled = 1;
 	}
 
 	if (!(reads || writes)
@@ -1067,14 +1079,14 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		|| ad->changed_batch)
 		return 0;
 
-	if (!(reads && writes && as_batch_expired(ad))) {
+	if (!(reads && writes && as_batch_expired(ad, asq))) {
 		/*
 		 * batch is still running or no reads or no writes
 		 */
-		rq = ad->next_rq[ad->batch_data_dir];
+		rq = asq->next_rq[ad->batch_data_dir];
 
 		if (ad->batch_data_dir == BLK_RW_SYNC && ad->antic_expire) {
-			if (as_fifo_expired(ad, BLK_RW_SYNC))
+			if (as_fifo_expired(ad, asq, BLK_RW_SYNC))
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
@@ -1098,7 +1110,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 */
 
 	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_SYNC]));
+		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
 
 		if (writes && ad->batch_data_dir == BLK_RW_SYNC)
 			/*
@@ -1111,8 +1123,8 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
-		rq = rq_entry_fifo(ad->fifo_list[BLK_RW_SYNC].next);
-		ad->last_check_fifo[ad->batch_data_dir] = jiffies;
+		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
+		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
 	}
 
@@ -1122,7 +1134,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 
 	if (writes) {
 dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_ASYNC]));
+		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_ASYNC]));
 
 		if (ad->batch_data_dir == BLK_RW_SYNC) {
 			ad->changed_batch = 1;
@@ -1135,10 +1147,10 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
-		ad->current_write_count = ad->write_batch_count;
-		ad->write_batch_idled = 0;
-		rq = rq_entry_fifo(ad->fifo_list[BLK_RW_ASYNC].next);
-		ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
+		asq->current_write_count = asq->write_batch_count;
+		asq->write_batch_idled = 0;
+		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
+		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 		goto dispatch_request;
 	}
 
@@ -1150,9 +1162,9 @@ dispatch_request:
 	 * If a request has expired, service it.
 	 */
 
-	if (as_fifo_expired(ad, ad->batch_data_dir)) {
+	if (as_fifo_expired(ad, asq, ad->batch_data_dir)) {
 fifo_expired:
-		rq = rq_entry_fifo(ad->fifo_list[ad->batch_data_dir].next);
+		rq = rq_entry_fifo(asq->fifo_list[ad->batch_data_dir].next);
 	}
 
 	if (ad->changed_batch) {
@@ -1185,6 +1197,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 {
 	struct as_data *ad = q->elevator->elevator_data;
 	int data_dir;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	RQ_SET_STATE(rq, AS_RQ_NEW);
 
@@ -1203,7 +1216,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 	 * set expire time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies + ad->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &ad->fifo_list[data_dir]);
+	list_add_tail(&rq->queuelist, &asq->fifo_list[data_dir]);
 
 	as_update_rq(ad, rq); /* keep state machine up to date */
 	RQ_SET_STATE(rq, AS_RQ_QUEUED);
@@ -1225,31 +1238,20 @@ static void as_deactivate_request(struct request_queue *q, struct request *rq)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 }
 
-/*
- * as_queue_empty tells us if there are requests left in the device. It may
- * not be the case that a driver can get the next request even if the queue
- * is not empty - it is used in the block layer to check for plugging and
- * merging opportunities
- */
-static int as_queue_empty(struct request_queue *q)
-{
-	struct as_data *ad = q->elevator->elevator_data;
-
-	return list_empty(&ad->fifo_list[BLK_RW_ASYNC])
-		&& list_empty(&ad->fifo_list[BLK_RW_SYNC]);
-}
-
 static int
 as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
-	struct as_data *ad = q->elevator->elevator_data;
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
+	struct as_queue *asq = elv_get_sched_queue_current(q);
+
+	if (!asq)
+		return ELEVATOR_NO_MERGE;
 
 	/*
 	 * check for front merge
 	 */
-	__rq = elv_rb_find(&ad->sort_list[bio_data_dir(bio)], rb_key);
+	__rq = elv_rb_find(&asq->sort_list[bio_data_dir(bio)], rb_key);
 	if (__rq && elv_rq_merge_ok(__rq, bio)) {
 		*req = __rq;
 		return ELEVATOR_FRONT_MERGE;
@@ -1336,6 +1338,41 @@ static int as_may_queue(struct request_queue *q, int rw)
 	return ret;
 }
 
+/* Called with queue lock held */
+static void *as_alloc_as_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
+{
+	struct as_queue *asq;
+	struct as_data *ad = eq->elevator_data;
+
+	asq = kmalloc_node(sizeof(*asq), gfp_mask | __GFP_ZERO, q->node);
+	if (asq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&asq->fifo_list[BLK_RW_SYNC]);
+	INIT_LIST_HEAD(&asq->fifo_list[BLK_RW_ASYNC]);
+	asq->sort_list[BLK_RW_SYNC] = RB_ROOT;
+	asq->sort_list[BLK_RW_ASYNC] = RB_ROOT;
+	if (ad)
+		asq->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
+	else
+		asq->write_batch_count = default_write_batch_expire / 10;
+
+	if (asq->write_batch_count < 2)
+		asq->write_batch_count = 2;
+out:
+	return asq;
+}
+
+static void as_free_as_queue(struct elevator_queue *e, void *sched_queue)
+{
+	struct as_queue *asq = sched_queue;
+
+	BUG_ON(!list_empty(&asq->fifo_list[BLK_RW_SYNC]));
+	BUG_ON(!list_empty(&asq->fifo_list[BLK_RW_ASYNC]));
+	kfree(asq);
+}
+
 static void as_exit_queue(struct elevator_queue *e)
 {
 	struct as_data *ad = e->elevator_data;
@@ -1343,9 +1380,6 @@ static void as_exit_queue(struct elevator_queue *e)
 	del_timer_sync(&ad->antic_timer);
 	cancel_work_sync(&ad->antic_work);
 
-	BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_SYNC]));
-	BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_ASYNC]));
-
 	put_io_context(ad->io_context);
 	kfree(ad);
 }
@@ -1369,10 +1403,6 @@ static void *as_init_queue(struct request_queue *q)
 	init_timer(&ad->antic_timer);
 	INIT_WORK(&ad->antic_work, as_work_handler);
 
-	INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_SYNC]);
-	INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_ASYNC]);
-	ad->sort_list[BLK_RW_SYNC] = RB_ROOT;
-	ad->sort_list[BLK_RW_ASYNC] = RB_ROOT;
 	ad->fifo_expire[BLK_RW_SYNC] = default_read_expire;
 	ad->fifo_expire[BLK_RW_ASYNC] = default_write_expire;
 	ad->antic_expire = default_antic_expire;
@@ -1380,9 +1410,6 @@ static void *as_init_queue(struct request_queue *q)
 	ad->batch_expire[BLK_RW_ASYNC] = default_write_batch_expire;
 
 	ad->current_batch_expires = jiffies + ad->batch_expire[BLK_RW_SYNC];
-	ad->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
-	if (ad->write_batch_count < 2)
-		ad->write_batch_count = 2;
 
 	return ad;
 }
@@ -1480,7 +1507,6 @@ static struct elevator_type iosched_as = {
 		.elevator_add_req_fn =		as_add_request,
 		.elevator_activate_req_fn =	as_activate_request,
 		.elevator_deactivate_req_fn = 	as_deactivate_request,
-		.elevator_queue_empty_fn =	as_queue_empty,
 		.elevator_completed_req_fn =	as_completed_request,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
@@ -1488,6 +1514,8 @@ static struct elevator_type iosched_as = {
 		.elevator_init_fn =		as_init_queue,
 		.elevator_exit_fn =		as_exit_queue,
 		.trim =				as_trim,
+		.elevator_alloc_sched_queue_fn = as_alloc_as_queue,
+		.elevator_free_sched_queue_fn = as_free_as_queue,
 	},
 
 	.elevator_attrs = as_attrs,
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index c4d991d..5e65041 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -23,25 +23,23 @@ static const int writes_starved = 2;    /* max times reads can starve a write */
 static const int fifo_batch = 16;       /* # of sequential requests treated as one
 				     by the above parameters. For throughput. */
 
-struct deadline_data {
-	/*
-	 * run time data
-	 */
-
+struct deadline_queue {
 	/*
 	 * requests (deadline_rq s) are present on both sort_list and fifo_list
 	 */
-	struct rb_root sort_list[2];	
+	struct rb_root sort_list[2];
 	struct list_head fifo_list[2];
-
 	/*
 	 * next in sort order. read, write or both are NULL
 	 */
 	struct request *next_rq[2];
 	unsigned int batching;		/* number of sequential requests made */
-	sector_t last_sector;		/* head position */
 	unsigned int starved;		/* times reads have starved writes */
+};
 
+struct deadline_data {
+	struct request_queue *q;
+	sector_t last_sector;		/* head position */
 	/*
 	 * settings that change how the i/o scheduler behaves
 	 */
@@ -56,7 +54,9 @@ static void deadline_move_request(struct deadline_data *, struct request *);
 static inline struct rb_root *
 deadline_rb_root(struct deadline_data *dd, struct request *rq)
 {
-	return &dd->sort_list[rq_data_dir(rq)];
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
+
+	return &dq->sort_list[rq_data_dir(rq)];
 }
 
 /*
@@ -87,9 +87,10 @@ static inline void
 deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 {
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
 
-	if (dd->next_rq[data_dir] == rq)
-		dd->next_rq[data_dir] = deadline_latter_request(rq);
+	if (dq->next_rq[data_dir] == rq)
+		dq->next_rq[data_dir] = deadline_latter_request(rq);
 
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
@@ -102,6 +103,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(q, rq);
 
 	deadline_add_rq_rb(dd, rq);
 
@@ -109,7 +111,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 	 * set expire time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies + dd->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
+	list_add_tail(&rq->queuelist, &dq->fifo_list[data_dir]);
 }
 
 /*
@@ -129,6 +131,11 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct request *__rq;
 	int ret;
+	struct deadline_queue *dq;
+
+	dq = elv_get_sched_queue_current(q);
+	if (!dq)
+		return ELEVATOR_NO_MERGE;
 
 	/*
 	 * check for front merge
@@ -136,7 +143,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	if (dd->front_merges) {
 		sector_t sector = bio->bi_sector + bio_sectors(bio);
 
-		__rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector);
+		__rq = elv_rb_find(&dq->sort_list[bio_data_dir(bio)], sector);
 		if (__rq) {
 			BUG_ON(sector != __rq->sector);
 
@@ -207,10 +214,11 @@ static void
 deadline_move_request(struct deadline_data *dd, struct request *rq)
 {
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
 
-	dd->next_rq[READ] = NULL;
-	dd->next_rq[WRITE] = NULL;
-	dd->next_rq[data_dir] = deadline_latter_request(rq);
+	dq->next_rq[READ] = NULL;
+	dq->next_rq[WRITE] = NULL;
+	dq->next_rq[data_dir] = deadline_latter_request(rq);
 
 	dd->last_sector = rq_end_sector(rq);
 
@@ -225,9 +233,9 @@ deadline_move_request(struct deadline_data *dd, struct request *rq)
  * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
  * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
  */
-static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
+static inline int deadline_check_fifo(struct deadline_queue *dq, int ddir)
 {
-	struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
+	struct request *rq = rq_entry_fifo(dq->fifo_list[ddir].next);
 
 	/*
 	 * rq is expired!
@@ -245,20 +253,26 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int reads = !list_empty(&dd->fifo_list[READ]);
-	const int writes = !list_empty(&dd->fifo_list[WRITE]);
+	struct deadline_queue *dq = elv_select_sched_queue(q, force);
+	int reads, writes;
 	struct request *rq;
 	int data_dir;
 
+	if (!dq)
+		return 0;
+
+	reads = !list_empty(&dq->fifo_list[READ]);
+	writes = !list_empty(&dq->fifo_list[WRITE]);
+
 	/*
 	 * batches are currently reads XOR writes
 	 */
-	if (dd->next_rq[WRITE])
-		rq = dd->next_rq[WRITE];
+	if (dq->next_rq[WRITE])
+		rq = dq->next_rq[WRITE];
 	else
-		rq = dd->next_rq[READ];
+		rq = dq->next_rq[READ];
 
-	if (rq && dd->batching < dd->fifo_batch)
+	if (rq && dq->batching < dd->fifo_batch)
 		/* we have a next request are still entitled to batch */
 		goto dispatch_request;
 
@@ -268,9 +282,9 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 */
 
 	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
+		BUG_ON(RB_EMPTY_ROOT(&dq->sort_list[READ]));
 
-		if (writes && (dd->starved++ >= dd->writes_starved))
+		if (writes && (dq->starved++ >= dd->writes_starved))
 			goto dispatch_writes;
 
 		data_dir = READ;
@@ -284,9 +298,9 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 
 	if (writes) {
 dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
+		BUG_ON(RB_EMPTY_ROOT(&dq->sort_list[WRITE]));
 
-		dd->starved = 0;
+		dq->starved = 0;
 
 		data_dir = WRITE;
 
@@ -299,48 +313,62 @@ dispatch_find_request:
 	/*
 	 * we are not running a batch, find best request for selected data_dir
 	 */
-	if (deadline_check_fifo(dd, data_dir) || !dd->next_rq[data_dir]) {
+	if (deadline_check_fifo(dq, data_dir) || !dq->next_rq[data_dir]) {
 		/*
 		 * A deadline has expired, the last request was in the other
 		 * direction, or we have run out of higher-sectored requests.
 		 * Start again from the request with the earliest expiry time.
 		 */
-		rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
+		rq = rq_entry_fifo(dq->fifo_list[data_dir].next);
 	} else {
 		/*
 		 * The last req was the same dir and we have a next request in
 		 * sort order. No expired requests so continue on from here.
 		 */
-		rq = dd->next_rq[data_dir];
+		rq = dq->next_rq[data_dir];
 	}
 
-	dd->batching = 0;
+	dq->batching = 0;
 
 dispatch_request:
 	/*
 	 * rq is the selected appropriate request.
 	 */
-	dd->batching++;
+	dq->batching++;
 	deadline_move_request(dd, rq);
 
 	return 1;
 }
 
-static int deadline_queue_empty(struct request_queue *q)
+static void *deadline_alloc_deadline_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_queue *dq;
 
-	return list_empty(&dd->fifo_list[WRITE])
-		&& list_empty(&dd->fifo_list[READ]);
+	dq = kmalloc_node(sizeof(*dq), gfp_mask | __GFP_ZERO, q->node);
+	if (dq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&dq->fifo_list[READ]);
+	INIT_LIST_HEAD(&dq->fifo_list[WRITE]);
+	dq->sort_list[READ] = RB_ROOT;
+	dq->sort_list[WRITE] = RB_ROOT;
+out:
+	return dq;
+}
+
+static void deadline_free_deadline_queue(struct elevator_queue *e,
+						void *sched_queue)
+{
+	struct deadline_queue *dq = sched_queue;
+
+	kfree(dq);
 }
 
 static void deadline_exit_queue(struct elevator_queue *e)
 {
 	struct deadline_data *dd = e->elevator_data;
 
-	BUG_ON(!list_empty(&dd->fifo_list[READ]));
-	BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
-
 	kfree(dd);
 }
 
@@ -355,10 +383,7 @@ static void *deadline_init_queue(struct request_queue *q)
 	if (!dd)
 		return NULL;
 
-	INIT_LIST_HEAD(&dd->fifo_list[READ]);
-	INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
-	dd->sort_list[READ] = RB_ROOT;
-	dd->sort_list[WRITE] = RB_ROOT;
+	dd->q = q;
 	dd->fifo_expire[READ] = read_expire;
 	dd->fifo_expire[WRITE] = write_expire;
 	dd->writes_starved = writes_starved;
@@ -445,13 +470,13 @@ static struct elevator_type iosched_deadline = {
 		.elevator_merge_req_fn =	deadline_merged_requests,
 		.elevator_dispatch_fn =		deadline_dispatch_requests,
 		.elevator_add_req_fn =		deadline_add_request,
-		.elevator_queue_empty_fn =	deadline_queue_empty,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
 		.elevator_init_fn =		deadline_init_queue,
 		.elevator_exit_fn =		deadline_exit_queue,
+		.elevator_alloc_sched_queue_fn = deadline_alloc_deadline_queue,
+		.elevator_free_sched_queue_fn = deadline_free_deadline_queue,
 	},
-
 	.elevator_attrs = deadline_attrs,
 	.elevator_name = "deadline",
 	.elevator_owner = THIS_MODULE,
diff --git a/block/elevator.c b/block/elevator.c
index 3944385..67a0601 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -180,17 +180,54 @@ static struct elevator_type *elevator_get(const char *name)
 	return e;
 }
 
-static void *elevator_init_queue(struct request_queue *q,
-				 struct elevator_queue *eq)
+static void *elevator_init_data(struct request_queue *q,
+					struct elevator_queue *eq)
 {
-	return eq->ops->elevator_init_fn(q);
+	void *data = NULL;
+
+	if (eq->ops->elevator_init_fn) {
+		data = eq->ops->elevator_init_fn(q);
+		if (data)
+			return data;
+		else
+			return ERR_PTR(-ENOMEM);
+	}
+
+	/* IO scheduler does not instantiate data (noop), it is not an error */
+	return NULL;
+}
+
+static void elevator_free_sched_queue(struct elevator_queue *eq,
+						void *sched_queue)
+{
+	/* Not all io schedulers (cfq) store sched_queue */
+	if (!sched_queue)
+		return;
+	eq->ops->elevator_free_sched_queue_fn(eq, sched_queue);
+}
+
+static void *elevator_alloc_sched_queue(struct request_queue *q,
+					struct elevator_queue *eq)
+{
+	void *sched_queue = NULL;
+
+	if (eq->ops->elevator_alloc_sched_queue_fn) {
+		sched_queue = eq->ops->elevator_alloc_sched_queue_fn(q, eq,
+								GFP_KERNEL);
+		if (!sched_queue)
+			return ERR_PTR(-ENOMEM);
+
+	}
+
+	return sched_queue;
 }
 
 static void elevator_attach(struct request_queue *q, struct elevator_queue *eq,
-			   void *data)
+			   void *data, void *sched_queue)
 {
 	q->elevator = eq;
 	eq->elevator_data = data;
+	eq->sched_queue = sched_queue;
 }
 
 static char chosen_elevator[16];
@@ -260,7 +297,7 @@ int elevator_init(struct request_queue *q, char *name)
 	struct elevator_type *e = NULL;
 	struct elevator_queue *eq;
 	int ret = 0;
-	void *data;
+	void *data = NULL, *sched_queue = NULL;
 
 	INIT_LIST_HEAD(&q->queue_head);
 	q->last_merge = NULL;
@@ -294,13 +331,21 @@ int elevator_init(struct request_queue *q, char *name)
 	if (!eq)
 		return -ENOMEM;
 
-	data = elevator_init_queue(q, eq);
-	if (!data) {
+	data = elevator_init_data(q, eq);
+
+	if (IS_ERR(data)) {
+		kobject_put(&eq->kobj);
+		return -ENOMEM;
+	}
+
+	sched_queue = elevator_alloc_sched_queue(q, eq);
+
+	if (IS_ERR(sched_queue)) {
 		kobject_put(&eq->kobj);
 		return -ENOMEM;
 	}
 
-	elevator_attach(q, eq, data);
+	elevator_attach(q, eq, data, sched_queue);
 	return ret;
 }
 EXPORT_SYMBOL(elevator_init);
@@ -308,6 +353,7 @@ EXPORT_SYMBOL(elevator_init);
 void elevator_exit(struct elevator_queue *e)
 {
 	mutex_lock(&e->sysfs_lock);
+	elevator_free_sched_queue(e, e->sched_queue);
 	elv_exit_fq_data(e);
 	if (e->ops->elevator_exit_fn)
 		e->ops->elevator_exit_fn(e);
@@ -1121,7 +1167,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
 static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 {
 	struct elevator_queue *old_elevator, *e;
-	void *data;
+	void *data = NULL, *sched_queue = NULL;
 
 	/*
 	 * Allocate new elevator
@@ -1130,10 +1176,18 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	if (!e)
 		return 0;
 
-	data = elevator_init_queue(q, e);
-	if (!data) {
+	data = elevator_init_data(q, e);
+
+	if (IS_ERR(data)) {
 		kobject_put(&e->kobj);
-		return 0;
+		return -ENOMEM;
+	}
+
+	sched_queue = elevator_alloc_sched_queue(q, e);
+
+	if (IS_ERR(sched_queue)) {
+		kobject_put(&e->kobj);
+		return -ENOMEM;
 	}
 
 	/*
@@ -1150,7 +1204,7 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	/*
 	 * attach and start new elevator
 	 */
-	elevator_attach(q, e, data);
+	elevator_attach(q, e, data, sched_queue);
 
 	spin_unlock_irq(q->queue_lock);
 
@@ -1257,16 +1311,43 @@ struct request *elv_rb_latter_request(struct request_queue *q,
 }
 EXPORT_SYMBOL(elv_rb_latter_request);
 
-/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
+/* Get the io scheduler queue pointer. */
 void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
 {
-	return ioq_sched_queue(rq_ioq(rq));
+	/*
+	 * io scheduler is not using fair queuing. Return sched_queue
+	 * pointer stored in elevator_queue. It will be null if io
+	 * scheduler never stored anything there to begin with (cfq)
+	 */
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
+	/*
+	 * IO scheduler is using fair queuing infrastructure. If io scheduler
+	 * has passed a non null rq, retrieve sched_queue pointer from
+	 * there. */
+	if (rq)
+		return ioq_sched_queue(rq_ioq(rq));
+
+	return NULL;
 }
 EXPORT_SYMBOL(elv_get_sched_queue);
 
 /* Select an ioscheduler queue to dispatch request from. */
 void *elv_select_sched_queue(struct request_queue *q, int force)
 {
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
 	return ioq_sched_queue(elv_fq_select_ioq(q, force));
 }
 EXPORT_SYMBOL(elv_select_sched_queue);
+
+/*
+ * Get the io scheduler queue pointer for current task.
+ */
+void *elv_get_sched_queue_current(struct request_queue *q)
+{
+	return q->elevator->sched_queue;
+}
+EXPORT_SYMBOL(elv_get_sched_queue_current);
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index 3a0d369..d587832 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -7,7 +7,7 @@
 #include <linux/module.h>
 #include <linux/init.h>
 
-struct noop_data {
+struct noop_queue {
 	struct list_head queue;
 };
 
@@ -19,11 +19,14 @@ static void noop_merged_requests(struct request_queue *q, struct request *rq,
 
 static int noop_dispatch(struct request_queue *q, int force)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_select_sched_queue(q, force);
 
-	if (!list_empty(&nd->queue)) {
+	if (!nq)
+		return 0;
+
+	if (!list_empty(&nq->queue)) {
 		struct request *rq;
-		rq = list_entry(nd->queue.next, struct request, queuelist);
+		rq = list_entry(nq->queue.next, struct request, queuelist);
 		list_del_init(&rq->queuelist);
 		elv_dispatch_sort(q, rq);
 		return 1;
@@ -33,24 +36,17 @@ static int noop_dispatch(struct request_queue *q, int force)
 
 static void noop_add_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	list_add_tail(&rq->queuelist, &nd->queue);
-}
-
-static int noop_queue_empty(struct request_queue *q)
-{
-	struct noop_data *nd = q->elevator->elevator_data;
-
-	return list_empty(&nd->queue);
+	list_add_tail(&rq->queuelist, &nq->queue);
 }
 
 static struct request *
 noop_former_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	if (rq->queuelist.prev == &nd->queue)
+	if (rq->queuelist.prev == &nq->queue)
 		return NULL;
 	return list_entry(rq->queuelist.prev, struct request, queuelist);
 }
@@ -58,30 +54,32 @@ noop_former_request(struct request_queue *q, struct request *rq)
 static struct request *
 noop_latter_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	if (rq->queuelist.next == &nd->queue)
+	if (rq->queuelist.next == &nq->queue)
 		return NULL;
 	return list_entry(rq->queuelist.next, struct request, queuelist);
 }
 
-static void *noop_init_queue(struct request_queue *q)
+static void *noop_alloc_noop_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
 {
-	struct noop_data *nd;
+	struct noop_queue *nq;
 
-	nd = kmalloc_node(sizeof(*nd), GFP_KERNEL, q->node);
-	if (!nd)
-		return NULL;
-	INIT_LIST_HEAD(&nd->queue);
-	return nd;
+	nq = kmalloc_node(sizeof(*nq), gfp_mask | __GFP_ZERO, q->node);
+	if (nq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&nq->queue);
+out:
+	return nq;
 }
 
-static void noop_exit_queue(struct elevator_queue *e)
+static void noop_free_noop_queue(struct elevator_queue *e, void *sched_queue)
 {
-	struct noop_data *nd = e->elevator_data;
+	struct noop_queue *nq = sched_queue;
 
-	BUG_ON(!list_empty(&nd->queue));
-	kfree(nd);
+	kfree(nq);
 }
 
 static struct elevator_type elevator_noop = {
@@ -89,11 +87,10 @@ static struct elevator_type elevator_noop = {
 		.elevator_merge_req_fn		= noop_merged_requests,
 		.elevator_dispatch_fn		= noop_dispatch,
 		.elevator_add_req_fn		= noop_add_request,
-		.elevator_queue_empty_fn	= noop_queue_empty,
 		.elevator_former_req_fn		= noop_former_request,
 		.elevator_latter_req_fn		= noop_latter_request,
-		.elevator_init_fn		= noop_init_queue,
-		.elevator_exit_fn		= noop_exit_queue,
+		.elevator_alloc_sched_queue_fn	= noop_alloc_noop_queue,
+		.elevator_free_sched_queue_fn	= noop_free_noop_queue,
 	},
 	.elevator_name = "noop",
 	.elevator_owner = THIS_MODULE,
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 679c149..3729a2f 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -30,8 +30,9 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
-#ifdef CONFIG_ELV_FAIR_QUEUING
+typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t);
 typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
+#ifdef CONFIG_ELV_FAIR_QUEUING
 typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
 typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
 typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
@@ -70,8 +71,9 @@ struct elevator_ops
 	elevator_exit_fn *elevator_exit_fn;
 	void (*trim)(struct io_context *);
 
-#ifdef CONFIG_ELV_FAIR_QUEUING
+	elevator_alloc_sched_queue_fn *elevator_alloc_sched_queue_fn;
 	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
+#ifdef CONFIG_ELV_FAIR_QUEUING
 	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
 	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
 
@@ -112,6 +114,7 @@ struct elevator_queue
 {
 	struct elevator_ops *ops;
 	void *elevator_data;
+	void *sched_queue;
 	struct kobject kobj;
 	struct elevator_type *elevator_type;
 	struct mutex sysfs_lock;
@@ -260,5 +263,6 @@ static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
+extern void *elv_get_sched_queue_current(struct request_queue *q);
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 09/20] io-controller: Separate out queue and data
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o So far noop, deadline and AS had one common structure called *_data which
  contained both the queue where requests are queued and the common data
  used for scheduling. This patch breaks that common structure down into
  two parts, *_queue and *_data. This is along the lines of cfq, where all
  the requests are queued in the queue while the common data and tunables
  are part of the data.

o It does not change any functionality, but this re-organization helps
  once noop, deadline and AS are changed to use hierarchical fair queuing.

o It looks like the queue_empty function is not required; we can check
  q->nr_sorted in the elevator layer to see whether the ioscheduler
  queues are empty or not.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/as-iosched.c       |  208 ++++++++++++++++++++++++++--------------------
 block/deadline-iosched.c |  117 ++++++++++++++++----------
 block/elevator.c         |  111 +++++++++++++++++++++----
 block/noop-iosched.c     |   59 ++++++-------
 include/linux/elevator.h |    8 ++-
 5 files changed, 319 insertions(+), 184 deletions(-)

diff --git a/block/as-iosched.c b/block/as-iosched.c
index c48fa67..7158e13 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -76,13 +76,7 @@ enum anticipation_status {
 				 * or timed out */
 };
 
-struct as_data {
-	/*
-	 * run time data
-	 */
-
-	struct request_queue *q;	/* the "owner" queue */
-
+struct as_queue {
 	/*
 	 * requests (as_rq s) are present on both sort_list and fifo_list
 	 */
@@ -90,6 +84,14 @@ struct as_data {
 	struct list_head fifo_list[2];
 
 	struct request *next_rq[2];	/* next in sort order */
+	unsigned long last_check_fifo[2];
+	int write_batch_count;		/* max # of reqs in a write batch */
+	int current_write_count;	/* how many requests left this batch */
+	int write_batch_idled;		/* has the write batch gone idle? */
+};
+
+struct as_data {
+	struct request_queue *q;	/* the "owner" queue */
 	sector_t last_sector[2];	/* last SYNC & ASYNC sectors */
 
 	unsigned long exit_prob;	/* probability a task will exit while
@@ -103,21 +105,17 @@ struct as_data {
 	sector_t new_seek_mean;
 
 	unsigned long current_batch_expires;
-	unsigned long last_check_fifo[2];
 	int changed_batch;		/* 1: waiting for old batch to end */
 	int new_batch;			/* 1: waiting on first read complete */
-	int batch_data_dir;		/* current batch SYNC / ASYNC */
-	int write_batch_count;		/* max # of reqs in a write batch */
-	int current_write_count;	/* how many requests left this batch */
-	int write_batch_idled;		/* has the write batch gone idle? */
 
 	enum anticipation_status antic_status;
 	unsigned long antic_start;	/* jiffies: when it started */
 	struct timer_list antic_timer;	/* anticipatory scheduling timer */
-	struct work_struct antic_work;	/* Deferred unplugging */
+	struct work_struct antic_work;  /* Deferred unplugging */
 	struct io_context *io_context;	/* Identify the expected process */
 	int ioc_finished; /* IO associated with io_context is finished */
 	int nr_dispatched;
+	int batch_data_dir;		/* current batch SYNC / ASYNC */
 
 	/*
 	 * settings that change how the i/o scheduler behaves
@@ -258,13 +256,14 @@ static void as_put_io_context(struct request *rq)
 /*
  * rb tree support functions
  */
-#define RQ_RB_ROOT(ad, rq)	(&(ad)->sort_list[rq_is_sync((rq))])
+#define RQ_RB_ROOT(asq, rq)	(&(asq)->sort_list[rq_is_sync((rq))])
 
 static void as_add_rq_rb(struct as_data *ad, struct request *rq)
 {
 	struct request *alias;
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
-	while ((unlikely(alias = elv_rb_add(RQ_RB_ROOT(ad, rq), rq)))) {
+	while ((unlikely(alias = elv_rb_add(RQ_RB_ROOT(asq, rq), rq)))) {
 		as_move_to_dispatch(ad, alias);
 		as_antic_stop(ad);
 	}
@@ -272,7 +271,9 @@ static void as_add_rq_rb(struct as_data *ad, struct request *rq)
 
 static inline void as_del_rq_rb(struct as_data *ad, struct request *rq)
 {
-	elv_rb_del(RQ_RB_ROOT(ad, rq), rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
+
+	elv_rb_del(RQ_RB_ROOT(asq, rq), rq);
 }
 
 /*
@@ -366,7 +367,7 @@ as_choose_req(struct as_data *ad, struct request *rq1, struct request *rq2)
  * what request to process next. Anticipation works on top of this.
  */
 static struct request *
-as_find_next_rq(struct as_data *ad, struct request *last)
+as_find_next_rq(struct as_data *ad, struct as_queue *asq, struct request *last)
 {
 	struct rb_node *rbnext = rb_next(&last->rb_node);
 	struct rb_node *rbprev = rb_prev(&last->rb_node);
@@ -382,7 +383,7 @@ as_find_next_rq(struct as_data *ad, struct request *last)
 	else {
 		const int data_dir = rq_is_sync(last);
 
-		rbnext = rb_first(&ad->sort_list[data_dir]);
+		rbnext = rb_first(&asq->sort_list[data_dir]);
 		if (rbnext && rbnext != &last->rb_node)
 			next = rb_entry_rq(rbnext);
 	}
@@ -787,9 +788,10 @@ static int as_can_anticipate(struct as_data *ad, struct request *rq)
 static void as_update_rq(struct as_data *ad, struct request *rq)
 {
 	const int data_dir = rq_is_sync(rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	/* keep the next_rq cache up to date */
-	ad->next_rq[data_dir] = as_choose_req(ad, rq, ad->next_rq[data_dir]);
+	asq->next_rq[data_dir] = as_choose_req(ad, rq, asq->next_rq[data_dir]);
 
 	/*
 	 * have we been anticipating this request?
@@ -810,25 +812,26 @@ static void update_write_batch(struct as_data *ad)
 {
 	unsigned long batch = ad->batch_expire[BLK_RW_ASYNC];
 	long write_time;
+	struct as_queue *asq = elv_get_sched_queue(ad->q, NULL);
 
 	write_time = (jiffies - ad->current_batch_expires) + batch;
 	if (write_time < 0)
 		write_time = 0;
 
-	if (write_time > batch && !ad->write_batch_idled) {
+	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
-			ad->write_batch_count /= 2;
+			asq->write_batch_count /= 2;
 		else
-			ad->write_batch_count--;
-	} else if (write_time < batch && ad->current_write_count == 0) {
+			asq->write_batch_count--;
+	} else if (write_time < batch && asq->current_write_count == 0) {
 		if (batch > write_time * 3)
-			ad->write_batch_count *= 2;
+			asq->write_batch_count *= 2;
 		else
-			ad->write_batch_count++;
+			asq->write_batch_count++;
 	}
 
-	if (ad->write_batch_count < 1)
-		ad->write_batch_count = 1;
+	if (asq->write_batch_count < 1)
+		asq->write_batch_count = 1;
 }
 
 /*
@@ -899,6 +902,7 @@ static void as_remove_queued_request(struct request_queue *q,
 	const int data_dir = rq_is_sync(rq);
 	struct as_data *ad = q->elevator->elevator_data;
 	struct io_context *ioc;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	WARN_ON(RQ_STATE(rq) != AS_RQ_QUEUED);
 
@@ -912,8 +916,8 @@ static void as_remove_queued_request(struct request_queue *q,
 	 * Update the "next_rq" cache if we are about to remove its
 	 * entry
 	 */
-	if (ad->next_rq[data_dir] == rq)
-		ad->next_rq[data_dir] = as_find_next_rq(ad, rq);
+	if (asq->next_rq[data_dir] == rq)
+		asq->next_rq[data_dir] = as_find_next_rq(ad, asq, rq);
 
 	rq_fifo_clear(rq);
 	as_del_rq_rb(ad, rq);
@@ -927,23 +931,23 @@ static void as_remove_queued_request(struct request_queue *q,
  *
  * See as_antic_expired comment.
  */
-static int as_fifo_expired(struct as_data *ad, int adir)
+static int as_fifo_expired(struct as_data *ad, struct as_queue *asq, int adir)
 {
 	struct request *rq;
 	long delta_jif;
 
-	delta_jif = jiffies - ad->last_check_fifo[adir];
+	delta_jif = jiffies - asq->last_check_fifo[adir];
 	if (unlikely(delta_jif < 0))
 		delta_jif = -delta_jif;
 	if (delta_jif < ad->fifo_expire[adir])
 		return 0;
 
-	ad->last_check_fifo[adir] = jiffies;
+	asq->last_check_fifo[adir] = jiffies;
 
-	if (list_empty(&ad->fifo_list[adir]))
+	if (list_empty(&asq->fifo_list[adir]))
 		return 0;
 
-	rq = rq_entry_fifo(ad->fifo_list[adir].next);
+	rq = rq_entry_fifo(asq->fifo_list[adir].next);
 
 	return time_after(jiffies, rq_fifo_time(rq));
 }
@@ -952,7 +956,7 @@ static int as_fifo_expired(struct as_data *ad, int adir)
  * as_batch_expired returns true if the current batch has expired. A batch
  * is a set of reads or a set of writes.
  */
-static inline int as_batch_expired(struct as_data *ad)
+static inline int as_batch_expired(struct as_data *ad, struct as_queue *asq)
 {
 	if (ad->changed_batch || ad->new_batch)
 		return 0;
@@ -962,7 +966,7 @@ static inline int as_batch_expired(struct as_data *ad)
 		return time_after(jiffies, ad->current_batch_expires);
 
 	return time_after(jiffies, ad->current_batch_expires)
-		|| ad->current_write_count == 0;
+		|| asq->current_write_count == 0;
 }
 
 /*
@@ -971,6 +975,7 @@ static inline int as_batch_expired(struct as_data *ad)
 static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 {
 	const int data_dir = rq_is_sync(rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	BUG_ON(RB_EMPTY_NODE(&rq->rb_node));
 
@@ -993,12 +998,12 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 			ad->io_context = NULL;
 		}
 
-		if (ad->current_write_count != 0)
-			ad->current_write_count--;
+		if (asq->current_write_count != 0)
+			asq->current_write_count--;
 	}
 	ad->ioc_finished = 0;
 
-	ad->next_rq[data_dir] = as_find_next_rq(ad, rq);
+	asq->next_rq[data_dir] = as_find_next_rq(ad, asq, rq);
 
 	/*
 	 * take it off the sort and fifo list, add to dispatch queue
@@ -1022,9 +1027,16 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 static int as_dispatch_request(struct request_queue *q, int force)
 {
 	struct as_data *ad = q->elevator->elevator_data;
-	const int reads = !list_empty(&ad->fifo_list[BLK_RW_SYNC]);
-	const int writes = !list_empty(&ad->fifo_list[BLK_RW_ASYNC]);
 	struct request *rq;
+	struct as_queue *asq = elv_select_sched_queue(q, force);
+	int reads, writes;
+
+	if (!asq)
+		return 0;
+
+	reads = !list_empty(&asq->fifo_list[BLK_RW_SYNC]);
+	writes = !list_empty(&asq->fifo_list[BLK_RW_ASYNC]);
+
 
 	if (unlikely(force)) {
 		/*
@@ -1040,25 +1052,25 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		ad->changed_batch = 0;
 		ad->new_batch = 0;
 
-		while (ad->next_rq[BLK_RW_SYNC]) {
-			as_move_to_dispatch(ad, ad->next_rq[BLK_RW_SYNC]);
+		while (asq->next_rq[BLK_RW_SYNC]) {
+			as_move_to_dispatch(ad, asq->next_rq[BLK_RW_SYNC]);
 			dispatched++;
 		}
-		ad->last_check_fifo[BLK_RW_SYNC] = jiffies;
+		asq->last_check_fifo[BLK_RW_SYNC] = jiffies;
 
-		while (ad->next_rq[BLK_RW_ASYNC]) {
-			as_move_to_dispatch(ad, ad->next_rq[BLK_RW_ASYNC]);
+		while (asq->next_rq[BLK_RW_ASYNC]) {
+			as_move_to_dispatch(ad, asq->next_rq[BLK_RW_ASYNC]);
 			dispatched++;
 		}
-		ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
+		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
 		return dispatched;
 	}
 
 	/* Signal that the write batch was uncontended, so we can't time it */
 	if (ad->batch_data_dir == BLK_RW_ASYNC && !reads) {
-		if (ad->current_write_count == 0 || !writes)
-			ad->write_batch_idled = 1;
+		if (asq->current_write_count == 0 || !writes)
+			asq->write_batch_idled = 1;
 	}
 
 	if (!(reads || writes)
@@ -1067,14 +1079,14 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		|| ad->changed_batch)
 		return 0;
 
-	if (!(reads && writes && as_batch_expired(ad))) {
+	if (!(reads && writes && as_batch_expired(ad, asq))) {
 		/*
 		 * batch is still running or no reads or no writes
 		 */
-		rq = ad->next_rq[ad->batch_data_dir];
+		rq = asq->next_rq[ad->batch_data_dir];
 
 		if (ad->batch_data_dir == BLK_RW_SYNC && ad->antic_expire) {
-			if (as_fifo_expired(ad, BLK_RW_SYNC))
+			if (as_fifo_expired(ad, asq, BLK_RW_SYNC))
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
@@ -1098,7 +1110,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 */
 
 	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_SYNC]));
+		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
 
 		if (writes && ad->batch_data_dir == BLK_RW_SYNC)
 			/*
@@ -1111,8 +1123,8 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
-		rq = rq_entry_fifo(ad->fifo_list[BLK_RW_SYNC].next);
-		ad->last_check_fifo[ad->batch_data_dir] = jiffies;
+		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
+		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
 	}
 
@@ -1122,7 +1134,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 
 	if (writes) {
 dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_ASYNC]));
+		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_ASYNC]));
 
 		if (ad->batch_data_dir == BLK_RW_SYNC) {
 			ad->changed_batch = 1;
@@ -1135,10 +1147,10 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
-		ad->current_write_count = ad->write_batch_count;
-		ad->write_batch_idled = 0;
-		rq = rq_entry_fifo(ad->fifo_list[BLK_RW_ASYNC].next);
-		ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
+		asq->current_write_count = asq->write_batch_count;
+		asq->write_batch_idled = 0;
+		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
+		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 		goto dispatch_request;
 	}
 
@@ -1150,9 +1162,9 @@ dispatch_request:
 	 * If a request has expired, service it.
 	 */
 
-	if (as_fifo_expired(ad, ad->batch_data_dir)) {
+	if (as_fifo_expired(ad, asq, ad->batch_data_dir)) {
 fifo_expired:
-		rq = rq_entry_fifo(ad->fifo_list[ad->batch_data_dir].next);
+		rq = rq_entry_fifo(asq->fifo_list[ad->batch_data_dir].next);
 	}
 
 	if (ad->changed_batch) {
@@ -1185,6 +1197,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 {
 	struct as_data *ad = q->elevator->elevator_data;
 	int data_dir;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	RQ_SET_STATE(rq, AS_RQ_NEW);
 
@@ -1203,7 +1216,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 	 * set expire time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies + ad->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &ad->fifo_list[data_dir]);
+	list_add_tail(&rq->queuelist, &asq->fifo_list[data_dir]);
 
 	as_update_rq(ad, rq); /* keep state machine up to date */
 	RQ_SET_STATE(rq, AS_RQ_QUEUED);
@@ -1225,31 +1238,20 @@ static void as_deactivate_request(struct request_queue *q, struct request *rq)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 }
 
-/*
- * as_queue_empty tells us if there are requests left in the device. It may
- * not be the case that a driver can get the next request even if the queue
- * is not empty - it is used in the block layer to check for plugging and
- * merging opportunities
- */
-static int as_queue_empty(struct request_queue *q)
-{
-	struct as_data *ad = q->elevator->elevator_data;
-
-	return list_empty(&ad->fifo_list[BLK_RW_ASYNC])
-		&& list_empty(&ad->fifo_list[BLK_RW_SYNC]);
-}
-
 static int
 as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
-	struct as_data *ad = q->elevator->elevator_data;
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
+	struct as_queue *asq = elv_get_sched_queue_current(q);
+
+	if (!asq)
+		return ELEVATOR_NO_MERGE;
 
 	/*
 	 * check for front merge
 	 */
-	__rq = elv_rb_find(&ad->sort_list[bio_data_dir(bio)], rb_key);
+	__rq = elv_rb_find(&asq->sort_list[bio_data_dir(bio)], rb_key);
 	if (__rq && elv_rq_merge_ok(__rq, bio)) {
 		*req = __rq;
 		return ELEVATOR_FRONT_MERGE;
@@ -1336,6 +1338,41 @@ static int as_may_queue(struct request_queue *q, int rw)
 	return ret;
 }
 
+/* Called with queue lock held */
+static void *as_alloc_as_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
+{
+	struct as_queue *asq;
+	struct as_data *ad = eq->elevator_data;
+
+	asq = kmalloc_node(sizeof(*asq), gfp_mask | __GFP_ZERO, q->node);
+	if (asq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&asq->fifo_list[BLK_RW_SYNC]);
+	INIT_LIST_HEAD(&asq->fifo_list[BLK_RW_ASYNC]);
+	asq->sort_list[BLK_RW_SYNC] = RB_ROOT;
+	asq->sort_list[BLK_RW_ASYNC] = RB_ROOT;
+	if (ad)
+		asq->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
+	else
+		asq->write_batch_count = default_write_batch_expire / 10;
+
+	if (asq->write_batch_count < 2)
+		asq->write_batch_count = 2;
+out:
+	return asq;
+}
+
+static void as_free_as_queue(struct elevator_queue *e, void *sched_queue)
+{
+	struct as_queue *asq = sched_queue;
+
+	BUG_ON(!list_empty(&asq->fifo_list[BLK_RW_SYNC]));
+	BUG_ON(!list_empty(&asq->fifo_list[BLK_RW_ASYNC]));
+	kfree(asq);
+}
+
 static void as_exit_queue(struct elevator_queue *e)
 {
 	struct as_data *ad = e->elevator_data;
@@ -1343,9 +1380,6 @@ static void as_exit_queue(struct elevator_queue *e)
 	del_timer_sync(&ad->antic_timer);
 	cancel_work_sync(&ad->antic_work);
 
-	BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_SYNC]));
-	BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_ASYNC]));
-
 	put_io_context(ad->io_context);
 	kfree(ad);
 }
@@ -1369,10 +1403,6 @@ static void *as_init_queue(struct request_queue *q)
 	init_timer(&ad->antic_timer);
 	INIT_WORK(&ad->antic_work, as_work_handler);
 
-	INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_SYNC]);
-	INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_ASYNC]);
-	ad->sort_list[BLK_RW_SYNC] = RB_ROOT;
-	ad->sort_list[BLK_RW_ASYNC] = RB_ROOT;
 	ad->fifo_expire[BLK_RW_SYNC] = default_read_expire;
 	ad->fifo_expire[BLK_RW_ASYNC] = default_write_expire;
 	ad->antic_expire = default_antic_expire;
@@ -1380,9 +1410,6 @@ static void *as_init_queue(struct request_queue *q)
 	ad->batch_expire[BLK_RW_ASYNC] = default_write_batch_expire;
 
 	ad->current_batch_expires = jiffies + ad->batch_expire[BLK_RW_SYNC];
-	ad->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
-	if (ad->write_batch_count < 2)
-		ad->write_batch_count = 2;
 
 	return ad;
 }
@@ -1480,7 +1507,6 @@ static struct elevator_type iosched_as = {
 		.elevator_add_req_fn =		as_add_request,
 		.elevator_activate_req_fn =	as_activate_request,
 		.elevator_deactivate_req_fn = 	as_deactivate_request,
-		.elevator_queue_empty_fn =	as_queue_empty,
 		.elevator_completed_req_fn =	as_completed_request,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
@@ -1488,6 +1514,8 @@ static struct elevator_type iosched_as = {
 		.elevator_init_fn =		as_init_queue,
 		.elevator_exit_fn =		as_exit_queue,
 		.trim =				as_trim,
+		.elevator_alloc_sched_queue_fn = as_alloc_as_queue,
+		.elevator_free_sched_queue_fn = as_free_as_queue,
 	},
 
 	.elevator_attrs = as_attrs,
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index c4d991d..5e65041 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -23,25 +23,23 @@ static const int writes_starved = 2;    /* max times reads can starve a write */
 static const int fifo_batch = 16;       /* # of sequential requests treated as one
 				     by the above parameters. For throughput. */
 
-struct deadline_data {
-	/*
-	 * run time data
-	 */
-
+struct deadline_queue {
 	/*
 	 * requests (deadline_rq s) are present on both sort_list and fifo_list
 	 */
-	struct rb_root sort_list[2];	
+	struct rb_root sort_list[2];
 	struct list_head fifo_list[2];
-
 	/*
 	 * next in sort order. read, write or both are NULL
 	 */
 	struct request *next_rq[2];
 	unsigned int batching;		/* number of sequential requests made */
-	sector_t last_sector;		/* head position */
 	unsigned int starved;		/* times reads have starved writes */
+};
 
+struct deadline_data {
+	struct request_queue *q;
+	sector_t last_sector;		/* head position */
 	/*
 	 * settings that change how the i/o scheduler behaves
 	 */
@@ -56,7 +54,9 @@ static void deadline_move_request(struct deadline_data *, struct request *);
 static inline struct rb_root *
 deadline_rb_root(struct deadline_data *dd, struct request *rq)
 {
-	return &dd->sort_list[rq_data_dir(rq)];
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
+
+	return &dq->sort_list[rq_data_dir(rq)];
 }
 
 /*
@@ -87,9 +87,10 @@ static inline void
 deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 {
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
 
-	if (dd->next_rq[data_dir] == rq)
-		dd->next_rq[data_dir] = deadline_latter_request(rq);
+	if (dq->next_rq[data_dir] == rq)
+		dq->next_rq[data_dir] = deadline_latter_request(rq);
 
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
@@ -102,6 +103,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(q, rq);
 
 	deadline_add_rq_rb(dd, rq);
 
@@ -109,7 +111,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 	 * set expire time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies + dd->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
+	list_add_tail(&rq->queuelist, &dq->fifo_list[data_dir]);
 }
 
 /*
@@ -129,6 +131,11 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct request *__rq;
 	int ret;
+	struct deadline_queue *dq;
+
+	dq = elv_get_sched_queue_current(q);
+	if (!dq)
+		return ELEVATOR_NO_MERGE;
 
 	/*
 	 * check for front merge
@@ -136,7 +143,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	if (dd->front_merges) {
 		sector_t sector = bio->bi_sector + bio_sectors(bio);
 
-		__rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector);
+		__rq = elv_rb_find(&dq->sort_list[bio_data_dir(bio)], sector);
 		if (__rq) {
 			BUG_ON(sector != __rq->sector);
 
@@ -207,10 +214,11 @@ static void
 deadline_move_request(struct deadline_data *dd, struct request *rq)
 {
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
 
-	dd->next_rq[READ] = NULL;
-	dd->next_rq[WRITE] = NULL;
-	dd->next_rq[data_dir] = deadline_latter_request(rq);
+	dq->next_rq[READ] = NULL;
+	dq->next_rq[WRITE] = NULL;
+	dq->next_rq[data_dir] = deadline_latter_request(rq);
 
 	dd->last_sector = rq_end_sector(rq);
 
@@ -225,9 +233,9 @@ deadline_move_request(struct deadline_data *dd, struct request *rq)
  * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
  * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
  */
-static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
+static inline int deadline_check_fifo(struct deadline_queue *dq, int ddir)
 {
-	struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
+	struct request *rq = rq_entry_fifo(dq->fifo_list[ddir].next);
 
 	/*
 	 * rq is expired!
@@ -245,20 +253,26 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int reads = !list_empty(&dd->fifo_list[READ]);
-	const int writes = !list_empty(&dd->fifo_list[WRITE]);
+	struct deadline_queue *dq = elv_select_sched_queue(q, force);
+	int reads, writes;
 	struct request *rq;
 	int data_dir;
 
+	if (!dq)
+		return 0;
+
+	reads = !list_empty(&dq->fifo_list[READ]);
+	writes = !list_empty(&dq->fifo_list[WRITE]);
+
 	/*
 	 * batches are currently reads XOR writes
 	 */
-	if (dd->next_rq[WRITE])
-		rq = dd->next_rq[WRITE];
+	if (dq->next_rq[WRITE])
+		rq = dq->next_rq[WRITE];
 	else
-		rq = dd->next_rq[READ];
+		rq = dq->next_rq[READ];
 
-	if (rq && dd->batching < dd->fifo_batch)
+	if (rq && dq->batching < dd->fifo_batch)
 		/* we have a next request are still entitled to batch */
 		goto dispatch_request;
 
@@ -268,9 +282,9 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 */
 
 	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
+		BUG_ON(RB_EMPTY_ROOT(&dq->sort_list[READ]));
 
-		if (writes && (dd->starved++ >= dd->writes_starved))
+		if (writes && (dq->starved++ >= dd->writes_starved))
 			goto dispatch_writes;
 
 		data_dir = READ;
@@ -284,9 +298,9 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 
 	if (writes) {
 dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
+		BUG_ON(RB_EMPTY_ROOT(&dq->sort_list[WRITE]));
 
-		dd->starved = 0;
+		dq->starved = 0;
 
 		data_dir = WRITE;
 
@@ -299,48 +313,62 @@ dispatch_find_request:
 	/*
 	 * we are not running a batch, find best request for selected data_dir
 	 */
-	if (deadline_check_fifo(dd, data_dir) || !dd->next_rq[data_dir]) {
+	if (deadline_check_fifo(dq, data_dir) || !dq->next_rq[data_dir]) {
 		/*
 		 * A deadline has expired, the last request was in the other
 		 * direction, or we have run out of higher-sectored requests.
 		 * Start again from the request with the earliest expiry time.
 		 */
-		rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
+		rq = rq_entry_fifo(dq->fifo_list[data_dir].next);
 	} else {
 		/*
 		 * The last req was the same dir and we have a next request in
 		 * sort order. No expired requests so continue on from here.
 		 */
-		rq = dd->next_rq[data_dir];
+		rq = dq->next_rq[data_dir];
 	}
 
-	dd->batching = 0;
+	dq->batching = 0;
 
 dispatch_request:
 	/*
 	 * rq is the selected appropriate request.
 	 */
-	dd->batching++;
+	dq->batching++;
 	deadline_move_request(dd, rq);
 
 	return 1;
 }
 
-static int deadline_queue_empty(struct request_queue *q)
+static void *deadline_alloc_deadline_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_queue *dq;
 
-	return list_empty(&dd->fifo_list[WRITE])
-		&& list_empty(&dd->fifo_list[READ]);
+	dq = kmalloc_node(sizeof(*dq), gfp_mask | __GFP_ZERO, q->node);
+	if (dq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&dq->fifo_list[READ]);
+	INIT_LIST_HEAD(&dq->fifo_list[WRITE]);
+	dq->sort_list[READ] = RB_ROOT;
+	dq->sort_list[WRITE] = RB_ROOT;
+out:
+	return dq;
+}
+
+static void deadline_free_deadline_queue(struct elevator_queue *e,
+						void *sched_queue)
+{
+	struct deadline_queue *dq = sched_queue;
+
+	kfree(dq);
 }
 
 static void deadline_exit_queue(struct elevator_queue *e)
 {
 	struct deadline_data *dd = e->elevator_data;
 
-	BUG_ON(!list_empty(&dd->fifo_list[READ]));
-	BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
-
 	kfree(dd);
 }
 
@@ -355,10 +383,7 @@ static void *deadline_init_queue(struct request_queue *q)
 	if (!dd)
 		return NULL;
 
-	INIT_LIST_HEAD(&dd->fifo_list[READ]);
-	INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
-	dd->sort_list[READ] = RB_ROOT;
-	dd->sort_list[WRITE] = RB_ROOT;
+	dd->q = q;
 	dd->fifo_expire[READ] = read_expire;
 	dd->fifo_expire[WRITE] = write_expire;
 	dd->writes_starved = writes_starved;
@@ -445,13 +470,13 @@ static struct elevator_type iosched_deadline = {
 		.elevator_merge_req_fn =	deadline_merged_requests,
 		.elevator_dispatch_fn =		deadline_dispatch_requests,
 		.elevator_add_req_fn =		deadline_add_request,
-		.elevator_queue_empty_fn =	deadline_queue_empty,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
 		.elevator_init_fn =		deadline_init_queue,
 		.elevator_exit_fn =		deadline_exit_queue,
+		.elevator_alloc_sched_queue_fn = deadline_alloc_deadline_queue,
+		.elevator_free_sched_queue_fn = deadline_free_deadline_queue,
 	},
-
 	.elevator_attrs = deadline_attrs,
 	.elevator_name = "deadline",
 	.elevator_owner = THIS_MODULE,
diff --git a/block/elevator.c b/block/elevator.c
index 3944385..67a0601 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -180,17 +180,54 @@ static struct elevator_type *elevator_get(const char *name)
 	return e;
 }
 
-static void *elevator_init_queue(struct request_queue *q,
-				 struct elevator_queue *eq)
+static void *elevator_init_data(struct request_queue *q,
+					struct elevator_queue *eq)
 {
-	return eq->ops->elevator_init_fn(q);
+	void *data = NULL;
+
+	if (eq->ops->elevator_init_fn) {
+		data = eq->ops->elevator_init_fn(q);
+		if (data)
+			return data;
+		else
+			return ERR_PTR(-ENOMEM);
+	}
+
+	/* IO scheduler does not instantiate data (noop); it is not an error */
+	return NULL;
+}
+
+static void elevator_free_sched_queue(struct elevator_queue *eq,
+						void *sched_queue)
+{
+	/* Not all io schedulers store a sched_queue (e.g. cfq does not) */
+	if (!sched_queue)
+		return;
+	eq->ops->elevator_free_sched_queue_fn(eq, sched_queue);
+}
+
+static void *elevator_alloc_sched_queue(struct request_queue *q,
+					struct elevator_queue *eq)
+{
+	void *sched_queue = NULL;
+
+	if (eq->ops->elevator_alloc_sched_queue_fn) {
+		sched_queue = eq->ops->elevator_alloc_sched_queue_fn(q, eq,
+								GFP_KERNEL);
+		if (!sched_queue)
+			return ERR_PTR(-ENOMEM);
+
+	}
+
+	return sched_queue;
 }
 
 static void elevator_attach(struct request_queue *q, struct elevator_queue *eq,
-			   void *data)
+			   void *data, void *sched_queue)
 {
 	q->elevator = eq;
 	eq->elevator_data = data;
+	eq->sched_queue = sched_queue;
 }
 
 static char chosen_elevator[16];
@@ -260,7 +297,7 @@ int elevator_init(struct request_queue *q, char *name)
 	struct elevator_type *e = NULL;
 	struct elevator_queue *eq;
 	int ret = 0;
-	void *data;
+	void *data = NULL, *sched_queue = NULL;
 
 	INIT_LIST_HEAD(&q->queue_head);
 	q->last_merge = NULL;
@@ -294,13 +331,21 @@ int elevator_init(struct request_queue *q, char *name)
 	if (!eq)
 		return -ENOMEM;
 
-	data = elevator_init_queue(q, eq);
-	if (!data) {
+	data = elevator_init_data(q, eq);
+
+	if (IS_ERR(data)) {
+		kobject_put(&eq->kobj);
+		return -ENOMEM;
+	}
+
+	sched_queue = elevator_alloc_sched_queue(q, eq);
+
+	if (IS_ERR(sched_queue)) {
 		kobject_put(&eq->kobj);
 		return -ENOMEM;
 	}
 
-	elevator_attach(q, eq, data);
+	elevator_attach(q, eq, data, sched_queue);
 	return ret;
 }
 EXPORT_SYMBOL(elevator_init);
@@ -308,6 +353,7 @@ EXPORT_SYMBOL(elevator_init);
 void elevator_exit(struct elevator_queue *e)
 {
 	mutex_lock(&e->sysfs_lock);
+	elevator_free_sched_queue(e, e->sched_queue);
 	elv_exit_fq_data(e);
 	if (e->ops->elevator_exit_fn)
 		e->ops->elevator_exit_fn(e);
@@ -1121,7 +1167,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
 static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 {
 	struct elevator_queue *old_elevator, *e;
-	void *data;
+	void *data = NULL, *sched_queue = NULL;
 
 	/*
 	 * Allocate new elevator
@@ -1130,10 +1176,18 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	if (!e)
 		return 0;
 
-	data = elevator_init_queue(q, e);
-	if (!data) {
+	data = elevator_init_data(q, e);
+
+	if (IS_ERR(data)) {
 		kobject_put(&e->kobj);
-		return 0;
+		return -ENOMEM;
+	}
+
+	sched_queue = elevator_alloc_sched_queue(q, e);
+
+	if (IS_ERR(sched_queue)) {
+		kobject_put(&e->kobj);
+		return -ENOMEM;
 	}
 
 	/*
@@ -1150,7 +1204,7 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	/*
 	 * attach and start new elevator
 	 */
-	elevator_attach(q, e, data);
+	elevator_attach(q, e, data, sched_queue);
 
 	spin_unlock_irq(q->queue_lock);
 
@@ -1257,16 +1311,43 @@ struct request *elv_rb_latter_request(struct request_queue *q,
 }
 EXPORT_SYMBOL(elv_rb_latter_request);
 
-/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
+/* Get the io scheduler queue pointer. */
 void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
 {
-	return ioq_sched_queue(rq_ioq(rq));
+	/*
+	 * io scheduler is not using fair queuing. Return sched_queue
+	 * pointer stored in elevator_queue. It will be null if io
+	 * scheduler never stored anything there to begin with (cfq)
+	 */
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
+	/*
+	 * IO scheduler is using fair queuing infrastructure. If io scheduler
+	 * has passed a non-null rq, retrieve sched_queue pointer from
+	 * there. */
+	if (rq)
+		return ioq_sched_queue(rq_ioq(rq));
+
+	return NULL;
 }
 EXPORT_SYMBOL(elv_get_sched_queue);
 
 /* Select an ioscheduler queue to dispatch request from. */
 void *elv_select_sched_queue(struct request_queue *q, int force)
 {
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
 	return ioq_sched_queue(elv_fq_select_ioq(q, force));
 }
 EXPORT_SYMBOL(elv_select_sched_queue);
+
+/*
+ * Get the io scheduler queue pointer for current task.
+ */
+void *elv_get_sched_queue_current(struct request_queue *q)
+{
+	return q->elevator->sched_queue;
+}
+EXPORT_SYMBOL(elv_get_sched_queue_current);
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index 3a0d369..d587832 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -7,7 +7,7 @@
 #include <linux/module.h>
 #include <linux/init.h>
 
-struct noop_data {
+struct noop_queue {
 	struct list_head queue;
 };
 
@@ -19,11 +19,14 @@ static void noop_merged_requests(struct request_queue *q, struct request *rq,
 
 static int noop_dispatch(struct request_queue *q, int force)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_select_sched_queue(q, force);
 
-	if (!list_empty(&nd->queue)) {
+	if (!nq)
+		return 0;
+
+	if (!list_empty(&nq->queue)) {
 		struct request *rq;
-		rq = list_entry(nd->queue.next, struct request, queuelist);
+		rq = list_entry(nq->queue.next, struct request, queuelist);
 		list_del_init(&rq->queuelist);
 		elv_dispatch_sort(q, rq);
 		return 1;
@@ -33,24 +36,17 @@ static int noop_dispatch(struct request_queue *q, int force)
 
 static void noop_add_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	list_add_tail(&rq->queuelist, &nd->queue);
-}
-
-static int noop_queue_empty(struct request_queue *q)
-{
-	struct noop_data *nd = q->elevator->elevator_data;
-
-	return list_empty(&nd->queue);
+	list_add_tail(&rq->queuelist, &nq->queue);
 }
 
 static struct request *
 noop_former_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	if (rq->queuelist.prev == &nd->queue)
+	if (rq->queuelist.prev == &nq->queue)
 		return NULL;
 	return list_entry(rq->queuelist.prev, struct request, queuelist);
 }
@@ -58,30 +54,32 @@ noop_former_request(struct request_queue *q, struct request *rq)
 static struct request *
 noop_latter_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	if (rq->queuelist.next == &nd->queue)
+	if (rq->queuelist.next == &nq->queue)
 		return NULL;
 	return list_entry(rq->queuelist.next, struct request, queuelist);
 }
 
-static void *noop_init_queue(struct request_queue *q)
+static void *noop_alloc_noop_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
 {
-	struct noop_data *nd;
+	struct noop_queue *nq;
 
-	nd = kmalloc_node(sizeof(*nd), GFP_KERNEL, q->node);
-	if (!nd)
-		return NULL;
-	INIT_LIST_HEAD(&nd->queue);
-	return nd;
+	nq = kmalloc_node(sizeof(*nq), gfp_mask | __GFP_ZERO, q->node);
+	if (nq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&nq->queue);
+out:
+	return nq;
 }
 
-static void noop_exit_queue(struct elevator_queue *e)
+static void noop_free_noop_queue(struct elevator_queue *e, void *sched_queue)
 {
-	struct noop_data *nd = e->elevator_data;
+	struct noop_queue *nq = sched_queue;
 
-	BUG_ON(!list_empty(&nd->queue));
-	kfree(nd);
+	kfree(nq);
 }
 
 static struct elevator_type elevator_noop = {
@@ -89,11 +87,10 @@ static struct elevator_type elevator_noop = {
 		.elevator_merge_req_fn		= noop_merged_requests,
 		.elevator_dispatch_fn		= noop_dispatch,
 		.elevator_add_req_fn		= noop_add_request,
-		.elevator_queue_empty_fn	= noop_queue_empty,
 		.elevator_former_req_fn		= noop_former_request,
 		.elevator_latter_req_fn		= noop_latter_request,
-		.elevator_init_fn		= noop_init_queue,
-		.elevator_exit_fn		= noop_exit_queue,
+		.elevator_alloc_sched_queue_fn	= noop_alloc_noop_queue,
+		.elevator_free_sched_queue_fn	= noop_free_noop_queue,
 	},
 	.elevator_name = "noop",
 	.elevator_owner = THIS_MODULE,
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 679c149..3729a2f 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -30,8 +30,9 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
-#ifdef CONFIG_ELV_FAIR_QUEUING
+typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t);
 typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
+#ifdef CONFIG_ELV_FAIR_QUEUING
 typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
 typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
 typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
@@ -70,8 +71,9 @@ struct elevator_ops
 	elevator_exit_fn *elevator_exit_fn;
 	void (*trim)(struct io_context *);
 
-#ifdef CONFIG_ELV_FAIR_QUEUING
+	elevator_alloc_sched_queue_fn *elevator_alloc_sched_queue_fn;
 	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
+#ifdef CONFIG_ELV_FAIR_QUEUING
 	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
 	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
 
@@ -112,6 +114,7 @@ struct elevator_queue
 {
 	struct elevator_ops *ops;
 	void *elevator_data;
+	void *sched_queue;
 	struct kobject kobj;
 	struct elevator_type *elevator_type;
 	struct mutex sysfs_lock;
@@ -260,5 +263,6 @@ static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
+extern void *elv_get_sched_queue_current(struct request_queue *q);
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 09/20] io-controller: Separate out queue and data
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o So far noop, deadline and AS had one common structure called *_data which
  contained both the queue information where requests are queued and also
  the common data used for scheduling. This patch breaks that common
  structure into two parts, *_queue and *_data. This is along the lines of
  cfq, where all the requests are queued in the queue and the common data
  and tunables are part of the data (a condensed sketch of the resulting
  structures follows this list).

o This does not change any functionality, but the re-organization helps once
  noop, deadline and AS are changed to use hierarchical fair queuing.

o Looks like the queue_empty function is not required; we can check
  q->nr_sorted in the elevator layer to see whether the ioscheduler queues
  are empty or not.
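
o As an illustration of the split, here is a condensed sketch of the two
  deadline structures after this patch (field names are taken from the diff
  below; the AS and noop cases follow the same pattern, and the tunables
  shown are the pre-existing deadline ones, not new fields):

#include <linux/blkdev.h>	/* struct request, struct request_queue, sector_t */
#include <linux/list.h>
#include <linux/rbtree.h>

/* per sched queue: where the requests actually live */
struct deadline_queue {
	struct rb_root sort_list[2];	/* requests sorted by sector, per direction */
	struct list_head fifo_list[2];	/* same requests in expiry (FIFO) order */
	struct request *next_rq[2];	/* next request in sort order */
	unsigned int batching;		/* number of sequential requests made */
	unsigned int starved;		/* times reads have starved writes */
};

/* per elevator: state and tunables shared by all queues */
struct deadline_data {
	struct request_queue *q;	/* the "owner" request queue */
	sector_t last_sector;		/* disk head position */
	int fifo_expire[2];		/* read/write request deadlines (jiffies) */
	int fifo_batch;			/* # of sequential requests per batch */
	int writes_starved;		/* max times reads can starve a write */
	int front_merges;		/* whether front merges are attempted */
};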

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/as-iosched.c       |  208 ++++++++++++++++++++++++++--------------------
 block/deadline-iosched.c |  117 ++++++++++++++++----------
 block/elevator.c         |  111 +++++++++++++++++++++----
 block/noop-iosched.c     |   59 ++++++-------
 include/linux/elevator.h |    8 ++-
 5 files changed, 319 insertions(+), 184 deletions(-)
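
The elevator core changes tie the two allocations together at setup time. A
simplified sketch of the resulting flow, using the helpers this patch adds to
block/elevator.c (the wrapper name below is only for illustration and error
handling is abbreviated):

/* roughly what elevator_init() and elevator_switch() now do */
static int elevator_setup_sketch(struct request_queue *q,
				 struct elevator_queue *eq)
{
	void *data, *sched_queue;

	/* scheduler-wide data; stays NULL for noop, which has no elevator_init_fn */
	data = elevator_init_data(q, eq);
	if (IS_ERR(data))
		return -ENOMEM;

	/* the single sched queue; stays NULL for cfq, whose queues hang off rq->ioq */
	sched_queue = elevator_alloc_sched_queue(q, eq);
	if (IS_ERR(sched_queue))
		return -ENOMEM;

	/* store both in the elevator_queue: eq->elevator_data and eq->sched_queue */
	elevator_attach(q, eq, data, sched_queue);
	return 0;
}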

diff --git a/block/as-iosched.c b/block/as-iosched.c
index c48fa67..7158e13 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -76,13 +76,7 @@ enum anticipation_status {
 				 * or timed out */
 };
 
-struct as_data {
-	/*
-	 * run time data
-	 */
-
-	struct request_queue *q;	/* the "owner" queue */
-
+struct as_queue {
 	/*
 	 * requests (as_rq s) are present on both sort_list and fifo_list
 	 */
@@ -90,6 +84,14 @@ struct as_data {
 	struct list_head fifo_list[2];
 
 	struct request *next_rq[2];	/* next in sort order */
+	unsigned long last_check_fifo[2];
+	int write_batch_count;		/* max # of reqs in a write batch */
+	int current_write_count;	/* how many requests left this batch */
+	int write_batch_idled;		/* has the write batch gone idle? */
+};
+
+struct as_data {
+	struct request_queue *q;	/* the "owner" queue */
 	sector_t last_sector[2];	/* last SYNC & ASYNC sectors */
 
 	unsigned long exit_prob;	/* probability a task will exit while
@@ -103,21 +105,17 @@ struct as_data {
 	sector_t new_seek_mean;
 
 	unsigned long current_batch_expires;
-	unsigned long last_check_fifo[2];
 	int changed_batch;		/* 1: waiting for old batch to end */
 	int new_batch;			/* 1: waiting on first read complete */
-	int batch_data_dir;		/* current batch SYNC / ASYNC */
-	int write_batch_count;		/* max # of reqs in a write batch */
-	int current_write_count;	/* how many requests left this batch */
-	int write_batch_idled;		/* has the write batch gone idle? */
 
 	enum anticipation_status antic_status;
 	unsigned long antic_start;	/* jiffies: when it started */
 	struct timer_list antic_timer;	/* anticipatory scheduling timer */
-	struct work_struct antic_work;	/* Deferred unplugging */
+	struct work_struct antic_work;  /* Deferred unplugging */
 	struct io_context *io_context;	/* Identify the expected process */
 	int ioc_finished; /* IO associated with io_context is finished */
 	int nr_dispatched;
+	int batch_data_dir;		/* current batch SYNC / ASYNC */
 
 	/*
 	 * settings that change how the i/o scheduler behaves
@@ -258,13 +256,14 @@ static void as_put_io_context(struct request *rq)
 /*
  * rb tree support functions
  */
-#define RQ_RB_ROOT(ad, rq)	(&(ad)->sort_list[rq_is_sync((rq))])
+#define RQ_RB_ROOT(asq, rq)	(&(asq)->sort_list[rq_is_sync((rq))])
 
 static void as_add_rq_rb(struct as_data *ad, struct request *rq)
 {
 	struct request *alias;
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
-	while ((unlikely(alias = elv_rb_add(RQ_RB_ROOT(ad, rq), rq)))) {
+	while ((unlikely(alias = elv_rb_add(RQ_RB_ROOT(asq, rq), rq)))) {
 		as_move_to_dispatch(ad, alias);
 		as_antic_stop(ad);
 	}
@@ -272,7 +271,9 @@ static void as_add_rq_rb(struct as_data *ad, struct request *rq)
 
 static inline void as_del_rq_rb(struct as_data *ad, struct request *rq)
 {
-	elv_rb_del(RQ_RB_ROOT(ad, rq), rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
+
+	elv_rb_del(RQ_RB_ROOT(asq, rq), rq);
 }
 
 /*
@@ -366,7 +367,7 @@ as_choose_req(struct as_data *ad, struct request *rq1, struct request *rq2)
  * what request to process next. Anticipation works on top of this.
  */
 static struct request *
-as_find_next_rq(struct as_data *ad, struct request *last)
+as_find_next_rq(struct as_data *ad, struct as_queue *asq, struct request *last)
 {
 	struct rb_node *rbnext = rb_next(&last->rb_node);
 	struct rb_node *rbprev = rb_prev(&last->rb_node);
@@ -382,7 +383,7 @@ as_find_next_rq(struct as_data *ad, struct request *last)
 	else {
 		const int data_dir = rq_is_sync(last);
 
-		rbnext = rb_first(&ad->sort_list[data_dir]);
+		rbnext = rb_first(&asq->sort_list[data_dir]);
 		if (rbnext && rbnext != &last->rb_node)
 			next = rb_entry_rq(rbnext);
 	}
@@ -787,9 +788,10 @@ static int as_can_anticipate(struct as_data *ad, struct request *rq)
 static void as_update_rq(struct as_data *ad, struct request *rq)
 {
 	const int data_dir = rq_is_sync(rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	/* keep the next_rq cache up to date */
-	ad->next_rq[data_dir] = as_choose_req(ad, rq, ad->next_rq[data_dir]);
+	asq->next_rq[data_dir] = as_choose_req(ad, rq, asq->next_rq[data_dir]);
 
 	/*
 	 * have we been anticipating this request?
@@ -810,25 +812,26 @@ static void update_write_batch(struct as_data *ad)
 {
 	unsigned long batch = ad->batch_expire[BLK_RW_ASYNC];
 	long write_time;
+	struct as_queue *asq = elv_get_sched_queue(ad->q, NULL);
 
 	write_time = (jiffies - ad->current_batch_expires) + batch;
 	if (write_time < 0)
 		write_time = 0;
 
-	if (write_time > batch && !ad->write_batch_idled) {
+	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
-			ad->write_batch_count /= 2;
+			asq->write_batch_count /= 2;
 		else
-			ad->write_batch_count--;
-	} else if (write_time < batch && ad->current_write_count == 0) {
+			asq->write_batch_count--;
+	} else if (write_time < batch && asq->current_write_count == 0) {
 		if (batch > write_time * 3)
-			ad->write_batch_count *= 2;
+			asq->write_batch_count *= 2;
 		else
-			ad->write_batch_count++;
+			asq->write_batch_count++;
 	}
 
-	if (ad->write_batch_count < 1)
-		ad->write_batch_count = 1;
+	if (asq->write_batch_count < 1)
+		asq->write_batch_count = 1;
 }
 
 /*
@@ -899,6 +902,7 @@ static void as_remove_queued_request(struct request_queue *q,
 	const int data_dir = rq_is_sync(rq);
 	struct as_data *ad = q->elevator->elevator_data;
 	struct io_context *ioc;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	WARN_ON(RQ_STATE(rq) != AS_RQ_QUEUED);
 
@@ -912,8 +916,8 @@ static void as_remove_queued_request(struct request_queue *q,
 	 * Update the "next_rq" cache if we are about to remove its
 	 * entry
 	 */
-	if (ad->next_rq[data_dir] == rq)
-		ad->next_rq[data_dir] = as_find_next_rq(ad, rq);
+	if (asq->next_rq[data_dir] == rq)
+		asq->next_rq[data_dir] = as_find_next_rq(ad, asq, rq);
 
 	rq_fifo_clear(rq);
 	as_del_rq_rb(ad, rq);
@@ -927,23 +931,23 @@ static void as_remove_queued_request(struct request_queue *q,
  *
  * See as_antic_expired comment.
  */
-static int as_fifo_expired(struct as_data *ad, int adir)
+static int as_fifo_expired(struct as_data *ad, struct as_queue *asq, int adir)
 {
 	struct request *rq;
 	long delta_jif;
 
-	delta_jif = jiffies - ad->last_check_fifo[adir];
+	delta_jif = jiffies - asq->last_check_fifo[adir];
 	if (unlikely(delta_jif < 0))
 		delta_jif = -delta_jif;
 	if (delta_jif < ad->fifo_expire[adir])
 		return 0;
 
-	ad->last_check_fifo[adir] = jiffies;
+	asq->last_check_fifo[adir] = jiffies;
 
-	if (list_empty(&ad->fifo_list[adir]))
+	if (list_empty(&asq->fifo_list[adir]))
 		return 0;
 
-	rq = rq_entry_fifo(ad->fifo_list[adir].next);
+	rq = rq_entry_fifo(asq->fifo_list[adir].next);
 
 	return time_after(jiffies, rq_fifo_time(rq));
 }
@@ -952,7 +956,7 @@ static int as_fifo_expired(struct as_data *ad, int adir)
  * as_batch_expired returns true if the current batch has expired. A batch
  * is a set of reads or a set of writes.
  */
-static inline int as_batch_expired(struct as_data *ad)
+static inline int as_batch_expired(struct as_data *ad, struct as_queue *asq)
 {
 	if (ad->changed_batch || ad->new_batch)
 		return 0;
@@ -962,7 +966,7 @@ static inline int as_batch_expired(struct as_data *ad)
 		return time_after(jiffies, ad->current_batch_expires);
 
 	return time_after(jiffies, ad->current_batch_expires)
-		|| ad->current_write_count == 0;
+		|| asq->current_write_count == 0;
 }
 
 /*
@@ -971,6 +975,7 @@ static inline int as_batch_expired(struct as_data *ad)
 static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 {
 	const int data_dir = rq_is_sync(rq);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	BUG_ON(RB_EMPTY_NODE(&rq->rb_node));
 
@@ -993,12 +998,12 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 			ad->io_context = NULL;
 		}
 
-		if (ad->current_write_count != 0)
-			ad->current_write_count--;
+		if (asq->current_write_count != 0)
+			asq->current_write_count--;
 	}
 	ad->ioc_finished = 0;
 
-	ad->next_rq[data_dir] = as_find_next_rq(ad, rq);
+	asq->next_rq[data_dir] = as_find_next_rq(ad, asq, rq);
 
 	/*
 	 * take it off the sort and fifo list, add to dispatch queue
@@ -1022,9 +1027,16 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 static int as_dispatch_request(struct request_queue *q, int force)
 {
 	struct as_data *ad = q->elevator->elevator_data;
-	const int reads = !list_empty(&ad->fifo_list[BLK_RW_SYNC]);
-	const int writes = !list_empty(&ad->fifo_list[BLK_RW_ASYNC]);
 	struct request *rq;
+	struct as_queue *asq = elv_select_sched_queue(q, force);
+	int reads, writes;
+
+	if (!asq)
+		return 0;
+
+	reads = !list_empty(&asq->fifo_list[BLK_RW_SYNC]);
+	writes = !list_empty(&asq->fifo_list[BLK_RW_ASYNC]);
+
 
 	if (unlikely(force)) {
 		/*
@@ -1040,25 +1052,25 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		ad->changed_batch = 0;
 		ad->new_batch = 0;
 
-		while (ad->next_rq[BLK_RW_SYNC]) {
-			as_move_to_dispatch(ad, ad->next_rq[BLK_RW_SYNC]);
+		while (asq->next_rq[BLK_RW_SYNC]) {
+			as_move_to_dispatch(ad, asq->next_rq[BLK_RW_SYNC]);
 			dispatched++;
 		}
-		ad->last_check_fifo[BLK_RW_SYNC] = jiffies;
+		asq->last_check_fifo[BLK_RW_SYNC] = jiffies;
 
-		while (ad->next_rq[BLK_RW_ASYNC]) {
-			as_move_to_dispatch(ad, ad->next_rq[BLK_RW_ASYNC]);
+		while (asq->next_rq[BLK_RW_ASYNC]) {
+			as_move_to_dispatch(ad, asq->next_rq[BLK_RW_ASYNC]);
 			dispatched++;
 		}
-		ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
+		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
 		return dispatched;
 	}
 
 	/* Signal that the write batch was uncontended, so we can't time it */
 	if (ad->batch_data_dir == BLK_RW_ASYNC && !reads) {
-		if (ad->current_write_count == 0 || !writes)
-			ad->write_batch_idled = 1;
+		if (asq->current_write_count == 0 || !writes)
+			asq->write_batch_idled = 1;
 	}
 
 	if (!(reads || writes)
@@ -1067,14 +1079,14 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		|| ad->changed_batch)
 		return 0;
 
-	if (!(reads && writes && as_batch_expired(ad))) {
+	if (!(reads && writes && as_batch_expired(ad, asq))) {
 		/*
 		 * batch is still running or no reads or no writes
 		 */
-		rq = ad->next_rq[ad->batch_data_dir];
+		rq = asq->next_rq[ad->batch_data_dir];
 
 		if (ad->batch_data_dir == BLK_RW_SYNC && ad->antic_expire) {
-			if (as_fifo_expired(ad, BLK_RW_SYNC))
+			if (as_fifo_expired(ad, asq, BLK_RW_SYNC))
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
@@ -1098,7 +1110,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 */
 
 	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_SYNC]));
+		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
 
 		if (writes && ad->batch_data_dir == BLK_RW_SYNC)
 			/*
@@ -1111,8 +1123,8 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
-		rq = rq_entry_fifo(ad->fifo_list[BLK_RW_SYNC].next);
-		ad->last_check_fifo[ad->batch_data_dir] = jiffies;
+		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
+		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
 	}
 
@@ -1122,7 +1134,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 
 	if (writes) {
 dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_ASYNC]));
+		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_ASYNC]));
 
 		if (ad->batch_data_dir == BLK_RW_SYNC) {
 			ad->changed_batch = 1;
@@ -1135,10 +1147,10 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
-		ad->current_write_count = ad->write_batch_count;
-		ad->write_batch_idled = 0;
-		rq = rq_entry_fifo(ad->fifo_list[BLK_RW_ASYNC].next);
-		ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
+		asq->current_write_count = asq->write_batch_count;
+		asq->write_batch_idled = 0;
+		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
+		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 		goto dispatch_request;
 	}
 
@@ -1150,9 +1162,9 @@ dispatch_request:
 	 * If a request has expired, service it.
 	 */
 
-	if (as_fifo_expired(ad, ad->batch_data_dir)) {
+	if (as_fifo_expired(ad, asq, ad->batch_data_dir)) {
 fifo_expired:
-		rq = rq_entry_fifo(ad->fifo_list[ad->batch_data_dir].next);
+		rq = rq_entry_fifo(asq->fifo_list[ad->batch_data_dir].next);
 	}
 
 	if (ad->changed_batch) {
@@ -1185,6 +1197,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 {
 	struct as_data *ad = q->elevator->elevator_data;
 	int data_dir;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	RQ_SET_STATE(rq, AS_RQ_NEW);
 
@@ -1203,7 +1216,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 	 * set expire time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies + ad->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &ad->fifo_list[data_dir]);
+	list_add_tail(&rq->queuelist, &asq->fifo_list[data_dir]);
 
 	as_update_rq(ad, rq); /* keep state machine up to date */
 	RQ_SET_STATE(rq, AS_RQ_QUEUED);
@@ -1225,31 +1238,20 @@ static void as_deactivate_request(struct request_queue *q, struct request *rq)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 }
 
-/*
- * as_queue_empty tells us if there are requests left in the device. It may
- * not be the case that a driver can get the next request even if the queue
- * is not empty - it is used in the block layer to check for plugging and
- * merging opportunities
- */
-static int as_queue_empty(struct request_queue *q)
-{
-	struct as_data *ad = q->elevator->elevator_data;
-
-	return list_empty(&ad->fifo_list[BLK_RW_ASYNC])
-		&& list_empty(&ad->fifo_list[BLK_RW_SYNC]);
-}
-
 static int
 as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
-	struct as_data *ad = q->elevator->elevator_data;
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
+	struct as_queue *asq = elv_get_sched_queue_current(q);
+
+	if (!asq)
+		return ELEVATOR_NO_MERGE;
 
 	/*
 	 * check for front merge
 	 */
-	__rq = elv_rb_find(&ad->sort_list[bio_data_dir(bio)], rb_key);
+	__rq = elv_rb_find(&asq->sort_list[bio_data_dir(bio)], rb_key);
 	if (__rq && elv_rq_merge_ok(__rq, bio)) {
 		*req = __rq;
 		return ELEVATOR_FRONT_MERGE;
@@ -1336,6 +1338,41 @@ static int as_may_queue(struct request_queue *q, int rw)
 	return ret;
 }
 
+/* Called with queue lock held */
+static void *as_alloc_as_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
+{
+	struct as_queue *asq;
+	struct as_data *ad = eq->elevator_data;
+
+	asq = kmalloc_node(sizeof(*asq), gfp_mask | __GFP_ZERO, q->node);
+	if (asq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&asq->fifo_list[BLK_RW_SYNC]);
+	INIT_LIST_HEAD(&asq->fifo_list[BLK_RW_ASYNC]);
+	asq->sort_list[BLK_RW_SYNC] = RB_ROOT;
+	asq->sort_list[BLK_RW_ASYNC] = RB_ROOT;
+	if (ad)
+		asq->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
+	else
+		asq->write_batch_count = default_write_batch_expire / 10;
+
+	if (asq->write_batch_count < 2)
+		asq->write_batch_count = 2;
+out:
+	return asq;
+}
+
+static void as_free_as_queue(struct elevator_queue *e, void *sched_queue)
+{
+	struct as_queue *asq = sched_queue;
+
+	BUG_ON(!list_empty(&asq->fifo_list[BLK_RW_SYNC]));
+	BUG_ON(!list_empty(&asq->fifo_list[BLK_RW_ASYNC]));
+	kfree(asq);
+}
+
 static void as_exit_queue(struct elevator_queue *e)
 {
 	struct as_data *ad = e->elevator_data;
@@ -1343,9 +1380,6 @@ static void as_exit_queue(struct elevator_queue *e)
 	del_timer_sync(&ad->antic_timer);
 	cancel_work_sync(&ad->antic_work);
 
-	BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_SYNC]));
-	BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_ASYNC]));
-
 	put_io_context(ad->io_context);
 	kfree(ad);
 }
@@ -1369,10 +1403,6 @@ static void *as_init_queue(struct request_queue *q)
 	init_timer(&ad->antic_timer);
 	INIT_WORK(&ad->antic_work, as_work_handler);
 
-	INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_SYNC]);
-	INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_ASYNC]);
-	ad->sort_list[BLK_RW_SYNC] = RB_ROOT;
-	ad->sort_list[BLK_RW_ASYNC] = RB_ROOT;
 	ad->fifo_expire[BLK_RW_SYNC] = default_read_expire;
 	ad->fifo_expire[BLK_RW_ASYNC] = default_write_expire;
 	ad->antic_expire = default_antic_expire;
@@ -1380,9 +1410,6 @@ static void *as_init_queue(struct request_queue *q)
 	ad->batch_expire[BLK_RW_ASYNC] = default_write_batch_expire;
 
 	ad->current_batch_expires = jiffies + ad->batch_expire[BLK_RW_SYNC];
-	ad->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
-	if (ad->write_batch_count < 2)
-		ad->write_batch_count = 2;
 
 	return ad;
 }
@@ -1480,7 +1507,6 @@ static struct elevator_type iosched_as = {
 		.elevator_add_req_fn =		as_add_request,
 		.elevator_activate_req_fn =	as_activate_request,
 		.elevator_deactivate_req_fn = 	as_deactivate_request,
-		.elevator_queue_empty_fn =	as_queue_empty,
 		.elevator_completed_req_fn =	as_completed_request,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
@@ -1488,6 +1514,8 @@ static struct elevator_type iosched_as = {
 		.elevator_init_fn =		as_init_queue,
 		.elevator_exit_fn =		as_exit_queue,
 		.trim =				as_trim,
+		.elevator_alloc_sched_queue_fn = as_alloc_as_queue,
+		.elevator_free_sched_queue_fn = as_free_as_queue,
 	},
 
 	.elevator_attrs = as_attrs,
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index c4d991d..5e65041 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -23,25 +23,23 @@ static const int writes_starved = 2;    /* max times reads can starve a write */
 static const int fifo_batch = 16;       /* # of sequential requests treated as one
 				     by the above parameters. For throughput. */
 
-struct deadline_data {
-	/*
-	 * run time data
-	 */
-
+struct deadline_queue {
 	/*
 	 * requests (deadline_rq s) are present on both sort_list and fifo_list
 	 */
-	struct rb_root sort_list[2];	
+	struct rb_root sort_list[2];
 	struct list_head fifo_list[2];
-
 	/*
 	 * next in sort order. read, write or both are NULL
 	 */
 	struct request *next_rq[2];
 	unsigned int batching;		/* number of sequential requests made */
-	sector_t last_sector;		/* head position */
 	unsigned int starved;		/* times reads have starved writes */
+};
 
+struct deadline_data {
+	struct request_queue *q;
+	sector_t last_sector;		/* head position */
 	/*
 	 * settings that change how the i/o scheduler behaves
 	 */
@@ -56,7 +54,9 @@ static void deadline_move_request(struct deadline_data *, struct request *);
 static inline struct rb_root *
 deadline_rb_root(struct deadline_data *dd, struct request *rq)
 {
-	return &dd->sort_list[rq_data_dir(rq)];
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
+
+	return &dq->sort_list[rq_data_dir(rq)];
 }
 
 /*
@@ -87,9 +87,10 @@ static inline void
 deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 {
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
 
-	if (dd->next_rq[data_dir] == rq)
-		dd->next_rq[data_dir] = deadline_latter_request(rq);
+	if (dq->next_rq[data_dir] == rq)
+		dq->next_rq[data_dir] = deadline_latter_request(rq);
 
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
@@ -102,6 +103,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(q, rq);
 
 	deadline_add_rq_rb(dd, rq);
 
@@ -109,7 +111,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 	 * set expire time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies + dd->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
+	list_add_tail(&rq->queuelist, &dq->fifo_list[data_dir]);
 }
 
 /*
@@ -129,6 +131,11 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct request *__rq;
 	int ret;
+	struct deadline_queue *dq;
+
+	dq = elv_get_sched_queue_current(q);
+	if (!dq)
+		return ELEVATOR_NO_MERGE;
 
 	/*
 	 * check for front merge
@@ -136,7 +143,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	if (dd->front_merges) {
 		sector_t sector = bio->bi_sector + bio_sectors(bio);
 
-		__rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector);
+		__rq = elv_rb_find(&dq->sort_list[bio_data_dir(bio)], sector);
 		if (__rq) {
 			BUG_ON(sector != __rq->sector);
 
@@ -207,10 +214,11 @@ static void
 deadline_move_request(struct deadline_data *dd, struct request *rq)
 {
 	const int data_dir = rq_data_dir(rq);
+	struct deadline_queue *dq = elv_get_sched_queue(dd->q, rq);
 
-	dd->next_rq[READ] = NULL;
-	dd->next_rq[WRITE] = NULL;
-	dd->next_rq[data_dir] = deadline_latter_request(rq);
+	dq->next_rq[READ] = NULL;
+	dq->next_rq[WRITE] = NULL;
+	dq->next_rq[data_dir] = deadline_latter_request(rq);
 
 	dd->last_sector = rq_end_sector(rq);
 
@@ -225,9 +233,9 @@ deadline_move_request(struct deadline_data *dd, struct request *rq)
  * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
  * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
  */
-static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
+static inline int deadline_check_fifo(struct deadline_queue *dq, int ddir)
 {
-	struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
+	struct request *rq = rq_entry_fifo(dq->fifo_list[ddir].next);
 
 	/*
 	 * rq is expired!
@@ -245,20 +253,26 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int reads = !list_empty(&dd->fifo_list[READ]);
-	const int writes = !list_empty(&dd->fifo_list[WRITE]);
+	struct deadline_queue *dq = elv_select_sched_queue(q, force);
+	int reads, writes;
 	struct request *rq;
 	int data_dir;
 
+	if (!dq)
+		return 0;
+
+	reads = !list_empty(&dq->fifo_list[READ]);
+	writes = !list_empty(&dq->fifo_list[WRITE]);
+
 	/*
 	 * batches are currently reads XOR writes
 	 */
-	if (dd->next_rq[WRITE])
-		rq = dd->next_rq[WRITE];
+	if (dq->next_rq[WRITE])
+		rq = dq->next_rq[WRITE];
 	else
-		rq = dd->next_rq[READ];
+		rq = dq->next_rq[READ];
 
-	if (rq && dd->batching < dd->fifo_batch)
+	if (rq && dq->batching < dd->fifo_batch)
 		/* we have a next request are still entitled to batch */
 		goto dispatch_request;
 
@@ -268,9 +282,9 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 */
 
 	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
+		BUG_ON(RB_EMPTY_ROOT(&dq->sort_list[READ]));
 
-		if (writes && (dd->starved++ >= dd->writes_starved))
+		if (writes && (dq->starved++ >= dd->writes_starved))
 			goto dispatch_writes;
 
 		data_dir = READ;
@@ -284,9 +298,9 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 
 	if (writes) {
 dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
+		BUG_ON(RB_EMPTY_ROOT(&dq->sort_list[WRITE]));
 
-		dd->starved = 0;
+		dq->starved = 0;
 
 		data_dir = WRITE;
 
@@ -299,48 +313,62 @@ dispatch_find_request:
 	/*
 	 * we are not running a batch, find best request for selected data_dir
 	 */
-	if (deadline_check_fifo(dd, data_dir) || !dd->next_rq[data_dir]) {
+	if (deadline_check_fifo(dq, data_dir) || !dq->next_rq[data_dir]) {
 		/*
 		 * A deadline has expired, the last request was in the other
 		 * direction, or we have run out of higher-sectored requests.
 		 * Start again from the request with the earliest expiry time.
 		 */
-		rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
+		rq = rq_entry_fifo(dq->fifo_list[data_dir].next);
 	} else {
 		/*
 		 * The last req was the same dir and we have a next request in
 		 * sort order. No expired requests so continue on from here.
 		 */
-		rq = dd->next_rq[data_dir];
+		rq = dq->next_rq[data_dir];
 	}
 
-	dd->batching = 0;
+	dq->batching = 0;
 
 dispatch_request:
 	/*
 	 * rq is the selected appropriate request.
 	 */
-	dd->batching++;
+	dq->batching++;
 	deadline_move_request(dd, rq);
 
 	return 1;
 }
 
-static int deadline_queue_empty(struct request_queue *q)
+static void *deadline_alloc_deadline_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_queue *dq;
 
-	return list_empty(&dd->fifo_list[WRITE])
-		&& list_empty(&dd->fifo_list[READ]);
+	dq = kmalloc_node(sizeof(*dq), gfp_mask | __GFP_ZERO, q->node);
+	if (dq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&dq->fifo_list[READ]);
+	INIT_LIST_HEAD(&dq->fifo_list[WRITE]);
+	dq->sort_list[READ] = RB_ROOT;
+	dq->sort_list[WRITE] = RB_ROOT;
+out:
+	return dq;
+}
+
+static void deadline_free_deadline_queue(struct elevator_queue *e,
+						void *sched_queue)
+{
+	struct deadline_queue *dq = sched_queue;
+
+	kfree(dq);
 }
 
 static void deadline_exit_queue(struct elevator_queue *e)
 {
 	struct deadline_data *dd = e->elevator_data;
 
-	BUG_ON(!list_empty(&dd->fifo_list[READ]));
-	BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
-
 	kfree(dd);
 }
 
@@ -355,10 +383,7 @@ static void *deadline_init_queue(struct request_queue *q)
 	if (!dd)
 		return NULL;
 
-	INIT_LIST_HEAD(&dd->fifo_list[READ]);
-	INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
-	dd->sort_list[READ] = RB_ROOT;
-	dd->sort_list[WRITE] = RB_ROOT;
+	dd->q = q;
 	dd->fifo_expire[READ] = read_expire;
 	dd->fifo_expire[WRITE] = write_expire;
 	dd->writes_starved = writes_starved;
@@ -445,13 +470,13 @@ static struct elevator_type iosched_deadline = {
 		.elevator_merge_req_fn =	deadline_merged_requests,
 		.elevator_dispatch_fn =		deadline_dispatch_requests,
 		.elevator_add_req_fn =		deadline_add_request,
-		.elevator_queue_empty_fn =	deadline_queue_empty,
 		.elevator_former_req_fn =	elv_rb_former_request,
 		.elevator_latter_req_fn =	elv_rb_latter_request,
 		.elevator_init_fn =		deadline_init_queue,
 		.elevator_exit_fn =		deadline_exit_queue,
+		.elevator_alloc_sched_queue_fn = deadline_alloc_deadline_queue,
+		.elevator_free_sched_queue_fn = deadline_free_deadline_queue,
 	},
-
 	.elevator_attrs = deadline_attrs,
 	.elevator_name = "deadline",
 	.elevator_owner = THIS_MODULE,
diff --git a/block/elevator.c b/block/elevator.c
index 3944385..67a0601 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -180,17 +180,54 @@ static struct elevator_type *elevator_get(const char *name)
 	return e;
 }
 
-static void *elevator_init_queue(struct request_queue *q,
-				 struct elevator_queue *eq)
+static void *elevator_init_data(struct request_queue *q,
+					struct elevator_queue *eq)
 {
-	return eq->ops->elevator_init_fn(q);
+	void *data = NULL;
+
+	if (eq->ops->elevator_init_fn) {
+		data = eq->ops->elevator_init_fn(q);
+		if (data)
+			return data;
+		else
+			return ERR_PTR(-ENOMEM);
+	}
+
+	/* IO scheduler does not instantiate data (noop), it is not an error */
+	return NULL;
+}
+
+static void elevator_free_sched_queue(struct elevator_queue *eq,
+						void *sched_queue)
+{
+	/* Not all io schedulers store a sched_queue (e.g. cfq does not) */
+	if (!sched_queue)
+		return;
+	eq->ops->elevator_free_sched_queue_fn(eq, sched_queue);
+}
+
+static void *elevator_alloc_sched_queue(struct request_queue *q,
+					struct elevator_queue *eq)
+{
+	void *sched_queue = NULL;
+
+	if (eq->ops->elevator_alloc_sched_queue_fn) {
+		sched_queue = eq->ops->elevator_alloc_sched_queue_fn(q, eq,
+								GFP_KERNEL);
+		if (!sched_queue)
+			return ERR_PTR(-ENOMEM);
+
+	}
+
+	return sched_queue;
 }
 
 static void elevator_attach(struct request_queue *q, struct elevator_queue *eq,
-			   void *data)
+			   void *data, void *sched_queue)
 {
 	q->elevator = eq;
 	eq->elevator_data = data;
+	eq->sched_queue = sched_queue;
 }
 
 static char chosen_elevator[16];
@@ -260,7 +297,7 @@ int elevator_init(struct request_queue *q, char *name)
 	struct elevator_type *e = NULL;
 	struct elevator_queue *eq;
 	int ret = 0;
-	void *data;
+	void *data = NULL, *sched_queue = NULL;
 
 	INIT_LIST_HEAD(&q->queue_head);
 	q->last_merge = NULL;
@@ -294,13 +331,21 @@ int elevator_init(struct request_queue *q, char *name)
 	if (!eq)
 		return -ENOMEM;
 
-	data = elevator_init_queue(q, eq);
-	if (!data) {
+	data = elevator_init_data(q, eq);
+
+	if (IS_ERR(data)) {
+		kobject_put(&eq->kobj);
+		return -ENOMEM;
+	}
+
+	sched_queue = elevator_alloc_sched_queue(q, eq);
+
+	if (IS_ERR(sched_queue)) {
 		kobject_put(&eq->kobj);
 		return -ENOMEM;
 	}
 
-	elevator_attach(q, eq, data);
+	elevator_attach(q, eq, data, sched_queue);
 	return ret;
 }
 EXPORT_SYMBOL(elevator_init);
@@ -308,6 +353,7 @@ EXPORT_SYMBOL(elevator_init);
 void elevator_exit(struct elevator_queue *e)
 {
 	mutex_lock(&e->sysfs_lock);
+	elevator_free_sched_queue(e, e->sched_queue);
 	elv_exit_fq_data(e);
 	if (e->ops->elevator_exit_fn)
 		e->ops->elevator_exit_fn(e);
@@ -1121,7 +1167,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
 static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 {
 	struct elevator_queue *old_elevator, *e;
-	void *data;
+	void *data = NULL, *sched_queue = NULL;
 
 	/*
 	 * Allocate new elevator
@@ -1130,10 +1176,18 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	if (!e)
 		return 0;
 
-	data = elevator_init_queue(q, e);
-	if (!data) {
+	data = elevator_init_data(q, e);
+
+	if (IS_ERR(data)) {
 		kobject_put(&e->kobj);
-		return 0;
+		return -ENOMEM;
+	}
+
+	sched_queue = elevator_alloc_sched_queue(q, e);
+
+	if (IS_ERR(sched_queue)) {
+		kobject_put(&e->kobj);
+		return -ENOMEM;
 	}
 
 	/*
@@ -1150,7 +1204,7 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	/*
 	 * attach and start new elevator
 	 */
-	elevator_attach(q, e, data);
+	elevator_attach(q, e, data, sched_queue);
 
 	spin_unlock_irq(q->queue_lock);
 
@@ -1257,16 +1311,43 @@ struct request *elv_rb_latter_request(struct request_queue *q,
 }
 EXPORT_SYMBOL(elv_rb_latter_request);
 
-/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
+/* Get the io scheduler queue pointer. */
 void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
 {
-	return ioq_sched_queue(rq_ioq(rq));
+	/*
+	 * io scheduler is not using fair queuing. Return sched_queue
+	 * pointer stored in elevator_queue. It will be null if io
+	 * scheduler never stored anything there to begin with (cfq)
+	 */
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
+	/*
+	 * IO scheduler is using the fair queuing infrastructure. If the io
+	 * scheduler has passed a non-null rq, retrieve the sched_queue
+	 * pointer from there. */
+	if (rq)
+		return ioq_sched_queue(rq_ioq(rq));
+
+	return NULL;
 }
 EXPORT_SYMBOL(elv_get_sched_queue);
 
 /* Select an ioscheduler queue to dispatch request from. */
 void *elv_select_sched_queue(struct request_queue *q, int force)
 {
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
 	return ioq_sched_queue(elv_fq_select_ioq(q, force));
 }
 EXPORT_SYMBOL(elv_select_sched_queue);
+
+/*
+ * Get the io scheduler queue pointer for current task.
+ */
+void *elv_get_sched_queue_current(struct request_queue *q)
+{
+	return q->elevator->sched_queue;
+}
+EXPORT_SYMBOL(elv_get_sched_queue_current);
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index 3a0d369..d587832 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -7,7 +7,7 @@
 #include <linux/module.h>
 #include <linux/init.h>
 
-struct noop_data {
+struct noop_queue {
 	struct list_head queue;
 };
 
@@ -19,11 +19,14 @@ static void noop_merged_requests(struct request_queue *q, struct request *rq,
 
 static int noop_dispatch(struct request_queue *q, int force)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_select_sched_queue(q, force);
 
-	if (!list_empty(&nd->queue)) {
+	if (!nq)
+		return 0;
+
+	if (!list_empty(&nq->queue)) {
 		struct request *rq;
-		rq = list_entry(nd->queue.next, struct request, queuelist);
+		rq = list_entry(nq->queue.next, struct request, queuelist);
 		list_del_init(&rq->queuelist);
 		elv_dispatch_sort(q, rq);
 		return 1;
@@ -33,24 +36,17 @@ static int noop_dispatch(struct request_queue *q, int force)
 
 static void noop_add_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	list_add_tail(&rq->queuelist, &nd->queue);
-}
-
-static int noop_queue_empty(struct request_queue *q)
-{
-	struct noop_data *nd = q->elevator->elevator_data;
-
-	return list_empty(&nd->queue);
+	list_add_tail(&rq->queuelist, &nq->queue);
 }
 
 static struct request *
 noop_former_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	if (rq->queuelist.prev == &nd->queue)
+	if (rq->queuelist.prev == &nq->queue)
 		return NULL;
 	return list_entry(rq->queuelist.prev, struct request, queuelist);
 }
@@ -58,30 +54,32 @@ noop_former_request(struct request_queue *q, struct request *rq)
 static struct request *
 noop_latter_request(struct request_queue *q, struct request *rq)
 {
-	struct noop_data *nd = q->elevator->elevator_data;
+	struct noop_queue *nq = elv_get_sched_queue(q, rq);
 
-	if (rq->queuelist.next == &nd->queue)
+	if (rq->queuelist.next == &nq->queue)
 		return NULL;
 	return list_entry(rq->queuelist.next, struct request, queuelist);
 }
 
-static void *noop_init_queue(struct request_queue *q)
+static void *noop_alloc_noop_queue(struct request_queue *q,
+				struct elevator_queue *eq, gfp_t gfp_mask)
 {
-	struct noop_data *nd;
+	struct noop_queue *nq;
 
-	nd = kmalloc_node(sizeof(*nd), GFP_KERNEL, q->node);
-	if (!nd)
-		return NULL;
-	INIT_LIST_HEAD(&nd->queue);
-	return nd;
+	nq = kmalloc_node(sizeof(*nq), gfp_mask | __GFP_ZERO, q->node);
+	if (nq == NULL)
+		goto out;
+
+	INIT_LIST_HEAD(&nq->queue);
+out:
+	return nq;
 }
 
-static void noop_exit_queue(struct elevator_queue *e)
+static void noop_free_noop_queue(struct elevator_queue *e, void *sched_queue)
 {
-	struct noop_data *nd = e->elevator_data;
+	struct noop_queue *nq = sched_queue;
 
-	BUG_ON(!list_empty(&nd->queue));
-	kfree(nd);
+	kfree(nq);
 }
 
 static struct elevator_type elevator_noop = {
@@ -89,11 +87,10 @@ static struct elevator_type elevator_noop = {
 		.elevator_merge_req_fn		= noop_merged_requests,
 		.elevator_dispatch_fn		= noop_dispatch,
 		.elevator_add_req_fn		= noop_add_request,
-		.elevator_queue_empty_fn	= noop_queue_empty,
 		.elevator_former_req_fn		= noop_former_request,
 		.elevator_latter_req_fn		= noop_latter_request,
-		.elevator_init_fn		= noop_init_queue,
-		.elevator_exit_fn		= noop_exit_queue,
+		.elevator_alloc_sched_queue_fn	= noop_alloc_noop_queue,
+		.elevator_free_sched_queue_fn	= noop_free_noop_queue,
 	},
 	.elevator_name = "noop",
 	.elevator_owner = THIS_MODULE,
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 679c149..3729a2f 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -30,8 +30,9 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
-#ifdef CONFIG_ELV_FAIR_QUEUING
+typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t);
 typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
+#ifdef CONFIG_ELV_FAIR_QUEUING
 typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
 typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
 typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
@@ -70,8 +71,9 @@ struct elevator_ops
 	elevator_exit_fn *elevator_exit_fn;
 	void (*trim)(struct io_context *);
 
-#ifdef CONFIG_ELV_FAIR_QUEUING
+	elevator_alloc_sched_queue_fn *elevator_alloc_sched_queue_fn;
 	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
+#ifdef CONFIG_ELV_FAIR_QUEUING
 	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
 	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
 
@@ -112,6 +114,7 @@ struct elevator_queue
 {
 	struct elevator_ops *ops;
 	void *elevator_data;
+	void *sched_queue;
 	struct kobject kobj;
 	struct elevator_type *elevator_type;
 	struct mutex sysfs_lock;
@@ -260,5 +263,6 @@ static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
+extern void *elv_get_sched_queue_current(struct request_queue *q);
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 10/20] io-controller: Prepare elevator layer for single queue schedulers
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (8 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 09/20] io-controller: Separate out queue and data Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing Vivek Goyal
                     ` (11 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

The elevator layer now has support for hierarchical fair queuing. cfq has
been migrated to make use of it, and now it is time to do the groundwork
for noop, deadline and AS.

noop, deadline and AS don't maintain separate queues for different
processes; there is only a single queue. Effectively, one can think of a
hierarchical setup as having one queue per cgroup, with requests from all
the processes in the cgroup queued there.

Generally the io scheduler takes care of creating queues. Because there is
only one queue here, we have modified the common layer to take care of
queue creation and some other functionality. This special-casing helps
keep the changes to noop, deadline and AS to a minimum.
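
As an illustration only (this sketch is not part of the patch itself), a
single-ioq io scheduler is expected to advertise itself to the elevator
layer roughly as below. The iosched_example/example_* names are
placeholders; the actual conversions of noop, deadline and AS are done in
the follow-up patches of this series.

static void *example_alloc_queue(struct request_queue *q,
		struct elevator_queue *eq, gfp_t gfp_mask,
		struct io_queue *ioq);
static void example_free_queue(struct elevator_queue *e, void *sched_queue);

static struct elevator_type iosched_example = {
	.ops = {
		/*
		 * The elevator layer allocates/frees the single per-group
		 * queue through these two hooks (elv_fq_set_request_ioq()).
		 */
		.elevator_alloc_sched_queue_fn	= example_alloc_queue,
		.elevator_free_sched_queue_fn	= example_free_queue,
	},
	/*
	 * NEED_FQ: use the elevator fair queuing infrastructure.
	 * SINGLE_IOQ: one ioq per io group; the common layer creates it
	 * from elv_set_request() instead of the io scheduler doing it.
	 */
	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
	.elevator_name	= "example",
	.elevator_owner	= THIS_MODULE,
};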

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/as-iosched.c       |    2 +-
 block/deadline-iosched.c |    2 +-
 block/elevator-fq.c      |  206 +++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h      |   70 ++++++++++++++++
 block/elevator.c         |   37 ++++++++-
 block/noop-iosched.c     |    2 +-
 include/linux/elevator.h |   16 ++++-
 7 files changed, 327 insertions(+), 8 deletions(-)

diff --git a/block/as-iosched.c b/block/as-iosched.c
index 7158e13..3aa54a8 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1340,7 +1340,7 @@ static int as_may_queue(struct request_queue *q, int rw)
 
 /* Called with queue lock held */
 static void *as_alloc_as_queue(struct request_queue *q,
-				struct elevator_queue *eq, gfp_t gfp_mask)
+		struct elevator_queue *eq, gfp_t gfp_mask, struct io_queue *ioq)
 {
 	struct as_queue *asq;
 	struct as_data *ad = eq->elevator_data;
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 5e65041..3a195ce 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -341,7 +341,7 @@ dispatch_request:
 }
 
 static void *deadline_alloc_deadline_queue(struct request_queue *q,
-				struct elevator_queue *eq, gfp_t gfp_mask)
+		struct elevator_queue *eq, gfp_t gfp_mask, struct io_queue *ioq)
 {
 	struct deadline_queue *dq;
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index cde2155..5711a6d 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -72,7 +72,6 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 void elv_activate_ioq(struct io_queue *ioq, int add_front);
 void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 					int requeue);
-
 static int bfq_update_next_active(struct io_sched_data *sd)
 {
 	struct io_group *iog;
@@ -1022,6 +1021,12 @@ void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
 
 	/* Free up async idle queue */
 	elv_release_ioq(e, &iog->async_idle_queue);
+
+#ifdef CONFIG_GROUP_IOSCHED
+	/* Optimization for io schedulers having single ioq */
+	if (elv_iosched_single_ioq(e))
+		elv_release_ioq(e, &iog->ioq);
+#endif
 }
 
 /*
@@ -1048,6 +1053,14 @@ struct io_cgroup io_root_cgroup = {
 	.ioprio_class = IO_DEFAULT_GRP_CLASS,
 };
 
+static inline int is_only_root_group(void)
+{
+	if (list_empty(&io_root_cgroup.css.cgroup->children))
+		return 1;
+
+	return 0;
+}
+
 void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 {
 	entity->ioprio = entity->new_ioprio;
@@ -1859,6 +1872,153 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 	return (iog == __iog);
 }
 
+/*
+ * Find/create the io queue the rq should go in. This is an optimization
+ * for the io schedulers (noop, deadline and AS) which maintain only a
+ * single io queue per cgroup. In this case the common layer can simply
+ * maintain a pointer in the group data structure and keep track of it.
+ *
+ * For io schedulers like cfq, which maintain multiple io queues per
+ * cgroup and decide the io queue of a request based on the process, this
+ * function is not invoked.
+ */
+int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
+					gfp_t gfp_mask)
+{
+	struct elevator_queue *e = q->elevator;
+	unsigned long flags;
+	struct io_queue *ioq = NULL, *new_ioq = NULL;
+	struct io_group *iog;
+	void *sched_q = NULL, *new_sched_q = NULL;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return 0;
+
+	might_sleep_if(gfp_mask & __GFP_WAIT);
+	spin_lock_irqsave(q->queue_lock, flags);
+
+retry:
+	/* Determine the io group the request belongs to */
+	iog = io_get_io_group(q, 1);
+	BUG_ON(!iog);
+
+	/* Get the iosched queue */
+	ioq = io_group_ioq(iog);
+	if (!ioq) {
+		/* io queue and sched_queue need to be allocated */
+		BUG_ON(!e->ops->elevator_alloc_sched_queue_fn);
+
+		if (new_ioq) {
+			goto alloc_sched_q;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			new_ioq = elv_alloc_ioq(q, gfp_mask | __GFP_NOFAIL
+							| __GFP_ZERO);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			ioq = elv_alloc_ioq(q, gfp_mask | __GFP_ZERO);
+			if (!ioq)
+				goto queue_fail;
+		}
+
+alloc_sched_q:
+		if (new_sched_q) {
+			ioq = new_ioq;
+			new_ioq = NULL;
+			sched_q = new_sched_q;
+			new_sched_q = NULL;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			/* Call io scheduler to create the scheduler queue */
+			new_sched_q = e->ops->elevator_alloc_sched_queue_fn(q,
+					e, gfp_mask | __GFP_NOFAIL
+					| __GFP_ZERO, new_ioq);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			sched_q = e->ops->elevator_alloc_sched_queue_fn(q, e,
+						gfp_mask | __GFP_ZERO, ioq);
+			if (!sched_q) {
+				elv_free_ioq(ioq);
+				goto queue_fail;
+			}
+		}
+
+		elv_init_ioq(e, ioq, iog, sched_q, IOPRIO_CLASS_BE,
+					IOPRIO_NORM, 1);
+		io_group_set_ioq(iog, ioq);
+		elv_mark_ioq_sync(ioq);
+		elv_get_iog(iog);
+	}
+
+	if (new_sched_q)
+		e->ops->elevator_free_sched_queue_fn(q->elevator, new_sched_q);
+
+	if (new_ioq)
+		elv_free_ioq(new_ioq);
+
+	/* Request reference */
+	elv_get_ioq(ioq);
+	rq->ioq = ioq;
+	spin_unlock_irqrestore(q->queue_lock, flags);
+	return 0;
+
+queue_fail:
+	WARN_ON((gfp_mask & __GFP_WAIT) && !ioq);
+	elv_schedule_dispatch(q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+	return 1;
+}
+
+/*
+ * Find out the io queue of current task. Optimization for single ioq
+ * per io group io schedulers.
+ */
+struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+{
+	struct io_group *iog;
+
+	/* Determine the io group and io queue of the bio submitting task */
+	iog = io_get_io_group(q, 0);
+	if (!iog) {
+		/* The task may belong to a cgroup for which the io group
+		 * has not been set up yet. */
+		return NULL;
+	}
+	return io_group_ioq(iog);
+}
+
+/*
+ * This request has been serviced. Clean up ioq info and drop the reference.
+ * Again this is called only for single queue per cgroup schedulers (noop,
+ * deadline, AS).
+ */
+void elv_fq_unset_request_ioq(struct request_queue *q, struct request *rq)
+{
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	if (ioq) {
+		rq->ioq = NULL;
+		elv_put_ioq(ioq);
+	}
+}
+
 #else /* GROUP_IOSCHED */
 void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 {
@@ -1904,6 +2064,11 @@ struct io_group *io_get_io_group(struct request_queue *q, int create)
 	return q->elevator->efqd.root_group;
 }
 EXPORT_SYMBOL(io_get_io_group);
+
+static inline int is_only_root_group(void)
+{
+	return 1;
+}
 #endif /* CONFIG_GROUP_IOSCHED*/
 
 /* Elevator fair queuing function */
@@ -2200,7 +2365,12 @@ int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
 	ioq->efqd = efqd;
 	elv_ioq_set_ioprio_class(ioq, ioprio_class);
 	elv_ioq_set_ioprio(ioq, ioprio);
-	ioq->pid = current->pid;
+
+	if (elv_iosched_single_ioq(eq))
+		ioq->pid = 0;
+	else
+		ioq->pid = current->pid;
+
 	ioq->sched_queue = sched_queue;
 	if (is_sync && !elv_ioq_class_idle(ioq))
 		elv_mark_ioq_idle_window(ioq);
@@ -2579,6 +2749,14 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	struct io_entity *entity, *new_entity;
 	struct io_group *iog = NULL, *new_iog = NULL;
 
+	/*
+	 * Currently only CFQ has preemption logic. Other schedulers don't
+	 * have any notion of preemption across classes or preemption within
+	 * a class, etc.
+	 */
+	if (elv_iosched_single_ioq(eq))
+		return 0;
+
 	ioq = elv_active_ioq(eq);
 
 	if (!ioq)
@@ -2835,6 +3013,17 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 			goto expire;
 	}
 
+	/*
+	 * If there is only root group present, don't expire the queue for
+	 * single queue ioschedulers (noop, deadline, AS). It is unnecessary
+	 * overhead.
+	 */
+
+	if (is_only_root_group() && elv_iosched_single_ioq(q->elevator)) {
+		elv_log_ioq(efqd, ioq, "select: only root group, no expiry");
+		goto keep_queue;
+	}
+
 	/* We are waiting for this queue to become busy before it expires.*/
 	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
 		ioq = NULL;
@@ -3084,6 +3273,19 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		}
 
 		/*
+		 * If there is only root group present, don't expire the queue
+		 * for single queue ioschedulers (noop, deadline, AS). It is
+		 * unnecessary overhead.
+		 */
+
+		if (is_only_root_group() &&
+			elv_iosched_single_ioq(q->elevator)) {
+			elv_log_ioq(efqd, ioq, "select: only root group,"
+					" no expiry");
+			goto done;
+		}
+
+		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
 		 * those other queues are issuing requests within our
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index e13999e..7281451 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -254,6 +254,9 @@ struct io_group {
 
 	/* The device MKDEV(major, minor), this group has been created for */
 	dev_t	dev;
+
+	/* Single ioq per group, used for noop, deadline, anticipatory */
+	struct io_queue *ioq;
 };
 
 /**
@@ -365,6 +368,8 @@ enum elv_queue_state_flags {
 	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
 	ELV_QUEUE_FLAG_wait_busy,	  /* wait for this queue to get busy */
 	ELV_QUEUE_FLAG_wait_busy_done,	  /* Have already waited on this queue*/
+	ELV_QUEUE_FLAG_must_expire,       /* Expire this queue even if it has
+					   * request and time slice left */
 	ELV_QUEUE_FLAG_NR,
 };
 
@@ -390,6 +395,7 @@ ELV_IO_QUEUE_FLAG_FNS(idle_window)
 ELV_IO_QUEUE_FLAG_FNS(slice_new)
 ELV_IO_QUEUE_FLAG_FNS(wait_busy)
 ELV_IO_QUEUE_FLAG_FNS(wait_busy_done)
+ELV_IO_QUEUE_FLAG_FNS(must_expire)
 
 static inline struct io_service_tree *
 io_entity_service_tree(struct io_entity *entity)
@@ -522,6 +528,28 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 	return requeue;
 }
 
+extern int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
+					gfp_t gfp_mask);
+extern void elv_fq_unset_request_ioq(struct request_queue *q,
+					struct request *rq);
+extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
+
+/* Returns single ioq associated with the io group. */
+static inline struct io_queue *io_group_ioq(struct io_group *iog)
+{
+	BUG_ON(!iog);
+	return iog->ioq;
+}
+
+/* Sets the single ioq associated with the io group. (noop, deadline, AS) */
+static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
+{
+	BUG_ON(!iog);
+	/* io group reference. Will be dropped when group is destroyed. */
+	elv_get_ioq(ioq);
+	iog->ioq = ioq;
+}
+
 #else /* !GROUP_IOSCHED */
 static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 {
@@ -551,6 +579,32 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 	return requeue;
 }
 
+/* Returns single ioq associated with the io group. */
+static inline struct io_queue *io_group_ioq(struct io_group *iog)
+{
+	return NULL;
+}
+
+static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
+{
+}
+
+static inline int elv_fq_set_request_ioq(struct request_queue *q,
+					struct request *rq, gfp_t gfp_mask)
+{
+	return 0;
+}
+
+static inline void elv_fq_unset_request_ioq(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
@@ -662,5 +716,21 @@ static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 {
 	return 1;
 }
+static inline int elv_fq_set_request_ioq(struct request_queue *q,
+					struct request *rq, gfp_t gfp_mask)
+{
+	return 0;
+}
+
+static inline void elv_fq_unset_request_ioq(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index 67a0601..de42fd6 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -211,9 +211,17 @@ static void *elevator_alloc_sched_queue(struct request_queue *q,
 {
 	void *sched_queue = NULL;
 
+	/*
+	 * If fair queuing is enabled, then queue allocation takes place
+	 * during set_request() functions when request actually comes
+	 * in.
+	 */
+	if (elv_iosched_fair_queuing_enabled(eq))
+		return NULL;
+
 	if (eq->ops->elevator_alloc_sched_queue_fn) {
 		sched_queue = eq->ops->elevator_alloc_sched_queue_fn(q, eq,
-								GFP_KERNEL);
+							GFP_KERNEL, NULL);
 		if (!sched_queue)
 			return ERR_PTR(-ENOMEM);
 
@@ -963,6 +971,13 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 
+	/*
+	 * Optimization for noop, deadline and AS which maintain only single
+	 * ioq per io group
+	 */
+	if (elv_iosched_single_ioq(e))
+		return elv_fq_set_request_ioq(q, rq, gfp_mask);
+
 	if (e->ops->elevator_set_req_fn)
 		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
 
@@ -974,6 +989,15 @@ void elv_put_request(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	/*
+	 * Optimization for noop, deadline and AS which maintain only single
+	 * ioq per io group
+	 */
+	if (elv_iosched_single_ioq(e)) {
+		elv_fq_unset_request_ioq(q, rq);
+		return;
+	}
+
 	if (e->ops->elevator_put_req_fn)
 		e->ops->elevator_put_req_fn(rq);
 }
@@ -1345,9 +1369,18 @@ EXPORT_SYMBOL(elv_select_sched_queue);
 
 /*
  * Get the io scheduler queue pointer for current task.
+ *
+ * If fair queuing is enabled, determine the io group of task and retrieve
+ * the ioq pointer from that. This is used by only single queue ioschedulers
+ * for retrieving the queue associated with the group to decide whether the
+ * new bio can do a front merge or not.
  */
 void *elv_get_sched_queue_current(struct request_queue *q)
 {
-	return q->elevator->sched_queue;
+	/* Fair queuing is not enabled. There is only one queue. */
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
+	return ioq_sched_queue(elv_lookup_ioq_current(q));
 }
 EXPORT_SYMBOL(elv_get_sched_queue_current);
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index d587832..731dbf2 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -62,7 +62,7 @@ noop_latter_request(struct request_queue *q, struct request *rq)
 }
 
 static void *noop_alloc_noop_queue(struct request_queue *q,
-				struct elevator_queue *eq, gfp_t gfp_mask)
+		struct elevator_queue *eq, gfp_t gfp_mask, struct io_queue *ioq)
 {
 	struct noop_queue *nq;
 
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 3729a2f..3e99bdb 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -30,7 +30,7 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
-typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t);
+typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t, struct io_queue *ioq);
 typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
 #ifdef CONFIG_ELV_FAIR_QUEUING
 typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
@@ -249,17 +249,31 @@ enum {
 /* iosched wants to use fq logic of elevator layer */
 #define	ELV_IOSCHED_NEED_FQ	1
 
+/* iosched maintains only single ioq per group.*/
+#define ELV_IOSCHED_SINGLE_IOQ        2
+
 static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 {
 	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
 }
 
+static inline int elv_iosched_single_ioq(struct elevator_queue *e)
+{
+	return (e->elevator_type->elevator_features) & ELV_IOSCHED_SINGLE_IOQ;
+}
+
 #else /* ELV_IOSCHED_FAIR_QUEUING */
 
 static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 {
 	return 0;
 }
+
+static inline int elv_iosched_single_ioq(struct elevator_queue *e)
+{
+	return 0;
+}
+
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 10/20] io-controller: Prepare elevator layer for single queue schedulers
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

The elevator layer now has support for hierarchical fair queuing. cfq has
been migrated to make use of it, and now it is time to do the groundwork
for noop, deadline and AS.

noop, deadline and AS don't maintain separate queues for different
processes; there is only a single queue. Effectively, one can think of a
hierarchical setup as having one queue per cgroup, with requests from all
the processes in the cgroup queued there.

Generally the io scheduler takes care of creating queues. Because there is
only one queue here, we have modified the common layer to take care of
queue creation and some other functionality. This special-casing helps
keep the changes to noop, deadline and AS to a minimum.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/as-iosched.c       |    2 +-
 block/deadline-iosched.c |    2 +-
 block/elevator-fq.c      |  206 +++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h      |   70 ++++++++++++++++
 block/elevator.c         |   37 ++++++++-
 block/noop-iosched.c     |    2 +-
 include/linux/elevator.h |   16 ++++-
 7 files changed, 327 insertions(+), 8 deletions(-)

diff --git a/block/as-iosched.c b/block/as-iosched.c
index 7158e13..3aa54a8 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1340,7 +1340,7 @@ static int as_may_queue(struct request_queue *q, int rw)
 
 /* Called with queue lock held */
 static void *as_alloc_as_queue(struct request_queue *q,
-				struct elevator_queue *eq, gfp_t gfp_mask)
+		struct elevator_queue *eq, gfp_t gfp_mask, struct io_queue *ioq)
 {
 	struct as_queue *asq;
 	struct as_data *ad = eq->elevator_data;
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 5e65041..3a195ce 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -341,7 +341,7 @@ dispatch_request:
 }
 
 static void *deadline_alloc_deadline_queue(struct request_queue *q,
-				struct elevator_queue *eq, gfp_t gfp_mask)
+		struct elevator_queue *eq, gfp_t gfp_mask, struct io_queue *ioq)
 {
 	struct deadline_queue *dq;
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index cde2155..5711a6d 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -72,7 +72,6 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 void elv_activate_ioq(struct io_queue *ioq, int add_front);
 void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 					int requeue);
-
 static int bfq_update_next_active(struct io_sched_data *sd)
 {
 	struct io_group *iog;
@@ -1022,6 +1021,12 @@ void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
 
 	/* Free up async idle queue */
 	elv_release_ioq(e, &iog->async_idle_queue);
+
+#ifdef CONFIG_GROUP_IOSCHED
+	/* Optimization for io schedulers having single ioq */
+	if (elv_iosched_single_ioq(e))
+		elv_release_ioq(e, &iog->ioq);
+#endif
 }
 
 /*
@@ -1048,6 +1053,14 @@ struct io_cgroup io_root_cgroup = {
 	.ioprio_class = IO_DEFAULT_GRP_CLASS,
 };
 
+static inline int is_only_root_group(void)
+{
+	if (list_empty(&io_root_cgroup.css.cgroup->children))
+		return 1;
+
+	return 0;
+}
+
 void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 {
 	entity->ioprio = entity->new_ioprio;
@@ -1859,6 +1872,153 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 	return (iog == __iog);
 }
 
+/*
+ * Find/create the io queue the rq should go in. This is an optimization
+ * for the io schedulers (noop, deadline and AS) which maintain only a
+ * single io queue per cgroup. In this case the common layer can simply
+ * maintain a pointer in the group data structure and keep track of it.
+ *
+ * For io schedulers like cfq, which maintain multiple io queues per
+ * cgroup and decide the io queue of a request based on the process, this
+ * function is not invoked.
+ */
+int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
+					gfp_t gfp_mask)
+{
+	struct elevator_queue *e = q->elevator;
+	unsigned long flags;
+	struct io_queue *ioq = NULL, *new_ioq = NULL;
+	struct io_group *iog;
+	void *sched_q = NULL, *new_sched_q = NULL;
+
+	if (!elv_iosched_fair_queuing_enabled(e))
+		return 0;
+
+	might_sleep_if(gfp_mask & __GFP_WAIT);
+	spin_lock_irqsave(q->queue_lock, flags);
+
+retry:
+	/* Determine the io group the request belongs to */
+	iog = io_get_io_group(q, 1);
+	BUG_ON(!iog);
+
+	/* Get the iosched queue */
+	ioq = io_group_ioq(iog);
+	if (!ioq) {
+		/* io queue and sched_queue need to be allocated */
+		BUG_ON(!e->ops->elevator_alloc_sched_queue_fn);
+
+		if (new_ioq) {
+			goto alloc_sched_q;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			new_ioq = elv_alloc_ioq(q, gfp_mask | __GFP_NOFAIL
+							| __GFP_ZERO);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			ioq = elv_alloc_ioq(q, gfp_mask | __GFP_ZERO);
+			if (!ioq)
+				goto queue_fail;
+		}
+
+alloc_sched_q:
+		if (new_sched_q) {
+			ioq = new_ioq;
+			new_ioq = NULL;
+			sched_q = new_sched_q;
+			new_sched_q = NULL;
+		} else if (gfp_mask & __GFP_WAIT) {
+			/*
+			 * Inform the allocator of the fact that we will
+			 * just repeat this allocation if it fails, to allow
+			 * the allocator to do whatever it needs to attempt to
+			 * free memory.
+			 */
+			spin_unlock_irq(q->queue_lock);
+			/* Call io scheduler to create the scheduler queue */
+			new_sched_q = e->ops->elevator_alloc_sched_queue_fn(q,
+					e, gfp_mask | __GFP_NOFAIL
+					| __GFP_ZERO, new_ioq);
+			spin_lock_irq(q->queue_lock);
+			goto retry;
+		} else {
+			sched_q = e->ops->elevator_alloc_sched_queue_fn(q, e,
+						gfp_mask | __GFP_ZERO, ioq);
+			if (!sched_q) {
+				elv_free_ioq(ioq);
+				goto queue_fail;
+			}
+		}
+
+		elv_init_ioq(e, ioq, iog, sched_q, IOPRIO_CLASS_BE,
+					IOPRIO_NORM, 1);
+		io_group_set_ioq(iog, ioq);
+		elv_mark_ioq_sync(ioq);
+		elv_get_iog(iog);
+	}
+
+	if (new_sched_q)
+		e->ops->elevator_free_sched_queue_fn(q->elevator, new_sched_q);
+
+	if (new_ioq)
+		elv_free_ioq(new_ioq);
+
+	/* Request reference */
+	elv_get_ioq(ioq);
+	rq->ioq = ioq;
+	spin_unlock_irqrestore(q->queue_lock, flags);
+	return 0;
+
+queue_fail:
+	WARN_ON((gfp_mask & __GFP_WAIT) && !ioq);
+	elv_schedule_dispatch(q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+	return 1;
+}
+
+/*
+ * Find out the io queue of current task. Optimization for single ioq
+ * per io group io schedulers.
+ */
+struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+{
+	struct io_group *iog;
+
+	/* Determine the io group and io queue of the bio submitting task */
+	iog = io_get_io_group(q, 0);
+	if (!iog) {
+		/* Maybe the task belongs to a cgroup for which the io group
+		 * has not been set up yet. */
+		return NULL;
+	}
+	return io_group_ioq(iog);
+}
+
+/*
+ * This request has been serviced. Clean up ioq info and drop the reference.
+ * Again this is called only for single queue per cgroup schedulers (noop,
+ * deadline, AS).
+ */
+void elv_fq_unset_request_ioq(struct request_queue *q, struct request *rq)
+{
+	struct io_queue *ioq = rq->ioq;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return;
+
+	if (ioq) {
+		rq->ioq = NULL;
+		elv_put_ioq(ioq);
+	}
+}
+
 #else /* GROUP_IOSCHED */
 void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 {
@@ -1904,6 +2064,11 @@ struct io_group *io_get_io_group(struct request_queue *q, int create)
 	return q->elevator->efqd.root_group;
 }
 EXPORT_SYMBOL(io_get_io_group);
+
+static inline int is_only_root_group(void)
+{
+	return 1;
+}
 #endif /* CONFIG_GROUP_IOSCHED*/
 
 /* Elevator fair queuing function */
@@ -2200,7 +2365,12 @@ int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
 	ioq->efqd = efqd;
 	elv_ioq_set_ioprio_class(ioq, ioprio_class);
 	elv_ioq_set_ioprio(ioq, ioprio);
-	ioq->pid = current->pid;
+
+	if (elv_iosched_single_ioq(eq))
+		ioq->pid = 0;
+	else
+		ioq->pid = current->pid;
+
 	ioq->sched_queue = sched_queue;
 	if (is_sync && !elv_ioq_class_idle(ioq))
 		elv_mark_ioq_idle_window(ioq);
@@ -2579,6 +2749,14 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 	struct io_entity *entity, *new_entity;
 	struct io_group *iog = NULL, *new_iog = NULL;
 
+	/*
+	 * Currently only CFQ has preemption logic. Other schedulers don't
+	 * have any notion of preemption across classes or preemption with-in
+	 * class etc.
+	 */
+	if (elv_iosched_single_ioq(eq))
+		return 0;
+
 	ioq = elv_active_ioq(eq);
 
 	if (!ioq)
@@ -2835,6 +3013,17 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 			goto expire;
 	}
 
+	/*
+	 * If there is only root group present, don't expire the queue for
+	 * single queue ioschedulers (noop, deadline, AS). It is unnecessary
+	 * overhead.
+	 */
+
+	if (is_only_root_group() && elv_iosched_single_ioq(q->elevator)) {
+		elv_log_ioq(efqd, ioq, "select: only root group, no expiry");
+		goto keep_queue;
+	}
+
 	/* We are waiting for this queue to become busy before it expires.*/
 	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
 		ioq = NULL;
@@ -3084,6 +3273,19 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		}
 
 		/*
+		 * If there is only root group present, don't expire the queue
+		 * for single queue ioschedulers (noop, deadline, AS). It is
+		 * unnecessary overhead.
+		 */
+
+		if (is_only_root_group() &&
+			elv_iosched_single_ioq(q->elevator)) {
+			elv_log_ioq(efqd, ioq, "select: only root group,"
+					" no expiry");
+			goto done;
+		}
+
+		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
 		 * those other queues are issuing requests within our
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index e13999e..7281451 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -254,6 +254,9 @@ struct io_group {
 
 	/* The device MKDEV(major, minor), this group has been created for */
 	dev_t	dev;
+
+	/* Single ioq per group, used for noop, deadline, anticipatory */
+	struct io_queue *ioq;
 };
 
 /**
@@ -365,6 +368,8 @@ enum elv_queue_state_flags {
 	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
 	ELV_QUEUE_FLAG_wait_busy,	  /* wait for this queue to get busy */
 	ELV_QUEUE_FLAG_wait_busy_done,	  /* Have already waited on this queue*/
+	ELV_QUEUE_FLAG_must_expire,       /* Expire this queue even if it has
+					   * requests and time slice left */
 	ELV_QUEUE_FLAG_NR,
 };
 
@@ -390,6 +395,7 @@ ELV_IO_QUEUE_FLAG_FNS(idle_window)
 ELV_IO_QUEUE_FLAG_FNS(slice_new)
 ELV_IO_QUEUE_FLAG_FNS(wait_busy)
 ELV_IO_QUEUE_FLAG_FNS(wait_busy_done)
+ELV_IO_QUEUE_FLAG_FNS(must_expire)
 
 static inline struct io_service_tree *
 io_entity_service_tree(struct io_entity *entity)
@@ -522,6 +528,28 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 	return requeue;
 }
 
+extern int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
+					gfp_t gfp_mask);
+extern void elv_fq_unset_request_ioq(struct request_queue *q,
+					struct request *rq);
+extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
+
+/* Returns single ioq associated with the io group. */
+static inline struct io_queue *io_group_ioq(struct io_group *iog)
+{
+	BUG_ON(!iog);
+	return iog->ioq;
+}
+
+/* Sets the single ioq associated with the io group. (noop, deadline, AS) */
+static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
+{
+	BUG_ON(!iog);
+	/* io group reference. Will be dropped when group is destroyed. */
+	elv_get_ioq(ioq);
+	iog->ioq = ioq;
+}
+
 #else /* !GROUP_IOSCHED */
 static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 {
@@ -551,6 +579,32 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 	return requeue;
 }
 
+/* Returns single ioq associated with the io group. */
+static inline struct io_queue *io_group_ioq(struct io_group *iog)
+{
+	return NULL;
+}
+
+static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
+{
+}
+
+static inline int elv_fq_set_request_ioq(struct request_queue *q,
+					struct request *rq, gfp_t gfp_mask)
+{
+	return 0;
+}
+
+static inline void elv_fq_unset_request_ioq(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
@@ -662,5 +716,21 @@ static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 {
 	return 1;
 }
+static inline int elv_fq_set_request_ioq(struct request_queue *q,
+					struct request *rq, gfp_t gfp_mask)
+{
+	return 0;
+}
+
+static inline void elv_fq_unset_request_ioq(struct request_queue *q,
+						struct request *rq)
+{
+}
+
+static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index 67a0601..de42fd6 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -211,9 +211,17 @@ static void *elevator_alloc_sched_queue(struct request_queue *q,
 {
 	void *sched_queue = NULL;
 
+	/*
+	 * If fair queuing is enabled, then queue allocation takes place
+	 * during the set_request() functions when the request actually
+	 * comes in.
+	 */
+	if (elv_iosched_fair_queuing_enabled(eq))
+		return NULL;
+
 	if (eq->ops->elevator_alloc_sched_queue_fn) {
 		sched_queue = eq->ops->elevator_alloc_sched_queue_fn(q, eq,
-								GFP_KERNEL);
+							GFP_KERNEL, NULL);
 		if (!sched_queue)
 			return ERR_PTR(-ENOMEM);
 
@@ -963,6 +971,13 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 
+	/*
+	 * Optimization for noop, deadline and AS which maintain only single
+	 * ioq per io group
+	 */
+	if (elv_iosched_single_ioq(e))
+		return elv_fq_set_request_ioq(q, rq, gfp_mask);
+
 	if (e->ops->elevator_set_req_fn)
 		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
 
@@ -974,6 +989,15 @@ void elv_put_request(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+	/*
+	 * Optimization for noop, deadline and AS which maintain only single
+	 * ioq per io group
+	 */
+	if (elv_iosched_single_ioq(e)) {
+		elv_fq_unset_request_ioq(q, rq);
+		return;
+	}
+
 	if (e->ops->elevator_put_req_fn)
 		e->ops->elevator_put_req_fn(rq);
 }
@@ -1345,9 +1369,18 @@ EXPORT_SYMBOL(elv_select_sched_queue);
 
 /*
  * Get the io scheduler queue pointer for current task.
+ *
+ * If fair queuing is enabled, determine the io group of the task and retrieve
+ * the ioq pointer from it. This is used only by single queue ioschedulers
+ * to retrieve the queue associated with the group and decide whether the
+ * new bio can do a front merge or not.
  */
 void *elv_get_sched_queue_current(struct request_queue *q)
 {
-	return q->elevator->sched_queue;
+	/* Fair queuing is not enabled. There is only one queue. */
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return q->elevator->sched_queue;
+
+	return ioq_sched_queue(elv_lookup_ioq_current(q));
 }
 EXPORT_SYMBOL(elv_get_sched_queue_current);
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index d587832..731dbf2 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -62,7 +62,7 @@ noop_latter_request(struct request_queue *q, struct request *rq)
 }
 
 static void *noop_alloc_noop_queue(struct request_queue *q,
-				struct elevator_queue *eq, gfp_t gfp_mask)
+		struct elevator_queue *eq, gfp_t gfp_mask, struct io_queue *ioq)
 {
 	struct noop_queue *nq;
 
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 3729a2f..3e99bdb 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -30,7 +30,7 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 
 typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
-typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t);
+typedef void* (elevator_alloc_sched_queue_fn) (struct request_queue *q, struct elevator_queue *eq, gfp_t, struct io_queue *ioq);
 typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
 #ifdef CONFIG_ELV_FAIR_QUEUING
 typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
@@ -249,17 +249,31 @@ enum {
 /* iosched wants to use fq logic of elevator layer */
 #define	ELV_IOSCHED_NEED_FQ	1
 
+/* iosched maintains only single ioq per group.*/
+#define ELV_IOSCHED_SINGLE_IOQ        2
+
 static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 {
 	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
 }
 
+static inline int elv_iosched_single_ioq(struct elevator_queue *e)
+{
+	return (e->elevator_type->elevator_features) & ELV_IOSCHED_SINGLE_IOQ;
+}
+
 #else /* ELV_IOSCHED_FAIR_QUEUING */
 
 static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
 {
 	return 0;
 }
+
+static inline int elv_iosched_single_ioq(struct elevator_queue *e)
+{
+	return 0;
+}
+
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread
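
The retry loop in elv_fq_set_request_ioq() above is the usual lock-drop
pattern: if the caller may sleep (__GFP_WAIT), release the queue lock,
allocate with __GFP_NOFAIL, re-take the lock and re-check whether another
path already attached an ioq to the group; a spare allocation is simply
freed. A minimal userspace sketch of that idiom, assuming a pthread mutex in
place of queue_lock and malloc() in place of elv_alloc_ioq() (the toy_* names
are illustrative, not the kernel API):

/* Illustrative model only -- not kernel code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_group {
	void *ioq;			/* single io queue per group */
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lazily attach a queue to the group, allocating outside the lock. */
static void *toy_set_request_ioq(struct toy_group *iog)
{
	void *new_ioq = NULL;

	pthread_mutex_lock(&queue_lock);
retry:
	if (!iog->ioq) {
		if (new_ioq) {
			/* Use the queue allocated while unlocked. */
			iog->ioq = new_ioq;
			new_ioq = NULL;
		} else {
			/* Allocation may sleep: drop the lock first. */
			pthread_mutex_unlock(&queue_lock);
			new_ioq = malloc(64);	/* treated as no-fail here */
			pthread_mutex_lock(&queue_lock);
			/* Somebody may have raced with us: re-check. */
			goto retry;
		}
	}
	pthread_mutex_unlock(&queue_lock);

	/* Lost the race: discard the spare allocation. */
	free(new_ioq);
	return iog->ioq;
}

int main(void)
{
	struct toy_group iog = { .ioq = NULL };

	printf("group ioq = %p\n", toy_set_request_ioq(&iog));
	free(iog.ioq);
	return 0;
}

The same shape repeats in the kernel code for the sched_queue allocation,
just with a second spare pointer (new_sched_q).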

* [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (9 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 10/20] io-conroller: Prepare elevator layer for single queue schedulers Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 12/20] io-controller: deadline " Vivek Goyal
                     ` (10 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

This patch changes noop to use queue scheduling code from elevator layer.
One can go back to old noop by deselecting CONFIG_IOSCHED_NOOP_HIER.

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched |   11 +++++++++++
 block/noop-iosched.c  |   13 +++++++++++++
 2 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index a91a807..9da6657 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -25,6 +25,17 @@ config IOSCHED_NOOP
 	  that do their own scheduling and require only minimal assistance from
 	  the kernel.
 
+config IOSCHED_NOOP_HIER
+	bool "Noop Hierarchical Scheduling support"
+	depends on IOSCHED_NOOP && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in noop. In this mode noop keeps
+	  one IO queue per cgroup instead of a global queue. Elevator
+	  fair queuing logic ensures fairness among various queues.
+
 config IOSCHED_AS
 	tristate "Anticipatory I/O scheduler"
 	default y
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index 731dbf2..97ea41b 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -82,6 +82,15 @@ static void noop_free_noop_queue(struct elevator_queue *e, void *sched_queue)
 	kfree(nq);
 }
 
+#ifdef CONFIG_IOSCHED_NOOP_HIER
+static struct elv_fs_entry noop_attrs[] = {
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+	__ATTR_NULL
+};
+#endif
+
 static struct elevator_type elevator_noop = {
 	.ops = {
 		.elevator_merge_req_fn		= noop_merged_requests,
@@ -92,6 +101,10 @@ static struct elevator_type elevator_noop = {
 		.elevator_alloc_sched_queue_fn	= noop_alloc_noop_queue,
 		.elevator_free_sched_queue_fn	= noop_free_noop_queue,
 	},
+#ifdef CONFIG_IOSCHED_NOOP_HIER
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+	.elevator_attrs = noop_attrs,
+#endif
 	.elevator_name = "noop",
 	.elevator_owner = THIS_MODULE,
 };
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread
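
The noop side of this change is mostly declarative: the scheduler advertises
ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ in elevator_features, and the
elevator layer keys its single-ioq paths off those bits via
elv_iosched_fair_queuing_enabled() and elv_iosched_single_ioq(). A
self-contained sketch of that flag test, using simplified stand-in types
(toy_elevator_type and the printed values are illustrative, not the kernel
structures):

/* Illustrative model only -- not kernel code. */
#include <stdio.h>

#define ELV_IOSCHED_NEED_FQ	1	/* wants elevator fair queuing */
#define ELV_IOSCHED_SINGLE_IOQ	2	/* keeps one io queue per cgroup */

struct toy_elevator_type {
	const char *name;
	unsigned int features;
};

static int fair_queuing_enabled(const struct toy_elevator_type *e)
{
	return e->features & ELV_IOSCHED_NEED_FQ;
}

static int single_ioq(const struct toy_elevator_type *e)
{
	return e->features & ELV_IOSCHED_SINGLE_IOQ;
}

int main(void)
{
	struct toy_elevator_type noop = {
		.name = "noop",
		.features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
	};
	struct toy_elevator_type cfq = {
		.name = "cfq",
		.features = ELV_IOSCHED_NEED_FQ,
	};

	printf("%s: fq=%d single_ioq=%d\n", noop.name,
	       !!fair_queuing_enabled(&noop), !!single_ioq(&noop));
	printf("%s: fq=%d single_ioq=%d\n", cfq.name,
	       !!fair_queuing_enabled(&cfq), !!single_ioq(&cfq));
	return 0;
}

With both bits set, elv_set_request()/elv_put_request() take the
elv_fq_set_request_ioq()/elv_fq_unset_request_ioq() path instead of calling
into the scheduler's own set_req/put_req hooks, as the earlier elevator.c
hunks show.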

* [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

This patch changes noop to use queue scheduling code from elevator layer.
One can go back to old noop by deselecting CONFIG_IOSCHED_NOOP_HIER.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched |   11 +++++++++++
 block/noop-iosched.c  |   13 +++++++++++++
 2 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index a91a807..9da6657 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -25,6 +25,17 @@ config IOSCHED_NOOP
 	  that do their own scheduling and require only minimal assistance from
 	  the kernel.
 
+config IOSCHED_NOOP_HIER
+	bool "Noop Hierarchical Scheduling support"
+	depends on IOSCHED_NOOP && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in noop. In this mode noop keeps
+	  one IO queue per cgroup instead of a global queue. Elevator
+	  fair queuing logic ensures fairness among various queues.
+
 config IOSCHED_AS
 	tristate "Anticipatory I/O scheduler"
 	default y
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index 731dbf2..97ea41b 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -82,6 +82,15 @@ static void noop_free_noop_queue(struct elevator_queue *e, void *sched_queue)
 	kfree(nq);
 }
 
+#ifdef CONFIG_IOSCHED_NOOP_HIER
+static struct elv_fs_entry noop_attrs[] = {
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+	__ATTR_NULL
+};
+#endif
+
 static struct elevator_type elevator_noop = {
 	.ops = {
 		.elevator_merge_req_fn		= noop_merged_requests,
@@ -92,6 +101,10 @@ static struct elevator_type elevator_noop = {
 		.elevator_alloc_sched_queue_fn	= noop_alloc_noop_queue,
 		.elevator_free_sched_queue_fn	= noop_free_noop_queue,
 	},
+#ifdef CONFIG_IOSCHED_NOOP_HIER
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+	.elevator_attrs = noop_attrs,
+#endif
 	.elevator_name = "noop",
 	.elevator_owner = THIS_MODULE,
 };
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

This patch changes noop to use queue scheduling code from elevator layer.
One can go back to old noop by deselecting CONFIG_IOSCHED_NOOP_HIER.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched |   11 +++++++++++
 block/noop-iosched.c  |   13 +++++++++++++
 2 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index a91a807..9da6657 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -25,6 +25,17 @@ config IOSCHED_NOOP
 	  that do their own scheduling and require only minimal assistance from
 	  the kernel.
 
+config IOSCHED_NOOP_HIER
+	bool "Noop Hierarchical Scheduling support"
+	depends on IOSCHED_NOOP && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in noop. In this mode noop keeps
+	  one IO queue per cgroup instead of a global queue. Elevator
+	  fair queuing logic ensures fairness among various queues.
+
 config IOSCHED_AS
 	tristate "Anticipatory I/O scheduler"
 	default y
diff --git a/block/noop-iosched.c b/block/noop-iosched.c
index 731dbf2..97ea41b 100644
--- a/block/noop-iosched.c
+++ b/block/noop-iosched.c
@@ -82,6 +82,15 @@ static void noop_free_noop_queue(struct elevator_queue *e, void *sched_queue)
 	kfree(nq);
 }
 
+#ifdef CONFIG_IOSCHED_NOOP_HIER
+static struct elv_fs_entry noop_attrs[] = {
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+	__ATTR_NULL
+};
+#endif
+
 static struct elevator_type elevator_noop = {
 	.ops = {
 		.elevator_merge_req_fn		= noop_merged_requests,
@@ -92,6 +101,10 @@ static struct elevator_type elevator_noop = {
 		.elevator_alloc_sched_queue_fn	= noop_alloc_noop_queue,
 		.elevator_free_sched_queue_fn	= noop_free_noop_queue,
 	},
+#ifdef CONFIG_IOSCHED_NOOP_HIER
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+	.elevator_attrs = noop_attrs,
+#endif
 	.elevator_name = "noop",
 	.elevator_owner = THIS_MODULE,
 };
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 12/20] io-controller: deadline changes for hierarchical fair queuing
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (10 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 13/20] io-controller: anticipatory " Vivek Goyal
                     ` (9 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

This patch changes deadline to use queue scheduling code from elevator layer.
One can go back to the old deadline by deselecting CONFIG_IOSCHED_DEADLINE_HIER.

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched    |   11 +++++++++++
 block/deadline-iosched.c |    8 ++++++++
 2 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 9da6657..3a9e7d7 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -55,6 +55,17 @@ config IOSCHED_DEADLINE
 	  a disk at any one time, its behaviour is almost identical to the
 	  anticipatory I/O scheduler and so is a good choice.
 
+config IOSCHED_DEADLINE_HIER
+	bool "Deadline Hierarchical Scheduling support"
+	depends on IOSCHED_DEADLINE && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in deadline. In this mode deadline keeps
+	  one IO queue per cgroup instead of a global queue. Elevator
+	  fair queuing logic ensures fairness among various queues.
+
 config IOSCHED_CFQ
 	tristate "CFQ I/O scheduler"
 	select ELV_FAIR_QUEUING
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 3a195ce..bae8e44 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -460,6 +460,11 @@ static struct elv_fs_entry deadline_attrs[] = {
 	DD_ATTR(writes_starved),
 	DD_ATTR(front_merges),
 	DD_ATTR(fifo_batch),
+#ifdef CONFIG_IOSCHED_DEADLINE_HIER
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+#endif
 	__ATTR_NULL
 };
 
@@ -477,6 +482,9 @@ static struct elevator_type iosched_deadline = {
 		.elevator_alloc_sched_queue_fn = deadline_alloc_deadline_queue,
 		.elevator_free_sched_queue_fn = deadline_free_deadline_queue,
 	},
+#ifdef CONFIG_IOSCHED_DEADLINE_HIER
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+#endif
 	.elevator_attrs = deadline_attrs,
 	.elevator_name = "deadline",
 	.elevator_owner = THIS_MODULE,
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread
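
The interesting part on the deadline side is how the common-layer tunables are
exported: ELV_ATTR() entries for fairness, slice_idle and slice_sync are
appended to the existing __ATTR_NULL-terminated deadline_attrs[] array under
the new config option, and sysfs walks that table until the sentinel. A toy
model of such a sentinel-terminated show-handler table (the names and default
values below are made up for illustration):

/* Illustrative model only -- not kernel code. */
#include <stdio.h>

struct toy_attr {
	const char *name;
	int (*show)(char *buf, size_t len);
};

static int show_slice_idle(char *buf, size_t len)
{
	return snprintf(buf, len, "8\n");	/* made-up value */
}

static int show_fairness(char *buf, size_t len)
{
	return snprintf(buf, len, "0\n");	/* made-up value */
}

/* The NULL-named entry plays the role of __ATTR_NULL. */
static const struct toy_attr deadline_attrs[] = {
	{ "slice_idle",	show_slice_idle },
	{ "fairness",	show_fairness },
	{ NULL,		NULL },
};

int main(void)
{
	const struct toy_attr *a;
	char buf[16];

	for (a = deadline_attrs; a->name; a++) {
		a->show(buf, sizeof(buf));
		printf("%s = %s", a->name, buf);
	}
	return 0;
}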

* [PATCH 12/20] io-controller: deadline changes for hierarchical fair queuing
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

This patch changes deadline to use queue scheduling code from elevator layer.
One can go back to the old deadline by deselecting CONFIG_IOSCHED_DEADLINE_HIER.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   11 +++++++++++
 block/deadline-iosched.c |    8 ++++++++
 2 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 9da6657..3a9e7d7 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -55,6 +55,17 @@ config IOSCHED_DEADLINE
 	  a disk at any one time, its behaviour is almost identical to the
 	  anticipatory I/O scheduler and so is a good choice.
 
+config IOSCHED_DEADLINE_HIER
+	bool "Deadline Hierarchical Scheduling support"
+	depends on IOSCHED_DEADLINE && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in deadline. In this mode deadline keeps
+	  one IO queue per cgroup instead of a global queue. Elevator
+	  fair queuing logic ensures fairness among various queues.
+
 config IOSCHED_CFQ
 	tristate "CFQ I/O scheduler"
 	select ELV_FAIR_QUEUING
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 3a195ce..bae8e44 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -460,6 +460,11 @@ static struct elv_fs_entry deadline_attrs[] = {
 	DD_ATTR(writes_starved),
 	DD_ATTR(front_merges),
 	DD_ATTR(fifo_batch),
+#ifdef CONFIG_IOSCHED_DEADLINE_HIER
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+#endif
 	__ATTR_NULL
 };
 
@@ -477,6 +482,9 @@ static struct elevator_type iosched_deadline = {
 		.elevator_alloc_sched_queue_fn = deadline_alloc_deadline_queue,
 		.elevator_free_sched_queue_fn = deadline_free_deadline_queue,
 	},
+#ifdef CONFIG_IOSCHED_DEADLINE_HIER
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+#endif
 	.elevator_attrs = deadline_attrs,
 	.elevator_name = "deadline",
 	.elevator_owner = THIS_MODULE,
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 12/20] io-controller: deadline changes for hierarchical fair queuing
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

This patch changes deadline to use queue scheduling code from elevator layer.
One can go back to the old deadline by deselecting CONFIG_IOSCHED_DEADLINE_HIER.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   11 +++++++++++
 block/deadline-iosched.c |    8 ++++++++
 2 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 9da6657..3a9e7d7 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -55,6 +55,17 @@ config IOSCHED_DEADLINE
 	  a disk at any one time, its behaviour is almost identical to the
 	  anticipatory I/O scheduler and so is a good choice.
 
+config IOSCHED_DEADLINE_HIER
+	bool "Deadline Hierarchical Scheduling support"
+	depends on IOSCHED_DEADLINE && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in deadline. In this mode deadline keeps
+	  one IO queue per cgroup instead of a global queue. Elevator
+	  fair queuing logic ensures fairness among various queues.
+
 config IOSCHED_CFQ
 	tristate "CFQ I/O scheduler"
 	select ELV_FAIR_QUEUING
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 3a195ce..bae8e44 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -460,6 +460,11 @@ static struct elv_fs_entry deadline_attrs[] = {
 	DD_ATTR(writes_starved),
 	DD_ATTR(front_merges),
 	DD_ATTR(fifo_batch),
+#ifdef CONFIG_IOSCHED_DEADLINE_HIER
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+#endif
 	__ATTR_NULL
 };
 
@@ -477,6 +482,9 @@ static struct elevator_type iosched_deadline = {
 		.elevator_alloc_sched_queue_fn = deadline_alloc_deadline_queue,
 		.elevator_free_sched_queue_fn = deadline_free_deadline_queue,
 	},
+#ifdef CONFIG_IOSCHED_DEADLINE_HIER
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+#endif
 	.elevator_attrs = deadline_attrs,
 	.elevator_name = "deadline",
 	.elevator_owner = THIS_MODULE,
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 13/20] io-controller: anticipatory changes for hierarchical fair queuing
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (11 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 12/20] io-controller: deadline " Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios Vivek Goyal
                     ` (8 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

This patch changes the anticipatory scheduler to use queue scheduling code from
the elevator layer. One can go back to the old AS by deselecting
CONFIG_IOSCHED_AS_HIER. Even with CONFIG_IOSCHED_AS_HIER=y, with no other
cgroups created, AS behavior should remain the same as before.

o AS is a single queue ioscheduler, which means there is one AS queue per group.

o Common layer code selects the queue to dispatch from based on fairness, and
  then AS code selects the request within the group.

o AS runs read and write batches within a group. So the common layer runs timed
  group queues and, within a group's time, AS runs timed batches of reads and
  writes.

o Note: Previously the AS write batch length was adjusted dynamically whenever
  a W->R batch direction change took place and the first request from the
  read batch completed.

  Now the write batch update takes place when the last request from the write
  batch has finished during the W->R transition.

o AS runs its own anticipation logic to anticipate on reads. The common layer
  also anticipates on the group if the think time of the group is within
  slice_idle.

o Introduced a few debugging messages in AS.

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched    |   12 ++
 block/as-iosched.c       |  280 +++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.c      |   86 ++++++++++++--
 include/linux/elevator.h |    2 +
 4 files changed, 363 insertions(+), 17 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 3a9e7d7..77fc786 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -45,6 +45,18 @@ config IOSCHED_AS
 	  deadline I/O scheduler, it can also be slower in some cases
 	  especially some database loads.
 
+config IOSCHED_AS_HIER
+	bool "Anticipatory Hierarchical Scheduling support"
+	depends on IOSCHED_AS && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in anticipatory. In this mode
+	  anticipatory keeps one IO queue per cgroup instead of a global
+	  queue. Elevator fair queuing logic ensures fairness among various
+	  queues.
+
 config IOSCHED_DEADLINE
 	tristate "Deadline I/O scheduler"
 	default y
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 3aa54a8..23a3d2d 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -16,6 +16,7 @@
 #include <linux/compiler.h>
 #include <linux/rbtree.h>
 #include <linux/interrupt.h>
+#include <linux/blktrace_api.h>
 
 /*
  * See Documentation/block/as-iosched.txt
@@ -84,10 +85,24 @@ struct as_queue {
 	struct list_head fifo_list[2];
 
 	struct request *next_rq[2];	/* next in sort order */
+
+	/*
+	 * If an as_queue is switched while a batch is running, then we
+	 * store the time left before current batch will expire
+	 */
+	long current_batch_time_left;
+
+	/*
+	 * batch data dir when queue was scheduled out. This will be used
+	 * to setup ad->batch_data_dir when queue is scheduled in.
+	 */
+	int saved_batch_data_dir;
+
 	unsigned long last_check_fifo[2];
 	int write_batch_count;		/* max # of reqs in a write batch */
 	int current_write_count;	/* how many requests left this batch */
 	int write_batch_idled;		/* has the write batch gone idle? */
+	int nr_queued[2];
 };
 
 struct as_data {
@@ -123,6 +138,9 @@ struct as_data {
 	unsigned long fifo_expire[2];
 	unsigned long batch_expire[2];
 	unsigned long antic_expire;
+
+	/* elevator requested a queue switch. */
+	int switch_queue;
 };
 
 /*
@@ -144,12 +162,174 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
+#define as_log(ad, fmt, args...)        \
+	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
+
 static DEFINE_PER_CPU(unsigned long, ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
 static void as_move_to_dispatch(struct as_data *ad, struct request *rq);
 static void as_antic_stop(struct as_data *ad);
+static inline int as_batch_expired(struct as_data *ad, struct as_queue *asq);
+
+#ifdef CONFIG_IOSCHED_AS_HIER
+static void as_save_batch_context(struct as_data *ad, struct as_queue *asq)
+{
+	/* Save batch data dir */
+	asq->saved_batch_data_dir = ad->batch_data_dir;
+
+	if (ad->changed_batch) {
+		/*
+		 * In case of force expire, we come here. Batch changeover
+		 * requests to finish from the previous batch and then start
+		 * request to finish from previous batch and then start
+		 * the new batch. Can't wait now. Mark that full batch time
+		 * needs to be allocated when this queue is scheduled again.
+		 */
+		asq->current_batch_time_left =
+				ad->batch_expire[ad->batch_data_dir];
+		ad->changed_batch = 0;
+		goto out;
+	}
+
+	if (ad->new_batch) {
+		/*
+		 * We should come here only when new_batch has been set
+		 * but no read request has been issued or if it is a forced
+		 * expiry.
+		 *
+		 * In both the cases, new batch has not started yet so
+		 * allocate full batch length for next scheduling opportunity.
+		 * We don't do write batch size adjustment in hierarchical
+		 * AS so that should not be an issue.
+		 */
+		asq->current_batch_time_left =
+				ad->batch_expire[ad->batch_data_dir];
+		ad->new_batch = 0;
+		goto out;
+	}
+
+	/* Save how much time is left before current batch expires */
+	if (as_batch_expired(ad, asq))
+		asq->current_batch_time_left = 0;
+	else {
+		asq->current_batch_time_left = ad->current_batch_expires
+							- jiffies;
+		BUG_ON((asq->current_batch_time_left) < 0);
+	}
+
+	if (ad->io_context) {
+		put_io_context(ad->io_context);
+		ad->io_context = NULL;
+	}
+
+out:
+	as_log(ad, "save batch: dir=%c time_left=%d changed_batch=%d"
+			" new_batch=%d, antic_status=%d",
+			ad->batch_data_dir ? 'R' : 'W',
+			asq->current_batch_time_left,
+			ad->changed_batch, ad->new_batch, ad->antic_status);
+	return;
+}
+
+/*
+ * FIXME: In the original AS, a read batch's time accounting started only
+ * after the first request had completed (if the last batch was a write
+ * batch). But here we might be rescheduling a read batch right away,
+ * irrespective of the disk cache state.
+ */
+static void as_restore_batch_context(struct as_data *ad, struct as_queue *asq)
+{
+	/* Adjust the batch expire time */
+	if (asq->current_batch_time_left)
+		ad->current_batch_expires = jiffies +
+						asq->current_batch_time_left;
+	/* restore asq batch_data_dir info */
+	ad->batch_data_dir = asq->saved_batch_data_dir;
+	as_log(ad, "restore batch: dir=%c time=%d reads_q=%d writes_q=%d"
+			" ad->antic_status=%d",
+			ad->batch_data_dir ? 'R' : 'W',
+			asq->current_batch_time_left,
+			asq->nr_queued[1], asq->nr_queued[0],
+			ad->antic_status);
+}
+
+/* ioq has been set. */
+static void as_active_ioq_set(struct request_queue *q, void *sched_queue,
+				int coop)
+{
+	struct as_queue *asq = sched_queue;
+	struct as_data *ad = q->elevator->elevator_data;
+
+	as_restore_batch_context(ad, asq);
+}
+
+/*
+ * This is a notification from common layer that it wishes to expire this
+ * io queue. AS decides whether queue can be expired, if yes, it also
+ * saves the batch context.
+ */
+static int as_expire_ioq(struct request_queue *q, void *sched_queue,
+				int slice_expired, int force)
+{
+	struct as_data *ad = q->elevator->elevator_data;
+	int status = ad->antic_status;
+	struct as_queue *asq = sched_queue;
+
+	as_log(ad, "as_expire_ioq slice_expired=%d, force=%d", slice_expired,
+		force);
+
+	/* Forced expiry. We don't have a choice */
+	if (force) {
+		as_antic_stop(ad);
+		/*
+		 * antic_stop() sets antic_status to FINISHED which signifies
+		 * that either we timed out or we found a close request but
+		 * that's not the case here. Start from scratch.
+		 */
+		ad->antic_status = ANTIC_OFF;
+		as_save_batch_context(ad, asq);
+		ad->switch_queue = 0;
+		return 1;
+	}
+
+	/*
+	 * We are waiting for requests to finish from last
+	 * batch. Don't expire the queue now
+	 */
+	if (ad->changed_batch)
+		goto keep_queue;
+
+	/*
+	 * Wait for all requests from the existing batch to finish before we
+	 * switch the queue. The new queue might change the batch direction,
+	 * and this is to be consistent with the AS philosophy of not
+	 * dispatching new requests to the underlying drive till requests
+	 * from the previous batch are completed.
+	 */
+	if (ad->nr_dispatched)
+		goto keep_queue;
+
+	/*
+	 * If AS anticipation is ON, wait for it to finish.
+	 */
+	BUG_ON(status == ANTIC_WAIT_REQ);
+
+	if (status == ANTIC_WAIT_NEXT)
+		goto keep_queue;
+
+	/* We are good to expire the queue. Save batch context */
+	as_save_batch_context(ad, asq);
+	ad->switch_queue = 0;
+	return 1;
+
+keep_queue:
+	/* Mark that elevator requested for queue switch whenever possible */
+	ad->switch_queue = 1;
+	return 0;
+}
+#endif
 
 /*
  * IO Context helper functions
@@ -429,6 +609,7 @@ static void as_antic_waitnext(struct as_data *ad)
 	mod_timer(&ad->antic_timer, timeout);
 
 	ad->antic_status = ANTIC_WAIT_NEXT;
+	as_log(ad, "antic_waitnext set");
 }
 
 /*
@@ -442,8 +623,10 @@ static void as_antic_waitreq(struct as_data *ad)
 	if (ad->antic_status == ANTIC_OFF) {
 		if (!ad->io_context || ad->ioc_finished)
 			as_antic_waitnext(ad);
-		else
+		else {
 			ad->antic_status = ANTIC_WAIT_REQ;
+			as_log(ad, "antic_waitreq set");
+		}
 	}
 }
 
@@ -455,6 +638,8 @@ static void as_antic_stop(struct as_data *ad)
 {
 	int status = ad->antic_status;
 
+	as_log(ad, "as_antic_stop antic_status=%d", ad->antic_status);
+
 	if (status == ANTIC_WAIT_REQ || status == ANTIC_WAIT_NEXT) {
 		if (status == ANTIC_WAIT_NEXT)
 			del_timer(&ad->antic_timer);
@@ -474,6 +659,7 @@ static void as_antic_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(q->queue_lock, flags);
+	as_log(ad, "as_antic_timeout");
 	if (ad->antic_status == ANTIC_WAIT_REQ
 			|| ad->antic_status == ANTIC_WAIT_NEXT) {
 		struct as_io_context *aic;
@@ -650,6 +836,21 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
 	struct io_context *ioc;
 	struct as_io_context *aic;
 
+#ifdef CONFIG_IOSCHED_AS_HIER
+	/*
+	 * If the active asq and rq's asq are not same, then one can not
+	 * break the anticipation. This primarily becomes useful when a
+	 * request is added to a queue which is not being served currently.
+	 */
+	if (rq) {
+		struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
+		struct as_queue *curr_asq =
+				elv_active_sched_queue(ad->q->elevator);
+
+		if (asq != curr_asq)
+			return 0;
+	}
+#endif
 	ioc = ad->io_context;
 	BUG_ON(!ioc);
 	spin_lock(&ioc->lock);
@@ -808,16 +1009,20 @@ static void as_update_rq(struct as_data *ad, struct request *rq)
 /*
  * Gathers timings and resizes the write batch automatically
  */
-static void update_write_batch(struct as_data *ad)
+static void update_write_batch(struct as_data *ad, struct request *rq)
 {
 	unsigned long batch = ad->batch_expire[BLK_RW_ASYNC];
 	long write_time;
-	struct as_queue *asq = elv_get_sched_queue(ad->q, NULL);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	write_time = (jiffies - ad->current_batch_expires) + batch;
 	if (write_time < 0)
 		write_time = 0;
 
+	as_log(ad, "upd write: write_time=%d batch=%d write_batch_idled=%d"
+			" current_write_count=%d", write_time, batch,
+			asq->write_batch_idled, asq->current_write_count);
+
 	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
 			asq->write_batch_count /= 2;
@@ -832,6 +1037,8 @@ static void update_write_batch(struct as_data *ad)
 
 	if (asq->write_batch_count < 1)
 		asq->write_batch_count = 1;
+
+	as_log(ad, "upd write count=%d", asq->write_batch_count);
 }
 
 /*
@@ -841,6 +1048,7 @@ static void update_write_batch(struct as_data *ad)
 static void as_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct as_data *ad = q->elevator->elevator_data;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	WARN_ON(!list_empty(&rq->queuelist));
 
@@ -849,7 +1057,24 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 		goto out;
 	}
 
+	as_log(ad, "complete: reads_q=%d writes_q=%d changed_batch=%d"
+		" new_batch=%d switch_queue=%d, dir=%c",
+		asq->nr_queued[1], asq->nr_queued[0], ad->changed_batch,
+		ad->new_batch, ad->switch_queue,
+		ad->batch_data_dir ? 'R' : 'W');
+
 	if (ad->changed_batch && ad->nr_dispatched == 1) {
+		/*
+		 * If this was write batch finishing, adjust the write batch
+		 * length.
+		 *
+		 * Note, the write batch length is calculated upon completion
+		 * of the last write request of the batch, not upon completion
+		 * of the first read request of the next batch.
+		 */
+		if (ad->batch_data_dir == BLK_RW_SYNC)
+			update_write_batch(ad, rq);
+
 		ad->current_batch_expires = jiffies +
 					ad->batch_expire[ad->batch_data_dir];
 		kblockd_schedule_work(q, &ad->antic_work);
@@ -867,7 +1092,6 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 	 * and writeback caches
 	 */
 	if (ad->new_batch && ad->batch_data_dir == rq_is_sync(rq)) {
-		update_write_batch(ad);
 		ad->current_batch_expires = jiffies +
 				ad->batch_expire[BLK_RW_SYNC];
 		ad->new_batch = 0;
@@ -886,6 +1110,13 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 	}
 
 	as_put_io_context(rq);
+
+	/*
+	 * If elevator requested a queue switch, kick the queue in the
+	 * hope that this is right time for switch.
+	 */
+	if (ad->switch_queue)
+		kblockd_schedule_work(q, &ad->antic_work);
 out:
 	RQ_SET_STATE(rq, AS_RQ_POSTSCHED);
 }
@@ -906,6 +1137,9 @@ static void as_remove_queued_request(struct request_queue *q,
 
 	WARN_ON(RQ_STATE(rq) != AS_RQ_QUEUED);
 
+	BUG_ON(asq->nr_queued[data_dir] <= 0);
+	asq->nr_queued[data_dir]--;
+
 	ioc = RQ_IOC(rq);
 	if (ioc && ioc->aic) {
 		BUG_ON(!atomic_read(&ioc->aic->nr_queued));
@@ -1017,6 +1251,8 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 	if (RQ_IOC(rq) && RQ_IOC(rq)->aic)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 	ad->nr_dispatched++;
+	as_log(ad, "dispatch req dir=%c nr_dispatched = %d",
+			data_dir ? 'R' : 'W', ad->nr_dispatched);
 }
 
 /*
@@ -1064,6 +1300,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		}
 		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
+		as_log(ad, "forced dispatch");
 		return dispatched;
 	}
 
@@ -1076,8 +1313,14 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	if (!(reads || writes)
 		|| ad->antic_status == ANTIC_WAIT_REQ
 		|| ad->antic_status == ANTIC_WAIT_NEXT
-		|| ad->changed_batch)
+		|| ad->changed_batch) {
+		as_log(ad, "no dispatch. read_q=%d, writes_q=%d"
+			" ad->antic_status=%d, changed_batch=%d,"
+			" switch_queue=%d new_batch=%d", asq->nr_queued[1],
+			asq->nr_queued[0], ad->antic_status, ad->changed_batch,
+			ad->switch_queue, ad->new_batch);
 		return 0;
+	}
 
 	if (!(reads && writes && as_batch_expired(ad, asq))) {
 		/*
@@ -1090,6 +1333,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
+				as_log(ad, "can_anticipate = 1");
 				as_antic_waitreq(ad);
 				return 0;
 			}
@@ -1109,6 +1353,8 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
+	as_log(ad, "select a fresh batch and request");
+
 	if (reads) {
 		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
 
@@ -1123,6 +1369,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
+		as_log(ad, "new batch dir is sync");
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
 		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
@@ -1147,6 +1394,7 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
+		as_log(ad, "new batch dir is async");
 		asq->current_write_count = asq->write_batch_count;
 		asq->write_batch_idled = 0;
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
@@ -1182,6 +1430,9 @@ fifo_expired:
 		ad->changed_batch = 0;
 	}
 
+	if (ad->switch_queue)
+		return 0;
+
 	/*
 	 * rq is the selected appropriate request.
 	 */
@@ -1205,6 +1456,11 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 
 	rq->elevator_private = as_get_io_context(q->node);
 
+	asq->nr_queued[data_dir]++;
+	as_log(ad, "add a %c request read_q=%d write_q=%d",
+			data_dir ? 'R' : 'W', asq->nr_queued[1],
+			asq->nr_queued[0]);
+
 	if (RQ_IOC(rq)) {
 		as_update_iohist(ad, RQ_IOC(rq)->aic, rq);
 		atomic_inc(&RQ_IOC(rq)->aic->nr_queued);
@@ -1410,6 +1666,7 @@ static void *as_init_queue(struct request_queue *q)
 	ad->batch_expire[BLK_RW_ASYNC] = default_write_batch_expire;
 
 	ad->current_batch_expires = jiffies + ad->batch_expire[BLK_RW_SYNC];
+	ad->switch_queue = 0;
 
 	return ad;
 }
@@ -1495,6 +1752,11 @@ static struct elv_fs_entry as_attrs[] = {
 	AS_ATTR(antic_expire),
 	AS_ATTR(read_batch_expire),
 	AS_ATTR(write_batch_expire),
+#ifdef CONFIG_IOSCHED_AS_HIER
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+#endif
 	__ATTR_NULL
 };
 
@@ -1516,8 +1778,14 @@ static struct elevator_type iosched_as = {
 		.trim =				as_trim,
 		.elevator_alloc_sched_queue_fn = as_alloc_as_queue,
 		.elevator_free_sched_queue_fn = as_free_as_queue,
+#ifdef CONFIG_IOSCHED_AS_HIER
+		.elevator_expire_ioq_fn =       as_expire_ioq,
+		.elevator_active_ioq_set_fn =   as_active_ioq_set,
 	},
-
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+#else
+	},
+#endif
 	.elevator_attrs = as_attrs,
 	.elevator_name = "anticipatory",
 	.elevator_owner = THIS_MODULE,
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 5711a6d..c1f676e 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -39,6 +39,8 @@ static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
 struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 						 int extract);
 void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
+int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
+					int force);
 
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
@@ -2513,6 +2515,7 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 		elv_clear_ioq_must_dispatch(ioq);
 		elv_clear_ioq_wait_busy_done(ioq);
 		elv_mark_ioq_slice_new(ioq);
+		elv_clear_ioq_must_expire(ioq);
 
 		del_timer(&efqd->idle_slice_timer);
 	}
@@ -2671,6 +2674,7 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 	elv_clear_ioq_wait_request(ioq);
 	elv_clear_ioq_wait_busy(ioq);
 	elv_clear_ioq_wait_busy_done(ioq);
+	elv_clear_ioq_must_expire(ioq);
 
 	/*
 	 * if ioq->slice_end = 0, that means a queue was expired before first
@@ -2809,16 +2813,18 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
 {
 	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
-	elv_ioq_slice_expired(q);
+	if (elv_iosched_expire_ioq(q, 0, 1)) {
+		elv_ioq_slice_expired(q);
 
-	/*
-	 * Put the new queue at the front of the of the current list,
-	 * so we know that it will be selected next.
-	 */
+		/*
+		 * Put the new queue at the front of the of the current list,
+		 * so we know that it will be selected next.
+		 */
 
-	elv_activate_ioq(ioq, 1);
-	elv_ioq_set_slice_end(ioq, 0);
-	elv_mark_ioq_slice_new(ioq);
+		elv_activate_ioq(ioq, 1);
+		elv_ioq_set_slice_end(ioq, 0);
+		elv_mark_ioq_slice_new(ioq);
+	}
 }
 
 void elv_ioq_request_add(struct request_queue *q, struct request *rq)
@@ -2989,12 +2995,56 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	}
 }
 
+/*
+ * Tell the iosched that the elevator wants to expire the queue. This gives
+ * an iosched like AS a chance to say no (if it is in the middle of a batch
+ * changeover or is anticipating), and lets the iosched do some housekeeping.
+ *
+ * force--> it is a forced dispatch and the iosched must clean up its state.
+ * 	     This is useful when the elevator wants to drain the iosched and
+ * 	     expire the current active queue.
+ *
+ * slice_expired--> if 1, the ioq slice expired, so the elevator fair queuing
+ * 		    logic wants to switch the queue. The iosched should allow
+ * 		    that unless it really has to keep the queue. Currently AS
+ * 		    can deny the switch if it is in the middle of a batch switch.
+ *
+ * 		    if 0, time slice is still remaining. It is up to the iosched
+ * 		    whether it wants to wait on this queue or just wants to
+ * 		    expire it and move on to the next queue.
+ *
+ */
+int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
+					int force)
+{
+	struct elevator_queue *e = q->elevator;
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+	int ret = 1;
+
+	if (e->ops->elevator_expire_ioq_fn) {
+		ret = e->ops->elevator_expire_ioq_fn(q, ioq->sched_queue,
+							slice_expired, force);
+		/*
+		 * AS denied expiration of the queue right now. Mark that the
+		 * elevator layer has asked the ioscheduler (AS) to expire this
+		 * queue; AS will try to expire it as soon as it can. Until then,
+		 * don't dispatch from this queue even if a new request comes in
+		 * and time slice is left.
+		 */
+		if (!ret)
+			elv_mark_ioq_must_expire(ioq);
+	}
+
+	return ret;
+}
+
 /* Common layer function to select the next queue to dispatch from */
 void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
 	struct io_group *iog;
+	int slice_expired = 1;
 
 	if (!elv_nr_busy_ioq(q->elevator))
 		return NULL;
@@ -3013,6 +3063,10 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 			goto expire;
 	}
 
+	/* This queue has been marked for expiry. Try to expire it */
+	if (elv_ioq_must_expire(ioq))
+		goto expire;
+
 	/*
 	 * If there is only root group present, don't expire the queue for
 	 * single queue ioschedulers (noop, deadline, AS). It is unnecessary
@@ -3102,8 +3156,16 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto keep_queue;
 	}
 
+	slice_expired = 0;
 expire:
-	elv_ioq_slice_expired(q);
+	if (elv_iosched_expire_ioq(q, slice_expired, force))
+		elv_ioq_slice_expired(q);
+	else
+		/*
+		 * Not making ioq = NULL, as AS can deny queue expiration and
+		 * continue to dispatch from same queue
+		 */
+		goto keep_queue;
 new_queue:
 	ioq = elv_set_active_ioq(q, new_ioq);
 keep_queue:
@@ -3268,7 +3330,8 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		}
 
 		if (elv_ioq_class_idle(ioq)) {
-			elv_ioq_slice_expired(q);
+			if (elv_iosched_expire_ioq(q, 1, 0))
+				elv_ioq_slice_expired(q);
 			goto done;
 		}
 
@@ -3302,7 +3365,8 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 				elv_ioq_arm_slice_timer(q, 1);
 			} else {
 				/* Expire the queue */
-				elv_ioq_slice_expired(q);
+				if (elv_iosched_expire_ioq(q, 1, 0))
+					elv_ioq_slice_expired(q);
 			}
 		} else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
 			 && sync && !rq_noidle(rq))
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 3e99bdb..b47ecb3 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -42,6 +42,7 @@ typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
 						struct request*);
 typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
 						void*, int probe);
+typedef int (elevator_expire_ioq_fn) (struct request_queue*, void *, int, int);
 #endif
 
 struct elevator_ops
@@ -81,6 +82,7 @@ struct elevator_ops
 	elevator_should_preempt_fn *elevator_should_preempt_fn;
 	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
 	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
+	elevator_expire_ioq_fn  *elevator_expire_ioq_fn;
 #endif
 };
 
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread
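
At its core, the as_save_batch_context()/as_restore_batch_context() pair above
is bookkeeping on an absolute deadline: on switch-out, remember how much of the
batch window remains (clamped to zero if it already expired); on switch-in,
re-arm current_batch_expires from "now" plus that remainder. A minimal
userspace sketch of that idea, with time() seconds standing in for jiffies
(the toy_* structures are illustrative only, not the AS data structures):

/* Illustrative model only -- not kernel code. */
#include <stdio.h>
#include <time.h>

struct toy_asq {
	long batch_time_left;		/* saved on switch-out */
};

struct toy_ad {
	time_t current_batch_expires;	/* absolute deadline */
};

static void save_batch_context(struct toy_ad *ad, struct toy_asq *asq)
{
	time_t now = time(NULL);

	/* Clamp at zero if the batch already ran out. */
	if (ad->current_batch_expires <= now)
		asq->batch_time_left = 0;
	else
		asq->batch_time_left = ad->current_batch_expires - now;
}

static void restore_batch_context(struct toy_ad *ad, struct toy_asq *asq)
{
	/* Re-arm the deadline relative to the moment we are scheduled in. */
	if (asq->batch_time_left)
		ad->current_batch_expires = time(NULL) + asq->batch_time_left;
}

int main(void)
{
	struct toy_ad ad = { .current_batch_expires = time(NULL) + 5 };
	struct toy_asq asq = { 0 };

	save_batch_context(&ad, &asq);
	printf("saved %ld seconds of batch time\n", asq.batch_time_left);
	restore_batch_context(&ad, &asq);
	return 0;
}

The kernel version additionally special-cases changed_batch and new_batch,
where a full batch length is granted at the next scheduling opportunity, as
the code in the patch shows.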

* [PATCH 13/20] io-controller: anticipatory changes for hierarchical fair queuing
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

This patch changes the anticipatory scheduler to use queue scheduling code from
the elevator layer. One can go back to the old AS by deselecting
CONFIG_IOSCHED_AS_HIER. Even with CONFIG_IOSCHED_AS_HIER=y, with no other
cgroups created, AS behavior should remain the same as before.

o AS is a single queue ioscheduler, which means there is one AS queue per group.

o Common layer code selects the queue to dispatch from based on fairness, and
  then AS code selects the request within the group.

o AS runs read and write batches within a group. So the common layer runs timed
  group queues and, within a group's time, AS runs timed batches of reads and
  writes.

o Note: Previously the AS write batch length was adjusted dynamically whenever
  a W->R batch direction change took place and the first request from the
  read batch completed.

  Now the write batch update takes place when the last request from the write
  batch has finished during the W->R transition.

o AS runs its own anticipation logic to anticipate on reads. The common layer
  also anticipates on the group if the think time of the group is within
  slice_idle.

o Introduced a few debugging messages in AS.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   12 ++
 block/as-iosched.c       |  280 +++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.c      |   86 ++++++++++++--
 include/linux/elevator.h |    2 +
 4 files changed, 363 insertions(+), 17 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 3a9e7d7..77fc786 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -45,6 +45,18 @@ config IOSCHED_AS
 	  deadline I/O scheduler, it can also be slower in some cases
 	  especially some database loads.
 
+config IOSCHED_AS_HIER
+	bool "Anticipatory Hierarchical Scheduling support"
+	depends on IOSCHED_AS && CGROUPS
+	select ELV_FAIR_QUEUING
+	select GROUP_IOSCHED
+	default n
+	---help---
+	  Enable hierarchical scheduling in anticipatory. In this mode
+	  anticipatory keeps one IO queue per cgroup instead of a global
+	  queue. Elevator fair queuing logic ensures fairness among various
+	  queues.
+
 config IOSCHED_DEADLINE
 	tristate "Deadline I/O scheduler"
 	default y
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 3aa54a8..23a3d2d 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -16,6 +16,7 @@
 #include <linux/compiler.h>
 #include <linux/rbtree.h>
 #include <linux/interrupt.h>
+#include <linux/blktrace_api.h>
 
 /*
  * See Documentation/block/as-iosched.txt
@@ -84,10 +85,24 @@ struct as_queue {
 	struct list_head fifo_list[2];
 
 	struct request *next_rq[2];	/* next in sort order */
+
+	/*
+	 * If an as_queue is switched while a batch is running, then we
+	 * store the time left before current batch will expire
+	 */
+	long current_batch_time_left;
+
+	/*
+	 * batch data dir when queue was scheduled out. This will be used
+	 * to setup ad->batch_data_dir when queue is scheduled in.
+	 */
+	int saved_batch_data_dir;
+
 	unsigned long last_check_fifo[2];
 	int write_batch_count;		/* max # of reqs in a write batch */
 	int current_write_count;	/* how many requests left this batch */
 	int write_batch_idled;		/* has the write batch gone idle? */
+	int nr_queued[2];
 };
 
 struct as_data {
@@ -123,6 +138,9 @@ struct as_data {
 	unsigned long fifo_expire[2];
 	unsigned long batch_expire[2];
 	unsigned long antic_expire;
+
+	/* elevator requested a queue switch. */
+	int switch_queue;
 };
 
 /*
@@ -144,12 +162,174 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
+#define as_log(ad, fmt, args...)        \
+	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
+
 static DEFINE_PER_CPU(unsigned long, ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
 static void as_move_to_dispatch(struct as_data *ad, struct request *rq);
 static void as_antic_stop(struct as_data *ad);
+static inline int as_batch_expired(struct as_data *ad, struct as_queue *asq);
+
+#ifdef CONFIG_IOSCHED_AS_HIER
+static void as_save_batch_context(struct as_data *ad, struct as_queue *asq)
+{
+	/* Save batch data dir */
+	asq->saved_batch_data_dir = ad->batch_data_dir;
+
+	if (ad->changed_batch) {
+		/*
+		 * In case of force expire, we come here. Batch changeover
+		 * has been signalled but we are waiting for all the
+		 * has been signalled but we are waiting for all the requests
+		 * from the previous batch to finish before starting the new
+		 * batch. Can't wait now. Mark that full batch time
+		 */
+		asq->current_batch_time_left =
+				ad->batch_expire[ad->batch_data_dir];
+		ad->changed_batch = 0;
+		goto out;
+	}
+
+	if (ad->new_batch) {
+		/*
+		 * We should come here only when new_batch has been set
+		 * but no read request has been issued or if it is a forced
+		 * expiry.
+		 *
+		 * In both the cases, new batch has not started yet so
+		 * allocate full batch length for next scheduling opportunity.
+		 * We don't do write batch size adjustment in hierarchical
+		 * AS so that should not be an issue.
+		 */
+		asq->current_batch_time_left =
+				ad->batch_expire[ad->batch_data_dir];
+		ad->new_batch = 0;
+		goto out;
+	}
+
+	/* Save how much time is left before current batch expires */
+	if (as_batch_expired(ad, asq))
+		asq->current_batch_time_left = 0;
+	else {
+		asq->current_batch_time_left = ad->current_batch_expires
+							- jiffies;
+		BUG_ON((asq->current_batch_time_left) < 0);
+	}
+
+	if (ad->io_context) {
+		put_io_context(ad->io_context);
+		ad->io_context = NULL;
+	}
+
+out:
+	as_log(ad, "save batch: dir=%c time_left=%d changed_batch=%d"
+			" new_batch=%d, antic_status=%d",
+			ad->batch_data_dir ? 'R' : 'W',
+			asq->current_batch_time_left,
+			ad->changed_batch, ad->new_batch, ad->antic_status);
+	return;
+}
+
+/*
+ * FIXME: In the original AS, a read batch's time accounting started only
+ * after the first request had completed (if the last batch was a write
+ * batch). But here we might be rescheduling a read batch right away,
+ * irrespective of the disk cache state.
+ */
+static void as_restore_batch_context(struct as_data *ad, struct as_queue *asq)
+{
+	/* Adjust the batch expire time */
+	if (asq->current_batch_time_left)
+		ad->current_batch_expires = jiffies +
+						asq->current_batch_time_left;
+	/* restore asq batch_data_dir info */
+	ad->batch_data_dir = asq->saved_batch_data_dir;
+	as_log(ad, "restore batch: dir=%c time=%d reads_q=%d writes_q=%d"
+			" ad->antic_status=%d",
+			ad->batch_data_dir ? 'R' : 'W',
+			asq->current_batch_time_left,
+			asq->nr_queued[1], asq->nr_queued[0],
+			ad->antic_status);
+}
+
+/* ioq has been set. */
+static void as_active_ioq_set(struct request_queue *q, void *sched_queue,
+				int coop)
+{
+	struct as_queue *asq = sched_queue;
+	struct as_data *ad = q->elevator->elevator_data;
+
+	as_restore_batch_context(ad, asq);
+}
+
+/*
+ * This is a notification from common layer that it wishes to expire this
+ * io queue. AS decides whether queue can be expired, if yes, it also
+ * saves the batch context.
+ */
+static int as_expire_ioq(struct request_queue *q, void *sched_queue,
+				int slice_expired, int force)
+{
+	struct as_data *ad = q->elevator->elevator_data;
+	int status = ad->antic_status;
+	struct as_queue *asq = sched_queue;
+
+	as_log(ad, "as_expire_ioq slice_expired=%d, force=%d", slice_expired,
+		force);
+
+	/* Forced expiry. We don't have a choice */
+	if (force) {
+		as_antic_stop(ad);
+		/*
+		 * antic_stop() sets antic_status to FINISHED which signifies
+		 * that either we timed out or we found a close request but
+		 * that's not the case here. Start from scratch.
+		 */
+		ad->antic_status = ANTIC_OFF;
+		as_save_batch_context(ad, asq);
+		ad->switch_queue = 0;
+		return 1;
+	}
+
+	/*
+	 * We are waiting for requests to finish from last
+	 * batch. Don't expire the queue now
+	 */
+	if (ad->changed_batch)
+		goto keep_queue;
+
+	/*
+	 * Wait for all requests from existing batch to finish before we
+	 * switch the queue. New queue might change the batch direction
+	 * and this is to be consistent with AS philosophy of not dispatching
+	 * new requests to the underlying drive till requests from the
+	 * previous batch are completed.
+	 */
+	if (ad->nr_dispatched)
+		goto keep_queue;
+
+	/*
+	 * If AS anticipation is ON, wait for it to finish.
+	 */
+	BUG_ON(status == ANTIC_WAIT_REQ);
+
+	if (status == ANTIC_WAIT_NEXT)
+		goto keep_queue;
+
+	/* We are good to expire the queue. Save batch context */
+	as_save_batch_context(ad, asq);
+	ad->switch_queue = 0;
+	return 1;
+
+keep_queue:
+	/* Mark that elevator requested for queue switch whenever possible */
+	ad->switch_queue = 1;
+	return 0;
+}
+#endif
 
 /*
  * IO Context helper functions
@@ -429,6 +609,7 @@ static void as_antic_waitnext(struct as_data *ad)
 	mod_timer(&ad->antic_timer, timeout);
 
 	ad->antic_status = ANTIC_WAIT_NEXT;
+	as_log(ad, "antic_waitnext set");
 }
 
 /*
@@ -442,8 +623,10 @@ static void as_antic_waitreq(struct as_data *ad)
 	if (ad->antic_status == ANTIC_OFF) {
 		if (!ad->io_context || ad->ioc_finished)
 			as_antic_waitnext(ad);
-		else
+		else {
 			ad->antic_status = ANTIC_WAIT_REQ;
+			as_log(ad, "antic_waitreq set");
+		}
 	}
 }
 
@@ -455,6 +638,8 @@ static void as_antic_stop(struct as_data *ad)
 {
 	int status = ad->antic_status;
 
+	as_log(ad, "as_antic_stop antic_status=%d", ad->antic_status);
+
 	if (status == ANTIC_WAIT_REQ || status == ANTIC_WAIT_NEXT) {
 		if (status == ANTIC_WAIT_NEXT)
 			del_timer(&ad->antic_timer);
@@ -474,6 +659,7 @@ static void as_antic_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(q->queue_lock, flags);
+	as_log(ad, "as_antic_timeout");
 	if (ad->antic_status == ANTIC_WAIT_REQ
 			|| ad->antic_status == ANTIC_WAIT_NEXT) {
 		struct as_io_context *aic;
@@ -650,6 +836,21 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
 	struct io_context *ioc;
 	struct as_io_context *aic;
 
+#ifdef CONFIG_IOSCHED_AS_HIER
+	/*
+	 * If the active asq and rq's asq are not same, then one can not
+	 * break the anticipation. This primarily becomes useful when a
+	 * request is added to a queue which is not being served currently.
+	 */
+	if (rq) {
+		struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
+		struct as_queue *curr_asq =
+				elv_active_sched_queue(ad->q->elevator);
+
+		if (asq != curr_asq)
+			return 0;
+	}
+#endif
 	ioc = ad->io_context;
 	BUG_ON(!ioc);
 	spin_lock(&ioc->lock);
@@ -808,16 +1009,20 @@ static void as_update_rq(struct as_data *ad, struct request *rq)
 /*
  * Gathers timings and resizes the write batch automatically
  */
-static void update_write_batch(struct as_data *ad)
+static void update_write_batch(struct as_data *ad, struct request *rq)
 {
 	unsigned long batch = ad->batch_expire[BLK_RW_ASYNC];
 	long write_time;
-	struct as_queue *asq = elv_get_sched_queue(ad->q, NULL);
+	struct as_queue *asq = elv_get_sched_queue(ad->q, rq);
 
 	write_time = (jiffies - ad->current_batch_expires) + batch;
 	if (write_time < 0)
 		write_time = 0;
 
+	as_log(ad, "upd write: write_time=%d batch=%d write_batch_idled=%d"
+			" current_write_count=%d", write_time, batch,
+			asq->write_batch_idled, asq->current_write_count);
+
 	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
 			asq->write_batch_count /= 2;
@@ -832,6 +1037,8 @@ static void update_write_batch(struct as_data *ad)
 
 	if (asq->write_batch_count < 1)
 		asq->write_batch_count = 1;
+
+	as_log(ad, "upd write count=%d", asq->write_batch_count);
 }
 
 /*
@@ -841,6 +1048,7 @@ static void update_write_batch(struct as_data *ad)
 static void as_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct as_data *ad = q->elevator->elevator_data;
+	struct as_queue *asq = elv_get_sched_queue(q, rq);
 
 	WARN_ON(!list_empty(&rq->queuelist));
 
@@ -849,7 +1057,24 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 		goto out;
 	}
 
+	as_log(ad, "complete: reads_q=%d writes_q=%d changed_batch=%d"
+		" new_batch=%d switch_queue=%d, dir=%c",
+		asq->nr_queued[1], asq->nr_queued[0], ad->changed_batch,
+		ad->new_batch, ad->switch_queue,
+		ad->batch_data_dir ? 'R' : 'W');
+
 	if (ad->changed_batch && ad->nr_dispatched == 1) {
+		/*
+		 * If this was write batch finishing, adjust the write batch
+		 * length.
+		 *
+		 * Note, write batch length is being calculated upon completion
+		 * of last write request finished and not completion of first
+		 * read request finished in the next batch.
+		 */
+		if (ad->batch_data_dir == BLK_RW_SYNC)
+			update_write_batch(ad, rq);
+
 		ad->current_batch_expires = jiffies +
 					ad->batch_expire[ad->batch_data_dir];
 		kblockd_schedule_work(q, &ad->antic_work);
@@ -867,7 +1092,6 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 	 * and writeback caches
 	 */
 	if (ad->new_batch && ad->batch_data_dir == rq_is_sync(rq)) {
-		update_write_batch(ad);
 		ad->current_batch_expires = jiffies +
 				ad->batch_expire[BLK_RW_SYNC];
 		ad->new_batch = 0;
@@ -886,6 +1110,13 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 	}
 
 	as_put_io_context(rq);
+
+	/*
+	 * If elevator requested a queue switch, kick the queue in the
+	 * hope that this is right time for switch.
+	 */
+	if (ad->switch_queue)
+		kblockd_schedule_work(q, &ad->antic_work);
 out:
 	RQ_SET_STATE(rq, AS_RQ_POSTSCHED);
 }
@@ -906,6 +1137,9 @@ static void as_remove_queued_request(struct request_queue *q,
 
 	WARN_ON(RQ_STATE(rq) != AS_RQ_QUEUED);
 
+	BUG_ON(asq->nr_queued[data_dir] <= 0);
+	asq->nr_queued[data_dir]--;
+
 	ioc = RQ_IOC(rq);
 	if (ioc && ioc->aic) {
 		BUG_ON(!atomic_read(&ioc->aic->nr_queued));
@@ -1017,6 +1251,8 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 	if (RQ_IOC(rq) && RQ_IOC(rq)->aic)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 	ad->nr_dispatched++;
+	as_log(ad, "dispatch req dir=%c nr_dispatched = %d",
+			data_dir ? 'R' : 'W', ad->nr_dispatched);
 }
 
 /*
@@ -1064,6 +1300,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		}
 		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
+		as_log(ad, "forced dispatch");
 		return dispatched;
 	}
 
@@ -1076,8 +1313,14 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	if (!(reads || writes)
 		|| ad->antic_status == ANTIC_WAIT_REQ
 		|| ad->antic_status == ANTIC_WAIT_NEXT
-		|| ad->changed_batch)
+		|| ad->changed_batch) {
+		as_log(ad, "no dispatch. read_q=%d, writes_q=%d"
+			" ad->antic_status=%d, changed_batch=%d,"
+			" switch_queue=%d new_batch=%d", asq->nr_queued[1],
+			asq->nr_queued[0], ad->antic_status, ad->changed_batch,
+			ad->switch_queue, ad->new_batch);
 		return 0;
+	}
 
 	if (!(reads && writes && as_batch_expired(ad, asq))) {
 		/*
@@ -1090,6 +1333,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
+				as_log(ad, "can_anticipate = 1");
 				as_antic_waitreq(ad);
 				return 0;
 			}
@@ -1109,6 +1353,8 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
+	as_log(ad, "select a fresh batch and request");
+
 	if (reads) {
 		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
 
@@ -1123,6 +1369,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
+		as_log(ad, "new batch dir is sync");
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
 		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
@@ -1147,6 +1394,7 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
+		as_log(ad, "new batch dir is async");
 		asq->current_write_count = asq->write_batch_count;
 		asq->write_batch_idled = 0;
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
@@ -1182,6 +1430,9 @@ fifo_expired:
 		ad->changed_batch = 0;
 	}
 
+	if (ad->switch_queue)
+		return 0;
+
 	/*
 	 * rq is the selected appropriate request.
 	 */
@@ -1205,6 +1456,11 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 
 	rq->elevator_private = as_get_io_context(q->node);
 
+	asq->nr_queued[data_dir]++;
+	as_log(ad, "add a %c request read_q=%d write_q=%d",
+			data_dir ? 'R' : 'W', asq->nr_queued[1],
+			asq->nr_queued[0]);
+
 	if (RQ_IOC(rq)) {
 		as_update_iohist(ad, RQ_IOC(rq)->aic, rq);
 		atomic_inc(&RQ_IOC(rq)->aic->nr_queued);
@@ -1410,6 +1666,7 @@ static void *as_init_queue(struct request_queue *q)
 	ad->batch_expire[BLK_RW_ASYNC] = default_write_batch_expire;
 
 	ad->current_batch_expires = jiffies + ad->batch_expire[BLK_RW_SYNC];
+	ad->switch_queue = 0;
 
 	return ad;
 }
@@ -1495,6 +1752,11 @@ static struct elv_fs_entry as_attrs[] = {
 	AS_ATTR(antic_expire),
 	AS_ATTR(read_batch_expire),
 	AS_ATTR(write_batch_expire),
+#ifdef CONFIG_IOSCHED_AS_HIER
+	ELV_ATTR(fairness),
+	ELV_ATTR(slice_idle),
+	ELV_ATTR(slice_sync),
+#endif
 	__ATTR_NULL
 };
 
@@ -1516,8 +1778,14 @@ static struct elevator_type iosched_as = {
 		.trim =				as_trim,
 		.elevator_alloc_sched_queue_fn = as_alloc_as_queue,
 		.elevator_free_sched_queue_fn = as_free_as_queue,
+#ifdef CONFIG_IOSCHED_AS_HIER
+		.elevator_expire_ioq_fn =       as_expire_ioq,
+		.elevator_active_ioq_set_fn =   as_active_ioq_set,
 	},
-
+	.elevator_features = ELV_IOSCHED_NEED_FQ | ELV_IOSCHED_SINGLE_IOQ,
+#else
+	},
+#endif
 	.elevator_attrs = as_attrs,
 	.elevator_name = "anticipatory",
 	.elevator_owner = THIS_MODULE,
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 5711a6d..c1f676e 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -39,6 +39,8 @@ static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
 struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 						 int extract);
 void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
+int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
+					int force);
 
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
@@ -2513,6 +2515,7 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 		elv_clear_ioq_must_dispatch(ioq);
 		elv_clear_ioq_wait_busy_done(ioq);
 		elv_mark_ioq_slice_new(ioq);
+		elv_clear_ioq_must_expire(ioq);
 
 		del_timer(&efqd->idle_slice_timer);
 	}
@@ -2671,6 +2674,7 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 	elv_clear_ioq_wait_request(ioq);
 	elv_clear_ioq_wait_busy(ioq);
 	elv_clear_ioq_wait_busy_done(ioq);
+	elv_clear_ioq_must_expire(ioq);
 
 	/*
 	 * if ioq->slice_end = 0, that means a queue was expired before first
@@ -2809,16 +2813,18 @@ int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
 static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
 {
 	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
-	elv_ioq_slice_expired(q);
+	if (elv_iosched_expire_ioq(q, 0, 1)) {
+		elv_ioq_slice_expired(q);
 
-	/*
-	 * Put the new queue at the front of the of the current list,
-	 * so we know that it will be selected next.
-	 */
+		/*
+		 * Put the new queue at the front of the of the current list,
+		 * so we know that it will be selected next.
+		 */
 
-	elv_activate_ioq(ioq, 1);
-	elv_ioq_set_slice_end(ioq, 0);
-	elv_mark_ioq_slice_new(ioq);
+		elv_activate_ioq(ioq, 1);
+		elv_ioq_set_slice_end(ioq, 0);
+		elv_mark_ioq_slice_new(ioq);
+	}
 }
 
 void elv_ioq_request_add(struct request_queue *q, struct request *rq)
@@ -2989,12 +2995,56 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	}
 }
 
+/*
+ * Call into the iosched to say that the elevator wants to expire the queue.
+ * This gives an iosched like AS a chance to say no (if it is in the middle
+ * of a batch changeover or anticipating), and to do some housekeeping.
+ *
+ * force--> it is force dispatch and iosched must clean up its state. This
+ * 	     is useful when elevator wants to drain iosched and wants to
+ * 	     expire the current active queue.
+ *
+ * slice_expired--> if 1, ioq slice expired hence elevator fair queuing logic
+ * 		    wants to switch the queue. The iosched should normally
+ * 		    allow this; currently AS can deny the switch if it is in
+ * 		    the middle of a batch switch.
+ *
+ * 		    if 0, time slice is still remaining. It is up to the
+ * 		    iosched whether it wants to wait on this queue or just
+ * 		    wants to expire it and move on to the next queue.
+ *
+ */
+int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
+					int force)
+{
+	struct elevator_queue *e = q->elevator;
+	struct io_queue *ioq = elv_active_ioq(q->elevator);
+	int ret = 1;
+
+	if (e->ops->elevator_expire_ioq_fn) {
+		ret = e->ops->elevator_expire_ioq_fn(q, ioq->sched_queue,
+							slice_expired, force);
+		/*
+		 * AS denied expiration of the queue right now. Mark that the
+		 * elevator layer has requested the iosched (AS) to expire this
+		 * queue; AS will try to expire it as soon as it can. Meanwhile,
+		 * don't dispatch from this queue even if a new request arrives
+		 * and time slice is left; expire it at the first opportunity.
+		 */
+		if (!ret)
+			elv_mark_ioq_must_expire(ioq);
+	}
+
+	return ret;
+}
+
 /* Common layer function to select the next queue to dispatch from */
 void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
 	struct io_group *iog;
+	int slice_expired = 1;
 
 	if (!elv_nr_busy_ioq(q->elevator))
 		return NULL;
@@ -3013,6 +3063,10 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 			goto expire;
 	}
 
+	/* This queue has been marked for expiry. Try to expire it */
+	if (elv_ioq_must_expire(ioq))
+		goto expire;
+
 	/*
 	 * If there is only root group present, don't expire the queue for
 	 * single queue ioschedulers (noop, deadline, AS). It is unnecessary
@@ -3102,8 +3156,16 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto keep_queue;
 	}
 
+	slice_expired = 0;
 expire:
-	elv_ioq_slice_expired(q);
+	if (elv_iosched_expire_ioq(q, slice_expired, force))
+		elv_ioq_slice_expired(q);
+	else
+		/*
+		 * Not making ioq = NULL, as AS can deny queue expiration and
+		 * continue to dispatch from same queue
+		 */
+		goto keep_queue;
 new_queue:
 	ioq = elv_set_active_ioq(q, new_ioq);
 keep_queue:
@@ -3268,7 +3330,8 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		}
 
 		if (elv_ioq_class_idle(ioq)) {
-			elv_ioq_slice_expired(q);
+			if (elv_iosched_expire_ioq(q, 1, 0))
+				elv_ioq_slice_expired(q);
 			goto done;
 		}
 
@@ -3302,7 +3365,8 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 				elv_ioq_arm_slice_timer(q, 1);
 			} else {
 				/* Expire the queue */
-				elv_ioq_slice_expired(q);
+				if (elv_iosched_expire_ioq(q, 1, 0))
+					elv_ioq_slice_expired(q);
 			}
 		} else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
 			 && sync && !rq_noidle(rq))
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 3e99bdb..b47ecb3 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -42,6 +42,7 @@ typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
 						struct request*);
 typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
 						void*, int probe);
+typedef int (elevator_expire_ioq_fn) (struct request_queue*, void *, int, int);
 #endif
 
 struct elevator_ops
@@ -81,6 +82,7 @@ struct elevator_ops
 	elevator_should_preempt_fn *elevator_should_preempt_fn;
 	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
 	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
+	elevator_expire_ioq_fn  *elevator_expire_ioq_fn;
 #endif
 };
 
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios.
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (12 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 13/20] io-controller: anticipatory " Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 15/20] io-controller: map async requests to appropriate cgroup Vivek Goyal
                     ` (7 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o blkio_cgroup patches from Ryo to track async bios.

o Fernando is also working on another IO tracking mechanism. We are not
  particular about any specific IO tracking mechanism; this patchset can make
  use of whichever mechanism makes it upstream. For the time being we are
  making use of Ryo's posting (a small usage sketch follows below).
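
o As a rough illustration of how a consumer would use the interface added by
  this patch (the wrapper below is hypothetical; get_blkio_cgroup_id(),
  get_blkio_cgroup_iocontext() and blkio_cgroup_set_owner() are the functions
  this patch introduces):

	/*
	 * Illustrative sketch only: a hypothetical consumer (an IO scheduler
	 * or a device mapper module) resolving which cgroup originally owned
	 * the page behind an async bio.
	 */
	static struct io_context *example_bio_to_ioc(struct bio *bio)
	{
		/*
		 * The owner id was stored in the page_cgroup of the bio's page
		 * when the page was touched (blkio_cgroup_set_owner() and
		 * friends), so it is still available when the async writeback
		 * is finally submitted. An id of 0 is the value that
		 * __init_blkio_page_cgroup() resets the page to.
		 */
		unsigned long id = get_blkio_cgroup_id(bio);

		if (!id)
			return NULL;

		/* Map the bio to the io_context of the owning blkio cgroup. */
		return get_blkio_cgroup_iocontext(bio);
	}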

Based on 2.6.30-rc3-git3
Signed-off-by: Hirokazu Takahashi <taka-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org>
Signed-off-by: Ryo Tsuruta <ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org>
---
 block/blk-ioc.c               |   37 +++---
 fs/buffer.c                   |    2 +
 fs/direct-io.c                |    2 +
 include/linux/biotrack.h      |   97 +++++++++++++
 include/linux/cgroup_subsys.h |    6 +
 include/linux/iocontext.h     |    1 +
 include/linux/memcontrol.h    |    6 +
 include/linux/mmzone.h        |    4 +-
 include/linux/page_cgroup.h   |   31 ++++-
 init/Kconfig                  |   15 ++
 mm/Makefile                   |    4 +-
 mm/biotrack.c                 |  300 +++++++++++++++++++++++++++++++++++++++++
 mm/bounce.c                   |    2 +
 mm/filemap.c                  |    2 +
 mm/memcontrol.c               |    6 +
 mm/memory.c                   |    5 +
 mm/page-writeback.c           |    2 +
 mm/page_cgroup.c              |   17 ++-
 mm/swap_state.c               |    2 +
 19 files changed, 511 insertions(+), 30 deletions(-)
 create mode 100644 include/linux/biotrack.h
 create mode 100644 mm/biotrack.c

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 8f0f6cf..ccde40e 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -84,27 +84,32 @@ void exit_io_context(void)
 	}
 }
 
+void init_io_context(struct io_context *ioc)
+{
+	atomic_set(&ioc->refcount, 1);
+	atomic_set(&ioc->nr_tasks, 1);
+	spin_lock_init(&ioc->lock);
+	ioc->ioprio_changed = 0;
+	ioc->ioprio = 0;
+#ifdef CONFIG_GROUP_IOSCHED
+	ioc->cgroup_changed = 0;
+#endif
+	ioc->last_waited = jiffies; /* doesn't matter... */
+	ioc->nr_batch_requests = 0; /* because this is 0 */
+	ioc->aic = NULL;
+	INIT_RADIX_TREE(&ioc->radix_root, GFP_ATOMIC | __GFP_HIGH);
+	INIT_HLIST_HEAD(&ioc->cic_list);
+	ioc->ioc_data = NULL;
+}
+
+
 struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
 {
 	struct io_context *ret;
 
 	ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
-	if (ret) {
-		atomic_set(&ret->refcount, 1);
-		atomic_set(&ret->nr_tasks, 1);
-		spin_lock_init(&ret->lock);
-		ret->ioprio_changed = 0;
-		ret->ioprio = 0;
-#ifdef CONFIG_GROUP_IOSCHED
-		ret->cgroup_changed = 0;
-#endif
-		ret->last_waited = jiffies; /* doesn't matter... */
-		ret->nr_batch_requests = 0; /* because this is 0 */
-		ret->aic = NULL;
-		INIT_RADIX_TREE(&ret->radix_root, GFP_ATOMIC | __GFP_HIGH);
-		INIT_HLIST_HEAD(&ret->cic_list);
-		ret->ioc_data = NULL;
-	}
+	if (ret)
+		init_io_context(ret);
 
 	return ret;
 }
diff --git a/fs/buffer.c b/fs/buffer.c
index 4910612..8142677 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -36,6 +36,7 @@
 #include <linux/buffer_head.h>
 #include <linux/task_io_accounting_ops.h>
 #include <linux/bio.h>
+#include <linux/biotrack.h>
 #include <linux/notifier.h>
 #include <linux/cpu.h>
 #include <linux/bitops.h>
@@ -668,6 +669,7 @@ static void __set_page_dirty(struct page *page,
 	if (page->mapping) {	/* Race with truncate? */
 		WARN_ON_ONCE(warn && !PageUptodate(page));
 		account_page_dirtied(page, mapping);
+		blkio_cgroup_reset_owner_pagedirty(page, current->mm);
 		radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 	}
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 05763bb..60b1a99 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -33,6 +33,7 @@
 #include <linux/err.h>
 #include <linux/blkdev.h>
 #include <linux/buffer_head.h>
+#include <linux/biotrack.h>
 #include <linux/rwsem.h>
 #include <linux/uio.h>
 #include <asm/atomic.h>
@@ -797,6 +798,7 @@ static int do_direct_IO(struct dio *dio)
 			ret = PTR_ERR(page);
 			goto out;
 		}
+		blkio_cgroup_reset_owner(page, current->mm);
 
 		while (block_in_page < blocks_per_page) {
 			unsigned offset_in_page = block_in_page << blkbits;
diff --git a/include/linux/biotrack.h b/include/linux/biotrack.h
new file mode 100644
index 0000000..741a8b5
--- /dev/null
+++ b/include/linux/biotrack.h
@@ -0,0 +1,97 @@
+#include <linux/cgroup.h>
+#include <linux/mm.h>
+#include <linux/page_cgroup.h>
+
+#ifndef _LINUX_BIOTRACK_H
+#define _LINUX_BIOTRACK_H
+
+#ifdef	CONFIG_CGROUP_BLKIO
+
+struct io_context;
+struct block_device;
+
+struct blkio_cgroup {
+	struct cgroup_subsys_state css;
+	struct io_context *io_context;	/* default io_context */
+/*	struct radix_tree_root io_context_root; per device io_context */
+};
+
+/**
+ * __init_blkio_page_cgroup() - initialize a blkio_page_cgroup
+ * @pc:		page_cgroup of the page
+ *
+ * Reset the owner ID of a page.
+ */
+static inline void __init_blkio_page_cgroup(struct page_cgroup *pc)
+{
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, 0);
+	unlock_page_cgroup(pc);
+}
+
+/**
+ * blkio_cgroup_disabled - check whether blkio_cgroup is disabled
+ *
+ * Returns true if disabled, false if not.
+ */
+static inline bool blkio_cgroup_disabled(void)
+{
+	if (blkio_cgroup_subsys.disabled)
+		return true;
+	return false;
+}
+
+extern void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm);
+extern void blkio_cgroup_reset_owner(struct page *page, struct mm_struct *mm);
+extern void blkio_cgroup_reset_owner_pagedirty(struct page *page,
+						 struct mm_struct *mm);
+extern void blkio_cgroup_copy_owner(struct page *page, struct page *opage);
+
+extern struct io_context *get_blkio_cgroup_iocontext(struct bio *bio);
+extern unsigned long get_blkio_cgroup_id(struct bio *bio);
+extern struct cgroup *blkio_cgroup_lookup(int id);
+
+#else	/* CONFIG_CGROUP_BLKIO */
+
+struct blkio_cgroup;
+
+static inline void __init_blkio_page_cgroup(struct page_cgroup *pc)
+{
+}
+
+static inline bool blkio_cgroup_disabled(void)
+{
+	return true;
+}
+
+static inline void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_reset_owner(struct page *page,
+						struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_reset_owner_pagedirty(struct page *page,
+						struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_copy_owner(struct page *page, struct page *opage)
+{
+}
+
+static inline struct io_context *get_blkio_cgroup_iocontext(struct bio *bio)
+{
+	return NULL;
+}
+
+static inline unsigned long get_blkio_cgroup_id(struct bio *bio)
+{
+	return 0;
+}
+
+#endif	/* CONFIG_CGROUP_BLKIO */
+
+#endif /* _LINUX_BIOTRACK_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 68ea6bd..f214e6e 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -43,6 +43,12 @@ SUBSYS(mem_cgroup)
 
 /* */
 
+#ifdef CONFIG_CGROUP_BLKIO
+SUBSYS(blkio_cgroup)
+#endif
+
+/* */
+
 #ifdef CONFIG_CGROUP_DEVICE
 SUBSYS(devices)
 #endif
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 73027b6..9c4587b 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -104,6 +104,7 @@ int put_io_context(struct io_context *ioc);
 void exit_io_context(void);
 struct io_context *get_io_context(gfp_t gfp_flags, int node);
 struct io_context *alloc_io_context(gfp_t gfp_flags, int node);
+void init_io_context(struct io_context *ioc);
 void copy_io_context(struct io_context **pdst, struct io_context **psrc);
 #else
 static inline void exit_io_context(void)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 25b9ca9..d74b462 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -37,6 +37,8 @@ struct mm_struct;
  * (Of course, if memcg does memory allocation in future, GFP_KERNEL is sane.)
  */
 
+extern void __init_mem_page_cgroup(struct page_cgroup *pc);
+
 extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask);
 /* for swap handling */
@@ -120,6 +122,10 @@ extern bool mem_cgroup_oom_called(struct task_struct *task);
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
 struct mem_cgroup;
 
+static inline void __init_mem_page_cgroup(struct page_cgroup *pc)
+{
+}
+
 static inline int mem_cgroup_newpage_charge(struct page *page,
 					struct mm_struct *mm, gfp_t gfp_mask)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a47c879..14477cb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -607,7 +607,7 @@ typedef struct pglist_data {
 	int nr_zones;
 #ifdef CONFIG_FLAT_NODE_MEM_MAP	/* means !SPARSEMEM */
 	struct page *node_mem_map;
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 	struct page_cgroup *node_page_cgroup;
 #endif
 #endif
@@ -958,7 +958,7 @@ struct mem_section {
 
 	/* See declaration of similar field in struct zone */
 	unsigned long *pageblock_flags;
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 	/*
 	 * If !SPARSEMEM, pgdat doesn't have page_cgroup pointer. We use
 	 * section. (see memcontrol.h/page_cgroup.h about this.)
diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index 7339c7b..dd7f71c 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -1,7 +1,7 @@
 #ifndef __LINUX_PAGE_CGROUP_H
 #define __LINUX_PAGE_CGROUP_H
 
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 #include <linux/bit_spinlock.h>
 /*
  * Page Cgroup can be considered as an extended mem_map.
@@ -12,9 +12,11 @@
  */
 struct page_cgroup {
 	unsigned long flags;
-	struct mem_cgroup *mem_cgroup;
 	struct page *page;
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+	struct mem_cgroup *mem_cgroup;
 	struct list_head lru;		/* per cgroup LRU list */
+#endif
 };
 
 void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat);
@@ -71,7 +73,7 @@ static inline void unlock_page_cgroup(struct page_cgroup *pc)
 	bit_spin_unlock(PCG_LOCK, &pc->flags);
 }
 
-#else /* CONFIG_CGROUP_MEM_RES_CTLR */
+#else /* CONFIG_CGROUP_PAGE */
 struct page_cgroup;
 
 static inline void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
@@ -122,4 +124,27 @@ static inline void swap_cgroup_swapoff(int type)
 }
 
 #endif
+
+#ifdef CONFIG_CGROUP_BLKIO
+/*
+ * use lower 16 bits for flags and reserve the rest for the page tracking id
+ */
+#define PCG_TRACKING_ID_SHIFT	(16)
+#define PCG_TRACKING_ID_BITS \
+	(8 * sizeof(unsigned long) - PCG_TRACKING_ID_SHIFT)
+
+/* NOTE: must be called with the page_cgroup lock held (lock_page_cgroup()) */
+static inline unsigned long page_cgroup_get_id(struct page_cgroup *pc)
+{
+	return pc->flags >> PCG_TRACKING_ID_SHIFT;
+}
+
+/* NOTE: must be called with lock_page_cgroup() held */
+static inline void page_cgroup_set_id(struct page_cgroup *pc, unsigned long id)
+{
+	WARN_ON(id >= (1UL << PCG_TRACKING_ID_BITS));
+	pc->flags &= (1UL << PCG_TRACKING_ID_SHIFT) - 1;
+	pc->flags |= (unsigned long)(id << PCG_TRACKING_ID_SHIFT);
+}
+#endif
 #endif
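
As an aside (not part of the patch): the tracking id shares the page_cgroup flags word with the flag bits, so PCG_TRACKING_ID_BITS is 48 on a 64-bit kernel and 16 on 32-bit. A minimal userspace sketch of the same packing, with hypothetical names, purely to illustrate the arithmetic:

#include <stdio.h>

#define TRACKING_ID_SHIFT 16UL	/* mirrors PCG_TRACKING_ID_SHIFT */

int main(void)
{
	unsigned long flags = 0x3UL;	/* pretend some PCG_* flag bits are set */
	unsigned long id = 42UL;	/* a css_id handed out by the cgroup core */

	/* pack: keep the low 16 flag bits, store the id above them */
	flags &= (1UL << TRACKING_ID_SHIFT) - 1;
	flags |= id << TRACKING_ID_SHIFT;

	/* unpack: both values survive independently */
	printf("flags = %#lx, id = %lu\n",
	       flags & ((1UL << TRACKING_ID_SHIFT) - 1),
	       flags >> TRACKING_ID_SHIFT);
	return 0;
}

Compiled with gcc this prints "flags = 0x3, id = 42", showing that the low 16 flag bits and the id round-trip independently.
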
diff --git a/init/Kconfig b/init/Kconfig
index 1a4686d..ee16d6f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -616,6 +616,21 @@ config GROUP_IOSCHED
 
 endif # CGROUPS
 
+config CGROUP_BLKIO
+	bool "Block I/O cgroup subsystem"
+	depends on CGROUPS && BLOCK
+	select MM_OWNER
+	help
+	  Provides a Resource Controller which enables tracking of the owner
+	  of every block I/O request.
+	  The information this subsystem provides can be used by any
+	  kind of module, such as the dm-ioband device-mapper module or
+	  the cfq I/O scheduler.
+
+config CGROUP_PAGE
+	def_bool y
+	depends on CGROUP_MEM_RES_CTLR || CGROUP_BLKIO
+
 config MM_OWNER
 	bool
 
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..76c3436 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -37,4 +37,6 @@ else
 obj-$(CONFIG_SMP) += allocpercpu.o
 endif
 obj-$(CONFIG_QUICKLIST) += quicklist.o
-obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o
+obj-$(CONFIG_CGROUP_PAGE) += page_cgroup.o
+obj-$(CONFIG_CGROUP_BLKIO) += biotrack.o
diff --git a/mm/biotrack.c b/mm/biotrack.c
new file mode 100644
index 0000000..2baf1f0
--- /dev/null
+++ b/mm/biotrack.c
@@ -0,0 +1,300 @@
+/* biotrack.c - Block I/O Tracking
+ *
+ * Copyright (C) VA Linux Systems Japan, 2008-2009
+ * Developed by Hirokazu Takahashi <taka@valinux.co.jp>
+ *
+ * Copyright (C) 2008 Andrea Righi <righi.andrea@gmail.com>
+ * Use part of page_cgroup->flags to store blkio-cgroup ID.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/smp.h>
+#include <linux/bit_spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/biotrack.h>
+#include <linux/mm_inline.h>
+
+/*
+ * The block I/O tracking mechanism is implemented on the cgroup memory
+ * controller framework. It helps to find the owner of an I/O request
+ * because every I/O request has a target page and the owner of the page
+ * can be easily determined on the framework.
+ */
+
+/* Return the blkio_cgroup that associates with a cgroup. */
+static inline struct blkio_cgroup *cgroup_blkio(struct cgroup *cgrp)
+{
+	return container_of(cgroup_subsys_state(cgrp, blkio_cgroup_subsys_id),
+					struct blkio_cgroup, css);
+}
+
+/* Return the blkio_cgroup that associates with a process. */
+static inline struct blkio_cgroup *blkio_cgroup_from_task(struct task_struct *p)
+{
+	return container_of(task_subsys_state(p, blkio_cgroup_subsys_id),
+					struct blkio_cgroup, css);
+}
+
+static struct io_context default_blkio_io_context;
+static struct blkio_cgroup default_blkio_cgroup = {
+	.io_context	= &default_blkio_io_context,
+};
+
+/**
+ * blkio_cgroup_set_owner() - set the owner ID of a page.
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Make a given page have the blkio-cgroup ID of the owner of this page.
+ */
+void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm)
+{
+	struct blkio_cgroup *biog;
+	struct page_cgroup *pc;
+	unsigned long id;
+
+	if (blkio_cgroup_disabled())
+		return;
+	pc = lookup_page_cgroup(page);
+	if (unlikely(!pc))
+		return;
+
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, 0);	/* 0: default blkio_cgroup id */
+	unlock_page_cgroup(pc);
+	if (!mm)
+		return;
+
+	rcu_read_lock();
+	biog = blkio_cgroup_from_task(rcu_dereference(mm->owner));
+	if (unlikely(!biog)) {
+		rcu_read_unlock();
+		return;
+	}
+	/*
+	 * css_get(&biog->css) is not called to increment the reference
+	 * count of this blkio_cgroup "biog", so the css_id may become
+	 * invalid even while this page is still active.
+	 * This approach is chosen to minimize the overhead.
+	 */
+	id = css_id(&biog->css);
+	rcu_read_unlock();
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, id);
+	unlock_page_cgroup(pc);
+}
+
+/**
+ * blkio_cgroup_reset_owner() - reset the owner ID of a page
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Change the owner of a given page if necessary.
+ */
+void blkio_cgroup_reset_owner(struct page *page, struct mm_struct *mm)
+{
+	blkio_cgroup_set_owner(page, mm);
+}
+
+/**
+ * blkio_cgroup_reset_owner_pagedirty() - reset the owner ID of a pagecache page
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Change the owner of a given page if the page is in the pagecache.
+ */
+void blkio_cgroup_reset_owner_pagedirty(struct page *page, struct mm_struct *mm)
+{
+	if (!page_is_file_cache(page))
+		return;
+	if (current->flags & PF_MEMALLOC)
+		return;
+
+	blkio_cgroup_reset_owner(page, mm);
+}
+
+/**
+ * blkio_cgroup_copy_owner() - copy the owner ID of a page into another page
+ * @npage:	the page where we want to copy the owner
+ * @opage:	the page from which we want to copy the ID
+ *
+ * Copy the owner ID of @opage into @npage.
+ */
+void blkio_cgroup_copy_owner(struct page *npage, struct page *opage)
+{
+	struct page_cgroup *npc, *opc;
+	unsigned long id;
+
+	if (blkio_cgroup_disabled())
+		return;
+	npc = lookup_page_cgroup(npage);
+	if (unlikely(!npc))
+		return;
+	opc = lookup_page_cgroup(opage);
+	if (unlikely(!opc))
+		return;
+
+	lock_page_cgroup(opc);
+	lock_page_cgroup(npc);
+	id = page_cgroup_get_id(opc);
+	page_cgroup_set_id(npc, id);
+	unlock_page_cgroup(npc);
+	unlock_page_cgroup(opc);
+}
+
+/* Create a new blkio-cgroup. */
+static struct cgroup_subsys_state *
+blkio_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	struct blkio_cgroup *biog;
+	struct io_context *ioc;
+
+	if (!cgrp->parent) {
+		biog = &default_blkio_cgroup;
+		init_io_context(biog->io_context);
+		/* Increment the reference count so it is never released. */
+		atomic_inc(&biog->io_context->refcount);
+		return &biog->css;
+	}
+
+	biog = kzalloc(sizeof(*biog), GFP_KERNEL);
+	if (!biog)
+		return ERR_PTR(-ENOMEM);
+	ioc = alloc_io_context(GFP_KERNEL, -1);
+	if (!ioc) {
+		kfree(biog);
+		return ERR_PTR(-ENOMEM);
+	}
+	biog->io_context = ioc;
+	return &biog->css;
+}
+
+/* Delete the blkio-cgroup. */
+static void blkio_cgroup_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	struct blkio_cgroup *biog = cgroup_blkio(cgrp);
+
+	put_io_context(biog->io_context);
+	free_css_id(&blkio_cgroup_subsys, &biog->css);
+	kfree(biog);
+}
+
+/**
+ * get_blkio_cgroup_id() - determine the blkio-cgroup ID
+ * @bio:	the &struct bio which describes the I/O
+ *
+ * Returns the blkio-cgroup ID of a given bio. A return value zero
+ * means that the page associated with the bio belongs to default_blkio_cgroup.
+ */
+unsigned long get_blkio_cgroup_id(struct bio *bio)
+{
+	struct page_cgroup *pc;
+	struct page *page = bio_iovec_idx(bio, 0)->bv_page;
+	unsigned long id = 0;
+
+	pc = lookup_page_cgroup(page);
+	if (pc) {
+		lock_page_cgroup(pc);
+		id = page_cgroup_get_id(pc);
+		unlock_page_cgroup(pc);
+	}
+	return id;
+}
+
+/**
+ * get_blkio_cgroup_iocontext() - determine the blkio-cgroup iocontext
+ * @bio:	the &struct bio which describes the I/O
+ *
+ * Returns the iocontext of blkio-cgroup that issued a given bio.
+ */
+struct io_context *get_blkio_cgroup_iocontext(struct bio *bio)
+{
+	struct cgroup_subsys_state *css;
+	struct blkio_cgroup *biog;
+	struct io_context *ioc;
+	unsigned long id;
+
+	id = get_blkio_cgroup_id(bio);
+	rcu_read_lock();
+	css = css_lookup(&blkio_cgroup_subsys, id);
+	if (css)
+		biog = container_of(css, struct blkio_cgroup, css);
+	else
+		biog = &default_blkio_cgroup;
+	ioc = biog->io_context;	/* default io_context for this cgroup */
+	atomic_inc(&ioc->refcount);
+	rcu_read_unlock();
+	return ioc;
+}
+
+/**
+ * blkio_cgroup_lookup() - lookup a cgroup by blkio-cgroup ID
+ * @id:		blkio-cgroup ID
+ *
+ * Returns the cgroup associated with the specified ID, or NULL if lookup
+ * fails.
+ *
+ * Note:
+ * This function should be called under rcu_read_lock().
+ */
+struct cgroup *blkio_cgroup_lookup(int id)
+{
+	struct cgroup *cgrp;
+	struct cgroup_subsys_state *css;
+
+	if (blkio_cgroup_disabled())
+		return NULL;
+
+	css = css_lookup(&blkio_cgroup_subsys, id);
+	if (!css)
+		return NULL;
+	cgrp = css->cgroup;
+	return cgrp;
+}
+EXPORT_SYMBOL(get_blkio_cgroup_iocontext);
+EXPORT_SYMBOL(get_blkio_cgroup_id);
+EXPORT_SYMBOL(blkio_cgroup_lookup);
+
+static u64 blkio_id_read(struct cgroup *cgrp, struct cftype *cft)
+{
+	struct blkio_cgroup *biog = cgroup_blkio(cgrp);
+	unsigned long id;
+
+	rcu_read_lock();
+	id = css_id(&biog->css);
+	rcu_read_unlock();
+	return (u64)id;
+}
+
+
+static struct cftype blkio_files[] = {
+	{
+		.name = "id",
+		.read_u64 = blkio_id_read,
+	},
+};
+
+static int blkio_cgroup_populate(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	return cgroup_add_files(cgrp, ss, blkio_files,
+					ARRAY_SIZE(blkio_files));
+}
+
+struct cgroup_subsys blkio_cgroup_subsys = {
+	.name		= "blkio",
+	.create		= blkio_cgroup_create,
+	.destroy	= blkio_cgroup_destroy,
+	.populate	= blkio_cgroup_populate,
+	.subsys_id	= blkio_cgroup_subsys_id,
+	.use_id		= 1,
+};
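
As an illustration of how a consumer (an IO scheduler or dm-ioband) might use this interface, here is a hedged kernel-side sketch; the function name is hypothetical and it simply assumes the helpers added above:

#include <linux/bio.h>
#include <linux/rcupdate.h>
#include <linux/biotrack.h>

/* Hypothetical helper: does @bio's first page belong to cgroup @cgrp? */
static bool bio_owned_by_cgroup(struct bio *bio, struct cgroup *cgrp)
{
	unsigned long id = get_blkio_cgroup_id(bio);
	struct cgroup *owner;
	bool match;

	rcu_read_lock();
	/* id 0 (the default blkio cgroup) makes the lookup return NULL */
	owner = blkio_cgroup_lookup(id);
	match = (owner == cgrp);
	rcu_read_unlock();

	return match;
}

Note that blkio_cgroup_lookup() must run under rcu_read_lock() and that an id of zero means the page belongs to the default blkio cgroup.
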
diff --git a/mm/bounce.c b/mm/bounce.c
index e590272..875380c 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -14,6 +14,7 @@
 #include <linux/hash.h>
 #include <linux/highmem.h>
 #include <linux/blktrace_api.h>
+#include <linux/biotrack.h>
 #include <trace/block.h>
 #include <asm/tlbflush.h>
 
@@ -212,6 +213,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 		to->bv_len = from->bv_len;
 		to->bv_offset = from->bv_offset;
 		inc_zone_page_state(to->bv_page, NR_BOUNCE);
+		blkio_cgroup_copy_owner(to->bv_page, page);
 
 		if (rw == WRITE) {
 			char *vto, *vfrom;
diff --git a/mm/filemap.c b/mm/filemap.c
index 1b60f30..073a633 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -33,6 +33,7 @@
 #include <linux/cpuset.h>
 #include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */
 #include <linux/memcontrol.h>
+#include <linux/biotrack.h>
 #include <linux/mm_inline.h> /* for page_is_file_cache() */
 #include "internal.h"
 
@@ -464,6 +465,7 @@ int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 					gfp_mask & GFP_RECLAIM_MASK);
 	if (error)
 		goto out;
+	blkio_cgroup_set_owner(page, current->mm);
 
 	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
 	if (error == 0) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 78eb855..b47e467 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -128,6 +128,12 @@ struct mem_cgroup_lru_info {
 	struct mem_cgroup_per_node *nodeinfo[MAX_NUMNODES];
 };
 
+void __meminit __init_mem_page_cgroup(struct page_cgroup *pc)
+{
+	pc->mem_cgroup = NULL;
+	INIT_LIST_HEAD(&pc->lru);
+}
+
 /*
  * The memory controller data structure. The memory controller controls both
  * page cache and RSS per cgroup. We would eventually like to provide
diff --git a/mm/memory.c b/mm/memory.c
index 4126dd1..0da0e70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
 #include <linux/init.h>
 #include <linux/writeback.h>
 #include <linux/memcontrol.h>
+#include <linux/biotrack.h>
 #include <linux/mmu_notifier.h>
 #include <linux/kallsyms.h>
 #include <linux/swapops.h>
@@ -2064,6 +2065,7 @@ gotten:
 		 */
 		ptep_clear_flush_notify(vma, address, page_table);
 		page_add_new_anon_rmap(new_page, vma, address);
+		blkio_cgroup_set_owner(new_page, mm);
 		set_pte_at(mm, address, page_table, entry);
 		update_mmu_cache(vma, address, entry);
 		if (old_page) {
@@ -2529,6 +2531,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);
 	page_add_anon_rmap(page, vma, address);
+	blkio_cgroup_reset_owner(page, mm);
 	/* It's better to call commit-charge after rmap is established */
 	mem_cgroup_commit_charge_swapin(page, ptr);
 
@@ -2593,6 +2596,7 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto release;
 	inc_mm_counter(mm, anon_rss);
 	page_add_new_anon_rmap(page, vma, address);
+	blkio_cgroup_set_owner(page, mm);
 	set_pte_at(mm, address, page_table, entry);
 
 	/* No need to invalidate - it was non-present before */
@@ -2740,6 +2744,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (anon) {
 			inc_mm_counter(mm, anon_rss);
 			page_add_new_anon_rmap(page, vma, address);
+			blkio_cgroup_set_owner(page, mm);
 		} else {
 			inc_mm_counter(mm, file_rss);
 			page_add_file_rmap(page);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..3604c35 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -23,6 +23,7 @@
 #include <linux/init.h>
 #include <linux/backing-dev.h>
 #include <linux/task_io_accounting_ops.h>
+#include <linux/biotrack.h>
 #include <linux/blkdev.h>
 #include <linux/mpage.h>
 #include <linux/rmap.h>
@@ -1243,6 +1244,7 @@ int __set_page_dirty_nobuffers(struct page *page)
 			BUG_ON(mapping2 != mapping);
 			WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
 			account_page_dirtied(page, mapping);
+			blkio_cgroup_reset_owner_pagedirty(page, current->mm);
 			radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 		}
diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index 791905c..e143d04 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -9,14 +9,15 @@
 #include <linux/vmalloc.h>
 #include <linux/cgroup.h>
 #include <linux/swapops.h>
+#include <linux/biotrack.h>
 
 static void __meminit
 __init_page_cgroup(struct page_cgroup *pc, unsigned long pfn)
 {
 	pc->flags = 0;
-	pc->mem_cgroup = NULL;
 	pc->page = pfn_to_page(pfn);
-	INIT_LIST_HEAD(&pc->lru);
+	__init_mem_page_cgroup(pc);
+	__init_blkio_page_cgroup(pc);
 }
 static unsigned long total_usage;
 
@@ -74,7 +75,7 @@ void __init page_cgroup_init(void)
 
 	int nid, fail;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() && blkio_cgroup_disabled())
 		return;
 
 	for_each_online_node(nid)  {
@@ -83,12 +84,12 @@ void __init page_cgroup_init(void)
 			goto fail;
 	}
 	printk(KERN_INFO "allocated %ld bytes of page_cgroup\n", total_usage);
-	printk(KERN_INFO "please try cgroup_disable=memory option if you"
+	printk(KERN_INFO "please try cgroup_disable=memory,blkio option if you"
 	" don't want\n");
 	return;
 fail:
 	printk(KERN_CRIT "allocation of page_cgroup was failed.\n");
-	printk(KERN_CRIT "please try cgroup_disable=memory boot option\n");
+	printk(KERN_CRIT "please try cgroup_disable=memory,blkio boot options\n");
 	panic("Out of memory");
 }
 
@@ -248,7 +249,7 @@ void __init page_cgroup_init(void)
 	unsigned long pfn;
 	int fail = 0;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() && blkio_cgroup_disabled())
 		return;
 
 	for (pfn = 0; !fail && pfn < max_pfn; pfn += PAGES_PER_SECTION) {
@@ -263,8 +264,8 @@ void __init page_cgroup_init(void)
 		hotplug_memory_notifier(page_cgroup_callback, 0);
 	}
 	printk(KERN_INFO "allocated %ld bytes of page_cgroup\n", total_usage);
-	printk(KERN_INFO "please try cgroup_disable=memory option if you don't"
-	" want\n");
+	printk(KERN_INFO "please try cgroup_disable=memory,blkio option"
+	" if you don't want\n");
 }
 
 void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1416e7e..df9d6bb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -18,6 +18,7 @@
 #include <linux/pagevec.h>
 #include <linux/migrate.h>
 #include <linux/page_cgroup.h>
+#include <linux/biotrack.h>
 
 #include <asm/pgtable.h>
 
@@ -306,6 +307,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 */
 		__set_page_locked(new_page);
 		SetPageSwapBacked(new_page);
+		blkio_cgroup_set_owner(new_page, current->mm);
 		err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
 		if (likely(!err)) {
 			/*
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios.
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o blkio_cgroup patches from Ryo to track async bios.

o Fernando is also working on another IO tracking mechanism. We are not
  particular about any IO tracking mechanism; this patchset can make use
  of whichever mechanism makes it upstream. For the time being we are
  making use of Ryo's posting.

Based on 2.6.30-rc3-git3
Signed-off-by: Hirokazu Takahashi <taka@valinux.co.jp>
Signed-off-by: Ryo Tsuruta <ryov@valinux.co.jp>
---
 block/blk-ioc.c               |   37 +++---
 fs/buffer.c                   |    2 +
 fs/direct-io.c                |    2 +
 include/linux/biotrack.h      |   97 +++++++++++++
 include/linux/cgroup_subsys.h |    6 +
 include/linux/iocontext.h     |    1 +
 include/linux/memcontrol.h    |    6 +
 include/linux/mmzone.h        |    4 +-
 include/linux/page_cgroup.h   |   31 ++++-
 init/Kconfig                  |   15 ++
 mm/Makefile                   |    4 +-
 mm/biotrack.c                 |  300 +++++++++++++++++++++++++++++++++++++++++
 mm/bounce.c                   |    2 +
 mm/filemap.c                  |    2 +
 mm/memcontrol.c               |    6 +
 mm/memory.c                   |    5 +
 mm/page-writeback.c           |    2 +
 mm/page_cgroup.c              |   17 ++-
 mm/swap_state.c               |    2 +
 19 files changed, 511 insertions(+), 30 deletions(-)
 create mode 100644 include/linux/biotrack.h
 create mode 100644 mm/biotrack.c

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 8f0f6cf..ccde40e 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -84,27 +84,32 @@ void exit_io_context(void)
 	}
 }
 
+void init_io_context(struct io_context *ioc)
+{
+	atomic_set(&ioc->refcount, 1);
+	atomic_set(&ioc->nr_tasks, 1);
+	spin_lock_init(&ioc->lock);
+	ioc->ioprio_changed = 0;
+	ioc->ioprio = 0;
+#ifdef CONFIG_GROUP_IOSCHED
+	ioc->cgroup_changed = 0;
+#endif
+	ioc->last_waited = jiffies; /* doesn't matter... */
+	ioc->nr_batch_requests = 0; /* because this is 0 */
+	ioc->aic = NULL;
+	INIT_RADIX_TREE(&ioc->radix_root, GFP_ATOMIC | __GFP_HIGH);
+	INIT_HLIST_HEAD(&ioc->cic_list);
+	ioc->ioc_data = NULL;
+}
+
+
 struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
 {
 	struct io_context *ret;
 
 	ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
-	if (ret) {
-		atomic_set(&ret->refcount, 1);
-		atomic_set(&ret->nr_tasks, 1);
-		spin_lock_init(&ret->lock);
-		ret->ioprio_changed = 0;
-		ret->ioprio = 0;
-#ifdef CONFIG_GROUP_IOSCHED
-		ret->cgroup_changed = 0;
-#endif
-		ret->last_waited = jiffies; /* doesn't matter... */
-		ret->nr_batch_requests = 0; /* because this is 0 */
-		ret->aic = NULL;
-		INIT_RADIX_TREE(&ret->radix_root, GFP_ATOMIC | __GFP_HIGH);
-		INIT_HLIST_HEAD(&ret->cic_list);
-		ret->ioc_data = NULL;
-	}
+	if (ret)
+		init_io_context(ret);
 
 	return ret;
 }
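
The point of factoring init_io_context() out of alloc_io_context() is that an io_context which was not allocated from the iocontext slab (for example one embedded in another object, as the blkio-cgroup root group does later in this patch with default_blkio_io_context) can now be initialized too. A hedged sketch with hypothetical names:

#include <linux/iocontext.h>
#include <asm/atomic.h>

/* Hypothetical example: an io_context embedded in a larger object (it was
 * never allocated from iocontext_cachep, so it must not be freed). */
static struct io_context my_static_ioc;

static void my_subsys_init(void)
{
	init_io_context(&my_static_ioc);
	/* Pin it: the extra reference keeps put_io_context() from ever
	 * dropping to zero and trying to free slab memory it doesn't own. */
	atomic_inc(&my_static_ioc.refcount);
}
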
diff --git a/fs/buffer.c b/fs/buffer.c
index 4910612..8142677 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -36,6 +36,7 @@
 #include <linux/buffer_head.h>
 #include <linux/task_io_accounting_ops.h>
 #include <linux/bio.h>
+#include <linux/biotrack.h>
 #include <linux/notifier.h>
 #include <linux/cpu.h>
 #include <linux/bitops.h>
@@ -668,6 +669,7 @@ static void __set_page_dirty(struct page *page,
 	if (page->mapping) {	/* Race with truncate? */
 		WARN_ON_ONCE(warn && !PageUptodate(page));
 		account_page_dirtied(page, mapping);
+		blkio_cgroup_reset_owner_pagedirty(page, current->mm);
 		radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 	}
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 05763bb..60b1a99 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -33,6 +33,7 @@
 #include <linux/err.h>
 #include <linux/blkdev.h>
 #include <linux/buffer_head.h>
+#include <linux/biotrack.h>
 #include <linux/rwsem.h>
 #include <linux/uio.h>
 #include <asm/atomic.h>
@@ -797,6 +798,7 @@ static int do_direct_IO(struct dio *dio)
 			ret = PTR_ERR(page);
 			goto out;
 		}
+		blkio_cgroup_reset_owner(page, current->mm);
 
 		while (block_in_page < blocks_per_page) {
 			unsigned offset_in_page = block_in_page << blkbits;
diff --git a/include/linux/biotrack.h b/include/linux/biotrack.h
new file mode 100644
index 0000000..741a8b5
--- /dev/null
+++ b/include/linux/biotrack.h
@@ -0,0 +1,97 @@
+#include <linux/cgroup.h>
+#include <linux/mm.h>
+#include <linux/page_cgroup.h>
+
+#ifndef _LINUX_BIOTRACK_H
+#define _LINUX_BIOTRACK_H
+
+#ifdef	CONFIG_CGROUP_BLKIO
+
+struct io_context;
+struct block_device;
+
+struct blkio_cgroup {
+	struct cgroup_subsys_state css;
+	struct io_context *io_context;	/* default io_context */
+/*	struct radix_tree_root io_context_root; per device io_context */
+};
+
+/**
+ * __init_blkio_page_cgroup() - initialize a blkio_page_cgroup
+ * @pc:		page_cgroup of the page
+ *
+ * Reset the owner ID of a page.
+ */
+static inline void __init_blkio_page_cgroup(struct page_cgroup *pc)
+{
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, 0);
+	unlock_page_cgroup(pc);
+}
+
+/**
+ * blkio_cgroup_disabled - check whether blkio_cgroup is disabled
+ *
+ * Returns true if disabled, false if not.
+ */
+static inline bool blkio_cgroup_disabled(void)
+{
+	if (blkio_cgroup_subsys.disabled)
+		return true;
+	return false;
+}
+
+extern void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm);
+extern void blkio_cgroup_reset_owner(struct page *page, struct mm_struct *mm);
+extern void blkio_cgroup_reset_owner_pagedirty(struct page *page,
+						 struct mm_struct *mm);
+extern void blkio_cgroup_copy_owner(struct page *page, struct page *opage);
+
+extern struct io_context *get_blkio_cgroup_iocontext(struct bio *bio);
+extern unsigned long get_blkio_cgroup_id(struct bio *bio);
+extern struct cgroup *blkio_cgroup_lookup(int id);
+
+#else	/* CONFIG_CGROUP_BLKIO */
+
+struct blkio_cgroup;
+
+static inline void __init_blkio_page_cgroup(struct page_cgroup *pc)
+{
+}
+
+static inline bool blkio_cgroup_disabled(void)
+{
+	return true;
+}
+
+static inline void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_reset_owner(struct page *page,
+						struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_reset_owner_pagedirty(struct page *page,
+						struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_copy_owner(struct page *page, struct page *opage)
+{
+}
+
+static inline struct io_context *get_blkio_cgroup_iocontext(struct bio *bio)
+{
+	return NULL;
+}
+
+static inline unsigned long get_blkio_cgroup_id(struct bio *bio)
+{
+	return 0;
+}
+
+#endif	/* CONFIG_CGROUP_BLKIO */
+
+#endif /* _LINUX_BIOTRACK_H */
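
The !CONFIG_CGROUP_BLKIO half above provides empty static inline stubs, so hook sites in generic code need no #ifdefs; a hypothetical caller would look like this and compile away to nothing when the subsystem is configured out:

#include <linux/biotrack.h>

/* Hypothetical hook site in generic code: no #ifdef CONFIG_CGROUP_BLKIO is
 * needed here, because the stubs above reduce to empty inlines. */
static void my_add_page_hook(struct page *page, struct mm_struct *mm)
{
	blkio_cgroup_set_owner(page, mm);
}

This is the same pattern memcontrol.h already uses, which keeps the mm and fs hooks added later in this patch unconditional.
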
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 68ea6bd..f214e6e 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -43,6 +43,12 @@ SUBSYS(mem_cgroup)
 
 /* */
 
+#ifdef CONFIG_CGROUP_BLKIO
+SUBSYS(blkio_cgroup)
+#endif
+
+/* */
+
 #ifdef CONFIG_CGROUP_DEVICE
 SUBSYS(devices)
 #endif
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 73027b6..9c4587b 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -104,6 +104,7 @@ int put_io_context(struct io_context *ioc);
 void exit_io_context(void);
 struct io_context *get_io_context(gfp_t gfp_flags, int node);
 struct io_context *alloc_io_context(gfp_t gfp_flags, int node);
+void init_io_context(struct io_context *ioc);
 void copy_io_context(struct io_context **pdst, struct io_context **psrc);
 #else
 static inline void exit_io_context(void)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 25b9ca9..d74b462 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -37,6 +37,8 @@ struct mm_struct;
  * (Of course, if memcg does memory allocation in future, GFP_KERNEL is sane.)
  */
 
+extern void __init_mem_page_cgroup(struct page_cgroup *pc);
+
 extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask);
 /* for swap handling */
@@ -120,6 +122,10 @@ extern bool mem_cgroup_oom_called(struct task_struct *task);
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
 struct mem_cgroup;
 
+static inline void __init_mem_page_cgroup(struct page_cgroup *pc)
+{
+}
+
 static inline int mem_cgroup_newpage_charge(struct page *page,
 					struct mm_struct *mm, gfp_t gfp_mask)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a47c879..14477cb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -607,7 +607,7 @@ typedef struct pglist_data {
 	int nr_zones;
 #ifdef CONFIG_FLAT_NODE_MEM_MAP	/* means !SPARSEMEM */
 	struct page *node_mem_map;
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 	struct page_cgroup *node_page_cgroup;
 #endif
 #endif
@@ -958,7 +958,7 @@ struct mem_section {
 
 	/* See declaration of similar field in struct zone */
 	unsigned long *pageblock_flags;
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 	/*
 	 * If !SPARSEMEM, pgdat doesn't have page_cgroup pointer. We use
 	 * section. (see memcontrol.h/page_cgroup.h about this.)
diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index 7339c7b..dd7f71c 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -1,7 +1,7 @@
 #ifndef __LINUX_PAGE_CGROUP_H
 #define __LINUX_PAGE_CGROUP_H
 
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 #include <linux/bit_spinlock.h>
 /*
  * Page Cgroup can be considered as an extended mem_map.
@@ -12,9 +12,11 @@
  */
 struct page_cgroup {
 	unsigned long flags;
-	struct mem_cgroup *mem_cgroup;
 	struct page *page;
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+	struct mem_cgroup *mem_cgroup;
 	struct list_head lru;		/* per cgroup LRU list */
+#endif
 };
 
 void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat);
@@ -71,7 +73,7 @@ static inline void unlock_page_cgroup(struct page_cgroup *pc)
 	bit_spin_unlock(PCG_LOCK, &pc->flags);
 }
 
-#else /* CONFIG_CGROUP_MEM_RES_CTLR */
+#else /* CONFIG_CGROUP_PAGE */
 struct page_cgroup;
 
 static inline void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
@@ -122,4 +124,27 @@ static inline void swap_cgroup_swapoff(int type)
 }
 
 #endif
+
+#ifdef CONFIG_CGROUP_BLKIO
+/*
+ * use lower 16 bits for flags and reserve the rest for the page tracking id
+ */
+#define PCG_TRACKING_ID_SHIFT	(16)
+#define PCG_TRACKING_ID_BITS \
+	(8 * sizeof(unsigned long) - PCG_TRACKING_ID_SHIFT)
+
+/* NOTE: must be called with lock_page_cgroup() held */
+static inline unsigned long page_cgroup_get_id(struct page_cgroup *pc)
+{
+	return pc->flags >> PCG_TRACKING_ID_SHIFT;
+}
+
+/* NOTE: must be called with lock_page_cgroup() held */
+static inline void page_cgroup_set_id(struct page_cgroup *pc, unsigned long id)
+{
+	WARN_ON(id >= (1UL << PCG_TRACKING_ID_BITS));
+	pc->flags &= (1UL << PCG_TRACKING_ID_SHIFT) - 1;
+	pc->flags |= (unsigned long)(id << PCG_TRACKING_ID_SHIFT);
+}
+#endif
 #endif
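
Because the tracking id lives in the same pc->flags word as the PCG_* bits (including the PCG_LOCK bit spinlock), it may only be read or updated with the page_cgroup locked. A hedged sketch of that discipline, mirroring __init_blkio_page_cgroup(); the helper name is hypothetical:

#include <linux/page_cgroup.h>

/* Hypothetical helper showing the locking discipline: the tracking id and
 * the PCG_* bits share pc->flags, so the id must only be touched with the
 * page_cgroup locked. */
static void my_tag_page_cgroup(struct page_cgroup *pc, unsigned long id)
{
	lock_page_cgroup(pc);
	page_cgroup_set_id(pc, id);
	unlock_page_cgroup(pc);
}
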
diff --git a/init/Kconfig b/init/Kconfig
index 1a4686d..ee16d6f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -616,6 +616,21 @@ config GROUP_IOSCHED
 
 endif # CGROUPS
 
+config CGROUP_BLKIO
+	bool "Block I/O cgroup subsystem"
+	depends on CGROUPS && BLOCK
+	select MM_OWNER
+	help
+	  Provides a Resource Controller which enables tracking of the owner
+	  of every block I/O request.
+	  The information this subsystem provides can be used by any
+	  kind of module, such as the dm-ioband device-mapper module or
+	  the cfq I/O scheduler.
+
+config CGROUP_PAGE
+	def_bool y
+	depends on CGROUP_MEM_RES_CTLR || CGROUP_BLKIO
+
 config MM_OWNER
 	bool
 
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..76c3436 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -37,4 +37,6 @@ else
 obj-$(CONFIG_SMP) += allocpercpu.o
 endif
 obj-$(CONFIG_QUICKLIST) += quicklist.o
-obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o
+obj-$(CONFIG_CGROUP_PAGE) += page_cgroup.o
+obj-$(CONFIG_CGROUP_BLKIO) += biotrack.o
diff --git a/mm/biotrack.c b/mm/biotrack.c
new file mode 100644
index 0000000..2baf1f0
--- /dev/null
+++ b/mm/biotrack.c
@@ -0,0 +1,300 @@
+/* biotrack.c - Block I/O Tracking
+ *
+ * Copyright (C) VA Linux Systems Japan, 2008-2009
+ * Developed by Hirokazu Takahashi <taka@valinux.co.jp>
+ *
+ * Copyright (C) 2008 Andrea Righi <righi.andrea@gmail.com>
+ * Use part of page_cgroup->flags to store blkio-cgroup ID.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/smp.h>
+#include <linux/bit_spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/biotrack.h>
+#include <linux/mm_inline.h>
+
+/*
+ * The block I/O tracking mechanism is implemented on the cgroup memory
+ * controller framework. It helps to find the owner of an I/O request
+ * because every I/O request has a target page and the owner of the page
+ * can be easily determined on the framework.
+ */
+
+/* Return the blkio_cgroup that associates with a cgroup. */
+static inline struct blkio_cgroup *cgroup_blkio(struct cgroup *cgrp)
+{
+	return container_of(cgroup_subsys_state(cgrp, blkio_cgroup_subsys_id),
+					struct blkio_cgroup, css);
+}
+
+/* Return the blkio_cgroup that associates with a process. */
+static inline struct blkio_cgroup *blkio_cgroup_from_task(struct task_struct *p)
+{
+	return container_of(task_subsys_state(p, blkio_cgroup_subsys_id),
+					struct blkio_cgroup, css);
+}
+
+static struct io_context default_blkio_io_context;
+static struct blkio_cgroup default_blkio_cgroup = {
+	.io_context	= &default_blkio_io_context,
+};
+
+/**
+ * blkio_cgroup_set_owner() - set the owner ID of a page.
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Make a given page have the blkio-cgroup ID of the owner of this page.
+ */
+void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm)
+{
+	struct blkio_cgroup *biog;
+	struct page_cgroup *pc;
+	unsigned long id;
+
+	if (blkio_cgroup_disabled())
+		return;
+	pc = lookup_page_cgroup(page);
+	if (unlikely(!pc))
+		return;
+
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, 0);	/* 0: default blkio_cgroup id */
+	unlock_page_cgroup(pc);
+	if (!mm)
+		return;
+
+	rcu_read_lock();
+	biog = blkio_cgroup_from_task(rcu_dereference(mm->owner));
+	if (unlikely(!biog)) {
+		rcu_read_unlock();
+		return;
+	}
+	/*
+	 * css_get(&biog->css) is not called to increment the reference
+	 * count of this blkio_cgroup "biog", so the css_id may become
+	 * invalid even while this page is still active.
+	 * This approach is chosen to minimize the overhead.
+	 */
+	id = css_id(&biog->css);
+	rcu_read_unlock();
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, id);
+	unlock_page_cgroup(pc);
+}
+
+/**
+ * blkio_cgroup_reset_owner() - reset the owner ID of a page
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Change the owner of a given page if necessary.
+ */
+void blkio_cgroup_reset_owner(struct page *page, struct mm_struct *mm)
+{
+	blkio_cgroup_set_owner(page, mm);
+}
+
+/**
+ * blkio_cgroup_reset_owner_pagedirty() - reset the owner ID of a pagecache page
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Change the owner of a given page if the page is in the pagecache.
+ */
+void blkio_cgroup_reset_owner_pagedirty(struct page *page, struct mm_struct *mm)
+{
+	if (!page_is_file_cache(page))
+		return;
+	if (current->flags & PF_MEMALLOC)
+		return;
+
+	blkio_cgroup_reset_owner(page, mm);
+}
+
+/**
+ * blkio_cgroup_copy_owner() - copy the owner ID of a page into another page
+ * @npage:	the page where we want to copy the owner
+ * @opage:	the page from which we want to copy the ID
+ *
+ * Copy the owner ID of @opage into @npage.
+ */
+void blkio_cgroup_copy_owner(struct page *npage, struct page *opage)
+{
+	struct page_cgroup *npc, *opc;
+	unsigned long id;
+
+	if (blkio_cgroup_disabled())
+		return;
+	npc = lookup_page_cgroup(npage);
+	if (unlikely(!npc))
+		return;
+	opc = lookup_page_cgroup(opage);
+	if (unlikely(!opc))
+		return;
+
+	lock_page_cgroup(opc);
+	lock_page_cgroup(npc);
+	id = page_cgroup_get_id(opc);
+	page_cgroup_set_id(npc, id);
+	unlock_page_cgroup(npc);
+	unlock_page_cgroup(opc);
+}
+
+/* Create a new blkio-cgroup. */
+static struct cgroup_subsys_state *
+blkio_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	struct blkio_cgroup *biog;
+	struct io_context *ioc;
+
+	if (!cgrp->parent) {
+		biog = &default_blkio_cgroup;
+		init_io_context(biog->io_context);
+		/* Increment the reference count so it is never released. */
+		atomic_inc(&biog->io_context->refcount);
+		return &biog->css;
+	}
+
+	biog = kzalloc(sizeof(*biog), GFP_KERNEL);
+	if (!biog)
+		return ERR_PTR(-ENOMEM);
+	ioc = alloc_io_context(GFP_KERNEL, -1);
+	if (!ioc) {
+		kfree(biog);
+		return ERR_PTR(-ENOMEM);
+	}
+	biog->io_context = ioc;
+	return &biog->css;
+}
+
+/* Delete the blkio-cgroup. */
+static void blkio_cgroup_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	struct blkio_cgroup *biog = cgroup_blkio(cgrp);
+
+	put_io_context(biog->io_context);
+	free_css_id(&blkio_cgroup_subsys, &biog->css);
+	kfree(biog);
+}
+
+/**
+ * get_blkio_cgroup_id() - determine the blkio-cgroup ID
+ * @bio:	the &struct bio which describes the I/O
+ *
+ * Returns the blkio-cgroup ID of a given bio. A return value zero
+ * means that the page associated with the bio belongs to default_blkio_cgroup.
+ */
+unsigned long get_blkio_cgroup_id(struct bio *bio)
+{
+	struct page_cgroup *pc;
+	struct page *page = bio_iovec_idx(bio, 0)->bv_page;
+	unsigned long id = 0;
+
+	pc = lookup_page_cgroup(page);
+	if (pc) {
+		lock_page_cgroup(pc);
+		id = page_cgroup_get_id(pc);
+		unlock_page_cgroup(pc);
+	}
+	return id;
+}
+
+/**
+ * get_blkio_cgroup_iocontext() - determine the blkio-cgroup iocontext
+ * @bio:	the &struct bio which describes the I/O
+ *
+ * Returns the iocontext of blkio-cgroup that issued a given bio.
+ */
+struct io_context *get_blkio_cgroup_iocontext(struct bio *bio)
+{
+	struct cgroup_subsys_state *css;
+	struct blkio_cgroup *biog;
+	struct io_context *ioc;
+	unsigned long id;
+
+	id = get_blkio_cgroup_id(bio);
+	rcu_read_lock();
+	css = css_lookup(&blkio_cgroup_subsys, id);
+	if (css)
+		biog = container_of(css, struct blkio_cgroup, css);
+	else
+		biog = &default_blkio_cgroup;
+	ioc = biog->io_context;	/* default io_context for this cgroup */
+	atomic_inc(&ioc->refcount);
+	rcu_read_unlock();
+	return ioc;
+}
+
+/**
+ * blkio_cgroup_lookup() - lookup a cgroup by blkio-cgroup ID
+ * @id:		blkio-cgroup ID
+ *
+ * Returns the cgroup associated with the specified ID, or NULL if lookup
+ * fails.
+ *
+ * Note:
+ * This function should be called under rcu_read_lock().
+ */
+struct cgroup *blkio_cgroup_lookup(int id)
+{
+	struct cgroup *cgrp;
+	struct cgroup_subsys_state *css;
+
+	if (blkio_cgroup_disabled())
+		return NULL;
+
+	css = css_lookup(&blkio_cgroup_subsys, id);
+	if (!css)
+		return NULL;
+	cgrp = css->cgroup;
+	return cgrp;
+}
+EXPORT_SYMBOL(get_blkio_cgroup_iocontext);
+EXPORT_SYMBOL(get_blkio_cgroup_id);
+EXPORT_SYMBOL(blkio_cgroup_lookup);
+
+static u64 blkio_id_read(struct cgroup *cgrp, struct cftype *cft)
+{
+	struct blkio_cgroup *biog = cgroup_blkio(cgrp);
+	unsigned long id;
+
+	rcu_read_lock();
+	id = css_id(&biog->css);
+	rcu_read_unlock();
+	return (u64)id;
+}
+
+
+static struct cftype blkio_files[] = {
+	{
+		.name = "id",
+		.read_u64 = blkio_id_read,
+	},
+};
+
+static int blkio_cgroup_populate(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	return cgroup_add_files(cgrp, ss, blkio_files,
+					ARRAY_SIZE(blkio_files));
+}
+
+struct cgroup_subsys blkio_cgroup_subsys = {
+	.name		= "blkio",
+	.create		= blkio_cgroup_create,
+	.destroy	= blkio_cgroup_destroy,
+	.populate	= blkio_cgroup_populate,
+	.subsys_id	= blkio_cgroup_subsys_id,
+	.use_id		= 1,
+};
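
A hedged sketch of how an elevator might use get_blkio_cgroup_iocontext() to attribute an async bio to the io_context of the owning cgroup rather than to the submitting task (often a flusher thread); the function name and the elided queue lookup are hypothetical:

#include <linux/bio.h>
#include <linux/iocontext.h>
#include <linux/biotrack.h>

/* Hypothetical elevator-side helper: charge an async bio to the io_context
 * of the cgroup that owns its pages, not to the task submitting it. */
static void charge_async_bio(struct bio *bio)
{
	struct io_context *ioc;

	/* Takes a reference; falls back to the default blkio cgroup's
	 * io_context if the css lookup fails. */
	ioc = get_blkio_cgroup_iocontext(bio);

	/* ... find or create the per-group queue keyed by ioc ... */

	put_io_context(ioc);
}
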
diff --git a/mm/bounce.c b/mm/bounce.c
index e590272..875380c 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -14,6 +14,7 @@
 #include <linux/hash.h>
 #include <linux/highmem.h>
 #include <linux/blktrace_api.h>
+#include <linux/biotrack.h>
 #include <trace/block.h>
 #include <asm/tlbflush.h>
 
@@ -212,6 +213,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 		to->bv_len = from->bv_len;
 		to->bv_offset = from->bv_offset;
 		inc_zone_page_state(to->bv_page, NR_BOUNCE);
+		blkio_cgroup_copy_owner(to->bv_page, page);
 
 		if (rw == WRITE) {
 			char *vto, *vfrom;
diff --git a/mm/filemap.c b/mm/filemap.c
index 1b60f30..073a633 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -33,6 +33,7 @@
 #include <linux/cpuset.h>
 #include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */
 #include <linux/memcontrol.h>
+#include <linux/biotrack.h>
 #include <linux/mm_inline.h> /* for page_is_file_cache() */
 #include "internal.h"
 
@@ -464,6 +465,7 @@ int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 					gfp_mask & GFP_RECLAIM_MASK);
 	if (error)
 		goto out;
+	blkio_cgroup_set_owner(page, current->mm);
 
 	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
 	if (error == 0) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 78eb855..b47e467 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -128,6 +128,12 @@ struct mem_cgroup_lru_info {
 	struct mem_cgroup_per_node *nodeinfo[MAX_NUMNODES];
 };
 
+void __meminit __init_mem_page_cgroup(struct page_cgroup *pc)
+{
+	pc->mem_cgroup = NULL;
+	INIT_LIST_HEAD(&pc->lru);
+}
+
 /*
  * The memory controller data structure. The memory controller controls both
  * page cache and RSS per cgroup. We would eventually like to provide
diff --git a/mm/memory.c b/mm/memory.c
index 4126dd1..0da0e70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
 #include <linux/init.h>
 #include <linux/writeback.h>
 #include <linux/memcontrol.h>
+#include <linux/biotrack.h>
 #include <linux/mmu_notifier.h>
 #include <linux/kallsyms.h>
 #include <linux/swapops.h>
@@ -2064,6 +2065,7 @@ gotten:
 		 */
 		ptep_clear_flush_notify(vma, address, page_table);
 		page_add_new_anon_rmap(new_page, vma, address);
+		blkio_cgroup_set_owner(new_page, mm);
 		set_pte_at(mm, address, page_table, entry);
 		update_mmu_cache(vma, address, entry);
 		if (old_page) {
@@ -2529,6 +2531,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);
 	page_add_anon_rmap(page, vma, address);
+	blkio_cgroup_reset_owner(page, mm);
 	/* It's better to call commit-charge after rmap is established */
 	mem_cgroup_commit_charge_swapin(page, ptr);
 
@@ -2593,6 +2596,7 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto release;
 	inc_mm_counter(mm, anon_rss);
 	page_add_new_anon_rmap(page, vma, address);
+	blkio_cgroup_set_owner(page, mm);
 	set_pte_at(mm, address, page_table, entry);
 
 	/* No need to invalidate - it was non-present before */
@@ -2740,6 +2744,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (anon) {
 			inc_mm_counter(mm, anon_rss);
 			page_add_new_anon_rmap(page, vma, address);
+			blkio_cgroup_set_owner(page, mm);
 		} else {
 			inc_mm_counter(mm, file_rss);
 			page_add_file_rmap(page);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..3604c35 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -23,6 +23,7 @@
 #include <linux/init.h>
 #include <linux/backing-dev.h>
 #include <linux/task_io_accounting_ops.h>
+#include <linux/biotrack.h>
 #include <linux/blkdev.h>
 #include <linux/mpage.h>
 #include <linux/rmap.h>
@@ -1243,6 +1244,7 @@ int __set_page_dirty_nobuffers(struct page *page)
 			BUG_ON(mapping2 != mapping);
 			WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
 			account_page_dirtied(page, mapping);
+			blkio_cgroup_reset_owner_pagedirty(page, current->mm);
 			radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 		}
diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index 791905c..e143d04 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -9,14 +9,15 @@
 #include <linux/vmalloc.h>
 #include <linux/cgroup.h>
 #include <linux/swapops.h>
+#include <linux/biotrack.h>
 
 static void __meminit
 __init_page_cgroup(struct page_cgroup *pc, unsigned long pfn)
 {
 	pc->flags = 0;
-	pc->mem_cgroup = NULL;
 	pc->page = pfn_to_page(pfn);
-	INIT_LIST_HEAD(&pc->lru);
+	__init_mem_page_cgroup(pc);
+	__init_blkio_page_cgroup(pc);
 }
 static unsigned long total_usage;
 
@@ -74,7 +75,7 @@ void __init page_cgroup_init(void)
 
 	int nid, fail;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() && blkio_cgroup_disabled())
 		return;
 
 	for_each_online_node(nid)  {
@@ -83,12 +84,12 @@ void __init page_cgroup_init(void)
 			goto fail;
 	}
 	printk(KERN_INFO "allocated %ld bytes of page_cgroup\n", total_usage);
-	printk(KERN_INFO "please try cgroup_disable=memory option if you"
+	printk(KERN_INFO "please try cgroup_disable=memory,blkio option if you"
 	" don't want\n");
 	return;
 fail:
 	printk(KERN_CRIT "allocation of page_cgroup was failed.\n");
-	printk(KERN_CRIT "please try cgroup_disable=memory boot option\n");
+	printk(KERN_CRIT "please try cgroup_disable=memory,blkio boot options\n");
 	panic("Out of memory");
 }
 
@@ -248,7 +249,7 @@ void __init page_cgroup_init(void)
 	unsigned long pfn;
 	int fail = 0;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() && blkio_cgroup_disabled())
 		return;
 
 	for (pfn = 0; !fail && pfn < max_pfn; pfn += PAGES_PER_SECTION) {
@@ -263,8 +264,8 @@ void __init page_cgroup_init(void)
 		hotplug_memory_notifier(page_cgroup_callback, 0);
 	}
 	printk(KERN_INFO "allocated %ld bytes of page_cgroup\n", total_usage);
-	printk(KERN_INFO "please try cgroup_disable=memory option if you don't"
-	" want\n");
+	printk(KERN_INFO "please try cgroup_disable=memory,blkio option"
+	" if you don't want\n");
 }
 
 void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1416e7e..df9d6bb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -18,6 +18,7 @@
 #include <linux/pagevec.h>
 #include <linux/migrate.h>
 #include <linux/page_cgroup.h>
+#include <linux/biotrack.h>
 
 #include <asm/pgtable.h>
 
@@ -306,6 +307,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 */
 		__set_page_locked(new_page);
 		SetPageSwapBacked(new_page);
+		blkio_cgroup_set_owner(new_page, current->mm);
 		err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
 		if (likely(!err)) {
 			/*
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios.
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o blkio_cgroup patches from Ryo to track async bios.

o Fernando is also working on another IO tracking mechanism. We are not
  particular about any IO tracking mechanism; this patchset can make use
  of whichever mechanism makes it upstream. For the time being we are
  making use of Ryo's posting.

Based on 2.6.30-rc3-git3
Signed-off-by: Hirokazu Takahashi <taka@valinux.co.jp>
Signed-off-by: Ryo Tsuruta <ryov@valinux.co.jp>
---
 block/blk-ioc.c               |   37 +++---
 fs/buffer.c                   |    2 +
 fs/direct-io.c                |    2 +
 include/linux/biotrack.h      |   97 +++++++++++++
 include/linux/cgroup_subsys.h |    6 +
 include/linux/iocontext.h     |    1 +
 include/linux/memcontrol.h    |    6 +
 include/linux/mmzone.h        |    4 +-
 include/linux/page_cgroup.h   |   31 ++++-
 init/Kconfig                  |   15 ++
 mm/Makefile                   |    4 +-
 mm/biotrack.c                 |  300 +++++++++++++++++++++++++++++++++++++++++
 mm/bounce.c                   |    2 +
 mm/filemap.c                  |    2 +
 mm/memcontrol.c               |    6 +
 mm/memory.c                   |    5 +
 mm/page-writeback.c           |    2 +
 mm/page_cgroup.c              |   17 ++-
 mm/swap_state.c               |    2 +
 19 files changed, 511 insertions(+), 30 deletions(-)
 create mode 100644 include/linux/biotrack.h
 create mode 100644 mm/biotrack.c

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 8f0f6cf..ccde40e 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -84,27 +84,32 @@ void exit_io_context(void)
 	}
 }
 
+void init_io_context(struct io_context *ioc)
+{
+	atomic_set(&ioc->refcount, 1);
+	atomic_set(&ioc->nr_tasks, 1);
+	spin_lock_init(&ioc->lock);
+	ioc->ioprio_changed = 0;
+	ioc->ioprio = 0;
+#ifdef CONFIG_GROUP_IOSCHED
+	ioc->cgroup_changed = 0;
+#endif
+	ioc->last_waited = jiffies; /* doesn't matter... */
+	ioc->nr_batch_requests = 0; /* because this is 0 */
+	ioc->aic = NULL;
+	INIT_RADIX_TREE(&ioc->radix_root, GFP_ATOMIC | __GFP_HIGH);
+	INIT_HLIST_HEAD(&ioc->cic_list);
+	ioc->ioc_data = NULL;
+}
+
+
 struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
 {
 	struct io_context *ret;
 
 	ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
-	if (ret) {
-		atomic_set(&ret->refcount, 1);
-		atomic_set(&ret->nr_tasks, 1);
-		spin_lock_init(&ret->lock);
-		ret->ioprio_changed = 0;
-		ret->ioprio = 0;
-#ifdef CONFIG_GROUP_IOSCHED
-		ret->cgroup_changed = 0;
-#endif
-		ret->last_waited = jiffies; /* doesn't matter... */
-		ret->nr_batch_requests = 0; /* because this is 0 */
-		ret->aic = NULL;
-		INIT_RADIX_TREE(&ret->radix_root, GFP_ATOMIC | __GFP_HIGH);
-		INIT_HLIST_HEAD(&ret->cic_list);
-		ret->ioc_data = NULL;
-	}
+	if (ret)
+		init_io_context(ret);
 
 	return ret;
 }
diff --git a/fs/buffer.c b/fs/buffer.c
index 4910612..8142677 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -36,6 +36,7 @@
 #include <linux/buffer_head.h>
 #include <linux/task_io_accounting_ops.h>
 #include <linux/bio.h>
+#include <linux/biotrack.h>
 #include <linux/notifier.h>
 #include <linux/cpu.h>
 #include <linux/bitops.h>
@@ -668,6 +669,7 @@ static void __set_page_dirty(struct page *page,
 	if (page->mapping) {	/* Race with truncate? */
 		WARN_ON_ONCE(warn && !PageUptodate(page));
 		account_page_dirtied(page, mapping);
+		blkio_cgroup_reset_owner_pagedirty(page, current->mm);
 		radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 	}
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 05763bb..60b1a99 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -33,6 +33,7 @@
 #include <linux/err.h>
 #include <linux/blkdev.h>
 #include <linux/buffer_head.h>
+#include <linux/biotrack.h>
 #include <linux/rwsem.h>
 #include <linux/uio.h>
 #include <asm/atomic.h>
@@ -797,6 +798,7 @@ static int do_direct_IO(struct dio *dio)
 			ret = PTR_ERR(page);
 			goto out;
 		}
+		blkio_cgroup_reset_owner(page, current->mm);
 
 		while (block_in_page < blocks_per_page) {
 			unsigned offset_in_page = block_in_page << blkbits;
diff --git a/include/linux/biotrack.h b/include/linux/biotrack.h
new file mode 100644
index 0000000..741a8b5
--- /dev/null
+++ b/include/linux/biotrack.h
@@ -0,0 +1,97 @@
+#include <linux/cgroup.h>
+#include <linux/mm.h>
+#include <linux/page_cgroup.h>
+
+#ifndef _LINUX_BIOTRACK_H
+#define _LINUX_BIOTRACK_H
+
+#ifdef	CONFIG_CGROUP_BLKIO
+
+struct io_context;
+struct block_device;
+
+struct blkio_cgroup {
+	struct cgroup_subsys_state css;
+	struct io_context *io_context;	/* default io_context */
+/*	struct radix_tree_root io_context_root; per device io_context */
+};
+
+/**
+ * __init_blkio_page_cgroup() - initialize a blkio_page_cgroup
+ * @pc:		page_cgroup of the page
+ *
+ * Reset the owner ID of a page.
+ */
+static inline void __init_blkio_page_cgroup(struct page_cgroup *pc)
+{
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, 0);
+	unlock_page_cgroup(pc);
+}
+
+/**
+ * blkio_cgroup_disabled - check whether blkio_cgroup is disabled
+ *
+ * Returns true if disabled, false if not.
+ */
+static inline bool blkio_cgroup_disabled(void)
+{
+	if (blkio_cgroup_subsys.disabled)
+		return true;
+	return false;
+}
+
+extern void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm);
+extern void blkio_cgroup_reset_owner(struct page *page, struct mm_struct *mm);
+extern void blkio_cgroup_reset_owner_pagedirty(struct page *page,
+						 struct mm_struct *mm);
+extern void blkio_cgroup_copy_owner(struct page *page, struct page *opage);
+
+extern struct io_context *get_blkio_cgroup_iocontext(struct bio *bio);
+extern unsigned long get_blkio_cgroup_id(struct bio *bio);
+extern struct cgroup *blkio_cgroup_lookup(int id);
+
+#else	/* CONFIG_CGROUP_BLKIO */
+
+struct blkio_cgroup;
+
+static inline void __init_blkio_page_cgroup(struct page_cgroup *pc)
+{
+}
+
+static inline bool blkio_cgroup_disabled(void)
+{
+	return true;
+}
+
+static inline void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_reset_owner(struct page *page,
+						struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_reset_owner_pagedirty(struct page *page,
+						struct mm_struct *mm)
+{
+}
+
+static inline void blkio_cgroup_copy_owner(struct page *page, struct page *opage)
+{
+}
+
+static inline struct io_context *get_blkio_cgroup_iocontext(struct bio *bio)
+{
+	return NULL;
+}
+
+static inline unsigned long get_blkio_cgroup_id(struct bio *bio)
+{
+	return 0;
+}
+
+#endif	/* CONFIG_CGROUP_BLKIO */
+
+#endif /* _LINUX_BIOTRACK_H */
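
blkio_cgroup_reset_owner_pagedirty() exists for the re-dirtying case: the task that dirties a page-cache page, not the one that first instantiated it, should be charged for the eventual writeback. A hedged sketch of a dirty-path hook (the function name is hypothetical; the real hooks are added to __set_page_dirty and __set_page_dirty_nobuffers later in this patch):

#include <linux/sched.h>
#include <linux/biotrack.h>

/* Hypothetical dirty-path hook: re-tag the page with the dirtier's cgroup
 * so the eventual writeback is charged to the task that dirtied the page,
 * not to whoever first brought it into the page cache. */
static void my_set_page_dirty_hook(struct page *page)
{
	blkio_cgroup_reset_owner_pagedirty(page, current->mm);
}
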
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 68ea6bd..f214e6e 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -43,6 +43,12 @@ SUBSYS(mem_cgroup)
 
 /* */
 
+#ifdef CONFIG_CGROUP_BLKIO
+SUBSYS(blkio_cgroup)
+#endif
+
+/* */
+
 #ifdef CONFIG_CGROUP_DEVICE
 SUBSYS(devices)
 #endif
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 73027b6..9c4587b 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -104,6 +104,7 @@ int put_io_context(struct io_context *ioc);
 void exit_io_context(void);
 struct io_context *get_io_context(gfp_t gfp_flags, int node);
 struct io_context *alloc_io_context(gfp_t gfp_flags, int node);
+void init_io_context(struct io_context *ioc);
 void copy_io_context(struct io_context **pdst, struct io_context **psrc);
 #else
 static inline void exit_io_context(void)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 25b9ca9..d74b462 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -37,6 +37,8 @@ struct mm_struct;
  * (Of course, if memcg does memory allocation in future, GFP_KERNEL is sane.)
  */
 
+extern void __init_mem_page_cgroup(struct page_cgroup *pc);
+
 extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask);
 /* for swap handling */
@@ -120,6 +122,10 @@ extern bool mem_cgroup_oom_called(struct task_struct *task);
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
 struct mem_cgroup;
 
+static inline void __init_mem_page_cgroup(struct page_cgroup *pc)
+{
+}
+
 static inline int mem_cgroup_newpage_charge(struct page *page,
 					struct mm_struct *mm, gfp_t gfp_mask)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a47c879..14477cb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -607,7 +607,7 @@ typedef struct pglist_data {
 	int nr_zones;
 #ifdef CONFIG_FLAT_NODE_MEM_MAP	/* means !SPARSEMEM */
 	struct page *node_mem_map;
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 	struct page_cgroup *node_page_cgroup;
 #endif
 #endif
@@ -958,7 +958,7 @@ struct mem_section {
 
 	/* See declaration of similar field in struct zone */
 	unsigned long *pageblock_flags;
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 	/*
 	 * If !SPARSEMEM, pgdat doesn't have page_cgroup pointer. We use
 	 * section. (see memcontrol.h/page_cgroup.h about this.)
diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index 7339c7b..dd7f71c 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -1,7 +1,7 @@
 #ifndef __LINUX_PAGE_CGROUP_H
 #define __LINUX_PAGE_CGROUP_H
 
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+#ifdef CONFIG_CGROUP_PAGE
 #include <linux/bit_spinlock.h>
 /*
  * Page Cgroup can be considered as an extended mem_map.
@@ -12,9 +12,11 @@
  */
 struct page_cgroup {
 	unsigned long flags;
-	struct mem_cgroup *mem_cgroup;
 	struct page *page;
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+	struct mem_cgroup *mem_cgroup;
 	struct list_head lru;		/* per cgroup LRU list */
+#endif
 };
 
 void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat);
@@ -71,7 +73,7 @@ static inline void unlock_page_cgroup(struct page_cgroup *pc)
 	bit_spin_unlock(PCG_LOCK, &pc->flags);
 }
 
-#else /* CONFIG_CGROUP_MEM_RES_CTLR */
+#else /* CONFIG_CGROUP_PAGE */
 struct page_cgroup;
 
 static inline void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
@@ -122,4 +124,27 @@ static inline void swap_cgroup_swapoff(int type)
 }
 
 #endif
+
+#ifdef CONFIG_CGROUP_BLKIO
+/*
+ * use lower 16 bits for flags and reserve the rest for the page tracking id
+ */
+#define PCG_TRACKING_ID_SHIFT	(16)
+#define PCG_TRACKING_ID_BITS \
+	(8 * sizeof(unsigned long) - PCG_TRACKING_ID_SHIFT)
+
+/* NOTE: must be called with lock_page_cgroup() held */
+static inline unsigned long page_cgroup_get_id(struct page_cgroup *pc)
+{
+	return pc->flags >> PCG_TRACKING_ID_SHIFT;
+}
+
+/* NOTE: must be called with lock_page_cgroup() held */
+static inline void page_cgroup_set_id(struct page_cgroup *pc, unsigned long id)
+{
+	WARN_ON(id >= (1UL << PCG_TRACKING_ID_BITS));
+	pc->flags &= (1UL << PCG_TRACKING_ID_SHIFT) - 1;
+	pc->flags |= (unsigned long)(id << PCG_TRACKING_ID_SHIFT);
+}
+#endif
 #endif
diff --git a/init/Kconfig b/init/Kconfig
index 1a4686d..ee16d6f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -616,6 +616,21 @@ config GROUP_IOSCHED
 
 endif # CGROUPS
 
+config CGROUP_BLKIO
+	bool "Block I/O cgroup subsystem"
+	depends on CGROUPS && BLOCK
+	select MM_OWNER
+	help
+	  Provides a Resource Controller which enables tracking of the owner
+	  of every block I/O request.
+	  The information this subsystem provides can be used by any
+	  kind of module, such as the dm-ioband device-mapper module or
+	  the cfq I/O scheduler.
+
+config CGROUP_PAGE
+	def_bool y
+	depends on CGROUP_MEM_RES_CTLR || CGROUP_BLKIO
+
 config MM_OWNER
 	bool
 
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..76c3436 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -37,4 +37,6 @@ else
 obj-$(CONFIG_SMP) += allocpercpu.o
 endif
 obj-$(CONFIG_QUICKLIST) += quicklist.o
-obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o
+obj-$(CONFIG_CGROUP_PAGE) += page_cgroup.o
+obj-$(CONFIG_CGROUP_BLKIO) += biotrack.o
diff --git a/mm/biotrack.c b/mm/biotrack.c
new file mode 100644
index 0000000..2baf1f0
--- /dev/null
+++ b/mm/biotrack.c
@@ -0,0 +1,300 @@
+/* biotrack.c - Block I/O Tracking
+ *
+ * Copyright (C) VA Linux Systems Japan, 2008-2009
+ * Developed by Hirokazu Takahashi <taka@valinux.co.jp>
+ *
+ * Copyright (C) 2008 Andrea Righi <righi.andrea@gmail.com>
+ * Use part of page_cgroup->flags to store blkio-cgroup ID.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/smp.h>
+#include <linux/bit_spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/biotrack.h>
+#include <linux/mm_inline.h>
+
+/*
+ * The block I/O tracking mechanism is implemented on the cgroup memory
+ * controller framework. It helps to find the owner of an I/O request
+ * because every I/O request has a target page and the owner of the page
+ * can be easily determined on the framework.
+ */
+
+/* Return the blkio_cgroup associated with a cgroup. */
+static inline struct blkio_cgroup *cgroup_blkio(struct cgroup *cgrp)
+{
+	return container_of(cgroup_subsys_state(cgrp, blkio_cgroup_subsys_id),
+					struct blkio_cgroup, css);
+}
+
+/* Return the blkio_cgroup associated with a process. */
+static inline struct blkio_cgroup *blkio_cgroup_from_task(struct task_struct *p)
+{
+	return container_of(task_subsys_state(p, blkio_cgroup_subsys_id),
+					struct blkio_cgroup, css);
+}
+
+static struct io_context default_blkio_io_context;
+static struct blkio_cgroup default_blkio_cgroup = {
+	.io_context	= &default_blkio_io_context,
+};
+
+/**
+ * blkio_cgroup_set_owner() - set the owner ID of a page.
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Make a given page have the blkio-cgroup ID of the owner of this page.
+ */
+void blkio_cgroup_set_owner(struct page *page, struct mm_struct *mm)
+{
+	struct blkio_cgroup *biog;
+	struct page_cgroup *pc;
+	unsigned long id;
+
+	if (blkio_cgroup_disabled())
+		return;
+	pc = lookup_page_cgroup(page);
+	if (unlikely(!pc))
+		return;
+
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, 0);	/* 0: default blkio_cgroup id */
+	unlock_page_cgroup(pc);
+	if (!mm)
+		return;
+
+	rcu_read_lock();
+	biog = blkio_cgroup_from_task(rcu_dereference(mm->owner));
+	if (unlikely(!biog)) {
+		rcu_read_unlock();
+		return;
+	}
+	/*
+	 * css_get(&biog->css) isn't called to increment the reference
+	 * count of this blkio_cgroup "biog", so the css_id might become
+	 * invalid even if this page is still active.
+	 * This approach is chosen to minimize the overhead.
+	 */
+	id = css_id(&biog->css);
+	rcu_read_unlock();
+	lock_page_cgroup(pc);
+	page_cgroup_set_id(pc, id);
+	unlock_page_cgroup(pc);
+}
+
+/**
+ * blkio_cgroup_reset_owner() - reset the owner ID of a page
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Change the owner of a given page if necessary.
+ */
+void blkio_cgroup_reset_owner(struct page *page, struct mm_struct *mm)
+{
+	blkio_cgroup_set_owner(page, mm);
+}
+
+/**
+ * blkio_cgroup_reset_owner_pagedirty() - reset the owner ID of a pagecache page
+ * @page:	the page we want to tag
+ * @mm:		the mm_struct of a page owner
+ *
+ * Change the owner of a given page if the page is in the pagecache.
+ */
+void blkio_cgroup_reset_owner_pagedirty(struct page *page, struct mm_struct *mm)
+{
+	if (!page_is_file_cache(page))
+		return;
+	if (current->flags & PF_MEMALLOC)
+		return;
+
+	blkio_cgroup_reset_owner(page, mm);
+}
+
+/**
+ * blkio_cgroup_copy_owner() - copy the owner ID of a page into another page
+ * @npage:	the page where we want to copy the owner
+ * @opage:	the page from which we want to copy the ID
+ *
+ * Copy the owner ID of @opage into @npage.
+ */
+void blkio_cgroup_copy_owner(struct page *npage, struct page *opage)
+{
+	struct page_cgroup *npc, *opc;
+	unsigned long id;
+
+	if (blkio_cgroup_disabled())
+		return;
+	npc = lookup_page_cgroup(npage);
+	if (unlikely(!npc))
+		return;
+	opc = lookup_page_cgroup(opage);
+	if (unlikely(!opc))
+		return;
+
+	lock_page_cgroup(opc);
+	lock_page_cgroup(npc);
+	id = page_cgroup_get_id(opc);
+	page_cgroup_set_id(npc, id);
+	unlock_page_cgroup(npc);
+	unlock_page_cgroup(opc);
+}
+
+/* Create a new blkio-cgroup. */
+static struct cgroup_subsys_state *
+blkio_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	struct blkio_cgroup *biog;
+	struct io_context *ioc;
+
+	if (!cgrp->parent) {
+		biog = &default_blkio_cgroup;
+		init_io_context(biog->io_context);
+		/* Increment the reference count so that it is never released. */
+		atomic_inc(&biog->io_context->refcount);
+		return &biog->css;
+	}
+
+	biog = kzalloc(sizeof(*biog), GFP_KERNEL);
+	if (!biog)
+		return ERR_PTR(-ENOMEM);
+	ioc = alloc_io_context(GFP_KERNEL, -1);
+	if (!ioc) {
+		kfree(biog);
+		return ERR_PTR(-ENOMEM);
+	}
+	biog->io_context = ioc;
+	return &biog->css;
+}
+
+/* Delete the blkio-cgroup. */
+static void blkio_cgroup_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	struct blkio_cgroup *biog = cgroup_blkio(cgrp);
+
+	put_io_context(biog->io_context);
+	free_css_id(&blkio_cgroup_subsys, &biog->css);
+	kfree(biog);
+}
+
+/**
+ * get_blkio_cgroup_id() - determine the blkio-cgroup ID
+ * @bio:	the &struct bio which describes the I/O
+ *
+ * Returns the blkio-cgroup ID of a given bio. A return value zero
+ * means that the page associated with the bio belongs to default_blkio_cgroup.
+ */
+unsigned long get_blkio_cgroup_id(struct bio *bio)
+{
+	struct page_cgroup *pc;
+	struct page *page = bio_iovec_idx(bio, 0)->bv_page;
+	unsigned long id = 0;
+
+	pc = lookup_page_cgroup(page);
+	if (pc) {
+		lock_page_cgroup(pc);
+		id = page_cgroup_get_id(pc);
+		unlock_page_cgroup(pc);
+	}
+	return id;
+}
+
+/**
+ * get_blkio_cgroup_iocontext() - determine the blkio-cgroup iocontext
+ * @bio:	the &struct bio which describes the I/O
+ *
+ * Returns the iocontext of blkio-cgroup that issued a given bio.
+ */
+struct io_context *get_blkio_cgroup_iocontext(struct bio *bio)
+{
+	struct cgroup_subsys_state *css;
+	struct blkio_cgroup *biog;
+	struct io_context *ioc;
+	unsigned long id;
+
+	id = get_blkio_cgroup_id(bio);
+	rcu_read_lock();
+	css = css_lookup(&blkio_cgroup_subsys, id);
+	if (css)
+		biog = container_of(css, struct blkio_cgroup, css);
+	else
+		biog = &default_blkio_cgroup;
+	ioc = biog->io_context;	/* default io_context for this cgroup */
+	atomic_inc(&ioc->refcount);
+	rcu_read_unlock();
+	return ioc;
+}
+
+/**
+ * blkio_cgroup_lookup() - lookup a cgroup by blkio-cgroup ID
+ * @id:		blkio-cgroup ID
+ *
+ * Returns the cgroup associated with the specified ID, or NULL if lookup
+ * fails.
+ *
+ * Note:
+ * This function should be called under rcu_read_lock().
+ */
+struct cgroup *blkio_cgroup_lookup(int id)
+{
+	struct cgroup *cgrp;
+	struct cgroup_subsys_state *css;
+
+	if (blkio_cgroup_disabled())
+		return NULL;
+
+	css = css_lookup(&blkio_cgroup_subsys, id);
+	if (!css)
+		return NULL;
+	cgrp = css->cgroup;
+	return cgrp;
+}
+EXPORT_SYMBOL(get_blkio_cgroup_iocontext);
+EXPORT_SYMBOL(get_blkio_cgroup_id);
+EXPORT_SYMBOL(blkio_cgroup_lookup);
+
+static u64 blkio_id_read(struct cgroup *cgrp, struct cftype *cft)
+{
+	struct blkio_cgroup *biog = cgroup_blkio(cgrp);
+	unsigned long id;
+
+	rcu_read_lock();
+	id = css_id(&biog->css);
+	rcu_read_unlock();
+	return (u64)id;
+}
+
+
+static struct cftype blkio_files[] = {
+	{
+		.name = "id",
+		.read_u64 = blkio_id_read,
+	},
+};
+
+static int blkio_cgroup_populate(struct cgroup_subsys *ss, struct cgroup *cgrp)
+{
+	return cgroup_add_files(cgrp, ss, blkio_files,
+					ARRAY_SIZE(blkio_files));
+}
+
+struct cgroup_subsys blkio_cgroup_subsys = {
+	.name		= "blkio",
+	.create		= blkio_cgroup_create,
+	.destroy	= blkio_cgroup_destroy,
+	.populate	= blkio_cgroup_populate,
+	.subsys_id	= blkio_cgroup_subsys_id,
+	.use_id		= 1,
+};
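
As a rough sketch of how the hooks above are meant to fit together on the
buffered-write path (illustrative only, not part of the patch; the two
example_* wrappers are made up, while blkio_cgroup_set_owner() and
get_blkio_cgroup_id() are the calls introduced here):

#include <linux/mm_types.h>
#include <linux/bio.h>
#include <linux/biotrack.h>

/* 1. When a page enters the page cache, remember which mm dirtied it. */
static void example_tag_page(struct page *page, struct mm_struct *mm)
{
	blkio_cgroup_set_owner(page, mm);
}

/*
 * 2. Much later a flusher thread submits the bio; the submitter is not the
 *    owner, so recover the owner's blkio-cgroup id from the page instead.
 */
static unsigned long example_owner_of(struct bio *bio)
{
	return get_blkio_cgroup_id(bio);	/* 0 means the default group */
}
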
diff --git a/mm/bounce.c b/mm/bounce.c
index e590272..875380c 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -14,6 +14,7 @@
 #include <linux/hash.h>
 #include <linux/highmem.h>
 #include <linux/blktrace_api.h>
+#include <linux/biotrack.h>
 #include <trace/block.h>
 #include <asm/tlbflush.h>
 
@@ -212,6 +213,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 		to->bv_len = from->bv_len;
 		to->bv_offset = from->bv_offset;
 		inc_zone_page_state(to->bv_page, NR_BOUNCE);
+		blkio_cgroup_copy_owner(to->bv_page, page);
 
 		if (rw == WRITE) {
 			char *vto, *vfrom;
diff --git a/mm/filemap.c b/mm/filemap.c
index 1b60f30..073a633 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -33,6 +33,7 @@
 #include <linux/cpuset.h>
 #include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */
 #include <linux/memcontrol.h>
+#include <linux/biotrack.h>
 #include <linux/mm_inline.h> /* for page_is_file_cache() */
 #include "internal.h"
 
@@ -464,6 +465,7 @@ int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 					gfp_mask & GFP_RECLAIM_MASK);
 	if (error)
 		goto out;
+	blkio_cgroup_set_owner(page, current->mm);
 
 	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
 	if (error == 0) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 78eb855..b47e467 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -128,6 +128,12 @@ struct mem_cgroup_lru_info {
 	struct mem_cgroup_per_node *nodeinfo[MAX_NUMNODES];
 };
 
+void __meminit __init_mem_page_cgroup(struct page_cgroup *pc)
+{
+	pc->mem_cgroup = NULL;
+	INIT_LIST_HEAD(&pc->lru);
+}
+
 /*
  * The memory controller data structure. The memory controller controls both
  * page cache and RSS per cgroup. We would eventually like to provide
diff --git a/mm/memory.c b/mm/memory.c
index 4126dd1..0da0e70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
 #include <linux/init.h>
 #include <linux/writeback.h>
 #include <linux/memcontrol.h>
+#include <linux/biotrack.h>
 #include <linux/mmu_notifier.h>
 #include <linux/kallsyms.h>
 #include <linux/swapops.h>
@@ -2064,6 +2065,7 @@ gotten:
 		 */
 		ptep_clear_flush_notify(vma, address, page_table);
 		page_add_new_anon_rmap(new_page, vma, address);
+		blkio_cgroup_set_owner(new_page, mm);
 		set_pte_at(mm, address, page_table, entry);
 		update_mmu_cache(vma, address, entry);
 		if (old_page) {
@@ -2529,6 +2531,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);
 	page_add_anon_rmap(page, vma, address);
+	blkio_cgroup_reset_owner(page, mm);
 	/* It's better to call commit-charge after rmap is established */
 	mem_cgroup_commit_charge_swapin(page, ptr);
 
@@ -2593,6 +2596,7 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto release;
 	inc_mm_counter(mm, anon_rss);
 	page_add_new_anon_rmap(page, vma, address);
+	blkio_cgroup_set_owner(page, mm);
 	set_pte_at(mm, address, page_table, entry);
 
 	/* No need to invalidate - it was non-present before */
@@ -2740,6 +2744,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (anon) {
 			inc_mm_counter(mm, anon_rss);
 			page_add_new_anon_rmap(page, vma, address);
+			blkio_cgroup_set_owner(page, mm);
 		} else {
 			inc_mm_counter(mm, file_rss);
 			page_add_file_rmap(page);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..3604c35 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -23,6 +23,7 @@
 #include <linux/init.h>
 #include <linux/backing-dev.h>
 #include <linux/task_io_accounting_ops.h>
+#include <linux/biotrack.h>
 #include <linux/blkdev.h>
 #include <linux/mpage.h>
 #include <linux/rmap.h>
@@ -1243,6 +1244,7 @@ int __set_page_dirty_nobuffers(struct page *page)
 			BUG_ON(mapping2 != mapping);
 			WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
 			account_page_dirtied(page, mapping);
+			blkio_cgroup_reset_owner_pagedirty(page, current->mm);
 			radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 		}
diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index 791905c..e143d04 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -9,14 +9,15 @@
 #include <linux/vmalloc.h>
 #include <linux/cgroup.h>
 #include <linux/swapops.h>
+#include <linux/biotrack.h>
 
 static void __meminit
 __init_page_cgroup(struct page_cgroup *pc, unsigned long pfn)
 {
 	pc->flags = 0;
-	pc->mem_cgroup = NULL;
 	pc->page = pfn_to_page(pfn);
-	INIT_LIST_HEAD(&pc->lru);
+	__init_mem_page_cgroup(pc);
+	__init_blkio_page_cgroup(pc);
 }
 static unsigned long total_usage;
 
@@ -74,7 +75,7 @@ void __init page_cgroup_init(void)
 
 	int nid, fail;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() && blkio_cgroup_disabled())
 		return;
 
 	for_each_online_node(nid)  {
@@ -83,12 +84,12 @@ void __init page_cgroup_init(void)
 			goto fail;
 	}
 	printk(KERN_INFO "allocated %ld bytes of page_cgroup\n", total_usage);
-	printk(KERN_INFO "please try cgroup_disable=memory option if you"
+	printk(KERN_INFO "please try cgroup_disable=memory,blkio option if you"
 	" don't want\n");
 	return;
 fail:
 	printk(KERN_CRIT "allocation of page_cgroup was failed.\n");
-	printk(KERN_CRIT "please try cgroup_disable=memory boot option\n");
+	printk(KERN_CRIT "please try cgroup_disable=memory,blkio boot options\n");
 	panic("Out of memory");
 }
 
@@ -248,7 +249,7 @@ void __init page_cgroup_init(void)
 	unsigned long pfn;
 	int fail = 0;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() && blkio_cgroup_disabled())
 		return;
 
 	for (pfn = 0; !fail && pfn < max_pfn; pfn += PAGES_PER_SECTION) {
@@ -263,8 +264,8 @@ void __init page_cgroup_init(void)
 		hotplug_memory_notifier(page_cgroup_callback, 0);
 	}
 	printk(KERN_INFO "allocated %ld bytes of page_cgroup\n", total_usage);
-	printk(KERN_INFO "please try cgroup_disable=memory option if you don't"
-	" want\n");
+	printk(KERN_INFO "please try cgroup_disable=memory,blkio option"
+	" if you don't want\n");
 }
 
 void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1416e7e..df9d6bb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -18,6 +18,7 @@
 #include <linux/pagevec.h>
 #include <linux/migrate.h>
 #include <linux/page_cgroup.h>
+#include <linux/biotrack.h>
 
 #include <asm/pgtable.h>
 
@@ -306,6 +307,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 */
 		__set_page_locked(new_page);
 		SetPageSwapBacked(new_page);
+		blkio_cgroup_set_owner(new_page, current->mm);
 		err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
 		if (likely(!err)) {
 			/*
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 15/20] io-controller: map async requests to appropriate cgroup
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (13 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 16/20] io-controller: Per cgroup request descriptor support Vivek Goyal
                     ` (6 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o So far we were assuming that a bio/rq belongs to the task submitting
  it. That does not hold good in the case of async writes. This patch makes
  use of the blkio_cgroup patches to attribute async writes to the right
  group instead of the task submitting the bio.

o For sync requests, we continue to assume that the io belongs to the task
  submitting it. Only in the case of async requests do we make use of the io
  tracking patches to track the owner cgroup.

o So far cfq always caches the async queue pointer. With async requests now
  not necessarily being tied to the submitting task's io context, caching the
  pointer will not help for async queues. This patch introduces a new config
  option, CONFIG_TRACK_ASYNC_CONTEXT. If this option is not set, cfq retains
  the old behavior where the async queue pointer is cached in the task
  context. If it is set, the async queue pointer is not cached and we take
  the help of the bio tracking patches to determine the group a bio belongs
  to and then map it to the async queue of that group.

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched    |   16 +++++
 block/as-iosched.c       |    2 +-
 block/blk-core.c         |    7 +-
 block/cfq-iosched.c      |  152 ++++++++++++++++++++++++++++++++++++----------
 block/deadline-iosched.c |    2 +-
 block/elevator-fq.c      |   97 ++++++++++++++++++++++++-----
 block/elevator-fq.h      |   23 ++++++-
 block/elevator.c         |   15 +++--
 include/linux/elevator.h |   21 ++++++-
 9 files changed, 268 insertions(+), 67 deletions(-)
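
For review, the ownership decision introduced by this patch boils down to the
following condensed sketch (illustrative only; sketch_owner_cgroup() is a
made-up name, the real logic lives in get_cgroup_from_bio() in the
elevator-fq.c hunk below, which additionally falls back to the root group):

#include <linux/bio.h>
#include <linux/sched.h>
#include <linux/cgroup.h>
#include <linux/elevator.h>
#include <linux/biotrack.h>

static struct cgroup *sketch_owner_cgroup(struct bio *bio)
{
	/* sync io: charge the cgroup of the task submitting the bio */
	if (elv_bio_sync(bio))
		return task_cgroup(current, io_subsys_id);

	/*
	 * async io: charge the cgroup whose id was stored in the page
	 * (caller is assumed to hold rcu_read_lock(), as blkio_cgroup_lookup()
	 * requires)
	 */
	return blkio_cgroup_lookup(get_blkio_cgroup_id(bio));
}
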

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 77fc786..0677099 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -124,6 +124,22 @@ config DEFAULT_IOSCHED
 	default "cfq" if DEFAULT_CFQ
 	default "noop" if DEFAULT_NOOP
 
+config TRACK_ASYNC_CONTEXT
+	bool "Determine async request context from bio"
+	depends on GROUP_IOSCHED
+	select CGROUP_BLKIO
+	default n
+	---help---
+	  Normally an async request is attributed to the task submitting the
+	  request. With group io scheduling, for accurate accounting of
+	  async writes, one needs to map the request to the task/cgroup
+	  which originated the request and not to the submitter of the
+	  request.
+
+	  The generic io tracking patches provide the facility to map a bio
+	  to its original owner. If this option is set, the original owner of
+	  an async request is determined using the io tracking patches;
+	  otherwise we continue to attribute the request to the submitting
+	  thread.
 endmenu
 
 endif
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 23a3d2d..68200b3 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1499,7 +1499,7 @@ as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
-	struct as_queue *asq = elv_get_sched_queue_current(q);
+	struct as_queue *asq = elv_get_sched_queue_bio(q, bio);
 
 	if (!asq)
 		return ELEVATOR_NO_MERGE;
diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..c77b5b2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -643,7 +643,8 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 }
 
 static struct request *
-blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
+blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
+					gfp_t gfp_mask)
 {
 	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
 
@@ -655,7 +656,7 @@ blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
 	rq->cmd_flags = flags | REQ_ALLOCED;
 
 	if (priv) {
-		if (unlikely(elv_set_request(q, rq, gfp_mask))) {
+		if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
 			mempool_free(rq, q->rq.rq_pool);
 			return NULL;
 		}
@@ -796,7 +797,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		rw_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
-	rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
+	rq = blk_alloc_request(q, bio, rw_flags, priv, gfp_mask);
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index bba85b1..77bbe6c 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -160,8 +160,8 @@ CFQ_CFQQ_FNS(coop);
 	blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args)
 
 static void cfq_dispatch_insert(struct request_queue *, struct request *);
-static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
-				       struct io_context *, gfp_t);
+static struct cfq_queue *cfq_get_queue(struct cfq_data *, struct bio *bio,
+					int, struct io_context *, gfp_t);
 static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *,
 						struct io_context *);
 
@@ -171,22 +171,56 @@ static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic,
 	return cic->cfqq[!!is_sync];
 }
 
-static inline void cic_set_cfqq(struct cfq_io_context *cic,
-				struct cfq_queue *cfqq, int is_sync)
-{
-	cic->cfqq[!!is_sync] = cfqq;
-}
-
 /*
- * We regard a request as SYNC, if it's either a read or has the SYNC bit
- * set (in which case it could also be direct WRITE).
+ * Determine the cfq queue a bio should go in. This is primarily used by
+ * the front merge and allow merge functions.
+ *
+ * Currently this function takes the ioprio and ioprio_class from the task
+ * submitting the async bio. Later, the plan is to save the task information
+ * in the page_cgroup and retrieve the task's ioprio and class from there.
  */
-static inline int cfq_bio_sync(struct bio *bio)
+static struct cfq_queue *cic_bio_to_cfqq(struct cfq_data *cfqd,
+		struct cfq_io_context *cic, struct bio *bio, int is_sync)
 {
-	if (bio_data_dir(bio) == READ || bio_sync(bio))
-		return 1;
+	struct cfq_queue *cfqq = NULL;
 
-	return 0;
+	cfqq = cic_to_cfqq(cic, is_sync);
+
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (!cfqq && !is_sync) {
+		const int ioprio = task_ioprio(cic->ioc);
+		const int ioprio_class = task_ioprio_class(cic->ioc);
+		struct io_group *iog;
+		/*
+		 * async bio tracking is enabled and we are not caching
+		 * async queue pointer in cic.
+		 */
+		iog = io_get_io_group(cfqd->queue, bio, 0);
+		if (!iog) {
+			/*
+			 * May be this is first rq/bio and io group has not
+			 * been setup yet.
+			 */
+			return NULL;
+		}
+		return io_group_async_queue_prio(iog, ioprio_class, ioprio);
+	}
+#endif
+	return cfqq;
+}
+
+static inline void cic_set_cfqq(struct cfq_io_context *cic,
+				struct cfq_queue *cfqq, int is_sync)
+{
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	/*
+	 * Don't cache the async queue pointer, as one io context might now
+	 * be submitting async io for many different async queues
+	 */
+	if (!is_sync)
+		return;
+#endif
+	cic->cfqq[!!is_sync] = cfqq;
 }
 
 static inline struct io_group *cfqq_to_io_group(struct cfq_queue *cfqq)
@@ -499,7 +533,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
 	if (!cic)
 		return NULL;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_bio_to_cfqq(cfqd, cic, bio, elv_bio_sync(bio));
 	if (cfqq) {
 		sector_t sector = bio->bi_sector + bio_sectors(bio);
 
@@ -581,7 +615,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	/*
 	 * Disallow merge of a sync bio into an async request.
 	 */
-	if (cfq_bio_sync(bio) && !rq_is_sync(rq))
+	if (elv_bio_sync(bio) && !rq_is_sync(rq))
 		return 0;
 
 	/*
@@ -592,7 +626,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	if (!cic)
 		return 0;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_bio_to_cfqq(cfqd, cic, bio, elv_bio_sync(bio));
 	if (cfqq == RQ_CFQQ(rq))
 		return 1;
 
@@ -1199,14 +1233,28 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	spin_lock_irqsave(q->queue_lock, flags);
 
 	cfqq = cic->cfqq[BLK_RW_ASYNC];
+
 	if (cfqq) {
 		struct cfq_queue *new_cfqq;
-		new_cfqq = cfq_get_queue(cfqd, BLK_RW_ASYNC, cic->ioc,
+
+		/*
+		 * Drop the reference to old queue unconditionally. Don't
+		 * worry whether new async prio queue has been allocated
+		 * or not.
+		 */
+		cic_set_cfqq(cic, NULL, BLK_RW_ASYNC);
+		cfq_put_queue(cfqq);
+
+		/*
+		 * Why allocate a new queue now? Won't it be allocated
+		 * automatically when another async request from the same
+		 * context arrives? Keeping it for the time being because the
+		 * existing cfq code allocates the new queue immediately upon
+		 * a prio change.
+		 */
+		new_cfqq = cfq_get_queue(cfqd, NULL, BLK_RW_ASYNC, cic->ioc,
 						GFP_ATOMIC);
-		if (new_cfqq) {
-			cic->cfqq[BLK_RW_ASYNC] = new_cfqq;
-			cfq_put_queue(cfqq);
-		}
+		if (new_cfqq)
+			cic_set_cfqq(cic, new_cfqq, BLK_RW_ASYNC);
 	}
 
 	cfqq = cic->cfqq[BLK_RW_SYNC];
@@ -1239,7 +1287,7 @@ static void changed_cgroup(struct io_context *ioc, struct cfq_io_context *cic)
 
 	spin_lock_irqsave(q->queue_lock, flags);
 
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, NULL, 0);
 
 	if (async_cfqq != NULL) {
 		__iog = cfqq_to_io_group(async_cfqq);
@@ -1277,7 +1325,7 @@ static void cfq_ioc_set_cgroup(struct io_context *ioc)
 #endif  /* CONFIG_IOSCHED_CFQ_HIER */
 
 static struct cfq_queue *
-cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
+cfq_find_alloc_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 				struct io_context *ioc, gfp_t gfp_mask)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
@@ -1286,12 +1334,28 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
 	struct io_group *iog = NULL;
 retry:
-	iog = io_get_io_group(q, 1);
+	iog = io_get_io_group(q, bio, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
 	cfqq = cic_to_cfqq(cic, is_sync);
 
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (!cfqq && !is_sync) {
+		const int ioprio = task_ioprio(cic->ioc);
+		const int ioprio_class = task_ioprio_class(cic->ioc);
+
+		/*
+		 * We have not cached async queue pointer as bio tracking
+		 * is enabled. Look into group async queue array using ioc
+		 * class and prio to see if somebody already allocated the
+		 * queue.
+		 */
+
+		cfqq = io_group_async_queue_prio(iog, ioprio_class, ioprio);
+	}
+#endif
+
 	if (!cfqq) {
 		if (new_cfqq) {
 			goto alloc_ioq;
@@ -1381,14 +1445,14 @@ out:
 }
 
 static struct cfq_queue *
-cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
-					gfp_t gfp_mask)
+cfq_get_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
+		struct io_context *ioc, gfp_t gfp_mask)
 {
 	const int ioprio = task_ioprio(ioc);
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue, 1);
+	struct io_group *iog = io_get_io_group(cfqd->queue, bio, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
@@ -1397,7 +1461,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	}
 
 	if (!cfqq) {
-		cfqq = cfq_find_alloc_queue(cfqd, is_sync, ioc, gfp_mask);
+		cfqq = cfq_find_alloc_queue(cfqd, bio, is_sync, ioc, gfp_mask);
 		if (!cfqq)
 			return NULL;
 	}
@@ -1405,8 +1469,30 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	if (!is_sync && !async_cfqq)
 		io_group_set_async_queue(iog, ioprio_class, ioprio, cfqq->ioq);
 
-	/* ioc reference */
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	/*
+	 * ioc reference. If the async request queue/group is determined from
+	 * the original task/cgroup and not from the submitter task, the io
+	 * context cannot cache the pointer to the async queue, and every time
+	 * a request comes, it will be determined by going through the async
+	 * queue array.
+	 *
+	 * This comes from the fact that we might be getting async requests
+	 * which belong to a different cgroup altogether than the cgroup the
+	 * iocontext belongs to. And this thread might be submitting bios
+	 * from various cgroups. So the async queue will be different every
+	 * time, based on the cgroup of the bio/rq. Can't cache the async
+	 * cfqq pointer in the cic.
+	 */
+	if (is_sync)
+		elv_get_ioq(cfqq->ioq);
+#else
+	/*
+	 * async requests are being attributed to task submitting
+	 * it, hence cic can cache async cfqq pointer. Take the
+	 * queue reference even for async queue.
+	 */
 	elv_get_ioq(cfqq->ioq);
+#endif
 	return cfqq;
 }
 
@@ -1802,7 +1888,8 @@ static void cfq_put_request(struct request *rq)
  * Allocate cfq data structures associated with this request.
  */
 static int
-cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+cfq_set_request(struct request_queue *q, struct request *rq, struct bio *bio,
+				gfp_t gfp_mask)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_io_context *cic;
@@ -1822,7 +1909,8 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 	cfqq = cic_to_cfqq(cic, is_sync);
 	if (!cfqq) {
-		cfqq = cfq_get_queue(cfqd, is_sync, cic->ioc, gfp_mask);
+		cfqq = cfq_get_queue(cfqd, bio, is_sync, cic->ioc,
+						gfp_mask);
 
 		if (!cfqq)
 			goto queue_fail;
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index bae8e44..84fd338 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -133,7 +133,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	int ret;
 	struct deadline_queue *dq;
 
-	dq = elv_get_sched_queue_current(q);
+	dq = elv_get_sched_queue_bio(q, bio);
 	if (!dq)
 		return ELEVATOR_NO_MERGE;
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index c1f676e..18dbcc1 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -14,6 +14,7 @@
 #include "elevator-fq.h"
 #include <linux/blktrace_api.h>
 #include <linux/seq_file.h>
+#include <linux/biotrack.h>
 
 /* Values taken from cfq */
 const int elv_slice_sync = HZ / 10;
@@ -1074,6 +1075,9 @@ void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 
 struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
 {
+	if (!cgroup)
+		return &io_root_cgroup;
+
 	return container_of(cgroup_subsys_state(cgroup, io_subsys_id),
 			    struct io_cgroup, css);
 }
@@ -1424,9 +1428,47 @@ end:
 	return iog;
 }
 
+/* Map a bio to its cgroup. A NULL return means: map it to the root cgroup */
+static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
+{
+	unsigned long bio_cgroup_id;
+	struct cgroup *cgroup;
+
+	/* blk_get_request can reach here without passing a bio */
+	if (!bio)
+		return NULL;
+
+	if (bio_barrier(bio)) {
+		/*
+		 * Map barrier requests to the root group. Maybe more special
+		 * bio cases should be handled here.
+		 */
+		return NULL;
+	}
+
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (elv_bio_sync(bio)) {
+		/* sync io. Determine cgroup from submitting task context. */
+		cgroup = task_cgroup(current, io_subsys_id);
+		return cgroup;
+	}
+
+	/* Async io. Determine the cgroup from the cgroup id stored in the page */
+	bio_cgroup_id = get_blkio_cgroup_id(bio);
+
+	if (!bio_cgroup_id)
+		return NULL;
+
+	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
+#else
+	cgroup = task_cgroup(current, io_subsys_id);
+#endif
+	return cgroup;
+}
+
 /*
- * Search for the io group current task belongs to. If create=1, then also
- * create the io group if it is not already there.
+ * Find the io group a bio belongs to.
+ * If "create" is set, the io group is created if it is not already present.
  *
  * Note: This function should be called with queue lock held. It returns
  * a pointer to io group without taking any reference. That group will
@@ -1435,7 +1477,8 @@ end:
  * pointer even after dropping queue lock, take a reference to the group
  * before dropping queue lock.
  */
-struct io_group *io_get_io_group(struct request_queue *q, int create)
+struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+					int create)
 {
 	struct cgroup *cgroup;
 	struct io_group *iog;
@@ -1444,18 +1487,33 @@ struct io_group *io_get_io_group(struct request_queue *q, int create)
 	assert_spin_locked(q->queue_lock);
 
 	rcu_read_lock();
-	cgroup = task_cgroup(current, io_subsys_id);
-	iog = io_find_alloc_group(q, cgroup, efqd, create, NULL);
-	if (!iog) {
+
+	if (!bio)
+		cgroup = task_cgroup(current, io_subsys_id);
+	else
+		cgroup = get_cgroup_from_bio(bio);
+
+	if (!cgroup) {
 		if (create)
 			iog = efqd->root_group;
-		else
+		else {
 			/*
 			 * bio merge functions doing lookup don't want to
 			 * map bio to root group by default
 			 */
 			iog = NULL;
+		}
+		goto out;
 	}
+
+	iog = io_find_alloc_group(q, cgroup, efqd, create, bio);
+	if (!iog) {
+		if (create)
+			iog = efqd->root_group;
+		else
+			iog = NULL;
+	}
+out:
 	rcu_read_unlock();
 	return iog;
 }
@@ -1861,7 +1919,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 		return 1;
 
 	/* Determine the io group of the bio submitting task */
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, bio, 0);
 	if (!iog) {
 		/* Maybe the task belongs to a different cgroup for which the
 		 * io group has not been set up yet. */
@@ -1885,7 +1943,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
  * function is not invoked.
  */
 int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
-					gfp_t gfp_mask)
+				struct bio *bio, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 	unsigned long flags;
@@ -1901,7 +1959,7 @@ int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
 
 retry:
 	/* Determine the io group request belongs to */
-	iog = io_get_io_group(q, 1);
+	iog = io_get_io_group(q, bio, 1);
 	BUG_ON(!iog);
 
 	/* Get the iosched queue */
@@ -1986,17 +2044,17 @@ queue_fail:
 }
 
 /*
- * Find out the io queue of current task. Optimization for single ioq
+ * Find out the io queue a bio belongs to. Optimization for single ioq
  * per io group io schedulers.
  */
-struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+struct io_queue *elv_lookup_ioq_bio(struct request_queue *q, struct bio *bio)
 {
 	struct io_group *iog;
 
 	/* Determine the io group and io queue of the bio submitting task */
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, bio, 0);
 	if (!iog) {
-		/* May be task belongs to a cgroup for which io group has
+		/* Maybe the bio belongs to a cgroup for which the io group has
 		 * not been setup yet. */
 		return NULL;
 	}
@@ -2061,7 +2119,8 @@ void io_free_root_group(struct elevator_queue *e)
 	kfree(iog);
 }
 
-struct io_group *io_get_io_group(struct request_queue *q, int create)
+struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+						int create)
 {
 	return q->elevator->efqd.root_group;
 }
@@ -3169,6 +3228,10 @@ expire:
 new_queue:
 	ioq = elv_set_active_ioq(q, new_ioq);
 keep_queue:
+	if (ioq)
+		elv_log_ioq(efqd, ioq, "select busy=%d qued=%d disp=%d",
+				elv_nr_busy_ioq(q->elevator), ioq->nr_queued,
+				elv_ioq_nr_dispatched(ioq));
 	return ioq;
 }
 
@@ -3304,7 +3367,9 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 	ioq = rq->ioq;
 	iog = ioq_to_io_group(ioq);
 
-	elv_log_ioq(efqd, ioq, "complete");
+	elv_log_ioq(efqd, ioq, "complete rq_queued=%d drv=%d disp=%d",
+				ioq->nr_queued, efqd->rq_in_driver,
+				elv_ioq_nr_dispatched(ioq));
 
 	elv_update_hw_tag(efqd);
 
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 7281451..6d0df21 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -529,10 +529,12 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 }
 
 extern int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
-					gfp_t gfp_mask);
+					struct bio *bio, gfp_t gfp_mask);
 extern void elv_fq_unset_request_ioq(struct request_queue *q,
 					struct request *rq);
 extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
+extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio);
 
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
@@ -590,7 +592,7 @@ static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
 }
 
 static inline int elv_fq_set_request_ioq(struct request_queue *q,
-					struct request *rq, gfp_t gfp_mask)
+			struct request *rq, struct bio *bio, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -605,6 +607,12 @@ static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
 	return NULL;
 }
 
+static inline struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
@@ -658,7 +666,8 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio);
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
-extern struct io_group *io_get_io_group(struct request_queue *q, int create);
+extern struct io_group *io_get_io_group(struct request_queue *q,
+					struct bio *bio, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
 extern void elv_free_ioq(struct io_queue *ioq);
@@ -717,7 +726,7 @@ static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 	return 1;
 }
 static inline int elv_fq_set_request_ioq(struct request_queue *q,
-					struct request *rq, gfp_t gfp_mask)
+			struct request *rq, struct bio *bio, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -732,5 +741,11 @@ static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
 	return NULL;
 }
 
+static inline struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index de42fd6..b49efd6 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -967,7 +967,8 @@ struct request *elv_former_request(struct request_queue *q, struct request *rq)
 	return NULL;
 }
 
-int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+int elv_set_request(struct request_queue *q, struct request *rq,
+			struct bio *bio, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 
@@ -976,10 +977,10 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 	 * ioq per io group
 	 */
 	if (elv_iosched_single_ioq(e))
-		return elv_fq_set_request_ioq(q, rq, gfp_mask);
+		return elv_fq_set_request_ioq(q, rq, bio, gfp_mask);
 
 	if (e->ops->elevator_set_req_fn)
-		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
+		return e->ops->elevator_set_req_fn(q, rq, bio, gfp_mask);
 
 	rq->elevator_private = NULL;
 	return 0;
@@ -1368,19 +1369,19 @@ void *elv_select_sched_queue(struct request_queue *q, int force)
 EXPORT_SYMBOL(elv_select_sched_queue);
 
 /*
- * Get the io scheduler queue pointer for current task.
+ * Get the io scheduler queue pointer for the group bio belongs to.
  *
  * If fair queuing is enabled, determine the io group of task and retrieve
  * the ioq pointer from that. This is used by only single queue ioschedulers
  * for retrieving the queue associated with the group to decide whether the
  * new bio can do a front merge or not.
  */
-void *elv_get_sched_queue_current(struct request_queue *q)
+void *elv_get_sched_queue_bio(struct request_queue *q, struct bio *bio)
 {
 	/* Fair queuing is not enabled. There is only one queue. */
 	if (!elv_iosched_fair_queuing_enabled(q->elevator))
 		return q->elevator->sched_queue;
 
-	return ioq_sched_queue(elv_lookup_ioq_current(q));
+	return ioq_sched_queue(elv_lookup_ioq_bio(q, bio));
 }
-EXPORT_SYMBOL(elv_get_sched_queue_current);
+EXPORT_SYMBOL(elv_get_sched_queue_bio);
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index b47ecb3..1177bfe 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -23,7 +23,7 @@ typedef struct request *(elevator_request_list_fn) (struct request_queue *, stru
 typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
 typedef int (elevator_may_queue_fn) (struct request_queue *, int);
 
-typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, gfp_t);
+typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, struct bio *bio, gfp_t);
 typedef void (elevator_put_req_fn) (struct request *);
 typedef void (elevator_activate_req_fn) (struct request_queue *, struct request *);
 typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct request *);
@@ -150,7 +150,8 @@ extern void elv_unregister_queue(struct request_queue *q);
 extern int elv_may_queue(struct request_queue *, int);
 extern void elv_abort_queue(struct request_queue *);
 extern void elv_completed_request(struct request_queue *, struct request *);
-extern int elv_set_request(struct request_queue *, struct request *, gfp_t);
+extern int elv_set_request(struct request_queue *, struct request *,
+					struct bio *bio, gfp_t);
 extern void elv_put_request(struct request_queue *, struct request *);
 extern void elv_drain_elevator(struct request_queue *);
 
@@ -279,6 +280,20 @@ static inline int elv_iosched_single_ioq(struct elevator_queue *e)
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
-extern void *elv_get_sched_queue_current(struct request_queue *q);
+extern void *elv_get_sched_queue_bio(struct request_queue *q, struct bio *bio);
+
+/*
+ * This is the equivalent of the rq_is_sync()/cfq_bio_sync() functions, where
+ * we determine whether an rq/bio is sync or not. There are cases, such as
+ * during merging and during request allocation, where we don't have an rq
+ * but only a bio and need to find out if this bio will be considered sync
+ * or async by the elevator/iosched. This function is useful in such cases.
+ */
+static inline int elv_bio_sync(struct bio *bio)
+{
+	if ((bio_data_dir(bio) == READ) || bio_sync(bio))
+		return 1;
+	return 0;
+}
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 15/20] io-controller: map async requests to appropriate cgroup
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o So far we were assuming that a bio/rq belongs to the task submitting
  it. That does not hold good in the case of async writes. This patch makes
  use of the blkio_cgroup patches to attribute async writes to the right
  group instead of the task submitting the bio.

o For sync requests, we continue to assume that the io belongs to the task
  submitting it. Only in the case of async requests do we make use of the io
  tracking patches to track the owner cgroup.

o So far cfq always caches the async queue pointer. With async requests now
  not necessarily being tied to the submitting task's io context, caching the
  pointer will not help for async queues. This patch introduces a new config
  option, CONFIG_TRACK_ASYNC_CONTEXT. If this option is not set, cfq retains
  the old behavior where the async queue pointer is cached in the task
  context. If it is set, the async queue pointer is not cached and we take
  the help of the bio tracking patches to determine the group a bio belongs
  to and then map it to the async queue of that group.

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   16 +++++
 block/as-iosched.c       |    2 +-
 block/blk-core.c         |    7 +-
 block/cfq-iosched.c      |  152 ++++++++++++++++++++++++++++++++++++----------
 block/deadline-iosched.c |    2 +-
 block/elevator-fq.c      |   97 ++++++++++++++++++++++++-----
 block/elevator-fq.h      |   23 ++++++-
 block/elevator.c         |   15 +++--
 include/linux/elevator.h |   21 ++++++-
 9 files changed, 268 insertions(+), 67 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 77fc786..0677099 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -124,6 +124,22 @@ config DEFAULT_IOSCHED
 	default "cfq" if DEFAULT_CFQ
 	default "noop" if DEFAULT_NOOP
 
+config TRACK_ASYNC_CONTEXT
+	bool "Determine async request context from bio"
+	depends on GROUP_IOSCHED
+	select CGROUP_BLKIO
+	default n
+	---help---
+	  Normally an async request is attributed to the task submitting the
+	  request. With group io scheduling, for accurate accounting of
+	  async writes, one needs to map the request to the task/cgroup
+	  which originated the request and not to the submitter of the
+	  request.
+
+	  The generic io tracking patches provide the facility to map a bio
+	  to its original owner. If this option is set, the original owner of
+	  an async request is determined using the io tracking patches;
+	  otherwise we continue to attribute the request to the submitting
+	  thread.
 endmenu
 
 endif
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 23a3d2d..68200b3 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1499,7 +1499,7 @@ as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
-	struct as_queue *asq = elv_get_sched_queue_current(q);
+	struct as_queue *asq = elv_get_sched_queue_bio(q, bio);
 
 	if (!asq)
 		return ELEVATOR_NO_MERGE;
diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..c77b5b2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -643,7 +643,8 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 }
 
 static struct request *
-blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
+blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
+					gfp_t gfp_mask)
 {
 	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
 
@@ -655,7 +656,7 @@ blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
 	rq->cmd_flags = flags | REQ_ALLOCED;
 
 	if (priv) {
-		if (unlikely(elv_set_request(q, rq, gfp_mask))) {
+		if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
 			mempool_free(rq, q->rq.rq_pool);
 			return NULL;
 		}
@@ -796,7 +797,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		rw_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
-	rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
+	rq = blk_alloc_request(q, bio, rw_flags, priv, gfp_mask);
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index bba85b1..77bbe6c 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -160,8 +160,8 @@ CFQ_CFQQ_FNS(coop);
 	blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args)
 
 static void cfq_dispatch_insert(struct request_queue *, struct request *);
-static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
-				       struct io_context *, gfp_t);
+static struct cfq_queue *cfq_get_queue(struct cfq_data *, struct bio *bio,
+					int, struct io_context *, gfp_t);
 static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *,
 						struct io_context *);
 
@@ -171,22 +171,56 @@ static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic,
 	return cic->cfqq[!!is_sync];
 }
 
-static inline void cic_set_cfqq(struct cfq_io_context *cic,
-				struct cfq_queue *cfqq, int is_sync)
-{
-	cic->cfqq[!!is_sync] = cfqq;
-}
-
 /*
- * We regard a request as SYNC, if it's either a read or has the SYNC bit
- * set (in which case it could also be direct WRITE).
+ * Determine the cfq queue a bio should go in. This is primarily used by
+ * the front merge and allow merge functions.
+ *
+ * Currently this function takes the ioprio and ioprio_class from the task
+ * submitting the async bio. Later, the plan is to save the task information
+ * in the page_cgroup and retrieve the task's ioprio and class from there.
  */
-static inline int cfq_bio_sync(struct bio *bio)
+static struct cfq_queue *cic_bio_to_cfqq(struct cfq_data *cfqd,
+		struct cfq_io_context *cic, struct bio *bio, int is_sync)
 {
-	if (bio_data_dir(bio) == READ || bio_sync(bio))
-		return 1;
+	struct cfq_queue *cfqq = NULL;
 
-	return 0;
+	cfqq = cic_to_cfqq(cic, is_sync);
+
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (!cfqq && !is_sync) {
+		const int ioprio = task_ioprio(cic->ioc);
+		const int ioprio_class = task_ioprio_class(cic->ioc);
+		struct io_group *iog;
+		/*
+		 * async bio tracking is enabled and we are not caching
+		 * async queue pointer in cic.
+		 */
+		iog = io_get_io_group(cfqd->queue, bio, 0);
+		if (!iog) {
+			/*
+			 * May be this is first rq/bio and io group has not
+			 * been setup yet.
+			 */
+			return NULL;
+		}
+		return io_group_async_queue_prio(iog, ioprio_class, ioprio);
+	}
+#endif
+	return cfqq;
+}
+
+static inline void cic_set_cfqq(struct cfq_io_context *cic,
+				struct cfq_queue *cfqq, int is_sync)
+{
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	/*
+	 * Don't cache the async queue pointer, as one io context might now
+	 * be submitting async io for many different async queues
+	 */
+	if (!is_sync)
+		return;
+#endif
+	cic->cfqq[!!is_sync] = cfqq;
 }
 
 static inline struct io_group *cfqq_to_io_group(struct cfq_queue *cfqq)
@@ -499,7 +533,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
 	if (!cic)
 		return NULL;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_bio_to_cfqq(cfqd, cic, bio, elv_bio_sync(bio));
 	if (cfqq) {
 		sector_t sector = bio->bi_sector + bio_sectors(bio);
 
@@ -581,7 +615,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	/*
 	 * Disallow merge of a sync bio into an async request.
 	 */
-	if (cfq_bio_sync(bio) && !rq_is_sync(rq))
+	if (elv_bio_sync(bio) && !rq_is_sync(rq))
 		return 0;
 
 	/*
@@ -592,7 +626,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	if (!cic)
 		return 0;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_bio_to_cfqq(cfqd, cic, bio, elv_bio_sync(bio));
 	if (cfqq == RQ_CFQQ(rq))
 		return 1;
 
@@ -1199,14 +1233,28 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	spin_lock_irqsave(q->queue_lock, flags);
 
 	cfqq = cic->cfqq[BLK_RW_ASYNC];
+
 	if (cfqq) {
 		struct cfq_queue *new_cfqq;
-		new_cfqq = cfq_get_queue(cfqd, BLK_RW_ASYNC, cic->ioc,
+
+		/*
+		 * Drop the reference to old queue unconditionally. Don't
+		 * worry whether new async prio queue has been allocated
+		 * or not.
+		 */
+		cic_set_cfqq(cic, NULL, BLK_RW_ASYNC);
+		cfq_put_queue(cfqq);
+
+		/*
+		 * Why allocate a new queue now? Won't it be allocated
+		 * automatically when another async request from the same
+		 * context arrives? Keeping it for the time being because the
+		 * existing cfq code allocates the new queue immediately upon
+		 * a prio change.
+		 */
+		new_cfqq = cfq_get_queue(cfqd, NULL, BLK_RW_ASYNC, cic->ioc,
 						GFP_ATOMIC);
-		if (new_cfqq) {
-			cic->cfqq[BLK_RW_ASYNC] = new_cfqq;
-			cfq_put_queue(cfqq);
-		}
+		if (new_cfqq)
+			cic_set_cfqq(cic, new_cfqq, BLK_RW_ASYNC);
 	}
 
 	cfqq = cic->cfqq[BLK_RW_SYNC];
@@ -1239,7 +1287,7 @@ static void changed_cgroup(struct io_context *ioc, struct cfq_io_context *cic)
 
 	spin_lock_irqsave(q->queue_lock, flags);
 
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, NULL, 0);
 
 	if (async_cfqq != NULL) {
 		__iog = cfqq_to_io_group(async_cfqq);
@@ -1277,7 +1325,7 @@ static void cfq_ioc_set_cgroup(struct io_context *ioc)
 #endif  /* CONFIG_IOSCHED_CFQ_HIER */
 
 static struct cfq_queue *
-cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
+cfq_find_alloc_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 				struct io_context *ioc, gfp_t gfp_mask)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
@@ -1286,12 +1334,28 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
 	struct io_group *iog = NULL;
 retry:
-	iog = io_get_io_group(q, 1);
+	iog = io_get_io_group(q, bio, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
 	cfqq = cic_to_cfqq(cic, is_sync);
 
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (!cfqq && !is_sync) {
+		const int ioprio = task_ioprio(cic->ioc);
+		const int ioprio_class = task_ioprio_class(cic->ioc);
+
+		/*
+		 * We have not cached async queue pointer as bio tracking
+		 * is enabled. Look into group async queue array using ioc
+		 * class and prio to see if somebody already allocated the
+		 * queue.
+		 */
+
+		cfqq = io_group_async_queue_prio(iog, ioprio_class, ioprio);
+	}
+#endif
+
 	if (!cfqq) {
 		if (new_cfqq) {
 			goto alloc_ioq;
@@ -1381,14 +1445,14 @@ out:
 }
 
 static struct cfq_queue *
-cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
-					gfp_t gfp_mask)
+cfq_get_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
+		struct io_context *ioc, gfp_t gfp_mask)
 {
 	const int ioprio = task_ioprio(ioc);
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue, 1);
+	struct io_group *iog = io_get_io_group(cfqd->queue, bio, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
@@ -1397,7 +1461,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	}
 
 	if (!cfqq) {
-		cfqq = cfq_find_alloc_queue(cfqd, is_sync, ioc, gfp_mask);
+		cfqq = cfq_find_alloc_queue(cfqd, bio, is_sync, ioc, gfp_mask);
 		if (!cfqq)
 			return NULL;
 	}
@@ -1405,8 +1469,30 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	if (!is_sync && !async_cfqq)
 		io_group_set_async_queue(iog, ioprio_class, ioprio, cfqq->ioq);
 
-	/* ioc reference */
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	/*
+	 * ioc reference. If the async request queue/group is determined from
+	 * the original task/cgroup and not from the submitter task, the io
+	 * context cannot cache the pointer to the async queue, and every time
+	 * a request comes, it will be determined by going through the async
+	 * queue array.
+	 *
+	 * This comes from the fact that we might be getting async requests
+	 * which belong to a different cgroup altogether than the cgroup the
+	 * iocontext belongs to. And this thread might be submitting bios
+	 * from various cgroups. So the async queue will be different every
+	 * time, based on the cgroup of the bio/rq. Can't cache the async
+	 * cfqq pointer in the cic.
+	 */
+	if (is_sync)
+		elv_get_ioq(cfqq->ioq);
+#else
+	/*
+	 * async requests are being attributed to task submitting
+	 * it, hence cic can cache async cfqq pointer. Take the
+	 * queue reference even for async queue.
+	 */
 	elv_get_ioq(cfqq->ioq);
+#endif
 	return cfqq;
 }
 
@@ -1802,7 +1888,8 @@ static void cfq_put_request(struct request *rq)
  * Allocate cfq data structures associated with this request.
  */
 static int
-cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+cfq_set_request(struct request_queue *q, struct request *rq, struct bio *bio,
+				gfp_t gfp_mask)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_io_context *cic;
@@ -1822,7 +1909,8 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 	cfqq = cic_to_cfqq(cic, is_sync);
 	if (!cfqq) {
-		cfqq = cfq_get_queue(cfqd, is_sync, cic->ioc, gfp_mask);
+		cfqq = cfq_get_queue(cfqd, bio, is_sync, cic->ioc,
+						gfp_mask);
 
 		if (!cfqq)
 			goto queue_fail;
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index bae8e44..84fd338 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -133,7 +133,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	int ret;
 	struct deadline_queue *dq;
 
-	dq = elv_get_sched_queue_current(q);
+	dq = elv_get_sched_queue_bio(q, bio);
 	if (!dq)
 		return ELEVATOR_NO_MERGE;
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index c1f676e..18dbcc1 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -14,6 +14,7 @@
 #include "elevator-fq.h"
 #include <linux/blktrace_api.h>
 #include <linux/seq_file.h>
+#include <linux/biotrack.h>
 
 /* Values taken from cfq */
 const int elv_slice_sync = HZ / 10;
@@ -1074,6 +1075,9 @@ void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 
 struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
 {
+	if (!cgroup)
+		return &io_root_cgroup;
+
 	return container_of(cgroup_subsys_state(cgroup, io_subsys_id),
 			    struct io_cgroup, css);
 }
@@ -1424,9 +1428,47 @@ end:
 	return iog;
 }
 
+/* Map a bio to its cgroup. A NULL return means: map it to the root cgroup */
+static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
+{
+	unsigned long bio_cgroup_id;
+	struct cgroup *cgroup;
+
+	/* blk_get_request can reach here without passing a bio */
+	if (!bio)
+		return NULL;
+
+	if (bio_barrier(bio)) {
+		/*
+		 * Map barrier requests to root group. May be more special
+		 * bio cases should come here
+		 */
+		return NULL;
+	}
+
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (elv_bio_sync(bio)) {
+		/* sync io. Determine cgroup from submitting task context. */
+		cgroup = task_cgroup(current, io_subsys_id);
+		return cgroup;
+	}
+
+	/* Async io. Determine cgroup from the cgroup id stored in the page */
+	bio_cgroup_id = get_blkio_cgroup_id(bio);
+
+	if (!bio_cgroup_id)
+		return NULL;
+
+	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
+#else
+	cgroup = task_cgroup(current, io_subsys_id);
+#endif
+	return cgroup;
+}
+
 /*
- * Search for the io group current task belongs to. If create=1, then also
- * create the io group if it is not already there.
+ * Find the io group bio belongs to.
+ * If "create" is set, io group is created if it is not already present.
  *
  * Note: This function should be called with queue lock held. It returns
  * a pointer to io group without taking any reference. That group will
@@ -1435,7 +1477,8 @@ end:
  * pointer even after dropping queue lock, take a reference to the group
  * before dropping queue lock.
  */
-struct io_group *io_get_io_group(struct request_queue *q, int create)
+struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+					int create)
 {
 	struct cgroup *cgroup;
 	struct io_group *iog;
@@ -1444,18 +1487,33 @@ struct io_group *io_get_io_group(struct request_queue *q, int create)
 	assert_spin_locked(q->queue_lock);
 
 	rcu_read_lock();
-	cgroup = task_cgroup(current, io_subsys_id);
-	iog = io_find_alloc_group(q, cgroup, efqd, create, NULL);
-	if (!iog) {
+
+	if (!bio)
+		cgroup = task_cgroup(current, io_subsys_id);
+	else
+		cgroup = get_cgroup_from_bio(bio);
+
+	if (!cgroup) {
 		if (create)
 			iog = efqd->root_group;
-		else
+		else {
 			/*
 			 * bio merge functions doing lookup don't want to
 			 * map bio to root group by default
 			 */
 			iog = NULL;
+		}
+		goto out;
 	}
+
+	iog = io_find_alloc_group(q, cgroup, efqd, create, bio);
+	if (!iog) {
+		if (create)
+			iog = efqd->root_group;
+		else
+			iog = NULL;
+	}
+out:
 	rcu_read_unlock();
 	return iog;
 }
@@ -1861,7 +1919,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 		return 1;
 
 	/* Determine the io group of the bio submitting task */
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, bio, 0);
 	if (!iog) {
 		/* Maybe the task belongs to a different cgroup for which io
 		 * group has not been setup yet. */
@@ -1885,7 +1943,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
  * function is not invoked.
  */
 int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
-					gfp_t gfp_mask)
+				struct bio *bio, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 	unsigned long flags;
@@ -1901,7 +1959,7 @@ int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
 
 retry:
 	/* Determine the io group request belongs to */
-	iog = io_get_io_group(q, 1);
+	iog = io_get_io_group(q, bio, 1);
 	BUG_ON(!iog);
 
 	/* Get the iosched queue */
@@ -1986,17 +2044,17 @@ queue_fail:
 }
 
 /*
- * Find out the io queue of current task. Optimization for single ioq
+ * Find out the io queue a bio belongs to. Optimization for single ioq
  * per io group io schedulers.
  */
-struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+struct io_queue *elv_lookup_ioq_bio(struct request_queue *q, struct bio *bio)
 {
 	struct io_group *iog;
 
 	/* Determine the io group and io queue of the bio submitting task */
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, bio, 0);
 	if (!iog) {
-		/* May be task belongs to a cgroup for which io group has
+		/* May be bio belongs to a cgroup for which io group has
 		 * not been setup yet. */
 		return NULL;
 	}
@@ -2061,7 +2119,8 @@ void io_free_root_group(struct elevator_queue *e)
 	kfree(iog);
 }
 
-struct io_group *io_get_io_group(struct request_queue *q, int create)
+struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+						int create)
 {
 	return q->elevator->efqd.root_group;
 }
@@ -3169,6 +3228,10 @@ expire:
 new_queue:
 	ioq = elv_set_active_ioq(q, new_ioq);
 keep_queue:
+	if (ioq)
+		elv_log_ioq(efqd, ioq, "select busy=%d qued=%d disp=%d",
+				elv_nr_busy_ioq(q->elevator), ioq->nr_queued,
+				elv_ioq_nr_dispatched(ioq));
 	return ioq;
 }
 
@@ -3304,7 +3367,9 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 	ioq = rq->ioq;
 	iog = ioq_to_io_group(ioq);
 
-	elv_log_ioq(efqd, ioq, "complete");
+	elv_log_ioq(efqd, ioq, "complete rq_queued=%d drv=%d disp=%d",
+				ioq->nr_queued, efqd->rq_in_driver,
+				elv_ioq_nr_dispatched(ioq));
 
 	elv_update_hw_tag(efqd);
 
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 7281451..6d0df21 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -529,10 +529,12 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 }
 
 extern int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
-					gfp_t gfp_mask);
+					struct bio *bio, gfp_t gfp_mask);
 extern void elv_fq_unset_request_ioq(struct request_queue *q,
 					struct request *rq);
 extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
+extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio);
 
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
@@ -590,7 +592,7 @@ static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
 }
 
 static inline int elv_fq_set_request_ioq(struct request_queue *q,
-					struct request *rq, gfp_t gfp_mask)
+			struct request *rq, struct bio *bio, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -605,6 +607,12 @@ static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
 	return NULL;
 }
 
+static inline struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
@@ -658,7 +666,8 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio);
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
-extern struct io_group *io_get_io_group(struct request_queue *q, int create);
+extern struct io_group *io_get_io_group(struct request_queue *q,
+					struct bio *bio, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
 extern void elv_free_ioq(struct io_queue *ioq);
@@ -717,7 +726,7 @@ static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 	return 1;
 }
 static inline int elv_fq_set_request_ioq(struct request_queue *q,
-					struct request *rq, gfp_t gfp_mask)
+			struct request *rq, struct bio *bio, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -732,5 +741,11 @@ static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
 	return NULL;
 }
 
+static inline struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index de42fd6..b49efd6 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -967,7 +967,8 @@ struct request *elv_former_request(struct request_queue *q, struct request *rq)
 	return NULL;
 }
 
-int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+int elv_set_request(struct request_queue *q, struct request *rq,
+			struct bio *bio, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 
@@ -976,10 +977,10 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 	 * ioq per io group
 	 */
 	if (elv_iosched_single_ioq(e))
-		return elv_fq_set_request_ioq(q, rq, gfp_mask);
+		return elv_fq_set_request_ioq(q, rq, bio, gfp_mask);
 
 	if (e->ops->elevator_set_req_fn)
-		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
+		return e->ops->elevator_set_req_fn(q, rq, bio, gfp_mask);
 
 	rq->elevator_private = NULL;
 	return 0;
@@ -1368,19 +1369,19 @@ void *elv_select_sched_queue(struct request_queue *q, int force)
 EXPORT_SYMBOL(elv_select_sched_queue);
 
 /*
- * Get the io scheduler queue pointer for current task.
+ * Get the io scheduler queue pointer for the group the bio belongs to.
  *
  * If fair queuing is enabled, determine the io group of task and retrieve
  * the ioq pointer from that. This is used by only single queue ioschedulers
  * for retrieving the queue associated with the group to decide whether the
  * new bio can do a front merge or not.
  */
-void *elv_get_sched_queue_current(struct request_queue *q)
+void *elv_get_sched_queue_bio(struct request_queue *q, struct bio *bio)
 {
 	/* Fair queuing is not enabled. There is only one queue. */
 	if (!elv_iosched_fair_queuing_enabled(q->elevator))
 		return q->elevator->sched_queue;
 
-	return ioq_sched_queue(elv_lookup_ioq_current(q));
+	return ioq_sched_queue(elv_lookup_ioq_bio(q, bio));
 }
-EXPORT_SYMBOL(elv_get_sched_queue_current);
+EXPORT_SYMBOL(elv_get_sched_queue_bio);
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index b47ecb3..1177bfe 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -23,7 +23,7 @@ typedef struct request *(elevator_request_list_fn) (struct request_queue *, stru
 typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
 typedef int (elevator_may_queue_fn) (struct request_queue *, int);
 
-typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, gfp_t);
+typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, struct bio *bio, gfp_t);
 typedef void (elevator_put_req_fn) (struct request *);
 typedef void (elevator_activate_req_fn) (struct request_queue *, struct request *);
 typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct request *);
@@ -150,7 +150,8 @@ extern void elv_unregister_queue(struct request_queue *q);
 extern int elv_may_queue(struct request_queue *, int);
 extern void elv_abort_queue(struct request_queue *);
 extern void elv_completed_request(struct request_queue *, struct request *);
-extern int elv_set_request(struct request_queue *, struct request *, gfp_t);
+extern int elv_set_request(struct request_queue *, struct request *,
+					struct bio *bio, gfp_t);
 extern void elv_put_request(struct request_queue *, struct request *);
 extern void elv_drain_elevator(struct request_queue *);
 
@@ -279,6 +280,20 @@ static inline int elv_iosched_single_ioq(struct elevator_queue *e)
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
-extern void *elv_get_sched_queue_current(struct request_queue *q);
+extern void *elv_get_sched_queue_bio(struct request_queue *q, struct bio *bio);
+
+/*
+ * This is equivalent of rq_is_sync()/cfq_bio_sync() function where we
+ * determine whether an rq/bio is sync or not. There are cases like during
+ * merging and during request allocation, where we don't have an rq but a bio
+ * and need to find out if this bio will be considered as sync or async by
+ * elevator/iosched. This function is useful in such cases.
+ */
+static inline int elv_bio_sync(struct bio *bio)
+{
+	if ((bio_data_dir(bio) == READ) || bio_sync(bio))
+		return 1;
+	return 0;
+}
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 15/20] io-controller: map async requests to appropriate cgroup
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o So far we were assuming that a bio/rq belongs to the task that submitted
  it. That assumption does not hold for async writes. This patch makes use of
  the blkio_cgroup patches to attribute async writes to the right group
  instead of the task submitting the bio.

o For sync requests, we continue to assume that the io belongs to the task
  submitting it. Only for async requests do we use the io tracking patches to
  track the owner cgroup.

o So far cfq always caches the async queue pointer. With async requests no
  longer necessarily tied to the submitting task's io context, caching the
  pointer does not help for async queues. This patch introduces a new config
  option, CONFIG_TRACK_ASYNC_CONTEXT. If this option is not set, cfq retains
  the old behavior where the async queue pointer is cached in the task
  context. If it is set, the async queue pointer is not cached and we use the
  bio tracking patches to determine the group a bio belongs to and then map
  it to the async queue of that group.
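
To condense the rule this patch implements, here is a small user-space
sketch (plain C, runnable on its own). The enum and function names are
illustrative only and are not part of the patch; the logic mirrors
get_cgroup_from_bio() and the CONFIG_TRACK_ASYNC_CONTEXT ifdefs in the
diff that follows.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative model only; these names do not exist in the kernel. */
enum io_owner { OWNER_SUBMITTER, OWNER_PAGE_CGROUP, OWNER_ROOT };

static enum io_owner classify_bio(bool is_sync, bool is_barrier,
                                  bool track_async)
{
    if (is_barrier)
        return OWNER_ROOT;          /* barrier bios go to the root group */
    if (is_sync || !track_async)
        return OWNER_SUBMITTER;     /* charge the submitting task's cgroup */
    return OWNER_PAGE_CGROUP;       /* async: use the cgroup id stored with
                                       the page (io tracking patches) */
}

int main(void)
{
    /* async write with CONFIG_TRACK_ASYNC_CONTEXT=y: charged to page owner */
    printf("%d\n", classify_bio(false, false, true));
    /* the same write with the option off: charged to the submitter */
    printf("%d\n", classify_bio(false, false, false));
    return 0;
}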

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched    |   16 +++++
 block/as-iosched.c       |    2 +-
 block/blk-core.c         |    7 +-
 block/cfq-iosched.c      |  152 ++++++++++++++++++++++++++++++++++++----------
 block/deadline-iosched.c |    2 +-
 block/elevator-fq.c      |   97 ++++++++++++++++++++++++-----
 block/elevator-fq.h      |   23 ++++++-
 block/elevator.c         |   15 +++--
 include/linux/elevator.h |   21 ++++++-
 9 files changed, 268 insertions(+), 67 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 77fc786..0677099 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -124,6 +124,22 @@ config DEFAULT_IOSCHED
 	default "cfq" if DEFAULT_CFQ
 	default "noop" if DEFAULT_NOOP
 
+config TRACK_ASYNC_CONTEXT
+	bool "Determine async request context from bio"
+	depends on GROUP_IOSCHED
+	select CGROUP_BLKIO
+	default n
+	---help---
+	  Normally async request is attributed to the task submitting the
+	  request. With group ioscheduling, for accurate accounting of
+	  async writes, one needs to map the request to original task/cgroup
+	  which originated the request and not the submitter of the request.
+
+	  Currently there are generic io tracking patches to provide facility
+	  to map bio to original owner. If this option is set, for async
+	  request, original owner of the bio is decided by using io tracking
+	  patches otherwise we continue to attribute the request to the
+	  submitting thread.
 endmenu
 
 endif
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 23a3d2d..68200b3 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1499,7 +1499,7 @@ as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
-	struct as_queue *asq = elv_get_sched_queue_current(q);
+	struct as_queue *asq = elv_get_sched_queue_bio(q, bio);
 
 	if (!asq)
 		return ELEVATOR_NO_MERGE;
diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..c77b5b2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -643,7 +643,8 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 }
 
 static struct request *
-blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
+blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
+					gfp_t gfp_mask)
 {
 	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
 
@@ -655,7 +656,7 @@ blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
 	rq->cmd_flags = flags | REQ_ALLOCED;
 
 	if (priv) {
-		if (unlikely(elv_set_request(q, rq, gfp_mask))) {
+		if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
 			mempool_free(rq, q->rq.rq_pool);
 			return NULL;
 		}
@@ -796,7 +797,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		rw_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
-	rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
+	rq = blk_alloc_request(q, bio, rw_flags, priv, gfp_mask);
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index bba85b1..77bbe6c 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -160,8 +160,8 @@ CFQ_CFQQ_FNS(coop);
 	blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args)
 
 static void cfq_dispatch_insert(struct request_queue *, struct request *);
-static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
-				       struct io_context *, gfp_t);
+static struct cfq_queue *cfq_get_queue(struct cfq_data *, struct bio *bio,
+					int, struct io_context *, gfp_t);
 static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *,
 						struct io_context *);
 
@@ -171,22 +171,56 @@ static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic,
 	return cic->cfqq[!!is_sync];
 }
 
-static inline void cic_set_cfqq(struct cfq_io_context *cic,
-				struct cfq_queue *cfqq, int is_sync)
-{
-	cic->cfqq[!!is_sync] = cfqq;
-}
-
 /*
- * We regard a request as SYNC, if it's either a read or has the SYNC bit
- * set (in which case it could also be direct WRITE).
+ * Determine the cfq queue bio should go in. This is primarily used by
+ * front merge and allow merge functions.
+ *
+ * Currently this function takes the ioprio and ioprio_class from the task
+ * submitting async bio. Later save the task information in the page_cgroup
+ * and retrieve task's ioprio and class from there.
  */
-static inline int cfq_bio_sync(struct bio *bio)
+static struct cfq_queue *cic_bio_to_cfqq(struct cfq_data *cfqd,
+		struct cfq_io_context *cic, struct bio *bio, int is_sync)
 {
-	if (bio_data_dir(bio) == READ || bio_sync(bio))
-		return 1;
+	struct cfq_queue *cfqq = NULL;
 
-	return 0;
+	cfqq = cic_to_cfqq(cic, is_sync);
+
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (!cfqq && !is_sync) {
+		const int ioprio = task_ioprio(cic->ioc);
+		const int ioprio_class = task_ioprio_class(cic->ioc);
+		struct io_group *iog;
+		/*
+		 * async bio tracking is enabled and we are not caching
+		 * async queue pointer in cic.
+		 */
+		iog = io_get_io_group(cfqd->queue, bio, 0);
+		if (!iog) {
+			/*
+			 * May be this is first rq/bio and io group has not
+			 * been setup yet.
+			 */
+			return NULL;
+		}
+		return io_group_async_queue_prio(iog, ioprio_class, ioprio);
+	}
+#endif
+	return cfqq;
+}
+
+static inline void cic_set_cfqq(struct cfq_io_context *cic,
+				struct cfq_queue *cfqq, int is_sync)
+{
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	/*
+	 * Don't cache async queue pointer as now one io context might
+	 * be submitting async io for various different async queues
+	 */
+	if (!is_sync)
+		return;
+#endif
+	cic->cfqq[!!is_sync] = cfqq;
 }
 
 static inline struct io_group *cfqq_to_io_group(struct cfq_queue *cfqq)
@@ -499,7 +533,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
 	if (!cic)
 		return NULL;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_bio_to_cfqq(cfqd, cic, bio, elv_bio_sync(bio));
 	if (cfqq) {
 		sector_t sector = bio->bi_sector + bio_sectors(bio);
 
@@ -581,7 +615,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	/*
 	 * Disallow merge of a sync bio into an async request.
 	 */
-	if (cfq_bio_sync(bio) && !rq_is_sync(rq))
+	if (elv_bio_sync(bio) && !rq_is_sync(rq))
 		return 0;
 
 	/*
@@ -592,7 +626,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
 	if (!cic)
 		return 0;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_bio_to_cfqq(cfqd, cic, bio, elv_bio_sync(bio));
 	if (cfqq == RQ_CFQQ(rq))
 		return 1;
 
@@ -1199,14 +1233,28 @@ static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
 	spin_lock_irqsave(q->queue_lock, flags);
 
 	cfqq = cic->cfqq[BLK_RW_ASYNC];
+
 	if (cfqq) {
 		struct cfq_queue *new_cfqq;
-		new_cfqq = cfq_get_queue(cfqd, BLK_RW_ASYNC, cic->ioc,
+
+		/*
+		 * Drop the reference to old queue unconditionally. Don't
+		 * worry whether new async prio queue has been allocated
+		 * or not.
+		 */
+		cic_set_cfqq(cic, NULL, BLK_RW_ASYNC);
+		cfq_put_queue(cfqq);
+
+		/*
+		 * Why to allocate new queue now? Will it not be automatically
+		 * allocated whenever another async request from same context
+		 * comes? Keeping it for the time being because existing cfq
+		 * code allocates the new queue immediately upon prio change
+		 */
+		new_cfqq = cfq_get_queue(cfqd, NULL, BLK_RW_ASYNC, cic->ioc,
 						GFP_ATOMIC);
-		if (new_cfqq) {
-			cic->cfqq[BLK_RW_ASYNC] = new_cfqq;
-			cfq_put_queue(cfqq);
-		}
+		if (new_cfqq)
+			cic_set_cfqq(cic, new_cfqq, BLK_RW_ASYNC);
 	}
 
 	cfqq = cic->cfqq[BLK_RW_SYNC];
@@ -1239,7 +1287,7 @@ static void changed_cgroup(struct io_context *ioc, struct cfq_io_context *cic)
 
 	spin_lock_irqsave(q->queue_lock, flags);
 
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, NULL, 0);
 
 	if (async_cfqq != NULL) {
 		__iog = cfqq_to_io_group(async_cfqq);
@@ -1277,7 +1325,7 @@ static void cfq_ioc_set_cgroup(struct io_context *ioc)
 #endif  /* CONFIG_IOSCHED_CFQ_HIER */
 
 static struct cfq_queue *
-cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
+cfq_find_alloc_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 				struct io_context *ioc, gfp_t gfp_mask)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
@@ -1286,12 +1334,28 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
 	struct io_group *iog = NULL;
 retry:
-	iog = io_get_io_group(q, 1);
+	iog = io_get_io_group(q, bio, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
 	cfqq = cic_to_cfqq(cic, is_sync);
 
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (!cfqq && !is_sync) {
+		const int ioprio = task_ioprio(cic->ioc);
+		const int ioprio_class = task_ioprio_class(cic->ioc);
+
+		/*
+		 * We have not cached async queue pointer as bio tracking
+		 * is enabled. Look into group async queue array using ioc
+		 * class and prio to see if somebody already allocated the
+		 * queue.
+		 */
+
+		cfqq = io_group_async_queue_prio(iog, ioprio_class, ioprio);
+	}
+#endif
+
 	if (!cfqq) {
 		if (new_cfqq) {
 			goto alloc_ioq;
@@ -1381,14 +1445,14 @@ out:
 }
 
 static struct cfq_queue *
-cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
-					gfp_t gfp_mask)
+cfq_get_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
+		struct io_context *ioc, gfp_t gfp_mask)
 {
 	const int ioprio = task_ioprio(ioc);
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue, 1);
+	struct io_group *iog = io_get_io_group(cfqd->queue, bio, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
@@ -1397,7 +1461,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	}
 
 	if (!cfqq) {
-		cfqq = cfq_find_alloc_queue(cfqd, is_sync, ioc, gfp_mask);
+		cfqq = cfq_find_alloc_queue(cfqd, bio, is_sync, ioc, gfp_mask);
 		if (!cfqq)
 			return NULL;
 	}
@@ -1405,8 +1469,30 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
 	if (!is_sync && !async_cfqq)
 		io_group_set_async_queue(iog, ioprio_class, ioprio, cfqq->ioq);
 
-	/* ioc reference */
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	/*
+	 * ioc reference. If async request queue/group is determined from the
+	 * original task/cgroup and not from submitter task, io context can
+	 * not cache the pointer to async queue and everytime a request comes,
+	 * it will be determined by going through the async queue array.
+	 *
+	 * This comes from the fact that we might be getting async requests
+	 * which belong to a different cgroup altogether than the cgroup
+	 * iocontext belongs to. And this thread might be submitting bios
+	 * from various cgroups. So every time async queue will be different
+	 * based on the cgroup of the bio/rq. Can't cache the async cfqq
+	 * pointer in cic.
+	 */
+	if (is_sync)
+		elv_get_ioq(cfqq->ioq);
+#else
+	/*
+	 * async requests are being attributed to task submitting
+	 * it, hence cic can cache async cfqq pointer. Take the
+	 * queue reference even for async queue.
+	 */
 	elv_get_ioq(cfqq->ioq);
+#endif
 	return cfqq;
 }
 
@@ -1802,7 +1888,8 @@ static void cfq_put_request(struct request *rq)
  * Allocate cfq data structures associated with this request.
  */
 static int
-cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+cfq_set_request(struct request_queue *q, struct request *rq, struct bio *bio,
+				gfp_t gfp_mask)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_io_context *cic;
@@ -1822,7 +1909,8 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 	cfqq = cic_to_cfqq(cic, is_sync);
 	if (!cfqq) {
-		cfqq = cfq_get_queue(cfqd, is_sync, cic->ioc, gfp_mask);
+		cfqq = cfq_get_queue(cfqd, bio, is_sync, cic->ioc,
+						gfp_mask);
 
 		if (!cfqq)
 			goto queue_fail;
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index bae8e44..84fd338 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -133,7 +133,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	int ret;
 	struct deadline_queue *dq;
 
-	dq = elv_get_sched_queue_current(q);
+	dq = elv_get_sched_queue_bio(q, bio);
 	if (!dq)
 		return ELEVATOR_NO_MERGE;
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index c1f676e..18dbcc1 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -14,6 +14,7 @@
 #include "elevator-fq.h"
 #include <linux/blktrace_api.h>
 #include <linux/seq_file.h>
+#include <linux/biotrack.h>
 
 /* Values taken from cfq */
 const int elv_slice_sync = HZ / 10;
@@ -1074,6 +1075,9 @@ void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
 
 struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
 {
+	if (!cgroup)
+		return &io_root_cgroup;
+
 	return container_of(cgroup_subsys_state(cgroup, io_subsys_id),
 			    struct io_cgroup, css);
 }
@@ -1424,9 +1428,47 @@ end:
 	return iog;
 }
 
+/* Map a bio to respective cgroup. Null return means, map it to root cgroup */
+static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
+{
+	unsigned long bio_cgroup_id;
+	struct cgroup *cgroup;
+
+	/* blk_get_request can reach here without passing a bio */
+	if (!bio)
+		return NULL;
+
+	if (bio_barrier(bio)) {
+		/*
+		 * Map barrier requests to root group. May be more special
+		 * bio cases should come here
+		 */
+		return NULL;
+	}
+
+#ifdef CONFIG_TRACK_ASYNC_CONTEXT
+	if (elv_bio_sync(bio)) {
+		/* sync io. Determine cgroup from submitting task context. */
+		cgroup = task_cgroup(current, io_subsys_id);
+		return cgroup;
+	}
+
+	/* Async io. Determine cgroup from the cgroup id stored in the page */
+	bio_cgroup_id = get_blkio_cgroup_id(bio);
+
+	if (!bio_cgroup_id)
+		return NULL;
+
+	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
+#else
+	cgroup = task_cgroup(current, io_subsys_id);
+#endif
+	return cgroup;
+}
+
 /*
- * Search for the io group current task belongs to. If create=1, then also
- * create the io group if it is not already there.
+ * Find the io group bio belongs to.
+ * If "create" is set, io group is created if it is not already present.
  *
  * Note: This function should be called with queue lock held. It returns
  * a pointer to io group without taking any reference. That group will
@@ -1435,7 +1477,8 @@ end:
  * pointer even after dropping queue lock, take a reference to the group
  * before dropping queue lock.
  */
-struct io_group *io_get_io_group(struct request_queue *q, int create)
+struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+					int create)
 {
 	struct cgroup *cgroup;
 	struct io_group *iog;
@@ -1444,18 +1487,33 @@ struct io_group *io_get_io_group(struct request_queue *q, int create)
 	assert_spin_locked(q->queue_lock);
 
 	rcu_read_lock();
-	cgroup = task_cgroup(current, io_subsys_id);
-	iog = io_find_alloc_group(q, cgroup, efqd, create, NULL);
-	if (!iog) {
+
+	if (!bio)
+		cgroup = task_cgroup(current, io_subsys_id);
+	else
+		cgroup = get_cgroup_from_bio(bio);
+
+	if (!cgroup) {
 		if (create)
 			iog = efqd->root_group;
-		else
+		else {
 			/*
 			 * bio merge functions doing lookup don't want to
 			 * map bio to root group by default
 			 */
 			iog = NULL;
+		}
+		goto out;
 	}
+
+	iog = io_find_alloc_group(q, cgroup, efqd, create, bio);
+	if (!iog) {
+		if (create)
+			iog = efqd->root_group;
+		else
+			iog = NULL;
+	}
+out:
 	rcu_read_unlock();
 	return iog;
 }
@@ -1861,7 +1919,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 		return 1;
 
 	/* Determine the io group of the bio submitting task */
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, bio, 0);
 	if (!iog) {
 		/* Maybe the task belongs to a different cgroup for which io
 		 * group has not been setup yet. */
@@ -1885,7 +1943,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
  * function is not invoked.
  */
 int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
-					gfp_t gfp_mask)
+				struct bio *bio, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 	unsigned long flags;
@@ -1901,7 +1959,7 @@ int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
 
 retry:
 	/* Determine the io group request belongs to */
-	iog = io_get_io_group(q, 1);
+	iog = io_get_io_group(q, bio, 1);
 	BUG_ON(!iog);
 
 	/* Get the iosched queue */
@@ -1986,17 +2044,17 @@ queue_fail:
 }
 
 /*
- * Find out the io queue of current task. Optimization for single ioq
+ * Find out the io queue a bio belongs to. Optimization for single ioq
  * per io group io schedulers.
  */
-struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
+struct io_queue *elv_lookup_ioq_bio(struct request_queue *q, struct bio *bio)
 {
 	struct io_group *iog;
 
 	/* Determine the io group and io queue of the bio submitting task */
-	iog = io_get_io_group(q, 0);
+	iog = io_get_io_group(q, bio, 0);
 	if (!iog) {
-		/* May be task belongs to a cgroup for which io group has
+		/* May be bio belongs to a cgroup for which io group has
 		 * not been setup yet. */
 		return NULL;
 	}
@@ -2061,7 +2119,8 @@ void io_free_root_group(struct elevator_queue *e)
 	kfree(iog);
 }
 
-struct io_group *io_get_io_group(struct request_queue *q, int create)
+struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+						int create)
 {
 	return q->elevator->efqd.root_group;
 }
@@ -3169,6 +3228,10 @@ expire:
 new_queue:
 	ioq = elv_set_active_ioq(q, new_ioq);
 keep_queue:
+	if (ioq)
+		elv_log_ioq(efqd, ioq, "select busy=%d qued=%d disp=%d",
+				elv_nr_busy_ioq(q->elevator), ioq->nr_queued,
+				elv_ioq_nr_dispatched(ioq));
 	return ioq;
 }
 
@@ -3304,7 +3367,9 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 	ioq = rq->ioq;
 	iog = ioq_to_io_group(ioq);
 
-	elv_log_ioq(efqd, ioq, "complete");
+	elv_log_ioq(efqd, ioq, "complete rq_queued=%d drv=%d disp=%d",
+				ioq->nr_queued, efqd->rq_in_driver,
+				elv_ioq_nr_dispatched(ioq));
 
 	elv_update_hw_tag(efqd);
 
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 7281451..6d0df21 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -529,10 +529,12 @@ static inline int update_requeue(struct io_queue *ioq, int requeue)
 }
 
 extern int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
-					gfp_t gfp_mask);
+					struct bio *bio, gfp_t gfp_mask);
 extern void elv_fq_unset_request_ioq(struct request_queue *q,
 					struct request *rq);
 extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
+extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio);
 
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
@@ -590,7 +592,7 @@ static inline void io_group_set_ioq(struct io_group *iog, struct io_queue *ioq)
 }
 
 static inline int elv_fq_set_request_ioq(struct request_queue *q,
-					struct request *rq, gfp_t gfp_mask)
+			struct request *rq, struct bio *bio, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -605,6 +607,12 @@ static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
 	return NULL;
 }
 
+static inline struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
@@ -658,7 +666,8 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 					int ioprio);
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
-extern struct io_group *io_get_io_group(struct request_queue *q, int create);
+extern struct io_group *io_get_io_group(struct request_queue *q,
+					struct bio *bio, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
 extern void elv_free_ioq(struct io_queue *ioq);
@@ -717,7 +726,7 @@ static inline int io_group_allow_merge(struct request *rq, struct bio *bio)
 	return 1;
 }
 static inline int elv_fq_set_request_ioq(struct request_queue *q,
-					struct request *rq, gfp_t gfp_mask)
+			struct request *rq, struct bio *bio, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -732,5 +741,11 @@ static inline struct io_queue *elv_lookup_ioq_current(struct request_queue *q)
 	return NULL;
 }
 
+static inline struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
+						struct bio *bio)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_ELV_FAIR_QUEUING */
 #endif /* _BFQ_SCHED_H */
diff --git a/block/elevator.c b/block/elevator.c
index de42fd6..b49efd6 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -967,7 +967,8 @@ struct request *elv_former_request(struct request_queue *q, struct request *rq)
 	return NULL;
 }
 
-int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+int elv_set_request(struct request_queue *q, struct request *rq,
+			struct bio *bio, gfp_t gfp_mask)
 {
 	struct elevator_queue *e = q->elevator;
 
@@ -976,10 +977,10 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 	 * ioq per io group
 	 */
 	if (elv_iosched_single_ioq(e))
-		return elv_fq_set_request_ioq(q, rq, gfp_mask);
+		return elv_fq_set_request_ioq(q, rq, bio, gfp_mask);
 
 	if (e->ops->elevator_set_req_fn)
-		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
+		return e->ops->elevator_set_req_fn(q, rq, bio, gfp_mask);
 
 	rq->elevator_private = NULL;
 	return 0;
@@ -1368,19 +1369,19 @@ void *elv_select_sched_queue(struct request_queue *q, int force)
 EXPORT_SYMBOL(elv_select_sched_queue);
 
 /*
- * Get the io scheduler queue pointer for current task.
+ * Get the io scheduler queue pointer for the group the bio belongs to.
  *
  * If fair queuing is enabled, determine the io group of task and retrieve
  * the ioq pointer from that. This is used by only single queue ioschedulers
  * for retrieving the queue associated with the group to decide whether the
  * new bio can do a front merge or not.
  */
-void *elv_get_sched_queue_current(struct request_queue *q)
+void *elv_get_sched_queue_bio(struct request_queue *q, struct bio *bio)
 {
 	/* Fair queuing is not enabled. There is only one queue. */
 	if (!elv_iosched_fair_queuing_enabled(q->elevator))
 		return q->elevator->sched_queue;
 
-	return ioq_sched_queue(elv_lookup_ioq_current(q));
+	return ioq_sched_queue(elv_lookup_ioq_bio(q, bio));
 }
-EXPORT_SYMBOL(elv_get_sched_queue_current);
+EXPORT_SYMBOL(elv_get_sched_queue_bio);
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index b47ecb3..1177bfe 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -23,7 +23,7 @@ typedef struct request *(elevator_request_list_fn) (struct request_queue *, stru
 typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
 typedef int (elevator_may_queue_fn) (struct request_queue *, int);
 
-typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, gfp_t);
+typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, struct bio *bio, gfp_t);
 typedef void (elevator_put_req_fn) (struct request *);
 typedef void (elevator_activate_req_fn) (struct request_queue *, struct request *);
 typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct request *);
@@ -150,7 +150,8 @@ extern void elv_unregister_queue(struct request_queue *q);
 extern int elv_may_queue(struct request_queue *, int);
 extern void elv_abort_queue(struct request_queue *);
 extern void elv_completed_request(struct request_queue *, struct request *);
-extern int elv_set_request(struct request_queue *, struct request *, gfp_t);
+extern int elv_set_request(struct request_queue *, struct request *,
+					struct bio *bio, gfp_t);
 extern void elv_put_request(struct request_queue *, struct request *);
 extern void elv_drain_elevator(struct request_queue *);
 
@@ -279,6 +280,20 @@ static inline int elv_iosched_single_ioq(struct elevator_queue *e)
 #endif /* ELV_IOSCHED_FAIR_QUEUING */
 extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
 extern void *elv_select_sched_queue(struct request_queue *q, int force);
-extern void *elv_get_sched_queue_current(struct request_queue *q);
+extern void *elv_get_sched_queue_bio(struct request_queue *q, struct bio *bio);
+
+/*
+ * This is equivalent of rq_is_sync()/cfq_bio_sync() function where we
+ * determine whether an rq/bio is sync or not. There are cases like during
+ * merging and during request allocation, where we don't have an rq but a bio
+ * and need to find out if this bio will be considered as sync or async by
+ * elevator/iosched. This function is useful in such cases.
+ */
+static inline int elv_bio_sync(struct bio *bio)
+{
+	if ((bio_data_dir(bio) == READ) || bio_sync(bio))
+		return 1;
+	return 0;
+}
 #endif /* CONFIG_BLOCK */
 #endif
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 16/20] io-controller: Per cgroup request descriptor support
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (14 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 15/20] io-controller: map async requests to appropriate cgroup Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 17/20] io-controller: Per io group bdi congestion interface Vivek Goyal
                     ` (5 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o Currently a request queue has a fixed number of request descriptors for
  sync and async requests. Once the request descriptors are consumed, new
  processes are put to sleep and they effectively become serialized. Because
  the sync and async pools are separate, async requests don't impact sync
  ones, but if one is looking for fairness between async requests, that is
  not achievable once the request queue descriptors become the bottleneck.

o Make request descriptors per io group so that if there is a lot of IO
  going on in one cgroup, it does not impact the IO of other groups.

o This is just one relatively simple way of doing things. This patch will
  probably change after the feedback. Folks have raised concerns that in a
  hierarchical setup, a child's request descriptors should be capped by the
  parent's request descriptors. Maybe we need per cgroup, per device files
  in cgroups where one can specify the upper limit of request descriptors,
  and whenever a cgroup is created one needs to assign a request descriptor
  limit, making sure the total sum of the children's request descriptors is
  not more than that of the parent.

  I guess something like the memory controller. Anyway, that would be the
  next step. For the time being, we have implemented something simpler as
  follows.

o This patch implements per cgroup request descriptors. The request pool per
  queue is still common, but every group has its own wait list and its own
  count of request descriptors allocated to that group for the sync and async
  queues. So effectively request_list becomes a per io group property and not
  a global request queue feature.

o Currently one can set q->nr_requests to limit the request descriptors
  allocated for the queue. Now there is another tunable, q->nr_group_requests,
  which controls the request descriptor limit per group. q->nr_requests
  supersedes q->nr_group_requests to make sure that if there are lots of
  groups present, we don't end up allocating too many request descriptors on
  the queue. (A small model of this two-level check follows the list below.)

o Issues: Currently the notion of congestion is per queue. With per group
  request descriptors it is possible that the queue is not congested but the
  group the bio will go into is congested.
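
As mentioned in the tunables item above, here is a minimal user-space model
of the two-level limit that get_request() enforces in the diff below. The
toy_* names are illustrative only, and the numbers in main() are arbitrary
example values (in particular, 32 is not the actual BLKDEV_MAX_GROUP_RQ
default); only the shape of the checks is taken from the patch.

#include <stdio.h>
#include <stdbool.h>

/* Toy types; not the kernel structures. */
struct toy_queue { int nr_requests, nr_group_requests, queue_count; };
struct toy_group { int group_count; };

/*
 * Mirrors the limit checks in get_request(): the queue-wide count is
 * compared against nr_requests and the per-group count against
 * nr_group_requests, both with the same 3/2 slack factor.
 */
static bool may_allocate(const struct toy_queue *q, const struct toy_group *g)
{
    if (q->queue_count >= 3 * q->nr_requests / 2)
        return false;               /* queue-wide hard limit */
    if (g->group_count >= 3 * q->nr_group_requests / 2)
        return false;               /* per-group limit added by this patch */
    return true;
}

int main(void)
{
    struct toy_queue q = { .nr_requests = 128, .nr_group_requests = 32,
                           .queue_count = 40 };
    struct toy_group busy = { .group_count = 60 };
    struct toy_group idle = { .group_count = 2 };

    /* the busy group is throttled while the idle group can still allocate,
       even though the queue as a whole has plenty of room */
    printf("busy: %d, idle: %d\n", may_allocate(&q, &busy),
           may_allocate(&q, &idle));
    return 0;
}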

Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/blk-core.c       |  305 +++++++++++++++++++++++++++++++++++++----------
 block/blk-settings.c   |    1 +
 block/blk-sysfs.c      |   58 +++++++--
 block/elevator-fq.c    |   14 +++
 block/elevator-fq.h    |    5 +
 block/elevator.c       |    6 +-
 include/linux/blkdev.h |   87 +++++++++++++-
 7 files changed, 394 insertions(+), 82 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c77b5b2..35e3725 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -480,20 +480,30 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static int blk_init_free_list(struct request_queue *q)
+void blk_init_request_list(struct request_list *rl)
 {
-	struct request_list *rl = &q->rq;
 
 	rl->count[BLK_RW_SYNC] = rl->count[BLK_RW_ASYNC] = 0;
-	rl->starved[BLK_RW_SYNC] = rl->starved[BLK_RW_ASYNC] = 0;
-	rl->elvpriv = 0;
 	init_waitqueue_head(&rl->wait[BLK_RW_SYNC]);
 	init_waitqueue_head(&rl->wait[BLK_RW_ASYNC]);
+}
 
-	rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
-				mempool_free_slab, request_cachep, q->node);
+static int blk_init_free_list(struct request_queue *q)
+{
+	/*
+	 * Initialize the queue request list in case there are non-hierarchical
+	 * io schedulers not making use of fair queuing infrastructure.
+	 *
+	 * For ioschedulers making use of fair queuing infrastructure, request
+	 * list is inside the associated group and when that group is
+	 * instantiated, it takes care of initializing the request list also.
+	 */
+	blk_init_request_list(&q->rq);
+	q->rq_data.rq_pool = mempool_create_node(BLKDEV_MIN_RQ,
+				mempool_alloc_slab, mempool_free_slab,
+				request_cachep, q->node);
 
-	if (!rl->rq_pool)
+	if (!q->rq_data.rq_pool)
 		return -ENOMEM;
 
 	return 0;
@@ -590,6 +600,9 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 		return NULL;
 	}
 
+	/* init starved waiter wait queue */
+	init_waitqueue_head(&q->rq_data.starved_wait);
+
 	/*
 	 * if caller didn't supply a lock, they get per-queue locking with
 	 * our embedded lock
@@ -639,14 +652,14 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 {
 	if (rq->cmd_flags & REQ_ELVPRIV)
 		elv_put_request(q, rq);
-	mempool_free(rq, q->rq.rq_pool);
+	mempool_free(rq, q->rq_data.rq_pool);
 }
 
 static struct request *
 blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
 					gfp_t gfp_mask)
 {
-	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+	struct request *rq = mempool_alloc(q->rq_data.rq_pool, gfp_mask);
 
 	if (!rq)
 		return NULL;
@@ -657,7 +670,7 @@ blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
 
 	if (priv) {
 		if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
-			mempool_free(rq, q->rq.rq_pool);
+			mempool_free(rq, q->rq_data.rq_pool);
 			return NULL;
 		}
 		rq->cmd_flags |= REQ_ELVPRIV;
@@ -700,18 +713,18 @@ static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
 	ioc->last_waited = jiffies;
 }
 
-static void __freed_request(struct request_queue *q, int sync)
+static void __freed_request(struct request_queue *q, int sync,
+					struct request_list *rl)
 {
-	struct request_list *rl = &q->rq;
-
-	if (rl->count[sync] < queue_congestion_off_threshold(q))
+	if (q->rq_data.count[sync] < queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, sync);
 
-	if (rl->count[sync] + 1 <= q->nr_requests) {
+	if (q->rq_data.count[sync] + 1 <= q->nr_requests)
+		blk_clear_queue_full(q, sync);
+
+	if (rl->count[sync] + 1 <= q->nr_group_requests) {
 		if (waitqueue_active(&rl->wait[sync]))
 			wake_up(&rl->wait[sync]);
-
-		blk_clear_queue_full(q, sync);
 	}
 }
 
@@ -719,63 +732,133 @@ static void __freed_request(struct request_queue *q, int sync)
  * A request has just been released.  Account for it, update the full and
  * congestion status, wake up any waiters.   Called under q->queue_lock.
  */
-static void freed_request(struct request_queue *q, int sync, int priv)
-{
-	struct request_list *rl = &q->rq;
+static void freed_request(struct request_queue *q, int sync, int priv,
+					struct request_list *rl)
+{
+	/* There is a window during request allocation where request is
+	 * mapped to one group but by the time a queue for the group is
+	 * allocated, it is possible that original cgroup/io group has been
+	 * deleted and now io queue is allocated in a different group (root)
+	 * altogether.
+	 *
+	 * One solution to the problem is that rq should take io group
+	 * reference. But that looks like too much work to solve this issue.
+	 * The only side effect of this hard-to-hit issue seems to be that
+	 * we will try to decrement the rl->count for a request list which
+	 * did not allocate that request. Check for rl->count going less than
+	 * zero and do not decrement it if that's the case.
+	 */
+
+	if (priv && rl->count[sync] > 0)
+		rl->count[sync]--;
+
+	BUG_ON(!q->rq_data.count[sync]);
+	q->rq_data.count[sync]--;
 
-	rl->count[sync]--;
 	if (priv)
-		rl->elvpriv--;
+		q->rq_data.elvpriv--;
 
-	__freed_request(q, sync);
+	__freed_request(q, sync, rl);
 
 	if (unlikely(rl->starved[sync ^ 1]))
-		__freed_request(q, sync ^ 1);
+		__freed_request(q, sync ^ 1, rl);
+
+	/* Wake up the starved process on global list, if any */
+	if (unlikely(q->rq_data.starved)) {
+		if (waitqueue_active(&q->rq_data.starved_wait))
+			wake_up(&q->rq_data.starved_wait);
+		q->rq_data.starved--;
+	}
+}
+
+/*
+ * Returns whether one can sleep on this request list or not. There are
+ * cases (elevator switch) where request list might not have allocated
+ * any request descriptor but we deny request allocation due to global
+ * limits. In that case one should sleep on global list as on this request
+ * list no wakeup will take place.
+ *
+ * Also sets the request list starved flag if there are no requests pending
+ * in the direction of rq.
+ *
+ * Return 1 --> sleep on request list, 0 --> sleep on global list
+ */
+static int can_sleep_on_request_list(struct request_list *rl, int is_sync)
+{
+	if (unlikely(rl->count[is_sync] == 0)) {
+		/*
+		 * If there is a request pending in other direction
+		 * in same io group, then set the starved flag of
+		 * the group request list. Otherwise, we need to
+		 * make this process sleep in global starved list
+		 * to make sure it will not sleep indefinitely.
+		 */
+		if (rl->count[is_sync ^ 1] != 0) {
+			rl->starved[is_sync] = 1;
+			return 1;
+		} else
+			return 0;
+	}
+
+	return 1;
 }
 
 /*
  * Get a free request, queue_lock must be held.
- * Returns NULL on failure, with queue_lock held.
+ * Returns NULL on failure, with queue_lock held. Also sets the "reason" field
+ * in case of failure. This reason field helps caller decide to whether sleep
+ * on per group list or global per queue list.
+ * reason = 0 sleep on per group list
+ * reason = 1 sleep on global list
+ *
  * Returns !NULL on success, with queue_lock *not held*.
  */
 static struct request *get_request(struct request_queue *q, int rw_flags,
-				   struct bio *bio, gfp_t gfp_mask)
+					struct bio *bio, gfp_t gfp_mask,
+					struct request_list *rl, int *reason)
 {
 	struct request *rq = NULL;
-	struct request_list *rl = &q->rq;
 	struct io_context *ioc = NULL;
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
 	int may_queue, priv;
+	int sleep_on_global = 0;
 
 	may_queue = elv_may_queue(q, rw_flags);
 	if (may_queue == ELV_MQUEUE_NO)
 		goto rq_starved;
 
-	if (rl->count[is_sync]+1 >= queue_congestion_on_threshold(q)) {
-		if (rl->count[is_sync]+1 >= q->nr_requests) {
-			ioc = current_io_context(GFP_ATOMIC, q->node);
-			/*
-			 * The queue will fill after this allocation, so set
-			 * it as full, and mark this process as "batching".
-			 * This process will be allowed to complete a batch of
-			 * requests, others will be blocked.
-			 */
-			if (!blk_queue_full(q, is_sync)) {
-				ioc_set_batching(q, ioc);
-				blk_set_queue_full(q, is_sync);
-			} else {
-				if (may_queue != ELV_MQUEUE_MUST
-						&& !ioc_batching(q, ioc)) {
-					/*
-					 * The queue is full and the allocating
-					 * process is not a "batcher", and not
-					 * exempted by the IO scheduler
-					 */
-					goto out;
-				}
+	if (q->rq_data.count[is_sync]+1 >= queue_congestion_on_threshold(q))
+		blk_set_queue_congested(q, is_sync);
+
+	/*
+	 * Looks like there is no user of queue full now.
+	 * Keeping it for time being.
+	 */
+	if (q->rq_data.count[is_sync]+1 >= q->nr_requests)
+		blk_set_queue_full(q, is_sync);
+
+	if (rl->count[is_sync]+1 >= q->nr_group_requests) {
+		ioc = current_io_context(GFP_ATOMIC, q->node);
+		/*
+		 * The queue request descriptor group will fill after this
+		 * allocation, so set it as full, and mark this process
+		 * as "batching".
+		 * This process will be allowed to complete a batch of
+		 * requests, others will be blocked.
+		 */
+		if (rl->count[is_sync] <= q->nr_group_requests)
+			ioc_set_batching(q, ioc);
+		else {
+			if (may_queue != ELV_MQUEUE_MUST
+					&& !ioc_batching(q, ioc)) {
+				/*
+				 * The queue is full and the allocating
+				 * process is not a "batcher", and not
+				 * exempted by the IO scheduler
+				 */
+				goto out;
 			}
 		}
-		blk_set_queue_congested(q, is_sync);
 	}
 
 	/*
@@ -783,21 +866,60 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 	 * limit of requests, otherwise we could have thousands of requests
 	 * allocated with any setting of ->nr_requests
 	 */
-	if (rl->count[is_sync] >= (3 * q->nr_requests / 2))
+
+	if (q->rq_data.count[is_sync] >= (3 * q->nr_requests / 2)) {
+		/*
+		 * Queue is too full for allocation. On which request queue
+		 * the task should sleep? Generally it should sleep on its
+		 * request list but if elevator switch is happening, in that
+		 * window, request descriptors are allocated from global
+		 * pool and are not accounted against any particular request
+		 * list as group is going away.
+		 *
+		 * So it might happen that request list does not have any
+		 * requests allocated at all and if process sleeps on per
+		 * group request list, it will not be woken up. In such case,
+		 * make it sleep on global starved list.
+		 */
+		if (test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags)
+		    || !can_sleep_on_request_list(rl, is_sync))
+			sleep_on_global = 1;
+		goto out;
+	}
+
+	/*
+	 * Allocation of request is allowed from queue perspective. Now check
+	 * from per group request list
+	 */
+
+	if (rl->count[is_sync] >= (3 * q->nr_group_requests / 2))
 		goto out;
 
-	rl->count[is_sync]++;
 	rl->starved[is_sync] = 0;
 
+	q->rq_data.count[is_sync]++;
+
 	priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
-	if (priv)
-		rl->elvpriv++;
+	if (priv) {
+		q->rq_data.elvpriv++;
+		/*
+		 * Account the request to request list only if request is
+		 * going to elevator. During elevator switch, there will
+		 * be small window where group is going away and new group
+		 * will not be allocated till elevator switch is complete.
+		 * So till then instead of slowing down the application,
+		 * we will continue to allocate request from total common
+		 * pool instead of per group limit
+		 */
+		rl->count[is_sync]++;
+	}
 
 	if (blk_queue_io_stat(q))
 		rw_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
 	rq = blk_alloc_request(q, bio, rw_flags, priv, gfp_mask);
+
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
@@ -807,7 +929,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		 * wait queue, but this is pretty rare.
 		 */
 		spin_lock_irq(q->queue_lock);
-		freed_request(q, is_sync, priv);
+		freed_request(q, is_sync, priv, rl);
 
 		/*
 		 * in the very unlikely event that allocation failed and no
@@ -817,9 +939,8 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		 * rq mempool into READ and WRITE
 		 */
 rq_starved:
-		if (unlikely(rl->count[is_sync] == 0))
-			rl->starved[is_sync] = 1;
-
+		if (!can_sleep_on_request_list(rl, is_sync))
+			sleep_on_global = 1;
 		goto out;
 	}
 
@@ -834,6 +955,8 @@ rq_starved:
 
 	trace_block_getrq(q, bio, rw_flags & 1);
 out:
+	if (reason && sleep_on_global)
+		*reason = 1;
 	return rq;
 }
 
@@ -847,16 +970,44 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 					struct bio *bio)
 {
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
+	int sleep_on_global = 0;
 	struct request *rq;
+	struct request_list *rl = blk_get_request_list(q, bio);
+	struct io_group *iog = NULL;
 
-	rq = get_request(q, rw_flags, bio, GFP_NOIO);
+	rq = get_request(q, rw_flags, bio, GFP_NOIO, rl, &sleep_on_global);
 	while (!rq) {
 		DEFINE_WAIT(wait);
 		struct io_context *ioc;
-		struct request_list *rl = &q->rq;
 
-		prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
-				TASK_UNINTERRUPTIBLE);
+		if (sleep_on_global) {
+			/*
+			 * Task failed allocation and needs to wait and
+			 * try again. There are no requests pending from
+			 * the io group hence need to sleep on global
+			 * wait queue. Most likely the allocation failed
+			 * because of memory issues.
+			 */
+
+			q->rq_data.starved++;
+			prepare_to_wait_exclusive(&q->rq_data.starved_wait,
+					&wait, TASK_UNINTERRUPTIBLE);
+		} else {
+			/*
+			 * We are about to sleep on a request list and we
+			 * drop queue lock. After waking up, we will do
+			 * finish_wait() on request list and in the mean
+			 * time group might be gone. Take a reference to
+			 * the group now.
+			 */
+			prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
+					TASK_UNINTERRUPTIBLE);
+#ifdef CONFIG_GROUP_IOSCHED
+			iog = rl_iog(rl);
+			if (iog)
+				elv_get_iog(iog);
+#endif
+		}
 
 		trace_block_sleeprq(q, bio, rw_flags & 1);
 
@@ -874,9 +1025,30 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 		ioc_set_batching(q, ioc);
 
 		spin_lock_irq(q->queue_lock);
-		finish_wait(&rl->wait[is_sync], &wait);
 
-		rq = get_request(q, rw_flags, bio, GFP_NOIO);
+		if (sleep_on_global) {
+			finish_wait(&q->rq_data.starved_wait, &wait);
+			sleep_on_global = 0;
+		} else {
+			finish_wait(&rl->wait[is_sync], &wait);
+#ifdef CONFIG_GROUP_IOSCHED
+			/*
+			 * We had taken a reference to the rl/iog.
+			 * Put that now
+			 */
+			iog = rl_iog(rl);
+			if (iog)
+				elv_put_iog(iog);
+#endif
+		}
+
+		/*
+		 * After the sleep, check the rl again in case the cgroup the
+		 * bio belonged to is gone and it is now mapped to the root group
+		 */
+		rl = blk_get_request_list(q, bio);
+		rq = get_request(q, rw_flags, bio, GFP_NOIO, rl,
+					&sleep_on_global);
 	};
 
 	return rq;
@@ -885,14 +1057,16 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
 {
 	struct request *rq;
+	struct request_list *rl;
 
 	BUG_ON(rw != READ && rw != WRITE);
 
 	spin_lock_irq(q->queue_lock);
+	rl = blk_get_request_list(q, NULL);
 	if (gfp_mask & __GFP_WAIT) {
 		rq = get_request_wait(q, rw, NULL);
 	} else {
-		rq = get_request(q, rw, NULL, gfp_mask);
+		rq = get_request(q, rw, NULL, gfp_mask, rl, NULL);
 		if (!rq)
 			spin_unlock_irq(q->queue_lock);
 	}
@@ -1075,12 +1249,13 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 	if (req->cmd_flags & REQ_ALLOCED) {
 		int is_sync = rq_is_sync(req) != 0;
 		int priv = req->cmd_flags & REQ_ELVPRIV;
+		struct request_list *rl = rq_rl(q, req);
 
 		BUG_ON(!list_empty(&req->queuelist));
 		BUG_ON(!hlist_unhashed(&req->hash));
 
 		blk_free_request(q, req);
-		freed_request(q, is_sync, priv);
+		freed_request(q, is_sync, priv, rl);
 	}
 }
 EXPORT_SYMBOL_GPL(__blk_put_request);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 57af728..3230d1f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -123,6 +123,7 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
 	 * set defaults
 	 */
 	q->nr_requests = BLKDEV_MAX_RQ;
+	q->nr_group_requests = BLKDEV_MAX_GROUP_RQ;
 	blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
 	blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
 	blk_queue_segment_boundary(q, BLK_SEG_BOUNDARY_MASK);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 3ff9bba..3a108ff 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -38,42 +38,66 @@ static ssize_t queue_requests_show(struct request_queue *q, char *page)
 static ssize_t
 queue_requests_store(struct request_queue *q, const char *page, size_t count)
 {
-	struct request_list *rl = &q->rq;
+	struct request_list *rl;
 	unsigned long nr;
 	int ret = queue_var_store(&nr, page, count);
 	if (nr < BLKDEV_MIN_RQ)
 		nr = BLKDEV_MIN_RQ;
 
 	spin_lock_irq(q->queue_lock);
+	rl = blk_get_request_list(q, NULL);
 	q->nr_requests = nr;
 	blk_queue_congestion_threshold(q);
 
-	if (rl->count[BLK_RW_SYNC] >= queue_congestion_on_threshold(q))
+	if (q->rq_data.count[BLK_RW_SYNC] >= queue_congestion_on_threshold(q))
 		blk_set_queue_congested(q, BLK_RW_SYNC);
-	else if (rl->count[BLK_RW_SYNC] < queue_congestion_off_threshold(q))
+	else if (q->rq_data.count[BLK_RW_SYNC] <
+				queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, BLK_RW_SYNC);
 
-	if (rl->count[BLK_RW_ASYNC] >= queue_congestion_on_threshold(q))
+	if (q->rq_data.count[BLK_RW_ASYNC] >= queue_congestion_on_threshold(q))
 		blk_set_queue_congested(q, BLK_RW_ASYNC);
-	else if (rl->count[BLK_RW_ASYNC] < queue_congestion_off_threshold(q))
+	else if (q->rq_data.count[BLK_RW_ASYNC] <
+				queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, BLK_RW_ASYNC);
 
-	if (rl->count[BLK_RW_SYNC] >= q->nr_requests) {
+	if (q->rq_data.count[BLK_RW_SYNC] >= q->nr_requests) {
 		blk_set_queue_full(q, BLK_RW_SYNC);
-	} else if (rl->count[BLK_RW_SYNC]+1 <= q->nr_requests) {
+	} else if (q->rq_data.count[BLK_RW_SYNC]+1 <= q->nr_requests) {
 		blk_clear_queue_full(q, BLK_RW_SYNC);
 		wake_up(&rl->wait[BLK_RW_SYNC]);
 	}
 
-	if (rl->count[BLK_RW_ASYNC] >= q->nr_requests) {
+	if (q->rq_data.count[BLK_RW_ASYNC] >= q->nr_requests) {
 		blk_set_queue_full(q, BLK_RW_ASYNC);
-	} else if (rl->count[BLK_RW_ASYNC]+1 <= q->nr_requests) {
+	} else if (q->rq_data.count[BLK_RW_ASYNC]+1 <= q->nr_requests) {
 		blk_clear_queue_full(q, BLK_RW_ASYNC);
 		wake_up(&rl->wait[BLK_RW_ASYNC]);
 	}
 	spin_unlock_irq(q->queue_lock);
 	return ret;
 }
+#ifdef CONFIG_GROUP_IOSCHED
+static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->nr_group_requests, (page));
+}
+
+static ssize_t
+queue_group_requests_store(struct request_queue *q, const char *page,
+					size_t count)
+{
+	unsigned long nr;
+	int ret = queue_var_store(&nr, page, count);
+	if (nr < BLKDEV_MIN_RQ)
+		nr = BLKDEV_MIN_RQ;
+
+	spin_lock_irq(q->queue_lock);
+	q->nr_group_requests = nr;
+	spin_unlock_irq(q->queue_lock);
+	return ret;
+}
+#endif
 
 static ssize_t queue_ra_show(struct request_queue *q, char *page)
 {
@@ -224,6 +248,14 @@ static struct queue_sysfs_entry queue_requests_entry = {
 	.store = queue_requests_store,
 };
 
+#ifdef CONFIG_GROUP_IOSCHED
+static struct queue_sysfs_entry queue_group_requests_entry = {
+	.attr = {.name = "nr_group_requests", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_group_requests_show,
+	.store = queue_group_requests_store,
+};
+#endif
+
 static struct queue_sysfs_entry queue_ra_entry = {
 	.attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_ra_show,
@@ -278,6 +310,9 @@ static struct queue_sysfs_entry queue_iostats_entry = {
 
 static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
+#ifdef CONFIG_GROUP_IOSCHED
+	&queue_group_requests_entry.attr,
+#endif
 	&queue_ra_entry.attr,
 	&queue_max_hw_sectors_entry.attr,
 	&queue_max_sectors_entry.attr,
@@ -353,12 +388,11 @@ static void blk_release_queue(struct kobject *kobj)
 {
 	struct request_queue *q =
 		container_of(kobj, struct request_queue, kobj);
-	struct request_list *rl = &q->rq;
 
 	blk_sync_queue(q);
 
-	if (rl->rq_pool)
-		mempool_destroy(rl->rq_pool);
+	if (q->rq_data.rq_pool)
+		mempool_destroy(q->rq_data.rq_pool);
 
 	if (q->queue_tags)
 		__blk_queue_free_tags(q);
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 18dbcc1..16f75ad 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1082,6 +1082,16 @@ struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
 			    struct io_cgroup, css);
 }
 
+struct request_list *io_group_get_request_list(struct request_queue *q,
+						struct bio *bio)
+{
+	struct io_group *iog;
+
+	iog = io_get_io_group(q, bio, 1);
+	BUG_ON(!iog);
+	return &iog->rl;
+}
+
 /*
  * Search the bfq_group for bfqd into the hash table (by now only a list)
  * of bgrp.  Must be called under rcu_read_lock().
@@ -1297,6 +1307,8 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		 */
 		elv_get_iog(iog);
 
+		blk_init_request_list(&iog->rl);
+
 		if (leaf == NULL) {
 			leaf = iog;
 			prev = leaf;
@@ -1557,6 +1569,8 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
 		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
 
+	blk_init_request_list(&iog->rl);
+
 	iocg = &io_root_cgroup;
 	spin_lock_irq(&iocg->lock);
 	rcu_assign_pointer(iog->key, key);
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 6d0df21..c2f71d7 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -257,6 +257,9 @@ struct io_group {
 
 	/* Single ioq per group, used for noop, deadline, anticipatory */
 	struct io_queue *ioq;
+
+	/* request list associated with the group */
+	struct request_list rl;
 };
 
 /**
@@ -535,6 +538,8 @@ extern void elv_fq_unset_request_ioq(struct request_queue *q,
 extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
 extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
 						struct bio *bio);
+extern struct request_list *io_group_get_request_list(struct request_queue *q,
+						struct bio *bio);
 
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
diff --git a/block/elevator.c b/block/elevator.c
index b49efd6..d8ceca8 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -668,7 +668,7 @@ void elv_quiesce_start(struct request_queue *q)
 	 * make sure we don't have any requests in flight
 	 */
 	elv_drain_elevator(q);
-	while (q->rq.elvpriv) {
+	while (q->rq_data.elvpriv) {
 		blk_start_queueing(q);
 		spin_unlock_irq(q->queue_lock);
 		msleep(10);
@@ -768,8 +768,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 	}
 
 	if (unplug_it && blk_queue_plugged(q)) {
-		int nrq = q->rq.count[BLK_RW_SYNC] + q->rq.count[BLK_RW_ASYNC]
-			- q->in_flight;
+		int nrq = q->rq_data.count[BLK_RW_SYNC] +
+				q->rq_data.count[BLK_RW_ASYNC] - q->in_flight;
 
 		if (nrq >= q->unplug_thresh)
 			__generic_unplug_device(q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 539cb9d..7fd7d33 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -32,21 +32,51 @@ struct request;
 struct sg_io_hdr;
 
 #define BLKDEV_MIN_RQ	4
+
+#ifdef CONFIG_GROUP_IOSCHED
+#define BLKDEV_MAX_RQ	512	/* Default maximum for queue */
+#define BLKDEV_MAX_GROUP_RQ    128      /* Default maximum per group*/
+#else
 #define BLKDEV_MAX_RQ	128	/* Default maximum */
+/*
+ * This is equivalent to the case of only one group being present (root
+ * group). Let it consume all the request descriptors available on the queue.
+ */
+#define BLKDEV_MAX_GROUP_RQ    BLKDEV_MAX_RQ      /* Default maximum */
+#endif
 
 struct request;
 typedef void (rq_end_io_fn)(struct request *, int);
 
 struct request_list {
 	/*
-	 * count[], starved[], and wait[] are indexed by
+	 * count[], starved and wait[] are indexed by
 	 * BLK_RW_SYNC/BLK_RW_ASYNC
 	 */
 	int count[2];
 	int starved[2];
+	wait_queue_head_t wait[2];
+};
+
+/*
+ * This data structure keeps track of the mempool of requests for the queue
+ * and some overall statistics.
+ */
+struct request_data {
+	/*
+	 * Per queue request descriptor count. This is in addition to per
+	 * cgroup count
+	 */
+	int count[2];
 	int elvpriv;
 	mempool_t *rq_pool;
-	wait_queue_head_t wait[2];
+	int starved;
+	/*
+	 * Global list for starved tasks. A task will be queued here if
+	 * it could not allocate request descriptor and the associated
+	 * group request list does not have any requests pending.
+	 */
+	wait_queue_head_t starved_wait;
 };
 
 /*
@@ -337,6 +367,9 @@ struct request_queue
 	 */
 	struct request_list	rq;
 
+	/* Contains request pool and other data like starved data */
+	struct request_data	rq_data;
+
 	request_fn_proc		*request_fn;
 	make_request_fn		*make_request_fn;
 	prep_rq_fn		*prep_rq_fn;
@@ -399,6 +432,8 @@ struct request_queue
 	 * queue settings
 	 */
 	unsigned long		nr_requests;	/* Max # of requests */
+	/* Max # of per io group requests */
+	unsigned long		nr_group_requests;
 	unsigned int		nr_congestion_on;
 	unsigned int		nr_congestion_off;
 	unsigned int		nr_batching;
@@ -772,6 +807,54 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			 struct scsi_ioctl_command __user *);
 
+extern void blk_init_request_list(struct request_list *rl);
+
+static inline struct request_list *blk_get_request_list(struct request_queue *q,
+						struct bio *bio)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return &q->rq;
+
+	return io_group_get_request_list(q, bio);
+#else
+	return &q->rq;
+#endif
+}
+
+static inline struct request_list *rq_rl(struct request_queue *q,
+						struct request *rq)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	struct io_group *iog;
+	int priv = rq->cmd_flags & REQ_ELVPRIV;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return &q->rq;
+
+	BUG_ON(priv && !rq->ioq);
+
+	if (priv)
+		iog = ioq_to_io_group(rq->ioq);
+	else
+		iog = q->elevator->efqd.root_group;
+
+	BUG_ON(!iog);
+	return &iog->rl;
+#else
+	return &q->rq;
+#endif
+}
+
+static inline struct io_group *rl_iog(struct request_list *rl)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	return container_of(rl, struct io_group, rl);
+#else
+	return NULL;
+#endif
+}
+
 /*
  * Temporary export, until SCSI gets fixed up.
  */
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 16/20] io-controller: Per cgroup request descriptor support
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o Currently a request queue has a fixed number of request descriptors for
  sync and async requests. Once the request descriptors are consumed, new
  processes are put to sleep and they effectively become serialized. Because
  sync and async queues are separate, async requests don't impact sync ones,
  but if one is looking for fairness between async requests, that is not
  achievable if request queue descriptors become the bottleneck.

o Make request descriptors per io group so that if there is lots of IO
  going on in one cgroup, it does not impact the IO of other groups.

o This is just one relatively simple way of doing things. This patch will
  probably change after the feedback. Folks have raised concerns that in a
  hierarchical setup, a child's request descriptors should be capped by the
  parent's request descriptors. Maybe we need to have per cgroup per device
  files in cgroups where one can specify the upper limit of request
  descriptors, and whenever a cgroup is created one needs to assign a
  request descriptor limit, making sure the total sum of the children's
  request descriptors is not more than the parent's.

  I guess something like the memory controller. Anyway, that would be the
  next step. For the time being, we have implemented something simpler as
  follows.

o This patch implements the per cgroup request descriptors. The request
  pool per queue is still common, but every group will have its own wait
  list and its own count of request descriptors allocated to that group for
  the sync and async queues. So effectively request_list becomes a per io
  group property and not a global request queue feature.

o Currently one can set q->nr_requests to limit the request descriptors
  allocated for the queue. Now there is another tunable,
  q->nr_group_requests, which controls the request descriptor limit per
  group. q->nr_requests supersedes q->nr_group_requests to make sure that
  if there are lots of groups present, we don't end up allocating too many
  request descriptors on the queue (see the usage example below).

o Issues: Currently the notion of congestion is per queue. With per group
  request descriptors it is possible that the queue is not congested but
  the group the bio will go into is congested.
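
o Usage example (illustrative): with CONFIG_GROUP_IOSCHED enabled both
  limits appear under /sys/block/<device>/queue. A rough sketch of how one
  might inspect and tune them follows; the device name sdb is only a
  placeholder, and values below BLKDEV_MIN_RQ (4) are silently raised to it
  by the store functions.

	# queue wide limit (default 512 with CONFIG_GROUP_IOSCHED)
	cat /sys/block/sdb/queue/nr_requests

	# per group limit (default 128 with CONFIG_GROUP_IOSCHED)
	cat /sys/block/sdb/queue/nr_group_requests

	# allow each io group up to 256 request descriptors
	echo 256 > /sys/block/sdb/queue/nr_group_requests

  Note that nr_group_requests is a single per queue knob that applies to
  every group on that queue; per cgroup per device limits are left as a
  possible next step.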

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-core.c       |  305 +++++++++++++++++++++++++++++++++++++----------
 block/blk-settings.c   |    1 +
 block/blk-sysfs.c      |   58 +++++++--
 block/elevator-fq.c    |   14 +++
 block/elevator-fq.h    |    5 +
 block/elevator.c       |    6 +-
 include/linux/blkdev.h |   87 +++++++++++++-
 7 files changed, 394 insertions(+), 82 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c77b5b2..35e3725 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -480,20 +480,30 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static int blk_init_free_list(struct request_queue *q)
+void blk_init_request_list(struct request_list *rl)
 {
-	struct request_list *rl = &q->rq;
 
 	rl->count[BLK_RW_SYNC] = rl->count[BLK_RW_ASYNC] = 0;
-	rl->starved[BLK_RW_SYNC] = rl->starved[BLK_RW_ASYNC] = 0;
-	rl->elvpriv = 0;
 	init_waitqueue_head(&rl->wait[BLK_RW_SYNC]);
 	init_waitqueue_head(&rl->wait[BLK_RW_ASYNC]);
+}
 
-	rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
-				mempool_free_slab, request_cachep, q->node);
+static int blk_init_free_list(struct request_queue *q)
+{
+	/*
+	 * Initialize the queue request list in case there are non-hierarchical
+	 * io schedulers not making use of fair queuing infrastructure.
+	 *
+	 * For ioschedulers making use of fair queuing infrastructure, request
+	 * list is inside the associated group and when that group is
+	 * instantiated, it takes care of initializing the request list also.
+	 */
+	blk_init_request_list(&q->rq);
+	q->rq_data.rq_pool = mempool_create_node(BLKDEV_MIN_RQ,
+				mempool_alloc_slab, mempool_free_slab,
+				request_cachep, q->node);
 
-	if (!rl->rq_pool)
+	if (!q->rq_data.rq_pool)
 		return -ENOMEM;
 
 	return 0;
@@ -590,6 +600,9 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 		return NULL;
 	}
 
+	/* init starved waiter wait queue */
+	init_waitqueue_head(&q->rq_data.starved_wait);
+
 	/*
 	 * if caller didn't supply a lock, they get per-queue locking with
 	 * our embedded lock
@@ -639,14 +652,14 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 {
 	if (rq->cmd_flags & REQ_ELVPRIV)
 		elv_put_request(q, rq);
-	mempool_free(rq, q->rq.rq_pool);
+	mempool_free(rq, q->rq_data.rq_pool);
 }
 
 static struct request *
 blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
 					gfp_t gfp_mask)
 {
-	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+	struct request *rq = mempool_alloc(q->rq_data.rq_pool, gfp_mask);
 
 	if (!rq)
 		return NULL;
@@ -657,7 +670,7 @@ blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
 
 	if (priv) {
 		if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
-			mempool_free(rq, q->rq.rq_pool);
+			mempool_free(rq, q->rq_data.rq_pool);
 			return NULL;
 		}
 		rq->cmd_flags |= REQ_ELVPRIV;
@@ -700,18 +713,18 @@ static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
 	ioc->last_waited = jiffies;
 }
 
-static void __freed_request(struct request_queue *q, int sync)
+static void __freed_request(struct request_queue *q, int sync,
+					struct request_list *rl)
 {
-	struct request_list *rl = &q->rq;
-
-	if (rl->count[sync] < queue_congestion_off_threshold(q))
+	if (q->rq_data.count[sync] < queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, sync);
 
-	if (rl->count[sync] + 1 <= q->nr_requests) {
+	if (q->rq_data.count[sync] + 1 <= q->nr_requests)
+		blk_clear_queue_full(q, sync);
+
+	if (rl->count[sync] + 1 <= q->nr_group_requests) {
 		if (waitqueue_active(&rl->wait[sync]))
 			wake_up(&rl->wait[sync]);
-
-		blk_clear_queue_full(q, sync);
 	}
 }
 
@@ -719,63 +732,133 @@ static void __freed_request(struct request_queue *q, int sync)
  * A request has just been released.  Account for it, update the full and
  * congestion status, wake up any waiters.   Called under q->queue_lock.
  */
-static void freed_request(struct request_queue *q, int sync, int priv)
-{
-	struct request_list *rl = &q->rq;
+static void freed_request(struct request_queue *q, int sync, int priv,
+					struct request_list *rl)
+{
+	/* There is a window during request allocation where request is
+	 * mapped to one group but by the time a queue for the group is
+	 * allocated, it is possible that original cgroup/io group has been
+	 * deleted and now io queue is allocated in a different group (root)
+	 * altogether.
+	 *
+	 * One solution to the problem is that rq should take io group
+	 * reference. But it looks too much to do that to solve this issue.
+	 * The only side effect of this hard to hit issue seems to be that
+	 * we will try to decrement the rl->count for a request list which
+	 * did not allocate that request. Check for rl->count going less than
+	 * zero and do not decrement it if that's the case.
+	 */
+
+	if (priv && rl->count[sync] > 0)
+		rl->count[sync]--;
+
+	BUG_ON(!q->rq_data.count[sync]);
+	q->rq_data.count[sync]--;
 
-	rl->count[sync]--;
 	if (priv)
-		rl->elvpriv--;
+		q->rq_data.elvpriv--;
 
-	__freed_request(q, sync);
+	__freed_request(q, sync, rl);
 
 	if (unlikely(rl->starved[sync ^ 1]))
-		__freed_request(q, sync ^ 1);
+		__freed_request(q, sync ^ 1, rl);
+
+	/* Wake up the starved process on global list, if any */
+	if (unlikely(q->rq_data.starved)) {
+		if (waitqueue_active(&q->rq_data.starved_wait))
+			wake_up(&q->rq_data.starved_wait);
+		q->rq_data.starved--;
+	}
+}
+
+/*
+ * Returns whether one can sleep on this request list or not. There are
+ * cases (elevator switch) where request list might not have allocated
+ * any request descriptor but we deny request allocation due to global
+ * limits. In that case one should sleep on global list as on this request
+ * list no wakeup will take place.
+ *
+ * Also sets the request list starved flag if there are no requests pending
+ * in the direction of rq.
+ *
+ * Return 1 --> sleep on request list, 0 --> sleep on global list
+ */
+static int can_sleep_on_request_list(struct request_list *rl, int is_sync)
+{
+	if (unlikely(rl->count[is_sync] == 0)) {
+		/*
+		 * If there is a request pending in other direction
+		 * in same io group, then set the starved flag of
+		 * the group request list. Otherwise, we need to
+		 * make this process sleep in global starved list
+		 * to make sure it will not sleep indefinitely.
+		 */
+		if (rl->count[is_sync ^ 1] != 0) {
+			rl->starved[is_sync] = 1;
+			return 1;
+		} else
+			return 0;
+	}
+
+	return 1;
 }
 
 /*
  * Get a free request, queue_lock must be held.
- * Returns NULL on failure, with queue_lock held.
+ * Returns NULL on failure, with queue_lock held. Also sets the "reason" field
+ * in case of failure. This reason field helps the caller decide whether to
+ * sleep on the per group list or on the global per queue list.
+ * reason = 0 sleep on per group list
+ * reason = 1 sleep on global list
+ *
  * Returns !NULL on success, with queue_lock *not held*.
  */
 static struct request *get_request(struct request_queue *q, int rw_flags,
-				   struct bio *bio, gfp_t gfp_mask)
+					struct bio *bio, gfp_t gfp_mask,
+					struct request_list *rl, int *reason)
 {
 	struct request *rq = NULL;
-	struct request_list *rl = &q->rq;
 	struct io_context *ioc = NULL;
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
 	int may_queue, priv;
+	int sleep_on_global = 0;
 
 	may_queue = elv_may_queue(q, rw_flags);
 	if (may_queue == ELV_MQUEUE_NO)
 		goto rq_starved;
 
-	if (rl->count[is_sync]+1 >= queue_congestion_on_threshold(q)) {
-		if (rl->count[is_sync]+1 >= q->nr_requests) {
-			ioc = current_io_context(GFP_ATOMIC, q->node);
-			/*
-			 * The queue will fill after this allocation, so set
-			 * it as full, and mark this process as "batching".
-			 * This process will be allowed to complete a batch of
-			 * requests, others will be blocked.
-			 */
-			if (!blk_queue_full(q, is_sync)) {
-				ioc_set_batching(q, ioc);
-				blk_set_queue_full(q, is_sync);
-			} else {
-				if (may_queue != ELV_MQUEUE_MUST
-						&& !ioc_batching(q, ioc)) {
-					/*
-					 * The queue is full and the allocating
-					 * process is not a "batcher", and not
-					 * exempted by the IO scheduler
-					 */
-					goto out;
-				}
+	if (q->rq_data.count[is_sync]+1 >= queue_congestion_on_threshold(q))
+		blk_set_queue_congested(q, is_sync);
+
+	/*
+	 * Looks like there is no user of queue full now.
+	 * Keeping it for time being.
+	 */
+	if (q->rq_data.count[is_sync]+1 >= q->nr_requests)
+		blk_set_queue_full(q, is_sync);
+
+	if (rl->count[is_sync]+1 >= q->nr_group_requests) {
+		ioc = current_io_context(GFP_ATOMIC, q->node);
+		/*
+		 * The group's request descriptor list will fill after this
+		 * allocation, so set it as full, and mark this process as
+		 * "batching".
+		 * This process will be allowed to complete a batch of
+		 * requests, others will be blocked.
+		 */
+		if (rl->count[is_sync] <= q->nr_group_requests)
+			ioc_set_batching(q, ioc);
+		else {
+			if (may_queue != ELV_MQUEUE_MUST
+					&& !ioc_batching(q, ioc)) {
+				/*
+				 * The queue is full and the allocating
+				 * process is not a "batcher", and not
+				 * exempted by the IO scheduler
+				 */
+				goto out;
 			}
 		}
-		blk_set_queue_congested(q, is_sync);
 	}
 
 	/*
@@ -783,21 +866,60 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 	 * limit of requests, otherwise we could have thousands of requests
 	 * allocated with any setting of ->nr_requests
 	 */
-	if (rl->count[is_sync] >= (3 * q->nr_requests / 2))
+
+	if (q->rq_data.count[is_sync] >= (3 * q->nr_requests / 2)) {
+		/*
+		 * Queue is too full for allocation. On which request list
+		 * should the task sleep? Generally it should sleep on its
+		 * request list but if elevator switch is happening, in that
+		 * window, request descriptors are allocated from global
+		 * pool and are not accounted against any particular request
+		 * list as group is going away.
+		 *
+		 * So it might happen that request list does not have any
+		 * requests allocated at all and if process sleeps on per
+		 * group request list, it will not be woken up. In such case,
+		 * make it sleep on global starved list.
+		 */
+		if (test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags)
+		    || !can_sleep_on_request_list(rl, is_sync))
+			sleep_on_global = 1;
+		goto out;
+	}
+
+	/*
+	 * Allocation of request is allowed from queue perspective. Now check
+	 * from per group request list
+	 */
+
+	if (rl->count[is_sync] >= (3 * q->nr_group_requests / 2))
 		goto out;
 
-	rl->count[is_sync]++;
 	rl->starved[is_sync] = 0;
 
+	q->rq_data.count[is_sync]++;
+
 	priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
-	if (priv)
-		rl->elvpriv++;
+	if (priv) {
+		q->rq_data.elvpriv++;
+		/*
+		 * Account the request to request list only if request is
+		 * going to elevator. During elevator switch, there will
+		 * be small window where group is going away and new group
+		 * will not be allocated till elevator switch is complete.
+		 * So till then instead of slowing down the application,
+		 * we will continue to allocate request from total common
+		 * pool instead of per group limit
+		 */
+		rl->count[is_sync]++;
+	}
 
 	if (blk_queue_io_stat(q))
 		rw_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
 	rq = blk_alloc_request(q, bio, rw_flags, priv, gfp_mask);
+
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
@@ -807,7 +929,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		 * wait queue, but this is pretty rare.
 		 */
 		spin_lock_irq(q->queue_lock);
-		freed_request(q, is_sync, priv);
+		freed_request(q, is_sync, priv, rl);
 
 		/*
 		 * in the very unlikely event that allocation failed and no
@@ -817,9 +939,8 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		 * rq mempool into READ and WRITE
 		 */
 rq_starved:
-		if (unlikely(rl->count[is_sync] == 0))
-			rl->starved[is_sync] = 1;
-
+		if (!can_sleep_on_request_list(rl, is_sync))
+			sleep_on_global = 1;
 		goto out;
 	}
 
@@ -834,6 +955,8 @@ rq_starved:
 
 	trace_block_getrq(q, bio, rw_flags & 1);
 out:
+	if (reason && sleep_on_global)
+		*reason = 1;
 	return rq;
 }
 
@@ -847,16 +970,44 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 					struct bio *bio)
 {
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
+	int sleep_on_global = 0;
 	struct request *rq;
+	struct request_list *rl = blk_get_request_list(q, bio);
+	struct io_group *iog = NULL;
 
-	rq = get_request(q, rw_flags, bio, GFP_NOIO);
+	rq = get_request(q, rw_flags, bio, GFP_NOIO, rl, &sleep_on_global);
 	while (!rq) {
 		DEFINE_WAIT(wait);
 		struct io_context *ioc;
-		struct request_list *rl = &q->rq;
 
-		prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
-				TASK_UNINTERRUPTIBLE);
+		if (sleep_on_global) {
+			/*
+			 * Task failed allocation and needs to wait and
+			 * try again. There are no requests pending from
+			 * the io group, hence it needs to sleep on the global
+			 * wait queue. Most likely the allocation failed
+			 * because of memory issues.
+			 */
+
+			q->rq_data.starved++;
+			prepare_to_wait_exclusive(&q->rq_data.starved_wait,
+					&wait, TASK_UNINTERRUPTIBLE);
+		} else {
+			/*
+			 * We are about to sleep on a request list and we
+			 * drop queue lock. After waking up, we will do
+			 * finish_wait() on request list and in the mean
+			 * time group might be gone. Take a reference to
+			 * the group now.
+			 */
+			prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
+					TASK_UNINTERRUPTIBLE);
+#ifdef CONFIG_GROUP_IOSCHED
+			iog = rl_iog(rl);
+			if (iog)
+				elv_get_iog(iog);
+#endif
+		}
 
 		trace_block_sleeprq(q, bio, rw_flags & 1);
 
@@ -874,9 +1025,30 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 		ioc_set_batching(q, ioc);
 
 		spin_lock_irq(q->queue_lock);
-		finish_wait(&rl->wait[is_sync], &wait);
 
-		rq = get_request(q, rw_flags, bio, GFP_NOIO);
+		if (sleep_on_global) {
+			finish_wait(&q->rq_data.starved_wait, &wait);
+			sleep_on_global = 0;
+		} else {
+			finish_wait(&rl->wait[is_sync], &wait);
+#ifdef CONFIG_GROUP_IOSCHED
+			/*
+			 * We had taken a reference to the rl/iog.
+			 * Put that now
+			 */
+			iog = rl_iog(rl);
+			if (iog)
+				elv_put_iog(iog);
+#endif
+		}
+
+		/*
+		 * After the sleep, check the rl again in case the cgroup the
+		 * bio belonged to is gone and it is now mapped to the root group
+		 */
+		rl = blk_get_request_list(q, bio);
+		rq = get_request(q, rw_flags, bio, GFP_NOIO, rl,
+					&sleep_on_global);
 	};
 
 	return rq;
@@ -885,14 +1057,16 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
 {
 	struct request *rq;
+	struct request_list *rl;
 
 	BUG_ON(rw != READ && rw != WRITE);
 
 	spin_lock_irq(q->queue_lock);
+	rl = blk_get_request_list(q, NULL);
 	if (gfp_mask & __GFP_WAIT) {
 		rq = get_request_wait(q, rw, NULL);
 	} else {
-		rq = get_request(q, rw, NULL, gfp_mask);
+		rq = get_request(q, rw, NULL, gfp_mask, rl, NULL);
 		if (!rq)
 			spin_unlock_irq(q->queue_lock);
 	}
@@ -1075,12 +1249,13 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 	if (req->cmd_flags & REQ_ALLOCED) {
 		int is_sync = rq_is_sync(req) != 0;
 		int priv = req->cmd_flags & REQ_ELVPRIV;
+		struct request_list *rl = rq_rl(q, req);
 
 		BUG_ON(!list_empty(&req->queuelist));
 		BUG_ON(!hlist_unhashed(&req->hash));
 
 		blk_free_request(q, req);
-		freed_request(q, is_sync, priv);
+		freed_request(q, is_sync, priv, rl);
 	}
 }
 EXPORT_SYMBOL_GPL(__blk_put_request);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 57af728..3230d1f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -123,6 +123,7 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
 	 * set defaults
 	 */
 	q->nr_requests = BLKDEV_MAX_RQ;
+	q->nr_group_requests = BLKDEV_MAX_GROUP_RQ;
 	blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
 	blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
 	blk_queue_segment_boundary(q, BLK_SEG_BOUNDARY_MASK);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 3ff9bba..3a108ff 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -38,42 +38,66 @@ static ssize_t queue_requests_show(struct request_queue *q, char *page)
 static ssize_t
 queue_requests_store(struct request_queue *q, const char *page, size_t count)
 {
-	struct request_list *rl = &q->rq;
+	struct request_list *rl;
 	unsigned long nr;
 	int ret = queue_var_store(&nr, page, count);
 	if (nr < BLKDEV_MIN_RQ)
 		nr = BLKDEV_MIN_RQ;
 
 	spin_lock_irq(q->queue_lock);
+	rl = blk_get_request_list(q, NULL);
 	q->nr_requests = nr;
 	blk_queue_congestion_threshold(q);
 
-	if (rl->count[BLK_RW_SYNC] >= queue_congestion_on_threshold(q))
+	if (q->rq_data.count[BLK_RW_SYNC] >= queue_congestion_on_threshold(q))
 		blk_set_queue_congested(q, BLK_RW_SYNC);
-	else if (rl->count[BLK_RW_SYNC] < queue_congestion_off_threshold(q))
+	else if (q->rq_data.count[BLK_RW_SYNC] <
+				queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, BLK_RW_SYNC);
 
-	if (rl->count[BLK_RW_ASYNC] >= queue_congestion_on_threshold(q))
+	if (q->rq_data.count[BLK_RW_ASYNC] >= queue_congestion_on_threshold(q))
 		blk_set_queue_congested(q, BLK_RW_ASYNC);
-	else if (rl->count[BLK_RW_ASYNC] < queue_congestion_off_threshold(q))
+	else if (q->rq_data.count[BLK_RW_ASYNC] <
+				queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, BLK_RW_ASYNC);
 
-	if (rl->count[BLK_RW_SYNC] >= q->nr_requests) {
+	if (q->rq_data.count[BLK_RW_SYNC] >= q->nr_requests) {
 		blk_set_queue_full(q, BLK_RW_SYNC);
-	} else if (rl->count[BLK_RW_SYNC]+1 <= q->nr_requests) {
+	} else if (q->rq_data.count[BLK_RW_SYNC]+1 <= q->nr_requests) {
 		blk_clear_queue_full(q, BLK_RW_SYNC);
 		wake_up(&rl->wait[BLK_RW_SYNC]);
 	}
 
-	if (rl->count[BLK_RW_ASYNC] >= q->nr_requests) {
+	if (q->rq_data.count[BLK_RW_ASYNC] >= q->nr_requests) {
 		blk_set_queue_full(q, BLK_RW_ASYNC);
-	} else if (rl->count[BLK_RW_ASYNC]+1 <= q->nr_requests) {
+	} else if (q->rq_data.count[BLK_RW_ASYNC]+1 <= q->nr_requests) {
 		blk_clear_queue_full(q, BLK_RW_ASYNC);
 		wake_up(&rl->wait[BLK_RW_ASYNC]);
 	}
 	spin_unlock_irq(q->queue_lock);
 	return ret;
 }
+#ifdef CONFIG_GROUP_IOSCHED
+static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->nr_group_requests, (page));
+}
+
+static ssize_t
+queue_group_requests_store(struct request_queue *q, const char *page,
+					size_t count)
+{
+	unsigned long nr;
+	int ret = queue_var_store(&nr, page, count);
+	if (nr < BLKDEV_MIN_RQ)
+		nr = BLKDEV_MIN_RQ;
+
+	spin_lock_irq(q->queue_lock);
+	q->nr_group_requests = nr;
+	spin_unlock_irq(q->queue_lock);
+	return ret;
+}
+#endif
 
 static ssize_t queue_ra_show(struct request_queue *q, char *page)
 {
@@ -224,6 +248,14 @@ static struct queue_sysfs_entry queue_requests_entry = {
 	.store = queue_requests_store,
 };
 
+#ifdef CONFIG_GROUP_IOSCHED
+static struct queue_sysfs_entry queue_group_requests_entry = {
+	.attr = {.name = "nr_group_requests", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_group_requests_show,
+	.store = queue_group_requests_store,
+};
+#endif
+
 static struct queue_sysfs_entry queue_ra_entry = {
 	.attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_ra_show,
@@ -278,6 +310,9 @@ static struct queue_sysfs_entry queue_iostats_entry = {
 
 static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
+#ifdef CONFIG_GROUP_IOSCHED
+	&queue_group_requests_entry.attr,
+#endif
 	&queue_ra_entry.attr,
 	&queue_max_hw_sectors_entry.attr,
 	&queue_max_sectors_entry.attr,
@@ -353,12 +388,11 @@ static void blk_release_queue(struct kobject *kobj)
 {
 	struct request_queue *q =
 		container_of(kobj, struct request_queue, kobj);
-	struct request_list *rl = &q->rq;
 
 	blk_sync_queue(q);
 
-	if (rl->rq_pool)
-		mempool_destroy(rl->rq_pool);
+	if (q->rq_data.rq_pool)
+		mempool_destroy(q->rq_data.rq_pool);
 
 	if (q->queue_tags)
 		__blk_queue_free_tags(q);
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 18dbcc1..16f75ad 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1082,6 +1082,16 @@ struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
 			    struct io_cgroup, css);
 }
 
+struct request_list *io_group_get_request_list(struct request_queue *q,
+						struct bio *bio)
+{
+	struct io_group *iog;
+
+	iog = io_get_io_group(q, bio, 1);
+	BUG_ON(!iog);
+	return &iog->rl;
+}
+
 /*
  * Search the bfq_group for bfqd into the hash table (by now only a list)
  * of bgrp.  Must be called under rcu_read_lock().
@@ -1297,6 +1307,8 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		 */
 		elv_get_iog(iog);
 
+		blk_init_request_list(&iog->rl);
+
 		if (leaf == NULL) {
 			leaf = iog;
 			prev = leaf;
@@ -1557,6 +1569,8 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
 		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
 
+	blk_init_request_list(&iog->rl);
+
 	iocg = &io_root_cgroup;
 	spin_lock_irq(&iocg->lock);
 	rcu_assign_pointer(iog->key, key);
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 6d0df21..c2f71d7 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -257,6 +257,9 @@ struct io_group {
 
 	/* Single ioq per group, used for noop, deadline, anticipatory */
 	struct io_queue *ioq;
+
+	/* request list associated with the group */
+	struct request_list rl;
 };
 
 /**
@@ -535,6 +538,8 @@ extern void elv_fq_unset_request_ioq(struct request_queue *q,
 extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
 extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
 						struct bio *bio);
+extern struct request_list *io_group_get_request_list(struct request_queue *q,
+						struct bio *bio);
 
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
diff --git a/block/elevator.c b/block/elevator.c
index b49efd6..d8ceca8 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -668,7 +668,7 @@ void elv_quiesce_start(struct request_queue *q)
 	 * make sure we don't have any requests in flight
 	 */
 	elv_drain_elevator(q);
-	while (q->rq.elvpriv) {
+	while (q->rq_data.elvpriv) {
 		blk_start_queueing(q);
 		spin_unlock_irq(q->queue_lock);
 		msleep(10);
@@ -768,8 +768,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 	}
 
 	if (unplug_it && blk_queue_plugged(q)) {
-		int nrq = q->rq.count[BLK_RW_SYNC] + q->rq.count[BLK_RW_ASYNC]
-			- q->in_flight;
+		int nrq = q->rq_data.count[BLK_RW_SYNC] +
+				q->rq_data.count[BLK_RW_ASYNC] - q->in_flight;
 
 		if (nrq >= q->unplug_thresh)
 			__generic_unplug_device(q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 539cb9d..7fd7d33 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -32,21 +32,51 @@ struct request;
 struct sg_io_hdr;
 
 #define BLKDEV_MIN_RQ	4
+
+#ifdef CONFIG_GROUP_IOSCHED
+#define BLKDEV_MAX_RQ	512	/* Default maximum for queue */
+#define BLKDEV_MAX_GROUP_RQ    128      /* Default maximum per group*/
+#else
 #define BLKDEV_MAX_RQ	128	/* Default maximum */
+/*
+ * This is equivalent to the case of only one group being present (root
+ * group). Let it consume all the request descriptors available on the queue.
+ */
+#define BLKDEV_MAX_GROUP_RQ    BLKDEV_MAX_RQ      /* Default maximum */
+#endif
 
 struct request;
 typedef void (rq_end_io_fn)(struct request *, int);
 
 struct request_list {
 	/*
-	 * count[], starved[], and wait[] are indexed by
+	 * count[], starved and wait[] are indexed by
 	 * BLK_RW_SYNC/BLK_RW_ASYNC
 	 */
 	int count[2];
 	int starved[2];
+	wait_queue_head_t wait[2];
+};
+
+/*
+ * This data structure keeps track of the mempool of requests for the queue
+ * and some overall statistics.
+ */
+struct request_data {
+	/*
+	 * Per queue request descriptor count. This is in addition to per
+	 * cgroup count
+	 */
+	int count[2];
 	int elvpriv;
 	mempool_t *rq_pool;
-	wait_queue_head_t wait[2];
+	int starved;
+	/*
+	 * Global list for starved tasks. A task will be queued here if
+	 * it could not allocate request descriptor and the associated
+	 * group request list does not have any requests pending.
+	 */
+	wait_queue_head_t starved_wait;
 };
 
 /*
@@ -337,6 +367,9 @@ struct request_queue
 	 */
 	struct request_list	rq;
 
+	/* Contains request pool and other data like starved data */
+	struct request_data	rq_data;
+
 	request_fn_proc		*request_fn;
 	make_request_fn		*make_request_fn;
 	prep_rq_fn		*prep_rq_fn;
@@ -399,6 +432,8 @@ struct request_queue
 	 * queue settings
 	 */
 	unsigned long		nr_requests;	/* Max # of requests */
+	/* Max # of per io group requests */
+	unsigned long		nr_group_requests;
 	unsigned int		nr_congestion_on;
 	unsigned int		nr_congestion_off;
 	unsigned int		nr_batching;
@@ -772,6 +807,54 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			 struct scsi_ioctl_command __user *);
 
+extern void blk_init_request_list(struct request_list *rl);
+
+static inline struct request_list *blk_get_request_list(struct request_queue *q,
+						struct bio *bio)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return &q->rq;
+
+	return io_group_get_request_list(q, bio);
+#else
+	return &q->rq;
+#endif
+}
+
+static inline struct request_list *rq_rl(struct request_queue *q,
+						struct request *rq)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	struct io_group *iog;
+	int priv = rq->cmd_flags & REQ_ELVPRIV;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return &q->rq;
+
+	BUG_ON(priv && !rq->ioq);
+
+	if (priv)
+		iog = ioq_to_io_group(rq->ioq);
+	else
+		iog = q->elevator->efqd.root_group;
+
+	BUG_ON(!iog);
+	return &iog->rl;
+#else
+	return &q->rq;
+#endif
+}
+
+static inline struct io_group *rl_iog(struct request_list *rl)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	return container_of(rl, struct io_group, rl);
+#else
+	return NULL;
+#endif
+}
+
 /*
  * Temporary export, until SCSI gets fixed up.
  */
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 16/20] io-controller: Per cgroup request descriptor support
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o Currently a request queue has a fixed number of request descriptors for
  sync and async requests. Once the request descriptors are consumed, new
  processes are put to sleep and they effectively become serialized. Because
  sync and async queues are separate, async requests don't impact sync ones,
  but if one is looking for fairness between async requests, that is not
  achievable if request queue descriptors become the bottleneck.

o Make request descriptors per io group so that if there is lots of IO
  going on in one cgroup, it does not impact the IO of other groups.

o This is just one relatively simple way of doing things. This patch will
  probably change after the feedback. Folks have raised concerns that in a
  hierarchical setup, a child's request descriptors should be capped by the
  parent's request descriptors. Maybe we need to have per cgroup per device
  files in cgroups where one can specify the upper limit of request
  descriptors, and whenever a cgroup is created one needs to assign a
  request descriptor limit, making sure the total sum of the children's
  request descriptors is not more than the parent's.

  I guess something like the memory controller. Anyway, that would be the
  next step. For the time being, we have implemented something simpler as
  follows.

o This patch implements the per cgroup request descriptors. The request
  pool per queue is still common, but every group will have its own wait
  list and its own count of request descriptors allocated to that group for
  the sync and async queues. So effectively request_list becomes a per io
  group property and not a global request queue feature.

o Currently one can set q->nr_requests to limit the request descriptors
  allocated for the queue. Now there is another tunable,
  q->nr_group_requests, which controls the request descriptor limit per
  group. q->nr_requests supersedes q->nr_group_requests to make sure that
  if there are lots of groups present, we don't end up allocating too many
  request descriptors on the queue (see the usage example below).

o Issues: Currently the notion of congestion is per queue. With per group
  request descriptors it is possible that the queue is not congested but
  the group the bio will go into is congested.
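
o Usage example (illustrative): with CONFIG_GROUP_IOSCHED enabled both
  limits appear under /sys/block/<device>/queue. A rough sketch of how one
  might inspect and tune them follows; the device name sdb is only a
  placeholder, and values below BLKDEV_MIN_RQ (4) are silently raised to it
  by the store functions.

	# queue wide limit (default 512 with CONFIG_GROUP_IOSCHED)
	cat /sys/block/sdb/queue/nr_requests

	# per group limit (default 128 with CONFIG_GROUP_IOSCHED)
	cat /sys/block/sdb/queue/nr_group_requests

	# allow each io group up to 256 request descriptors
	echo 256 > /sys/block/sdb/queue/nr_group_requests

  Note that nr_group_requests is a single per queue knob that applies to
  every group on that queue; per cgroup per device limits are left as a
  possible next step.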

Signed-off-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-core.c       |  305 +++++++++++++++++++++++++++++++++++++----------
 block/blk-settings.c   |    1 +
 block/blk-sysfs.c      |   58 +++++++--
 block/elevator-fq.c    |   14 +++
 block/elevator-fq.h    |    5 +
 block/elevator.c       |    6 +-
 include/linux/blkdev.h |   87 +++++++++++++-
 7 files changed, 394 insertions(+), 82 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c77b5b2..35e3725 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -480,20 +480,30 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static int blk_init_free_list(struct request_queue *q)
+void blk_init_request_list(struct request_list *rl)
 {
-	struct request_list *rl = &q->rq;
 
 	rl->count[BLK_RW_SYNC] = rl->count[BLK_RW_ASYNC] = 0;
-	rl->starved[BLK_RW_SYNC] = rl->starved[BLK_RW_ASYNC] = 0;
-	rl->elvpriv = 0;
 	init_waitqueue_head(&rl->wait[BLK_RW_SYNC]);
 	init_waitqueue_head(&rl->wait[BLK_RW_ASYNC]);
+}
 
-	rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
-				mempool_free_slab, request_cachep, q->node);
+static int blk_init_free_list(struct request_queue *q)
+{
+	/*
+	 * Initialize the queue request list in case there are non-hierarchical
+	 * io schedulers not making use of fair queuing infrastructure.
+	 *
+	 * For ioschedulers making use of fair queuing infrastructure, request
+	 * list is inside the associated group and when that group is
+	 * instantiated, it takes care of initializing the request list also.
+	 */
+	blk_init_request_list(&q->rq);
+	q->rq_data.rq_pool = mempool_create_node(BLKDEV_MIN_RQ,
+				mempool_alloc_slab, mempool_free_slab,
+				request_cachep, q->node);
 
-	if (!rl->rq_pool)
+	if (!q->rq_data.rq_pool)
 		return -ENOMEM;
 
 	return 0;
@@ -590,6 +600,9 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 		return NULL;
 	}
 
+	/* init starved waiter wait queue */
+	init_waitqueue_head(&q->rq_data.starved_wait);
+
 	/*
 	 * if caller didn't supply a lock, they get per-queue locking with
 	 * our embedded lock
@@ -639,14 +652,14 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 {
 	if (rq->cmd_flags & REQ_ELVPRIV)
 		elv_put_request(q, rq);
-	mempool_free(rq, q->rq.rq_pool);
+	mempool_free(rq, q->rq_data.rq_pool);
 }
 
 static struct request *
 blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
 					gfp_t gfp_mask)
 {
-	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+	struct request *rq = mempool_alloc(q->rq_data.rq_pool, gfp_mask);
 
 	if (!rq)
 		return NULL;
@@ -657,7 +670,7 @@ blk_alloc_request(struct request_queue *q, struct bio *bio, int flags, int priv,
 
 	if (priv) {
 		if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
-			mempool_free(rq, q->rq.rq_pool);
+			mempool_free(rq, q->rq_data.rq_pool);
 			return NULL;
 		}
 		rq->cmd_flags |= REQ_ELVPRIV;
@@ -700,18 +713,18 @@ static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
 	ioc->last_waited = jiffies;
 }
 
-static void __freed_request(struct request_queue *q, int sync)
+static void __freed_request(struct request_queue *q, int sync,
+					struct request_list *rl)
 {
-	struct request_list *rl = &q->rq;
-
-	if (rl->count[sync] < queue_congestion_off_threshold(q))
+	if (q->rq_data.count[sync] < queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, sync);
 
-	if (rl->count[sync] + 1 <= q->nr_requests) {
+	if (q->rq_data.count[sync] + 1 <= q->nr_requests)
+		blk_clear_queue_full(q, sync);
+
+	if (rl->count[sync] + 1 <= q->nr_group_requests) {
 		if (waitqueue_active(&rl->wait[sync]))
 			wake_up(&rl->wait[sync]);
-
-		blk_clear_queue_full(q, sync);
 	}
 }
 
@@ -719,63 +732,133 @@ static void __freed_request(struct request_queue *q, int sync)
  * A request has just been released.  Account for it, update the full and
  * congestion status, wake up any waiters.   Called under q->queue_lock.
  */
-static void freed_request(struct request_queue *q, int sync, int priv)
-{
-	struct request_list *rl = &q->rq;
+static void freed_request(struct request_queue *q, int sync, int priv,
+					struct request_list *rl)
+{
+	/* There is a window during request allocation where request is
+	 * mapped to one group but by the time a queue for the group is
+	 * allocated, it is possible that original cgroup/io group has been
+	 * deleted and now io queue is allocated in a different group (root)
+	 * altogether.
+	 *
+	 * One solution to the problem is that rq should take io group
+	 * reference. But it looks too much to do that to solve this issue.
+	 * The only side effect of this hard to hit issue seems to be that
+	 * we will try to decrement the rl->count for a request list which
+	 * did not allocate that request. Check for rl->count going less than
+	 * zero and do not decrement it if that's the case.
+	 */
+
+	if (priv && rl->count[sync] > 0)
+		rl->count[sync]--;
+
+	BUG_ON(!q->rq_data.count[sync]);
+	q->rq_data.count[sync]--;
 
-	rl->count[sync]--;
 	if (priv)
-		rl->elvpriv--;
+		q->rq_data.elvpriv--;
 
-	__freed_request(q, sync);
+	__freed_request(q, sync, rl);
 
 	if (unlikely(rl->starved[sync ^ 1]))
-		__freed_request(q, sync ^ 1);
+		__freed_request(q, sync ^ 1, rl);
+
+	/* Wake up the starved process on global list, if any */
+	if (unlikely(q->rq_data.starved)) {
+		if (waitqueue_active(&q->rq_data.starved_wait))
+			wake_up(&q->rq_data.starved_wait);
+		q->rq_data.starved--;
+	}
+}
+
+/*
+ * Returns whether one can sleep on this request list or not. There are
+ * cases (elevator switch) where request list might not have allocated
+ * any request descriptor but we deny request allocation due to global
+ * limits. In that case one should sleep on global list as on this request
+ * list no wakeup will take place.
+ *
+ * Also sets the request list starved flag if there are no requests pending
+ * in the direction of rq.
+ *
+ * Return 1 --> sleep on request list, 0 --> sleep on global list
+ */
+static int can_sleep_on_request_list(struct request_list *rl, int is_sync)
+{
+	if (unlikely(rl->count[is_sync] == 0)) {
+		/*
+		 * If there is a request pending in other direction
+		 * in same io group, then set the starved flag of
+		 * the group request list. Otherwise, we need to
+		 * make this process sleep in global starved list
+		 * to make sure it will not sleep indefinitely.
+		 */
+		if (rl->count[is_sync ^ 1] != 0) {
+			rl->starved[is_sync] = 1;
+			return 1;
+		} else
+			return 0;
+	}
+
+	return 1;
 }
 
 /*
  * Get a free request, queue_lock must be held.
- * Returns NULL on failure, with queue_lock held.
+ * Returns NULL on failure, with queue_lock held. Also sets the "reason" field
+ * in case of failure. This reason field helps the caller decide whether to
+ * sleep on the per group list or on the global per queue list.
+ * reason = 0 sleep on per group list
+ * reason = 1 sleep on global list
+ *
  * Returns !NULL on success, with queue_lock *not held*.
  */
 static struct request *get_request(struct request_queue *q, int rw_flags,
-				   struct bio *bio, gfp_t gfp_mask)
+					struct bio *bio, gfp_t gfp_mask,
+					struct request_list *rl, int *reason)
 {
 	struct request *rq = NULL;
-	struct request_list *rl = &q->rq;
 	struct io_context *ioc = NULL;
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
 	int may_queue, priv;
+	int sleep_on_global = 0;
 
 	may_queue = elv_may_queue(q, rw_flags);
 	if (may_queue == ELV_MQUEUE_NO)
 		goto rq_starved;
 
-	if (rl->count[is_sync]+1 >= queue_congestion_on_threshold(q)) {
-		if (rl->count[is_sync]+1 >= q->nr_requests) {
-			ioc = current_io_context(GFP_ATOMIC, q->node);
-			/*
-			 * The queue will fill after this allocation, so set
-			 * it as full, and mark this process as "batching".
-			 * This process will be allowed to complete a batch of
-			 * requests, others will be blocked.
-			 */
-			if (!blk_queue_full(q, is_sync)) {
-				ioc_set_batching(q, ioc);
-				blk_set_queue_full(q, is_sync);
-			} else {
-				if (may_queue != ELV_MQUEUE_MUST
-						&& !ioc_batching(q, ioc)) {
-					/*
-					 * The queue is full and the allocating
-					 * process is not a "batcher", and not
-					 * exempted by the IO scheduler
-					 */
-					goto out;
-				}
+	if (q->rq_data.count[is_sync]+1 >= queue_congestion_on_threshold(q))
+		blk_set_queue_congested(q, is_sync);
+
+	/*
+	 * Looks like there is no user of queue full now.
+	 * Keeping it for time being.
+	 */
+	if (q->rq_data.count[is_sync]+1 >= q->nr_requests)
+		blk_set_queue_full(q, is_sync);
+
+	if (rl->count[is_sync]+1 >= q->nr_group_requests) {
+		ioc = current_io_context(GFP_ATOMIC, q->node);
+		/*
+		 * The group's request descriptor list will fill after this
+		 * allocation, so set it as full, and mark this process as
+		 * "batching".
+		 * This process will be allowed to complete a batch of
+		 * requests, others will be blocked.
+		 */
+		if (rl->count[is_sync] <= q->nr_group_requests)
+			ioc_set_batching(q, ioc);
+		else {
+			if (may_queue != ELV_MQUEUE_MUST
+					&& !ioc_batching(q, ioc)) {
+				/*
+				 * The queue is full and the allocating
+				 * process is not a "batcher", and not
+				 * exempted by the IO scheduler
+				 */
+				goto out;
 			}
 		}
-		blk_set_queue_congested(q, is_sync);
 	}
 
 	/*
@@ -783,21 +866,60 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 	 * limit of requests, otherwise we could have thousands of requests
 	 * allocated with any setting of ->nr_requests
 	 */
-	if (rl->count[is_sync] >= (3 * q->nr_requests / 2))
+
+	if (q->rq_data.count[is_sync] >= (3 * q->nr_requests / 2)) {
+		/*
+		 * Queue is too full for allocation. On which request list
+		 * should the task sleep? Generally it should sleep on its
+		 * request list but if elevator switch is happening, in that
+		 * window, request descriptors are allocated from global
+		 * pool and are not accounted against any particular request
+		 * list as group is going away.
+		 *
+		 * So it might happen that request list does not have any
+		 * requests allocated at all and if process sleeps on per
+		 * group request list, it will not be woken up. In such case,
+		 * make it sleep on global starved list.
+		 */
+		if (test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags)
+		    || !can_sleep_on_request_list(rl, is_sync))
+			sleep_on_global = 1;
+		goto out;
+	}
+
+	/*
+	 * Allocation of request is allowed from queue perspective. Now check
+	 * from per group request list
+	 */
+
+	if (rl->count[is_sync] >= (3 * q->nr_group_requests / 2))
 		goto out;
 
-	rl->count[is_sync]++;
 	rl->starved[is_sync] = 0;
 
+	q->rq_data.count[is_sync]++;
+
 	priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
-	if (priv)
-		rl->elvpriv++;
+	if (priv) {
+		q->rq_data.elvpriv++;
+		/*
+		 * Account the request to request list only if request is
+		 * going to elevator. During elevator switch, there will
+		 * be small window where group is going away and new group
+		 * will not be allocated till elevator switch is complete.
+		 * So till then instead of slowing down the application,
+		 * we will continue to allocate request from total common
+		 * pool instead of per group limit
+		 */
+		rl->count[is_sync]++;
+	}
 
 	if (blk_queue_io_stat(q))
 		rw_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
 	rq = blk_alloc_request(q, bio, rw_flags, priv, gfp_mask);
+
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
@@ -807,7 +929,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		 * wait queue, but this is pretty rare.
 		 */
 		spin_lock_irq(q->queue_lock);
-		freed_request(q, is_sync, priv);
+		freed_request(q, is_sync, priv, rl);
 
 		/*
 		 * in the very unlikely event that allocation failed and no
@@ -817,9 +939,8 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 		 * rq mempool into READ and WRITE
 		 */
 rq_starved:
-		if (unlikely(rl->count[is_sync] == 0))
-			rl->starved[is_sync] = 1;
-
+		if (!can_sleep_on_request_list(rl, is_sync))
+			sleep_on_global = 1;
 		goto out;
 	}
 
@@ -834,6 +955,8 @@ rq_starved:
 
 	trace_block_getrq(q, bio, rw_flags & 1);
 out:
+	if (reason && sleep_on_global)
+		*reason = 1;
 	return rq;
 }
 
@@ -847,16 +970,44 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 					struct bio *bio)
 {
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
+	int sleep_on_global = 0;
 	struct request *rq;
+	struct request_list *rl = blk_get_request_list(q, bio);
+	struct io_group *iog = NULL;
 
-	rq = get_request(q, rw_flags, bio, GFP_NOIO);
+	rq = get_request(q, rw_flags, bio, GFP_NOIO, rl, &sleep_on_global);
 	while (!rq) {
 		DEFINE_WAIT(wait);
 		struct io_context *ioc;
-		struct request_list *rl = &q->rq;
 
-		prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
-				TASK_UNINTERRUPTIBLE);
+		if (sleep_on_global) {
+			/*
+			 * Task failed allocation and needs to wait and
+			 * try again. There are no requests pending from
+			 * the io group, hence it needs to sleep on the global
+			 * wait queue. Most likely the allocation failed
+			 * because of memory issues.
+			 */
+
+			q->rq_data.starved++;
+			prepare_to_wait_exclusive(&q->rq_data.starved_wait,
+					&wait, TASK_UNINTERRUPTIBLE);
+		} else {
+			/*
+			 * We are about to sleep on a request list and we
+			 * drop the queue lock. After waking up, we will do
+			 * finish_wait() on the request list and in the
+			 * meantime the group might be gone. Take a reference
+			 * to the group now.
+			 */
+			prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
+					TASK_UNINTERRUPTIBLE);
+#ifdef CONFIG_GROUP_IOSCHED
+			iog = rl_iog(rl);
+			if (iog)
+				elv_get_iog(iog);
+#endif
+		}
 
 		trace_block_sleeprq(q, bio, rw_flags & 1);
 
@@ -874,9 +1025,30 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 		ioc_set_batching(q, ioc);
 
 		spin_lock_irq(q->queue_lock);
-		finish_wait(&rl->wait[is_sync], &wait);
 
-		rq = get_request(q, rw_flags, bio, GFP_NOIO);
+		if (sleep_on_global) {
+			finish_wait(&q->rq_data.starved_wait, &wait);
+			sleep_on_global = 0;
+		} else {
+			finish_wait(&rl->wait[is_sync], &wait);
+#ifdef CONFIG_GROUP_IOSCHED
+			/*
+			 * We had taken a reference to the rl/iog.
+			 * Put that reference now.
+			 */
+			iog = rl_iog(rl);
+			if (iog)
+				elv_put_iog(iog);
+#endif
+		}
+
+		/*
+		 * After the sleep, check the rl again in case the cgroup the
+		 * bio belonged to is gone and the bio is now mapped to the
+		 * root group.
+		 */
+		rl = blk_get_request_list(q, bio);
+		rq = get_request(q, rw_flags, bio, GFP_NOIO, rl,
+					&sleep_on_global);
 	};
 
 	return rq;
@@ -885,14 +1057,16 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
 {
 	struct request *rq;
+	struct request_list *rl;
 
 	BUG_ON(rw != READ && rw != WRITE);
 
 	spin_lock_irq(q->queue_lock);
+	rl = blk_get_request_list(q, NULL);
 	if (gfp_mask & __GFP_WAIT) {
 		rq = get_request_wait(q, rw, NULL);
 	} else {
-		rq = get_request(q, rw, NULL, gfp_mask);
+		rq = get_request(q, rw, NULL, gfp_mask, rl, NULL);
 		if (!rq)
 			spin_unlock_irq(q->queue_lock);
 	}
@@ -1075,12 +1249,13 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 	if (req->cmd_flags & REQ_ALLOCED) {
 		int is_sync = rq_is_sync(req) != 0;
 		int priv = req->cmd_flags & REQ_ELVPRIV;
+		struct request_list *rl = rq_rl(q, req);
 
 		BUG_ON(!list_empty(&req->queuelist));
 		BUG_ON(!hlist_unhashed(&req->hash));
 
 		blk_free_request(q, req);
-		freed_request(q, is_sync, priv);
+		freed_request(q, is_sync, priv, rl);
 	}
 }
 EXPORT_SYMBOL_GPL(__blk_put_request);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 57af728..3230d1f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -123,6 +123,7 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
 	 * set defaults
 	 */
 	q->nr_requests = BLKDEV_MAX_RQ;
+	q->nr_group_requests = BLKDEV_MAX_GROUP_RQ;
 	blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
 	blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
 	blk_queue_segment_boundary(q, BLK_SEG_BOUNDARY_MASK);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 3ff9bba..3a108ff 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -38,42 +38,66 @@ static ssize_t queue_requests_show(struct request_queue *q, char *page)
 static ssize_t
 queue_requests_store(struct request_queue *q, const char *page, size_t count)
 {
-	struct request_list *rl = &q->rq;
+	struct request_list *rl;
 	unsigned long nr;
 	int ret = queue_var_store(&nr, page, count);
 	if (nr < BLKDEV_MIN_RQ)
 		nr = BLKDEV_MIN_RQ;
 
 	spin_lock_irq(q->queue_lock);
+	rl = blk_get_request_list(q, NULL);
 	q->nr_requests = nr;
 	blk_queue_congestion_threshold(q);
 
-	if (rl->count[BLK_RW_SYNC] >= queue_congestion_on_threshold(q))
+	if (q->rq_data.count[BLK_RW_SYNC] >= queue_congestion_on_threshold(q))
 		blk_set_queue_congested(q, BLK_RW_SYNC);
-	else if (rl->count[BLK_RW_SYNC] < queue_congestion_off_threshold(q))
+	else if (q->rq_data.count[BLK_RW_SYNC] <
+				queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, BLK_RW_SYNC);
 
-	if (rl->count[BLK_RW_ASYNC] >= queue_congestion_on_threshold(q))
+	if (q->rq_data.count[BLK_RW_ASYNC] >= queue_congestion_on_threshold(q))
 		blk_set_queue_congested(q, BLK_RW_ASYNC);
-	else if (rl->count[BLK_RW_ASYNC] < queue_congestion_off_threshold(q))
+	else if (q->rq_data.count[BLK_RW_ASYNC] <
+				queue_congestion_off_threshold(q))
 		blk_clear_queue_congested(q, BLK_RW_ASYNC);
 
-	if (rl->count[BLK_RW_SYNC] >= q->nr_requests) {
+	if (q->rq_data.count[BLK_RW_SYNC] >= q->nr_requests) {
 		blk_set_queue_full(q, BLK_RW_SYNC);
-	} else if (rl->count[BLK_RW_SYNC]+1 <= q->nr_requests) {
+	} else if (q->rq_data.count[BLK_RW_SYNC]+1 <= q->nr_requests) {
 		blk_clear_queue_full(q, BLK_RW_SYNC);
 		wake_up(&rl->wait[BLK_RW_SYNC]);
 	}
 
-	if (rl->count[BLK_RW_ASYNC] >= q->nr_requests) {
+	if (q->rq_data.count[BLK_RW_ASYNC] >= q->nr_requests) {
 		blk_set_queue_full(q, BLK_RW_ASYNC);
-	} else if (rl->count[BLK_RW_ASYNC]+1 <= q->nr_requests) {
+	} else if (q->rq_data.count[BLK_RW_ASYNC]+1 <= q->nr_requests) {
 		blk_clear_queue_full(q, BLK_RW_ASYNC);
 		wake_up(&rl->wait[BLK_RW_ASYNC]);
 	}
 	spin_unlock_irq(q->queue_lock);
 	return ret;
 }
+#ifdef CONFIG_GROUP_IOSCHED
+static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->nr_group_requests, (page));
+}
+
+static ssize_t
+queue_group_requests_store(struct request_queue *q, const char *page,
+					size_t count)
+{
+	unsigned long nr;
+	int ret = queue_var_store(&nr, page, count);
+	if (nr < BLKDEV_MIN_RQ)
+		nr = BLKDEV_MIN_RQ;
+
+	spin_lock_irq(q->queue_lock);
+	q->nr_group_requests = nr;
+	spin_unlock_irq(q->queue_lock);
+	return ret;
+}
+#endif
 
 static ssize_t queue_ra_show(struct request_queue *q, char *page)
 {
@@ -224,6 +248,14 @@ static struct queue_sysfs_entry queue_requests_entry = {
 	.store = queue_requests_store,
 };
 
+#ifdef CONFIG_GROUP_IOSCHED
+static struct queue_sysfs_entry queue_group_requests_entry = {
+	.attr = {.name = "nr_group_requests", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_group_requests_show,
+	.store = queue_group_requests_store,
+};
+#endif
+
 static struct queue_sysfs_entry queue_ra_entry = {
 	.attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_ra_show,
@@ -278,6 +310,9 @@ static struct queue_sysfs_entry queue_iostats_entry = {
 
 static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
+#ifdef CONFIG_GROUP_IOSCHED
+	&queue_group_requests_entry.attr,
+#endif
 	&queue_ra_entry.attr,
 	&queue_max_hw_sectors_entry.attr,
 	&queue_max_sectors_entry.attr,
@@ -353,12 +388,11 @@ static void blk_release_queue(struct kobject *kobj)
 {
 	struct request_queue *q =
 		container_of(kobj, struct request_queue, kobj);
-	struct request_list *rl = &q->rq;
 
 	blk_sync_queue(q);
 
-	if (rl->rq_pool)
-		mempool_destroy(rl->rq_pool);
+	if (q->rq_data.rq_pool)
+		mempool_destroy(q->rq_data.rq_pool);
 
 	if (q->queue_tags)
 		__blk_queue_free_tags(q);
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 18dbcc1..16f75ad 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1082,6 +1082,16 @@ struct io_cgroup *cgroup_to_io_cgroup(struct cgroup *cgroup)
 			    struct io_cgroup, css);
 }
 
+struct request_list *io_group_get_request_list(struct request_queue *q,
+						struct bio *bio)
+{
+	struct io_group *iog;
+
+	iog = io_get_io_group(q, bio, 1);
+	BUG_ON(!iog);
+	return &iog->rl;
+}
+
 /*
  * Search the bfq_group for bfqd into the hash table (by now only a list)
  * of bgrp.  Must be called under rcu_read_lock().
@@ -1297,6 +1307,8 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		 */
 		elv_get_iog(iog);
 
+		blk_init_request_list(&iog->rl);
+
 		if (leaf == NULL) {
 			leaf = iog;
 			prev = leaf;
@@ -1557,6 +1569,8 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
 		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
 
+	blk_init_request_list(&iog->rl);
+
 	iocg = &io_root_cgroup;
 	spin_lock_irq(&iocg->lock);
 	rcu_assign_pointer(iog->key, key);
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 6d0df21..c2f71d7 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -257,6 +257,9 @@ struct io_group {
 
 	/* Single ioq per group, used for noop, deadline, anticipatory */
 	struct io_queue *ioq;
+
+	/* request list associated with the group */
+	struct request_list rl;
 };
 
 /**
@@ -535,6 +538,8 @@ extern void elv_fq_unset_request_ioq(struct request_queue *q,
 extern struct io_queue *elv_lookup_ioq_current(struct request_queue *q);
 extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
 						struct bio *bio);
+extern struct request_list *io_group_get_request_list(struct request_queue *q,
+						struct bio *bio);
 
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
diff --git a/block/elevator.c b/block/elevator.c
index b49efd6..d8ceca8 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -668,7 +668,7 @@ void elv_quiesce_start(struct request_queue *q)
 	 * make sure we don't have any requests in flight
 	 */
 	elv_drain_elevator(q);
-	while (q->rq.elvpriv) {
+	while (q->rq_data.elvpriv) {
 		blk_start_queueing(q);
 		spin_unlock_irq(q->queue_lock);
 		msleep(10);
@@ -768,8 +768,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 	}
 
 	if (unplug_it && blk_queue_plugged(q)) {
-		int nrq = q->rq.count[BLK_RW_SYNC] + q->rq.count[BLK_RW_ASYNC]
-			- q->in_flight;
+		int nrq = q->rq_data.count[BLK_RW_SYNC] +
+				q->rq_data.count[BLK_RW_ASYNC] - q->in_flight;
 
 		if (nrq >= q->unplug_thresh)
 			__generic_unplug_device(q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 539cb9d..7fd7d33 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -32,21 +32,51 @@ struct request;
 struct sg_io_hdr;
 
 #define BLKDEV_MIN_RQ	4
+
+#ifdef CONFIG_GROUP_IOSCHED
+#define BLKDEV_MAX_RQ	512	/* Default maximum for queue */
+#define BLKDEV_MAX_GROUP_RQ    128      /* Default maximum per group*/
+#else
 #define BLKDEV_MAX_RQ	128	/* Default maximum */
+/*
+ * This is equivalent to the case of only one group being present (the root
+ * group). Let it consume all the request descriptors available on the queue.
+ */
+#define BLKDEV_MAX_GROUP_RQ    BLKDEV_MAX_RQ      /* Default maximum */
+#endif
 
 struct request;
 typedef void (rq_end_io_fn)(struct request *, int);
 
 struct request_list {
 	/*
-	 * count[], starved[], and wait[] are indexed by
+	 * count[], starved and wait[] are indexed by
 	 * BLK_RW_SYNC/BLK_RW_ASYNC
 	 */
 	int count[2];
 	int starved[2];
+	wait_queue_head_t wait[2];
+};
+
+/*
+ * This data structure keeps track of the mempool of requests for the queue
+ * and some overall statistics.
+ */
+struct request_data {
+	/*
+	 * Per queue request descriptor count. This is in addition to per
+	 * cgroup count
+	 */
+	int count[2];
 	int elvpriv;
 	mempool_t *rq_pool;
-	wait_queue_head_t wait[2];
+	int starved;
+	/*
+	 * Global list for starved tasks. A task will be queued here if
+	 * it could not allocate request descriptor and the associated
+	 * group request list does not have any requests pending.
+	 */
+	wait_queue_head_t starved_wait;
 };
 
 /*
@@ -337,6 +367,9 @@ struct request_queue
 	 */
 	struct request_list	rq;
 
+	/* Contains the request mempool and other data such as starvation info */
+	struct request_data	rq_data;
+
 	request_fn_proc		*request_fn;
 	make_request_fn		*make_request_fn;
 	prep_rq_fn		*prep_rq_fn;
@@ -399,6 +432,8 @@ struct request_queue
 	 * queue settings
 	 */
 	unsigned long		nr_requests;	/* Max # of requests */
+	/* Max # of per io group requests */
+	unsigned long		nr_group_requests;
 	unsigned int		nr_congestion_on;
 	unsigned int		nr_congestion_off;
 	unsigned int		nr_batching;
@@ -772,6 +807,54 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			 struct scsi_ioctl_command __user *);
 
+extern void blk_init_request_list(struct request_list *rl);
+
+static inline struct request_list *blk_get_request_list(struct request_queue *q,
+						struct bio *bio)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return &q->rq;
+
+	return io_group_get_request_list(q, bio);
+#else
+	return &q->rq;
+#endif
+}
+
+static inline struct request_list *rq_rl(struct request_queue *q,
+						struct request *rq)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	struct io_group *iog;
+	int priv = rq->cmd_flags & REQ_ELVPRIV;
+
+	if (!elv_iosched_fair_queuing_enabled(q->elevator))
+		return &q->rq;
+
+	BUG_ON(priv && !rq->ioq);
+
+	if (priv)
+		iog = ioq_to_io_group(rq->ioq);
+	else
+		iog = q->elevator->efqd.root_group;
+
+	BUG_ON(!iog);
+	return &iog->rl;
+#else
+	return &q->rq;
+#endif
+}
+
+static inline struct io_group *rl_iog(struct request_list *rl)
+{
+#ifdef CONFIG_GROUP_IOSCHED
+	return container_of(rl, struct io_group, rl);
+#else
+	return NULL;
+#endif
+}
+
 /*
  * Temporary export, until SCSI gets fixed up.
  */
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 17/20] io-controller: Per io group bdi congestion interface
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (15 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 16/20] io-controller: Per cgroup request descriptor support Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 18/20] io-controller: Support per cgroup per device weights and io class Vivek Goyal
                     ` (4 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o So far there used to be only one pair of request descriptor queues
  (one for sync and one for async) per device, and the number of requests
  allocated was used to decide whether the associated bdi is congested or not.

  Now with per io group request descriptor infrastructure, there is a pair
  of request descriptor queues per io group per device. So it might happen
  that the overall request queue is not congested but the particular io group
  a bio belongs to is congested.

  Or, it could be the other way around: the group is not congested but the
  overall queue is. This can happen if the user has not properly set the
  request descriptor limits for the queue and groups.
  (q->nr_requests < nr_groups * q->nr_group_requests)

  Hence there is a need for a new interface which can query device congestion
  status per group. This group is determined by the "struct page" the IO will
  be done for. If the page is null, then the group is determined from the
  current task context.

o This patch introduces a new set of functions, bdi_*_congested_group(), which
  take "struct page" as an additional argument. These functions call into the
  block layer and in turn the elevator to find out whether the io group the
  page will go into is congested or not (a usage sketch follows below).

o Currently I have introduced the core functions and migrated most of the
  users, but there might still be some left. This is an ongoing TODO item.

o There are some io_get_io_group() related changes which should be pushed into
  higher patches. Still testing this patch. Will push these changes up in the
  next posting.
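
  As an illustration (not part of the patch), below is a minimal sketch of how
  a write-back path is expected to consult the new per-group interface. The
  bdi_write_congested_group() call and the wbc fields are the ones used in the
  hunks further down; the enclosing helper and its name are hypothetical:

	/*
	 * Hypothetical write-back helper: check the io group of the first
	 * page in the batch before issuing the whole batch, mirroring the
	 * page-writeback, cifs and gfs2 hunks in this patch. Assumes pvec
	 * holds at least one page.
	 */
	static int example_writepages_batch(struct backing_dev_info *bdi,
					    struct writeback_control *wbc,
					    struct pagevec *pvec)
	{
		if (wbc->nonblocking &&
		    bdi_write_congested_group(bdi, pvec->pages[0])) {
			wbc->encountered_congestion = 1;
			return 1;	/* bail out, retry later */
		}

		/* ... submit the pages in pvec for IO here ... */
		return 0;
	}

  For reference on the thresholds behind this check: with the default
  nr_group_requests of 128, elv_io_group_congestion_threshold() below sets a
  group's congestion-on threshold to 113 (128 - 128/8 + 1) and its
  congestion-off threshold to 103 (128 - 128/8 - 128/16 - 1).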

Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/blk-core.c            |   21 +++++
 block/cfq-iosched.c         |    6 +-
 block/elevator-fq.c         |  179 ++++++++++++++++++++++++++++++++++---------
 block/elevator-fq.h         |   10 ++-
 drivers/md/dm-table.c       |   11 ++-
 drivers/md/dm.c             |    5 +-
 drivers/md/dm.h             |    3 +-
 drivers/md/linear.c         |    7 +-
 drivers/md/multipath.c      |    7 +-
 drivers/md/raid0.c          |    6 +-
 drivers/md/raid1.c          |    9 ++-
 drivers/md/raid10.c         |    6 +-
 drivers/md/raid5.c          |    2 +-
 fs/afs/write.c              |    8 ++-
 fs/btrfs/disk-io.c          |    6 +-
 fs/btrfs/extent_io.c        |   12 +++
 fs/btrfs/volumes.c          |    8 ++-
 fs/cifs/file.c              |   11 +++
 fs/ext2/ialloc.c            |    2 +-
 fs/gfs2/ops_address.c       |   12 +++
 fs/nilfs2/segbuf.c          |    3 +-
 fs/xfs/linux-2.6/xfs_aops.c |    2 +-
 fs/xfs/linux-2.6/xfs_buf.c  |    2 +-
 include/linux/backing-dev.h |   61 ++++++++++++++-
 include/linux/biotrack.h    |    6 ++
 include/linux/blkdev.h      |    5 +
 mm/backing-dev.c            |   62 +++++++++++++++
 mm/biotrack.c               |   21 +++++
 mm/page-writeback.c         |   11 +++
 mm/readahead.c              |    4 +-
 30 files changed, 435 insertions(+), 73 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 35e3725..5f16f4a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -99,6 +99,27 @@ void blk_queue_congestion_threshold(struct request_queue *q)
 	q->nr_congestion_off = nr;
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+int blk_queue_io_group_congested(struct backing_dev_info *bdi, int bdi_bits,
+					struct page *page)
+{
+	int ret = 0;
+	struct request_queue *q = bdi->unplug_io_data;
+
+	if (!q && !q->elevator)
+	if (!q || !q->elevator)
+
+	/* Do we need to hold queue lock? */
+	if (bdi_bits & (1 << BDI_sync_congested))
+		ret |= elv_io_group_congested(q, page, 1);
+
+	if (bdi_bits & (1 << BDI_async_congested))
+		ret |= elv_io_group_congested(q, page, 0);
+
+	return ret;
+}
+#endif
+
 /**
  * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
  * @bdev:	device
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 77bbe6c..b02acf2 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -195,7 +195,7 @@ static struct cfq_queue *cic_bio_to_cfqq(struct cfq_data *cfqd,
 		 * async bio tracking is enabled and we are not caching
 		 * async queue pointer in cic.
 		 */
-		iog = io_get_io_group(cfqd->queue, bio, 0);
+		iog = io_get_io_group_bio(cfqd->queue, bio, 0);
 		if (!iog) {
 			/*
 			 * May be this is first rq/bio and io group has not
@@ -1334,7 +1334,7 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
 	struct io_group *iog = NULL;
 retry:
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1452,7 +1452,7 @@ cfq_get_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue, bio, 1);
+	struct io_group *iog = io_get_io_group_bio(cfqd->queue, bio, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 16f75ad..13c8161 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -42,7 +42,6 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
 int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
 					int force);
-
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
 {
@@ -1087,11 +1086,69 @@ struct request_list *io_group_get_request_list(struct request_queue *q,
 {
 	struct io_group *iog;
 
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 	BUG_ON(!iog);
 	return &iog->rl;
 }
 
+/* Set io group congestion on and off thresholds */
+void elv_io_group_congestion_threshold(struct request_queue *q,
+						struct io_group *iog)
+{
+	int nr;
+
+	nr = q->nr_group_requests - (q->nr_group_requests / 8) + 1;
+	if (nr > q->nr_group_requests)
+		nr = q->nr_group_requests;
+	iog->nr_congestion_on = nr;
+
+	nr = q->nr_group_requests - (q->nr_group_requests / 8)
+			- (q->nr_group_requests / 16) - 1;
+	if (nr < 1)
+		nr = 1;
+	iog->nr_congestion_off = nr;
+}
+
+static inline int elv_is_iog_congested(struct request_queue *q,
+					struct io_group *iog, int sync)
+{
+	if (iog->rl.count[sync] >= iog->nr_congestion_on)
+		return 1;
+	return 0;
+}
+
+/* Determine whether the io group the page maps to is congested or not */
+int elv_io_group_congested(struct request_queue *q, struct page *page, int sync)
+{
+	struct io_group *iog;
+	int ret = 0;
+
+	rcu_read_lock();
+
+	iog = io_get_io_group(q, page, 0);
+
+	if (!iog) {
+		/*
+		 * Either cgroup got deleted or this is first request in the
+		 * group and associated io group object has not been created
+		 * yet. Map it to root group.
+		 *
+		 * TODO: Fix the case of group not created yet.
+		 */
+		iog = q->elevator->efqd.root_group;
+	}
+
+	ret = elv_is_iog_congested(q, iog, sync);
+	if (ret)
+		elv_log_iog(&q->elevator->efqd, iog, "iog congested=%d sync=%d"
+			" rl.count[sync]=%d nr_group_requests=%d",
+			ret, sync, iog->rl.count[sync], q->nr_group_requests);
+
+	rcu_read_unlock();
+	return ret;
+}
+
+
 /*
  * Search the bfq_group for bfqd into the hash table (by now only a list)
  * of bgrp.  Must be called under rcu_read_lock().
@@ -1265,11 +1322,13 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
  * to the root has already an allocated group on @bfqd.
  */
 struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
-					struct cgroup *cgroup, struct bio *bio)
+					struct cgroup *cgroup)
 {
 	struct io_cgroup *iocg;
 	struct io_group *iog, *leaf = NULL, *prev = NULL;
 	gfp_t flags = GFP_ATOMIC |  __GFP_ZERO;
+	unsigned int major, minor;
+	struct backing_dev_info *bdi = &q->backing_dev_info;
 
 	for (; cgroup != NULL; cgroup = cgroup->parent) {
 		iocg = cgroup_to_io_cgroup(cgroup);
@@ -1308,6 +1367,7 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		elv_get_iog(iog);
 
 		blk_init_request_list(&iog->rl);
+		elv_io_group_congestion_threshold(q, iog);
 
 		if (leaf == NULL) {
 			leaf = iog;
@@ -1412,7 +1472,7 @@ void io_group_chain_link(struct request_queue *q, void *key,
  */
 struct io_group *io_find_alloc_group(struct request_queue *q,
 			struct cgroup *cgroup, struct elv_fq_data *efqd,
-			int create, struct bio *bio)
+			int create)
 {
 	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
 	struct io_group *iog = NULL;
@@ -1431,7 +1491,7 @@ struct io_group *io_find_alloc_group(struct request_queue *q,
 	if (iog != NULL || !create)
 		goto end;
 
-	iog = io_group_chain_alloc(q, key, cgroup, bio);
+	iog = io_group_chain_alloc(q, key, cgroup);
 	if (iog != NULL)
 		io_group_chain_link(q, key, cgroup, iog, efqd);
 
@@ -1440,46 +1500,60 @@ end:
 	return iog;
 }
 
-/* Map a bio to respective cgroup. Null return means, map it to root cgroup */
-static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
+/* Map a page to respective cgroup. Null return means, map it to root cgroup */
+static inline struct cgroup *get_cgroup_from_page(struct page *page)
 {
 	unsigned long bio_cgroup_id;
 	struct cgroup *cgroup;
 
-	/* blk_get_request can reach here without passing a bio */
-	if (!bio)
+	bio_cgroup_id = get_blkio_cgroup_id_page(page);
+
+	if (!bio_cgroup_id)
 		return NULL;
 
+	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
+	return cgroup;
+}
+
+
+struct io_group *io_get_io_group_bio(struct request_queue *q, struct bio *bio,
+					int create)
+{
+	struct page *page = NULL;
+
+	/*
+	 * Determine the group from task context. Even calls from
+	 * blk_get_request() which don't have any bio info will be mapped
+	 * to the task's group
+	 */
+	if (!bio)
+		goto sync;
+
 	if (bio_barrier(bio)) {
 		/*
 		 * Map barrier requests to root group. May be more special
 		 * bio cases should come here
 		 */
-		return NULL;
+		return q->elevator->efqd.root_group;
 	}
 
-#ifdef CONFIG_TRACK_ASYNC_CONTEXT
-	if (elv_bio_sync(bio)) {
-		/* sync io. Determine cgroup from submitting task context. */
-		cgroup = task_cgroup(current, io_subsys_id);
-		return cgroup;
-	}
+	/* Map the sync bio to the right group using task context */
+	if (elv_bio_sync(bio))
+		goto sync;
 
-	/* Async io. Determine cgroup from with cgroup id stored in page */
-	bio_cgroup_id = get_blkio_cgroup_id(bio);
-
-	if (!bio_cgroup_id)
-		return NULL;
-
-	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
-#else
-	cgroup = task_cgroup(current, io_subsys_id);
+#ifndef CONFIG_TRACK_ASYNC_CONTEXT
+	goto sync;
 #endif
-	return cgroup;
+	/* Determine the group from info stored in page */
+	page = bio_iovec_idx(bio, 0)->bv_page;
+	return io_get_io_group(q, page, create);
+sync:
+	return io_get_io_group(q, NULL, create);
 }
+EXPORT_SYMBOL(io_get_io_group_bio);
 
 /*
- * Find the io group bio belongs to.
+ * Find the io group page belongs to.
  * If "create" is set, io group is created if it is not already present.
  *
  * Note: This function should be called with queue lock held. It returns
@@ -1488,22 +1562,27 @@ static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
  * needs to get hold of queue lock). So if somebody needs to use group
  * pointer even after dropping queue lock, take a reference to the group
  * before dropping queue lock.
+ *
+ * One can also call it without the queue lock, as long as the rcu read lock
+ * is held, for browsing through the groups.
  */
-struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+struct io_group *io_get_io_group(struct request_queue *q, struct page *page,
 					int create)
 {
 	struct cgroup *cgroup;
 	struct io_group *iog;
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 
-	assert_spin_locked(q->queue_lock);
+
+	if (create)
+		assert_spin_locked(q->queue_lock);
 
 	rcu_read_lock();
 
-	if (!bio)
+	if (!page)
 		cgroup = task_cgroup(current, io_subsys_id);
 	else
-		cgroup = get_cgroup_from_bio(bio);
+		cgroup = get_cgroup_from_page(page);
 
 	if (!cgroup) {
 		if (create)
@@ -1518,7 +1597,7 @@ struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
 		goto out;
 	}
 
-	iog = io_find_alloc_group(q, cgroup, efqd, create, bio);
+	iog = io_find_alloc_group(q, cgroup, efqd, create);
 	if (!iog) {
 		if (create)
 			iog = efqd->root_group;
@@ -1570,6 +1649,7 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
 
 	blk_init_request_list(&iog->rl);
+	elv_io_group_congestion_threshold(q, iog);
 
 	iocg = &io_root_cgroup;
 	spin_lock_irq(&iocg->lock);
@@ -1578,6 +1658,10 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	iog->iocg_id = css_id(&iocg->css);
 	spin_unlock_irq(&iocg->lock);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	io_group_path(iog, iog->path, sizeof(iog->path));
+#endif
+
 	return iog;
 }
 
@@ -1670,6 +1754,14 @@ void iocg_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
 	task_unlock(tsk);
 }
 
+static void io_group_free_rcu(struct rcu_head *head)
+{
+	struct io_group *iog;
+
+	iog = container_of(head, struct io_group, rcu_head);
+	kfree(iog);
+}
+
 /*
  * This cleanup function does the last bit of things to destroy cgroup.
  * It should only get called after io_destroy_group has been invoked.
@@ -1693,7 +1785,13 @@ void io_group_cleanup(struct io_group *iog)
 	BUG_ON(entity != NULL && entity->tree != NULL);
 
 	iog->iocg_id = 0;
-	kfree(iog);
+
+	/*
+	 * Wait for any rcu readers to exit before freeing up the group.
+	 * Primarily useful when io_get_io_group() is called without queue
+	 * lock to access some group data from bdi_congested_group() path.
+	 */
+	call_rcu(&iog->rcu_head, io_group_free_rcu);
 }
 
 void elv_put_iog(struct io_group *iog)
@@ -1933,7 +2031,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 		return 1;
 
 	/* Determine the io group of the bio submitting task */
-	iog = io_get_io_group(q, bio, 0);
+	iog = io_get_io_group_bio(q, bio, 0);
 	if (!iog) {
 		/* May be task belongs to a differet cgroup for which io
 		 * group has not been setup yet. */
@@ -1973,7 +2071,7 @@ int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
 
 retry:
 	/* Determine the io group request belongs to */
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 	BUG_ON(!iog);
 
 	/* Get the iosched queue */
@@ -2066,7 +2164,7 @@ struct io_queue *elv_lookup_ioq_bio(struct request_queue *q, struct bio *bio)
 	struct io_group *iog;
 
 	/* Determine the io group and io queue of the bio submitting task */
-	iog = io_get_io_group(q, bio, 0);
+	iog = io_get_io_group_bio(q, bio, 0);
 	if (!iog) {
 		/* May be bio belongs to a cgroup for which io group has
 		 * not been setup yet. */
@@ -2133,7 +2231,14 @@ void io_free_root_group(struct elevator_queue *e)
 	kfree(iog);
 }
 
-struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+struct io_group *io_get_io_group_bio(struct request_queue *q, struct bio *bio,
+					int create)
+{
+	return q->elevator->efqd.root_group;
+}
+EXPORT_SYMBOL(io_get_io_group_bio);
+
+struct io_group *io_get_io_group(struct request_queue *q, struct page *page,
 						int create)
 {
 	return q->elevator->efqd.root_group;
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index c2f71d7..d60105f 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -258,8 +258,13 @@ struct io_group {
 	/* Single ioq per group, used for noop, deadline, anticipatory */
 	struct io_queue *ioq;
 
+	/* io group congestion on and off threshold for request descriptors */
+	unsigned int nr_congestion_on;
+	unsigned int nr_congestion_off;
+
 	/* request list associated with the group */
 	struct request_list rl;
+	struct rcu_head rcu_head;
 };
 
 /**
@@ -540,7 +545,8 @@ extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
 						struct bio *bio);
 extern struct request_list *io_group_get_request_list(struct request_queue *q,
 						struct bio *bio);
-
+extern int elv_io_group_congested(struct request_queue *q, struct page *page,
+					int sync);
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
 {
@@ -672,6 +678,8 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
 extern struct io_group *io_get_io_group(struct request_queue *q,
+					struct page *page, int create);
+extern struct io_group *io_get_io_group_bio(struct request_queue *q,
 					struct bio *bio, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 429b50b..8fe04f1 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1000,7 +1000,8 @@ int dm_table_resume_targets(struct dm_table *t)
 	return 0;
 }
 
-int dm_table_any_congested(struct dm_table *t, int bdi_bits)
+int dm_table_any_congested(struct dm_table *t, int bdi_bits, struct page *page,
+				int group)
 {
 	struct dm_dev_internal *dd;
 	struct list_head *devices = dm_table_get_devices(t);
@@ -1010,9 +1011,11 @@ int dm_table_any_congested(struct dm_table *t, int bdi_bits)
 		struct request_queue *q = bdev_get_queue(dd->dm_dev.bdev);
 		char b[BDEVNAME_SIZE];
 
-		if (likely(q))
-			r |= bdi_congested(&q->backing_dev_info, bdi_bits);
-		else
+		if (likely(q)) {
+			struct backing_dev_info *bdi = &q->backing_dev_info;
+			r |= group ? bdi_congested_group(bdi, bdi_bits, page)
+				: bdi_congested(bdi, bdi_bits);
+		} else
 			DMWARN_LIMIT("%s: any_congested: nonexistent device %s",
 				     dm_device_name(t->md),
 				     bdevname(dd->dm_dev.bdev, b));
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 424f7b0..ef12cee 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -994,7 +994,8 @@ static void dm_unplug_all(struct request_queue *q)
 	}
 }
 
-static int dm_any_congested(void *congested_data, int bdi_bits)
+static int dm_any_congested(void *congested_data, int bdi_bits,
+					struct page *page, int group)
 {
 	int r = bdi_bits;
 	struct mapped_device *md = congested_data;
@@ -1003,7 +1004,7 @@ static int dm_any_congested(void *congested_data, int bdi_bits)
 	if (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
 		map = dm_get_table(md);
 		if (map) {
-			r = dm_table_any_congested(map, bdi_bits);
+			r = dm_table_any_congested(map, bdi_bits, page, group);
 			dm_table_put(map);
 		}
 	}
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index a31506d..7efe4b4 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -46,7 +46,8 @@ struct list_head *dm_table_get_devices(struct dm_table *t);
 void dm_table_presuspend_targets(struct dm_table *t);
 void dm_table_postsuspend_targets(struct dm_table *t);
 int dm_table_resume_targets(struct dm_table *t);
-int dm_table_any_congested(struct dm_table *t, int bdi_bits);
+int dm_table_any_congested(struct dm_table *t, int bdi_bits, struct page *page,
+				int group);
 
 /*
  * To check the return value from dm_table_find_target().
diff --git a/drivers/md/linear.c b/drivers/md/linear.c
index 7a36e38..ddf43dd 100644
--- a/drivers/md/linear.c
+++ b/drivers/md/linear.c
@@ -88,7 +88,7 @@ static void linear_unplug(struct request_queue *q)
 	}
 }
 
-static int linear_congested(void *data, int bits)
+static int linear_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	linear_conf_t *conf = mddev_to_conf(mddev);
@@ -96,7 +96,10 @@ static int linear_congested(void *data, int bits)
 
 	for (i = 0; i < mddev->raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);
-		ret |= bdi_congested(&q->backing_dev_info, bits);
+		struct backing_dev_info *bdi = &q->backing_dev_info;
+
+		ret |= group ? bdi_congested_group(bdi, bits, page) :
+			bdi_congested(bdi, bits);
 	}
 	return ret;
 }
diff --git a/drivers/md/multipath.c b/drivers/md/multipath.c
index 41ced0c..9f25b21 100644
--- a/drivers/md/multipath.c
+++ b/drivers/md/multipath.c
@@ -192,7 +192,8 @@ static void multipath_status (struct seq_file *seq, mddev_t *mddev)
 	seq_printf (seq, "]");
 }
 
-static int multipath_congested(void *data, int bits)
+static int multipath_congested(void *data, int bits, struct page *page,
+					int group)
 {
 	mddev_t *mddev = data;
 	multipath_conf_t *conf = mddev_to_conf(mddev);
@@ -203,8 +204,10 @@ static int multipath_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
-			ret |= bdi_congested(&q->backing_dev_info, bits);
+			ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 			/* Just like multipath_map, we just check the
 			 * first available device
 			 */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index c08d755..eb1d33a 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -37,7 +37,7 @@ static void raid0_unplug(struct request_queue *q)
 	}
 }
 
-static int raid0_congested(void *data, int bits)
+static int raid0_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	raid0_conf_t *conf = mddev_to_conf(mddev);
@@ -46,8 +46,10 @@ static int raid0_congested(void *data, int bits)
 
 	for (i = 0; i < mddev->raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(devlist[i]->bdev);
+		struct backing_dev_info *bdi = &q->backing_dev_info;
 
-		ret |= bdi_congested(&q->backing_dev_info, bits);
+		ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 	}
 	return ret;
 }
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 36df910..cdd268e 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -570,7 +570,7 @@ static void raid1_unplug(struct request_queue *q)
 	md_wakeup_thread(mddev->thread);
 }
 
-static int raid1_congested(void *data, int bits)
+static int raid1_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	conf_t *conf = mddev_to_conf(mddev);
@@ -581,14 +581,17 @@ static int raid1_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
 			/* Note the '|| 1' - when read_balance prefers
 			 * non-congested targets, it can be removed
 			 */
 			if ((bits & (1<<BDI_async_congested)) || 1)
-				ret |= bdi_congested(&q->backing_dev_info, bits);
+				ret |= group ? bdi_congested_group(bdi, bits,
+					page) : bdi_congested(bdi, bits);
 			else
-				ret &= bdi_congested(&q->backing_dev_info, bits);
+				ret &= group ? bdi_congested_group(bdi, bits,
+					page) : bdi_congested(bdi, bits);
 		}
 	}
 	rcu_read_unlock();
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 499620a..49f41e3 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -625,7 +625,7 @@ static void raid10_unplug(struct request_queue *q)
 	md_wakeup_thread(mddev->thread);
 }
 
-static int raid10_congested(void *data, int bits)
+static int raid10_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	conf_t *conf = mddev_to_conf(mddev);
@@ -636,8 +636,10 @@ static int raid10_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
-			ret |= bdi_congested(&q->backing_dev_info, bits);
+			ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 		}
 	}
 	rcu_read_unlock();
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index bb37fb1..40f76a4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3324,7 +3324,7 @@ static void raid5_unplug_device(struct request_queue *q)
 	unplug_slaves(mddev);
 }
 
-static int raid5_congested(void *data, int bits)
+static int raid5_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	raid5_conf_t *conf = mddev_to_conf(mddev);
diff --git a/fs/afs/write.c b/fs/afs/write.c
index c2e7a7f..aa8b359 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -455,7 +455,7 @@ int afs_writepage(struct page *page, struct writeback_control *wbc)
 	}
 
 	wbc->nr_to_write -= ret;
-	if (wbc->nonblocking && bdi_write_congested(bdi))
+	if (wbc->nonblocking && bdi_or_group_write_congested(bdi, page))
 		wbc->encountered_congestion = 1;
 
 	_leave(" = 0");
@@ -491,6 +491,12 @@ static int afs_writepages_region(struct address_space *mapping,
 			return 0;
 		}
 
+		if (wbc->nonblocking && bdi_write_congested_group(bdi, page)) {
+			wbc->encountered_congestion = 1;
+			page_cache_release(page);
+			break;
+		}
+
 		/* at this point we hold neither mapping->tree_lock nor lock on
 		 * the page itself: the page may be truncated or invalidated
 		 * (changing page->mapping to NULL), or even swizzled back from
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..245d8f4 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1250,7 +1250,8 @@ struct btrfs_root *btrfs_read_fs_root(struct btrfs_fs_info *fs_info,
 	return root;
 }
 
-static int btrfs_congested_fn(void *congested_data, int bdi_bits)
+static int btrfs_congested_fn(void *congested_data, int bdi_bits,
+					struct page *page, int group)
 {
 	struct btrfs_fs_info *info = (struct btrfs_fs_info *)congested_data;
 	int ret = 0;
@@ -1261,7 +1262,8 @@ static int btrfs_congested_fn(void *congested_data, int bdi_bits)
 		if (!device->bdev)
 			continue;
 		bdi = blk_get_backing_dev_info(device->bdev);
-		if (bdi && bdi_congested(bdi, bdi_bits)) {
+		if (bdi && (group ? bdi_congested_group(bdi, bdi_bits, page) :
+		    bdi_congested(bdi, bdi_bits))) {
 			ret = 1;
 			break;
 		}
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index fe9eb99..fac4299 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2358,6 +2358,18 @@ retry:
 		unsigned i;
 
 		scanned = 1;
+
+		/*
+		 * If the io group the page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a6d35b0..5b19141 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -163,6 +163,7 @@ static noinline int run_scheduled_bios(struct btrfs_device *device)
 	unsigned long num_sync_run;
 	unsigned long limit;
 	unsigned long last_waited = 0;
+	struct page *page;
 
 	bdi = blk_get_backing_dev_info(device->bdev);
 	fs_info = device->dev_root->fs_info;
@@ -265,8 +266,11 @@ loop_lock:
 		 * is now congested.  Back off and let other work structs
 		 * run instead
 		 */
-		if (pending && bdi_write_congested(bdi) && num_run > 16 &&
-		    fs_info->fs_devices->open_devices > 1) {
+		if (pending)
+			page = bio_iovec_idx(pending, 0)->bv_page;
+
+		if (pending && bdi_or_group_write_congested(bdi, page) &&
+		    num_run > 16 && fs_info->fs_devices->open_devices > 1) {
 			struct io_context *ioc;
 
 			ioc = current->io_context;
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 302ea15..71d3fb5 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1466,6 +1466,17 @@ retry:
 		n_iov = 0;
 		bytes_to_write = 0;
 
+		/*
+		 * If the io group the page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking &&
+		    bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			page = pvec.pages[i];
 			/*
diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
index 15387c9..090a961 100644
--- a/fs/ext2/ialloc.c
+++ b/fs/ext2/ialloc.c
@@ -179,7 +179,7 @@ static void ext2_preread_inode(struct inode *inode)
 	struct backing_dev_info *bdi;
 
 	bdi = inode->i_mapping->backing_dev_info;
-	if (bdi_read_congested(bdi))
+	if (bdi_or_group_read_congested(bdi, NULL))
 		return;
 	if (bdi_write_congested(bdi))
 		return;
diff --git a/fs/gfs2/ops_address.c b/fs/gfs2/ops_address.c
index a6dde17..b352f19 100644
--- a/fs/gfs2/ops_address.c
+++ b/fs/gfs2/ops_address.c
@@ -372,6 +372,18 @@ retry:
 					       PAGECACHE_TAG_DIRTY,
 					       min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
 		scanned = 1;
+
+		/*
+		 * If the io group the page belongs to is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, end);
 		if (ret)
 			done = 1;
diff --git a/fs/nilfs2/segbuf.c b/fs/nilfs2/segbuf.c
index 1e68821..abcb161 100644
--- a/fs/nilfs2/segbuf.c
+++ b/fs/nilfs2/segbuf.c
@@ -267,8 +267,9 @@ static int nilfs_submit_seg_bio(struct nilfs_write_info *wi, int mode)
 {
 	struct bio *bio = wi->bio;
 	int err;
+	struct page *page = bio_iovec_idx(bio, 0)->bv_page;
 
-	if (wi->nbio > 0 && bdi_write_congested(wi->bdi)) {
+	if (wi->nbio > 0 && bdi_or_group_write_congested(wi->bdi, page)) {
 		wait_for_completion(&wi->bio_event);
 		wi->nbio--;
 		if (unlikely(atomic_read(&wi->err))) {
diff --git a/fs/xfs/linux-2.6/xfs_aops.c b/fs/xfs/linux-2.6/xfs_aops.c
index 7ec89fc..2a515ab 100644
--- a/fs/xfs/linux-2.6/xfs_aops.c
+++ b/fs/xfs/linux-2.6/xfs_aops.c
@@ -891,7 +891,7 @@ xfs_convert_page(
 
 			bdi = inode->i_mapping->backing_dev_info;
 			wbc->nr_to_write--;
-			if (bdi_write_congested(bdi)) {
+			if (bdi_or_group_write_congested(bdi, page)) {
 				wbc->encountered_congestion = 1;
 				done = 1;
 			} else if (wbc->nr_to_write <= 0) {
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index e28800a..9e000f4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -714,7 +714,7 @@ xfs_buf_readahead(
 	struct backing_dev_info *bdi;
 
 	bdi = target->bt_mapping->backing_dev_info;
-	if (bdi_read_congested(bdi))
+	if (bdi_or_group_read_congested(bdi, NULL))
 		return;
 
 	flags |= (XBF_TRYLOCK|XBF_ASYNC|XBF_READ_AHEAD);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..f06fdbf 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -29,7 +29,7 @@ enum bdi_state {
 	BDI_unused,		/* Available bits start here */
 };
 
-typedef int (congested_fn)(void *, int);
+typedef int (congested_fn)(void *, int, struct page *, int);
 
 enum bdi_stat_item {
 	BDI_RECLAIMABLE,
@@ -209,7 +209,7 @@ int writeback_in_progress(struct backing_dev_info *bdi);
 static inline int bdi_congested(struct backing_dev_info *bdi, int bdi_bits)
 {
 	if (bdi->congested_fn)
-		return bdi->congested_fn(bdi->congested_data, bdi_bits);
+		return bdi->congested_fn(bdi->congested_data, bdi_bits, NULL, 0);
 	return (bdi->state & bdi_bits);
 }
 
@@ -229,6 +229,63 @@ static inline int bdi_rw_congested(struct backing_dev_info *bdi)
 				  (1 << BDI_async_congested));
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int bdi_congested_group(struct backing_dev_info *bdi, int bdi_bits,
+				struct page *page);
+
+extern int bdi_read_congested_group(struct backing_dev_info *bdi,
+						struct page *page);
+
+extern int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_write_congested_group(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_rw_congested_group(struct backing_dev_info *bdi,
+					struct page *page);
+#else /* CONFIG_GROUP_IOSCHED */
+static inline int bdi_congested_group(struct backing_dev_info *bdi,
+					int bdi_bits, struct page *page)
+{
+	return bdi_congested(bdi, bdi_bits);
+}
+
+static inline int bdi_read_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi);
+}
+
+static inline int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi);
+}
+
+static inline int bdi_write_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi);
+}
+
+static inline int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi);
+}
+
+static inline int bdi_rw_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_rw_congested(bdi);
+}
+
+#endif /* CONFIG_GROUP_IOSCHED */
+
 void clear_bdi_congested(struct backing_dev_info *bdi, int rw);
 void set_bdi_congested(struct backing_dev_info *bdi, int rw);
 long congestion_wait(int rw, long timeout);
diff --git a/include/linux/biotrack.h b/include/linux/biotrack.h
index 741a8b5..0b4491a 100644
--- a/include/linux/biotrack.h
+++ b/include/linux/biotrack.h
@@ -49,6 +49,7 @@ extern void blkio_cgroup_copy_owner(struct page *page, struct page *opage);
 
 extern struct io_context *get_blkio_cgroup_iocontext(struct bio *bio);
 extern unsigned long get_blkio_cgroup_id(struct bio *bio);
+extern unsigned long get_blkio_cgroup_id_page(struct page *page);
 extern struct cgroup *blkio_cgroup_lookup(int id);
 
 #else	/* CONFIG_CGROUP_BIO */
@@ -92,6 +93,11 @@ static inline unsigned long get_blkio_cgroup_id(struct bio *bio)
 	return 0;
 }
 
+static inline unsigned long get_blkio_cgroup_id_page(struct page *page)
+{
+	return 0;
+}
+
 #endif	/* CONFIG_CGROUP_BLKIO */
 
 #endif /* _LINUX_BIOTRACK_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7fd7d33..45e4cb7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -880,6 +880,11 @@ static inline void blk_set_queue_congested(struct request_queue *q, int rw)
 	set_bdi_congested(&q->backing_dev_info, rw);
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int blk_queue_io_group_congested(struct backing_dev_info *bdi,
+					int bdi_bits, struct page *page);
+#endif
+
 extern void blk_start_queue(struct request_queue *q);
 extern void blk_stop_queue(struct request_queue *q);
 extern void blk_sync_queue(struct request_queue *q);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..cef038d 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -7,6 +7,7 @@
 #include <linux/module.h>
 #include <linux/writeback.h>
 #include <linux/device.h>
+#include "../block/elevator-fq.h"
 
 void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 {
@@ -328,3 +329,64 @@ long congestion_wait(int rw, long timeout)
 }
 EXPORT_SYMBOL(congestion_wait);
 
+/*
+ * With group IO scheduling, there are request descriptors per io group per
+ * queue. So the generic notion of whether a queue is congested or not is
+ * not very accurate. The queue might not be congested but the io group the
+ * request will go into might actually be congested.
+ *
+ * Hence, to get a correct idea of the congestion level, one should query
+ * the io group congestion status on the queue. Pass in the page information,
+ * which is used to determine the io group of the page, and the congestion
+ * status is determined accordingly.
+ *
+ * If page info is not passed, io group is determined from the current task
+ * context.
+ */
+#ifdef CONFIG_GROUP_IOSCHED
+int bdi_congested_group(struct backing_dev_info *bdi, int bdi_bits,
+				struct page *page)
+{
+	if (bdi->congested_fn)
+		return bdi->congested_fn(bdi->congested_data, bdi_bits, page, 1);
+
+	return blk_queue_io_group_congested(bdi, bdi_bits, page);
+}
+EXPORT_SYMBOL(bdi_congested_group);
+
+int bdi_read_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, 1 << BDI_sync_congested, page);
+}
+EXPORT_SYMBOL(bdi_read_congested_group);
+
+/* Checks if either bdi or associated group is read congested */
+int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi) || bdi_read_congested_group(bdi, page);
+}
+EXPORT_SYMBOL(bdi_or_group_read_congested);
+
+int bdi_write_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, 1 << BDI_async_congested, page);
+}
+EXPORT_SYMBOL(bdi_write_congested_group);
+
+/* Checks if either bdi or associated group is write congested */
+int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi) || bdi_write_congested_group(bdi, page);
+}
+EXPORT_SYMBOL(bdi_or_group_write_congested);
+
+int bdi_rw_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, (1 << BDI_sync_congested) |
+				  (1 << BDI_async_congested), page);
+}
+EXPORT_SYMBOL(bdi_rw_congested_group);
+
+#endif /* CONFIG_GROUP_IOSCHED */
diff --git a/mm/biotrack.c b/mm/biotrack.c
index 2baf1f0..f7d8efb 100644
--- a/mm/biotrack.c
+++ b/mm/biotrack.c
@@ -212,6 +212,27 @@ unsigned long get_blkio_cgroup_id(struct bio *bio)
 }
 
 /**
+ * get_blkio_cgroup_id_page() - determine the blkio-cgroup ID
+ * @page:	the &struct page which describes the I/O
+ *
+ * Returns the blkio-cgroup ID of a given page. A return value of zero
+ * means that the page associated with the IO belongs to default_blkio_cgroup.
+ */
+unsigned long get_blkio_cgroup_id_page(struct page *page)
+{
+	struct page_cgroup *pc;
+	unsigned long id = 0;
+
+	pc = lookup_page_cgroup(page);
+	if (pc) {
+		lock_page_cgroup(pc);
+		id = page_cgroup_get_id(pc);
+		unlock_page_cgroup(pc);
+	}
+	return id;
+}
+
+/**
  * get_blkio_cgroup_iocontext() - determine the blkio-cgroup iocontext
  * @bio:	the &struct bio which describe the I/O
  *
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3604c35..26b9e0a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -981,6 +981,17 @@ retry:
 		if (nr_pages == 0)
 			break;
 
+		/*
+		 * If the io group the page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 133b6d5..acd9c57 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -240,7 +240,7 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 			pgoff_t offset, unsigned long nr_to_read)
 {
-	if (bdi_read_congested(mapping->backing_dev_info))
+	if (bdi_or_group_read_congested(mapping->backing_dev_info, NULL))
 		return -1;
 
 	return __do_page_cache_readahead(mapping, filp, offset, nr_to_read, 0);
@@ -485,7 +485,7 @@ page_cache_async_readahead(struct address_space *mapping,
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
 	 */
-	if (bdi_read_congested(mapping->backing_dev_info))
+	if (bdi_or_group_read_congested(mapping->backing_dev_info, NULL))
 		return;
 
 	/* do read-ahead */
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 17/20] io-controller: Per io group bdi congestion interface
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o So far there used to be only one pair of request descriptor queues
  (one for sync and one for async) per device, and the number of requests
  allocated was used to decide whether the associated bdi is congested or not.

  Now with per io group request descriptor infrastructure, there is a pair
  of request descriptor queues per io group per device. So it might happen
  that the overall request queue is not congested but the particular io group
  a bio belongs to is congested.

  Or, it could be the other way around: the group is not congested but the
  overall queue is. This can happen if the user has not properly set the
  request descriptor limits for the queue and groups.
  (q->nr_requests < nr_groups * q->nr_group_requests)

  Hence there is a need for a new interface which can query device congestion
  status per group. This group is determined by the "struct page" the IO will
  be done for. If the page is null, then the group is determined from the
  current task context.

o This patch introduces a new set of functions, bdi_*_congested_group(), which
  take "struct page" as an additional argument. These functions call into the
  block layer and in turn the elevator to find out whether the io group the
  page will go into is congested or not (a usage sketch follows below).

o Currently I have introduced the core functions and migrated most of the
  users, but there might still be some left. This is an ongoing TODO item.

o There are some io_get_io_group() related changes which should be pushed into
  higher patches. Still testing this patch. Will push these changes up in the
  next posting.
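
  As a further illustration (not part of the patch), a read-side caller that
  has no page at hand can pass a NULL page and let the io group be derived
  from the current task's context, which is what the readahead hunks below do.
  The helper and its name here are hypothetical:

	/*
	 * Hypothetical read-side check: with a NULL page the io group is
	 * resolved from the submitting task's cgroup, so readahead is
	 * skipped if either the bdi or the task's io group is congested.
	 */
	static int example_can_readahead(struct address_space *mapping)
	{
		struct backing_dev_info *bdi = mapping->backing_dev_info;

		if (bdi_or_group_read_congested(bdi, NULL))
			return 0;	/* skip readahead for now */

		return 1;
	}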

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-core.c            |   21 +++++
 block/cfq-iosched.c         |    6 +-
 block/elevator-fq.c         |  179 ++++++++++++++++++++++++++++++++++---------
 block/elevator-fq.h         |   10 ++-
 drivers/md/dm-table.c       |   11 ++-
 drivers/md/dm.c             |    5 +-
 drivers/md/dm.h             |    3 +-
 drivers/md/linear.c         |    7 +-
 drivers/md/multipath.c      |    7 +-
 drivers/md/raid0.c          |    6 +-
 drivers/md/raid1.c          |    9 ++-
 drivers/md/raid10.c         |    6 +-
 drivers/md/raid5.c          |    2 +-
 fs/afs/write.c              |    8 ++-
 fs/btrfs/disk-io.c          |    6 +-
 fs/btrfs/extent_io.c        |   12 +++
 fs/btrfs/volumes.c          |    8 ++-
 fs/cifs/file.c              |   11 +++
 fs/ext2/ialloc.c            |    2 +-
 fs/gfs2/ops_address.c       |   12 +++
 fs/nilfs2/segbuf.c          |    3 +-
 fs/xfs/linux-2.6/xfs_aops.c |    2 +-
 fs/xfs/linux-2.6/xfs_buf.c  |    2 +-
 include/linux/backing-dev.h |   61 ++++++++++++++-
 include/linux/biotrack.h    |    6 ++
 include/linux/blkdev.h      |    5 +
 mm/backing-dev.c            |   62 +++++++++++++++
 mm/biotrack.c               |   21 +++++
 mm/page-writeback.c         |   11 +++
 mm/readahead.c              |    4 +-
 30 files changed, 435 insertions(+), 73 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 35e3725..5f16f4a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -99,6 +99,27 @@ void blk_queue_congestion_threshold(struct request_queue *q)
 	q->nr_congestion_off = nr;
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+int blk_queue_io_group_congested(struct backing_dev_info *bdi, int bdi_bits,
+					struct page *page)
+{
+	int ret = 0;
+	struct request_queue *q = bdi->unplug_io_data;
+
+	if (!q || !q->elevator)
+		return bdi_congested(bdi, bdi_bits);
+
+	/* Do we need to hold queue lock? */
+	if (bdi_bits & (1 << BDI_sync_congested))
+		ret |= elv_io_group_congested(q, page, 1);
+
+	if (bdi_bits & (1 << BDI_async_congested))
+		ret |= elv_io_group_congested(q, page, 0);
+
+	return ret;
+}
+#endif
+
 /**
  * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
  * @bdev:	device
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 77bbe6c..b02acf2 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -195,7 +195,7 @@ static struct cfq_queue *cic_bio_to_cfqq(struct cfq_data *cfqd,
 		 * async bio tracking is enabled and we are not caching
 		 * async queue pointer in cic.
 		 */
-		iog = io_get_io_group(cfqd->queue, bio, 0);
+		iog = io_get_io_group_bio(cfqd->queue, bio, 0);
 		if (!iog) {
 			/*
 			 * May be this is first rq/bio and io group has not
@@ -1334,7 +1334,7 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
 	struct io_group *iog = NULL;
 retry:
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1452,7 +1452,7 @@ cfq_get_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue, bio, 1);
+	struct io_group *iog = io_get_io_group_bio(cfqd->queue, bio, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 16f75ad..13c8161 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -42,7 +42,6 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
 int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
 					int force);
-
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
 {
@@ -1087,11 +1086,69 @@ struct request_list *io_group_get_request_list(struct request_queue *q,
 {
 	struct io_group *iog;
 
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 	BUG_ON(!iog);
 	return &iog->rl;
 }
 
+/* Set io group congestion on and off thresholds */
+void elv_io_group_congestion_threshold(struct request_queue *q,
+						struct io_group *iog)
+{
+	int nr;
+
+	nr = q->nr_group_requests - (q->nr_group_requests / 8) + 1;
+	if (nr > q->nr_group_requests)
+		nr = q->nr_group_requests;
+	iog->nr_congestion_on = nr;
+
+	nr = q->nr_group_requests - (q->nr_group_requests / 8)
+			- (q->nr_group_requests / 16) - 1;
+	if (nr < 1)
+		nr = 1;
+	iog->nr_congestion_off = nr;
+}
+
+static inline int elv_is_iog_congested(struct request_queue *q,
+					struct io_group *iog, int sync)
+{
+	if (iog->rl.count[sync] >= iog->nr_congestion_on)
+		return 1;
+	return 0;
+}
+
+/* Determine whether the io group the page maps to is congested or not */
+int elv_io_group_congested(struct request_queue *q, struct page *page, int sync)
+{
+	struct io_group *iog;
+	int ret = 0;
+
+	rcu_read_lock();
+
+	iog = io_get_io_group(q, page, 0);
+
+	if (!iog) {
+		/*
+		 * Either cgroup got deleted or this is first request in the
+		 * group and associated io group object has not been created
+		 * yet. Map it to root group.
+		 *
+		 * TODO: Fix the case of group not created yet.
+		 */
+		iog = q->elevator->efqd.root_group;
+	}
+
+	ret = elv_is_iog_congested(q, iog, sync);
+	if (ret)
+		elv_log_iog(&q->elevator->efqd, iog, "iog congested=%d sync=%d"
+			" rl.count[sync]=%d nr_group_requests=%d",
+			ret, sync, iog->rl.count[sync], q->nr_group_requests);
+
+	rcu_read_unlock();
+	return ret;
+}
+
+
 /*
  * Search the bfq_group for bfqd into the hash table (by now only a list)
  * of bgrp.  Must be called under rcu_read_lock().
@@ -1265,11 +1322,13 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
  * to the root has already an allocated group on @bfqd.
  */
 struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
-					struct cgroup *cgroup, struct bio *bio)
+					struct cgroup *cgroup)
 {
 	struct io_cgroup *iocg;
 	struct io_group *iog, *leaf = NULL, *prev = NULL;
 	gfp_t flags = GFP_ATOMIC |  __GFP_ZERO;
+	unsigned int major, minor;
+	struct backing_dev_info *bdi = &q->backing_dev_info;
 
 	for (; cgroup != NULL; cgroup = cgroup->parent) {
 		iocg = cgroup_to_io_cgroup(cgroup);
@@ -1308,6 +1367,7 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		elv_get_iog(iog);
 
 		blk_init_request_list(&iog->rl);
+		elv_io_group_congestion_threshold(q, iog);
 
 		if (leaf == NULL) {
 			leaf = iog;
@@ -1412,7 +1472,7 @@ void io_group_chain_link(struct request_queue *q, void *key,
  */
 struct io_group *io_find_alloc_group(struct request_queue *q,
 			struct cgroup *cgroup, struct elv_fq_data *efqd,
-			int create, struct bio *bio)
+			int create)
 {
 	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
 	struct io_group *iog = NULL;
@@ -1431,7 +1491,7 @@ struct io_group *io_find_alloc_group(struct request_queue *q,
 	if (iog != NULL || !create)
 		goto end;
 
-	iog = io_group_chain_alloc(q, key, cgroup, bio);
+	iog = io_group_chain_alloc(q, key, cgroup);
 	if (iog != NULL)
 		io_group_chain_link(q, key, cgroup, iog, efqd);
 
@@ -1440,46 +1500,60 @@ end:
 	return iog;
 }
 
-/* Map a bio to respective cgroup. Null return means, map it to root cgroup */
-static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
+/* Map a page to respective cgroup. Null return means, map it to root cgroup */
+static inline struct cgroup *get_cgroup_from_page(struct page *page)
 {
 	unsigned long bio_cgroup_id;
 	struct cgroup *cgroup;
 
-	/* blk_get_request can reach here without passing a bio */
-	if (!bio)
+	bio_cgroup_id = get_blkio_cgroup_id_page(page);
+
+	if (!bio_cgroup_id)
 		return NULL;
 
+	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
+	return cgroup;
+}
+
+
+struct io_group *io_get_io_group_bio(struct request_queue *q, struct bio *bio,
+					int create)
+{
+	struct page *page = NULL;
+
+	/*
+	 * Determine the group from task context. Even calls from
+	 * blk_get_request() which don't have any bio info will be mapped
+	 * to the task's group
+	 */
+	if (!bio)
+		goto sync;
+
 	if (bio_barrier(bio)) {
 		/*
 		 * Map barrier requests to root group. May be more special
 		 * bio cases should come here
 		 */
-		return NULL;
+		return q->elevator->efqd.root_group;
 	}
 
-#ifdef CONFIG_TRACK_ASYNC_CONTEXT
-	if (elv_bio_sync(bio)) {
-		/* sync io. Determine cgroup from submitting task context. */
-		cgroup = task_cgroup(current, io_subsys_id);
-		return cgroup;
-	}
+	/* Map the sync bio to the right group using task context */
+	if (elv_bio_sync(bio))
+		goto sync;
 
-	/* Async io. Determine cgroup from with cgroup id stored in page */
-	bio_cgroup_id = get_blkio_cgroup_id(bio);
-
-	if (!bio_cgroup_id)
-		return NULL;
-
-	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
-#else
-	cgroup = task_cgroup(current, io_subsys_id);
+#ifndef CONFIG_TRACK_ASYNC_CONTEXT
+	goto sync;
 #endif
-	return cgroup;
+	/* Determine the group from info stored in page */
+	page = bio_iovec_idx(bio, 0)->bv_page;
+	return io_get_io_group(q, page, create);
+sync:
+	return io_get_io_group(q, NULL, create);
 }
+EXPORT_SYMBOL(io_get_io_group_bio);
 
 /*
- * Find the io group bio belongs to.
+ * Find the io group page belongs to.
  * If "create" is set, io group is created if it is not already present.
  *
  * Note: This function should be called with queue lock held. It returns
@@ -1488,22 +1562,27 @@ static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
  * needs to get hold of queue lock). So if somebody needs to use group
  * pointer even after dropping queue lock, take a reference to the group
  * before dropping queue lock.
+ *
+ * One can call it without queue lock with rcu read lock held for browsing
+ * through the groups.
  */
-struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+struct io_group *io_get_io_group(struct request_queue *q, struct page *page,
 					int create)
 {
 	struct cgroup *cgroup;
 	struct io_group *iog;
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 
-	assert_spin_locked(q->queue_lock);
+
+	if (create)
+		assert_spin_locked(q->queue_lock);
 
 	rcu_read_lock();
 
-	if (!bio)
+	if (!page)
 		cgroup = task_cgroup(current, io_subsys_id);
 	else
-		cgroup = get_cgroup_from_bio(bio);
+		cgroup = get_cgroup_from_page(page);
 
 	if (!cgroup) {
 		if (create)
@@ -1518,7 +1597,7 @@ struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
 		goto out;
 	}
 
-	iog = io_find_alloc_group(q, cgroup, efqd, create, bio);
+	iog = io_find_alloc_group(q, cgroup, efqd, create);
 	if (!iog) {
 		if (create)
 			iog = efqd->root_group;
@@ -1570,6 +1649,7 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
 
 	blk_init_request_list(&iog->rl);
+	elv_io_group_congestion_threshold(q, iog);
 
 	iocg = &io_root_cgroup;
 	spin_lock_irq(&iocg->lock);
@@ -1578,6 +1658,10 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	iog->iocg_id = css_id(&iocg->css);
 	spin_unlock_irq(&iocg->lock);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	io_group_path(iog, iog->path, sizeof(iog->path));
+#endif
+
 	return iog;
 }
 
@@ -1670,6 +1754,14 @@ void iocg_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
 	task_unlock(tsk);
 }
 
+static void io_group_free_rcu(struct rcu_head *head)
+{
+	struct io_group *iog;
+
+	iog = container_of(head, struct io_group, rcu_head);
+	kfree(iog);
+}
+
 /*
  * This cleanup function does the last bit of things to destroy cgroup.
  * It should only get called after io_destroy_group has been invoked.
@@ -1693,7 +1785,13 @@ void io_group_cleanup(struct io_group *iog)
 	BUG_ON(entity != NULL && entity->tree != NULL);
 
 	iog->iocg_id = 0;
-	kfree(iog);
+
+	/*
+	 * Wait for any rcu readers to exit before freeing up the group.
+	 * Primarily useful when io_get_io_group() is called without queue
+	 * lock to access some group data from bdi_congested_group() path.
+	 */
+	call_rcu(&iog->rcu_head, io_group_free_rcu);
 }
 
 void elv_put_iog(struct io_group *iog)
@@ -1933,7 +2031,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 		return 1;
 
 	/* Determine the io group of the bio submitting task */
-	iog = io_get_io_group(q, bio, 0);
+	iog = io_get_io_group_bio(q, bio, 0);
 	if (!iog) {
 		/* May be task belongs to a differet cgroup for which io
 		 * group has not been setup yet. */
@@ -1973,7 +2071,7 @@ int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
 
 retry:
 	/* Determine the io group request belongs to */
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 	BUG_ON(!iog);
 
 	/* Get the iosched queue */
@@ -2066,7 +2164,7 @@ struct io_queue *elv_lookup_ioq_bio(struct request_queue *q, struct bio *bio)
 	struct io_group *iog;
 
 	/* Determine the io group and io queue of the bio submitting task */
-	iog = io_get_io_group(q, bio, 0);
+	iog = io_get_io_group_bio(q, bio, 0);
 	if (!iog) {
 		/* May be bio belongs to a cgroup for which io group has
 		 * not been setup yet. */
@@ -2133,7 +2231,14 @@ void io_free_root_group(struct elevator_queue *e)
 	kfree(iog);
 }
 
-struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+struct io_group *io_get_io_group_bio(struct request_queue *q, struct bio *bio,
+					int create)
+{
+	return q->elevator->efqd.root_group;
+}
+EXPORT_SYMBOL(io_get_io_group_bio);
+
+struct io_group *io_get_io_group(struct request_queue *q, struct page *page,
 						int create)
 {
 	return q->elevator->efqd.root_group;
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index c2f71d7..d60105f 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -258,8 +258,13 @@ struct io_group {
 	/* Single ioq per group, used for noop, deadline, anticipatory */
 	struct io_queue *ioq;
 
+	/* io group congestion on and off threshold for request descriptors */
+	unsigned int nr_congestion_on;
+	unsigned int nr_congestion_off;
+
 	/* request list associated with the group */
 	struct request_list rl;
+	struct rcu_head rcu_head;
 };
 
 /**
@@ -540,7 +545,8 @@ extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
 						struct bio *bio);
 extern struct request_list *io_group_get_request_list(struct request_queue *q,
 						struct bio *bio);
-
+extern int elv_io_group_congested(struct request_queue *q, struct page *page,
+					int sync);
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
 {
@@ -672,6 +678,8 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
 extern struct io_group *io_get_io_group(struct request_queue *q,
+					struct page *page, int create);
+extern struct io_group *io_get_io_group_bio(struct request_queue *q,
 					struct bio *bio, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 429b50b..8fe04f1 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1000,7 +1000,8 @@ int dm_table_resume_targets(struct dm_table *t)
 	return 0;
 }
 
-int dm_table_any_congested(struct dm_table *t, int bdi_bits)
+int dm_table_any_congested(struct dm_table *t, int bdi_bits, struct page *page,
+				int group)
 {
 	struct dm_dev_internal *dd;
 	struct list_head *devices = dm_table_get_devices(t);
@@ -1010,9 +1011,11 @@ int dm_table_any_congested(struct dm_table *t, int bdi_bits)
 		struct request_queue *q = bdev_get_queue(dd->dm_dev.bdev);
 		char b[BDEVNAME_SIZE];
 
-		if (likely(q))
-			r |= bdi_congested(&q->backing_dev_info, bdi_bits);
-		else
+		if (likely(q)) {
+			struct backing_dev_info *bdi = &q->backing_dev_info;
+			r |= group ? bdi_congested_group(bdi, bdi_bits, page)
+				: bdi_congested(bdi, bdi_bits);
+		} else
 			DMWARN_LIMIT("%s: any_congested: nonexistent device %s",
 				     dm_device_name(t->md),
 				     bdevname(dd->dm_dev.bdev, b));
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 424f7b0..ef12cee 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -994,7 +994,8 @@ static void dm_unplug_all(struct request_queue *q)
 	}
 }
 
-static int dm_any_congested(void *congested_data, int bdi_bits)
+static int dm_any_congested(void *congested_data, int bdi_bits,
+					struct page *page, int group)
 {
 	int r = bdi_bits;
 	struct mapped_device *md = congested_data;
@@ -1003,7 +1004,7 @@ static int dm_any_congested(void *congested_data, int bdi_bits)
 	if (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
 		map = dm_get_table(md);
 		if (map) {
-			r = dm_table_any_congested(map, bdi_bits);
+			r = dm_table_any_congested(map, bdi_bits, page, group);
 			dm_table_put(map);
 		}
 	}
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index a31506d..7efe4b4 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -46,7 +46,8 @@ struct list_head *dm_table_get_devices(struct dm_table *t);
 void dm_table_presuspend_targets(struct dm_table *t);
 void dm_table_postsuspend_targets(struct dm_table *t);
 int dm_table_resume_targets(struct dm_table *t);
-int dm_table_any_congested(struct dm_table *t, int bdi_bits);
+int dm_table_any_congested(struct dm_table *t, int bdi_bits, struct page *page,
+				int group);
 
 /*
  * To check the return value from dm_table_find_target().
diff --git a/drivers/md/linear.c b/drivers/md/linear.c
index 7a36e38..ddf43dd 100644
--- a/drivers/md/linear.c
+++ b/drivers/md/linear.c
@@ -88,7 +88,7 @@ static void linear_unplug(struct request_queue *q)
 	}
 }
 
-static int linear_congested(void *data, int bits)
+static int linear_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	linear_conf_t *conf = mddev_to_conf(mddev);
@@ -96,7 +96,10 @@ static int linear_congested(void *data, int bits)
 
 	for (i = 0; i < mddev->raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);
-		ret |= bdi_congested(&q->backing_dev_info, bits);
+		struct backing_dev_info *bdi = &q->backing_dev_info;
+
+		ret |= group ? bdi_congested_group(bdi, bits, page) :
+			bdi_congested(bdi, bits);
 	}
 	return ret;
 }
diff --git a/drivers/md/multipath.c b/drivers/md/multipath.c
index 41ced0c..9f25b21 100644
--- a/drivers/md/multipath.c
+++ b/drivers/md/multipath.c
@@ -192,7 +192,8 @@ static void multipath_status (struct seq_file *seq, mddev_t *mddev)
 	seq_printf (seq, "]");
 }
 
-static int multipath_congested(void *data, int bits)
+static int multipath_congested(void *data, int bits, struct page *page,
+					int group)
 {
 	mddev_t *mddev = data;
 	multipath_conf_t *conf = mddev_to_conf(mddev);
@@ -203,8 +204,10 @@ static int multipath_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
-			ret |= bdi_congested(&q->backing_dev_info, bits);
+			ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 			/* Just like multipath_map, we just check the
 			 * first available device
 			 */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index c08d755..eb1d33a 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -37,7 +37,7 @@ static void raid0_unplug(struct request_queue *q)
 	}
 }
 
-static int raid0_congested(void *data, int bits)
+static int raid0_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	raid0_conf_t *conf = mddev_to_conf(mddev);
@@ -46,8 +46,10 @@ static int raid0_congested(void *data, int bits)
 
 	for (i = 0; i < mddev->raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(devlist[i]->bdev);
+		struct backing_dev_info *bdi = &q->backing_dev_info;
 
-		ret |= bdi_congested(&q->backing_dev_info, bits);
+		ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 	}
 	return ret;
 }
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 36df910..cdd268e 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -570,7 +570,7 @@ static void raid1_unplug(struct request_queue *q)
 	md_wakeup_thread(mddev->thread);
 }
 
-static int raid1_congested(void *data, int bits)
+static int raid1_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	conf_t *conf = mddev_to_conf(mddev);
@@ -581,14 +581,17 @@ static int raid1_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
 			/* Note the '|| 1' - when read_balance prefers
 			 * non-congested targets, it can be removed
 			 */
 			if ((bits & (1<<BDI_async_congested)) || 1)
-				ret |= bdi_congested(&q->backing_dev_info, bits);
+				ret |= group ? bdi_congested_group(bdi, bits,
+					page) : bdi_congested(bdi, bits);
 			else
-				ret &= bdi_congested(&q->backing_dev_info, bits);
+				ret &= group ? bdi_congested_group(bdi, bits,
+					page) : bdi_congested(bdi, bits);
 		}
 	}
 	rcu_read_unlock();
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 499620a..49f41e3 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -625,7 +625,7 @@ static void raid10_unplug(struct request_queue *q)
 	md_wakeup_thread(mddev->thread);
 }
 
-static int raid10_congested(void *data, int bits)
+static int raid10_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	conf_t *conf = mddev_to_conf(mddev);
@@ -636,8 +636,10 @@ static int raid10_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
-			ret |= bdi_congested(&q->backing_dev_info, bits);
+			ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 		}
 	}
 	rcu_read_unlock();
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index bb37fb1..40f76a4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3324,7 +3324,7 @@ static void raid5_unplug_device(struct request_queue *q)
 	unplug_slaves(mddev);
 }
 
-static int raid5_congested(void *data, int bits)
+static int raid5_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	raid5_conf_t *conf = mddev_to_conf(mddev);
diff --git a/fs/afs/write.c b/fs/afs/write.c
index c2e7a7f..aa8b359 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -455,7 +455,7 @@ int afs_writepage(struct page *page, struct writeback_control *wbc)
 	}
 
 	wbc->nr_to_write -= ret;
-	if (wbc->nonblocking && bdi_write_congested(bdi))
+	if (wbc->nonblocking && bdi_or_group_write_congested(bdi, page))
 		wbc->encountered_congestion = 1;
 
 	_leave(" = 0");
@@ -491,6 +491,12 @@ static int afs_writepages_region(struct address_space *mapping,
 			return 0;
 		}
 
+		if (wbc->nonblocking && bdi_write_congested_group(bdi, page)) {
+			wbc->encountered_congestion = 1;
+			page_cache_release(page);
+			break;
+		}
+
 		/* at this point we hold neither mapping->tree_lock nor lock on
 		 * the page itself: the page may be truncated or invalidated
 		 * (changing page->mapping to NULL), or even swizzled back from
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..245d8f4 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1250,7 +1250,8 @@ struct btrfs_root *btrfs_read_fs_root(struct btrfs_fs_info *fs_info,
 	return root;
 }
 
-static int btrfs_congested_fn(void *congested_data, int bdi_bits)
+static int btrfs_congested_fn(void *congested_data, int bdi_bits,
+					struct page *page, int group)
 {
 	struct btrfs_fs_info *info = (struct btrfs_fs_info *)congested_data;
 	int ret = 0;
@@ -1261,7 +1262,8 @@ static int btrfs_congested_fn(void *congested_data, int bdi_bits)
 		if (!device->bdev)
 			continue;
 		bdi = blk_get_backing_dev_info(device->bdev);
-		if (bdi && bdi_congested(bdi, bdi_bits)) {
+		if (bdi && (group ? bdi_congested_group(bdi, bdi_bits, page) :
+		    bdi_congested(bdi, bdi_bits))) {
 			ret = 1;
 			break;
 		}
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index fe9eb99..fac4299 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2358,6 +2358,18 @@ retry:
 		unsigned i;
 
 		scanned = 1;
+
+		/*
+		 * If the io group the page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a6d35b0..5b19141 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -163,6 +163,7 @@ static noinline int run_scheduled_bios(struct btrfs_device *device)
 	unsigned long num_sync_run;
 	unsigned long limit;
 	unsigned long last_waited = 0;
+	struct page *page = NULL;
 
 	bdi = blk_get_backing_dev_info(device->bdev);
 	fs_info = device->dev_root->fs_info;
@@ -265,8 +266,11 @@ loop_lock:
 		 * is now congested.  Back off and let other work structs
 		 * run instead
 		 */
-		if (pending && bdi_write_congested(bdi) && num_run > 16 &&
-		    fs_info->fs_devices->open_devices > 1) {
+		if (pending)
+			page = bio_iovec_idx(pending, 0)->bv_page;
+
+		if (pending && bdi_or_group_write_congested(bdi, page) &&
+		    num_run > 16 && fs_info->fs_devices->open_devices > 1) {
 			struct io_context *ioc;
 
 			ioc = current->io_context;
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 302ea15..71d3fb5 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1466,6 +1466,17 @@ retry:
 		n_iov = 0;
 		bytes_to_write = 0;
 
+		/*
+		 * If the io group the page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking &&
+		    bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			page = pvec.pages[i];
 			/*
diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
index 15387c9..090a961 100644
--- a/fs/ext2/ialloc.c
+++ b/fs/ext2/ialloc.c
@@ -179,7 +179,7 @@ static void ext2_preread_inode(struct inode *inode)
 	struct backing_dev_info *bdi;
 
 	bdi = inode->i_mapping->backing_dev_info;
-	if (bdi_read_congested(bdi))
+	if (bdi_or_group_read_congested(bdi, NULL))
 		return;
 	if (bdi_write_congested(bdi))
 		return;
diff --git a/fs/gfs2/ops_address.c b/fs/gfs2/ops_address.c
index a6dde17..b352f19 100644
--- a/fs/gfs2/ops_address.c
+++ b/fs/gfs2/ops_address.c
@@ -372,6 +372,18 @@ retry:
 					       PAGECACHE_TAG_DIRTY,
 					       min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
 		scanned = 1;
+
+		/*
+		 * If the io group the page belongs to is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, end);
 		if (ret)
 			done = 1;
diff --git a/fs/nilfs2/segbuf.c b/fs/nilfs2/segbuf.c
index 1e68821..abcb161 100644
--- a/fs/nilfs2/segbuf.c
+++ b/fs/nilfs2/segbuf.c
@@ -267,8 +267,9 @@ static int nilfs_submit_seg_bio(struct nilfs_write_info *wi, int mode)
 {
 	struct bio *bio = wi->bio;
 	int err;
+	struct page *page = bio_iovec_idx(bio, 0)->bv_page;
 
-	if (wi->nbio > 0 && bdi_write_congested(wi->bdi)) {
+	if (wi->nbio > 0 && bdi_or_group_write_congested(wi->bdi, page)) {
 		wait_for_completion(&wi->bio_event);
 		wi->nbio--;
 		if (unlikely(atomic_read(&wi->err))) {
diff --git a/fs/xfs/linux-2.6/xfs_aops.c b/fs/xfs/linux-2.6/xfs_aops.c
index 7ec89fc..2a515ab 100644
--- a/fs/xfs/linux-2.6/xfs_aops.c
+++ b/fs/xfs/linux-2.6/xfs_aops.c
@@ -891,7 +891,7 @@ xfs_convert_page(
 
 			bdi = inode->i_mapping->backing_dev_info;
 			wbc->nr_to_write--;
-			if (bdi_write_congested(bdi)) {
+			if (bdi_or_group_write_congested(bdi, page)) {
 				wbc->encountered_congestion = 1;
 				done = 1;
 			} else if (wbc->nr_to_write <= 0) {
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index e28800a..9e000f4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -714,7 +714,7 @@ xfs_buf_readahead(
 	struct backing_dev_info *bdi;
 
 	bdi = target->bt_mapping->backing_dev_info;
-	if (bdi_read_congested(bdi))
+	if (bdi_or_group_read_congested(bdi, NULL))
 		return;
 
 	flags |= (XBF_TRYLOCK|XBF_ASYNC|XBF_READ_AHEAD);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..f06fdbf 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -29,7 +29,7 @@ enum bdi_state {
 	BDI_unused,		/* Available bits start here */
 };
 
-typedef int (congested_fn)(void *, int);
+typedef int (congested_fn)(void *, int, struct page *, int);
 
 enum bdi_stat_item {
 	BDI_RECLAIMABLE,
@@ -209,7 +209,7 @@ int writeback_in_progress(struct backing_dev_info *bdi);
 static inline int bdi_congested(struct backing_dev_info *bdi, int bdi_bits)
 {
 	if (bdi->congested_fn)
-		return bdi->congested_fn(bdi->congested_data, bdi_bits);
+		return bdi->congested_fn(bdi->congested_data, bdi_bits, NULL, 0);
 	return (bdi->state & bdi_bits);
 }
 
@@ -229,6 +229,63 @@ static inline int bdi_rw_congested(struct backing_dev_info *bdi)
 				  (1 << BDI_async_congested));
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int bdi_congested_group(struct backing_dev_info *bdi, int bdi_bits,
+				struct page *page);
+
+extern int bdi_read_congested_group(struct backing_dev_info *bdi,
+						struct page *page);
+
+extern int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_write_congested_group(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_rw_congested_group(struct backing_dev_info *bdi,
+					struct page *page);
+#else /* CONFIG_GROUP_IOSCHED */
+static inline int bdi_congested_group(struct backing_dev_info *bdi,
+					int bdi_bits, struct page *page)
+{
+	return bdi_congested(bdi, bdi_bits);
+}
+
+static inline int bdi_read_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi);
+}
+
+static inline int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi);
+}
+
+static inline int bdi_write_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi);
+}
+
+static inline int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi);
+}
+
+static inline int bdi_rw_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_rw_congested(bdi);
+}
+
+#endif /* CONFIG_GROUP_IOSCHED */
+
 void clear_bdi_congested(struct backing_dev_info *bdi, int rw);
 void set_bdi_congested(struct backing_dev_info *bdi, int rw);
 long congestion_wait(int rw, long timeout);
diff --git a/include/linux/biotrack.h b/include/linux/biotrack.h
index 741a8b5..0b4491a 100644
--- a/include/linux/biotrack.h
+++ b/include/linux/biotrack.h
@@ -49,6 +49,7 @@ extern void blkio_cgroup_copy_owner(struct page *page, struct page *opage);
 
 extern struct io_context *get_blkio_cgroup_iocontext(struct bio *bio);
 extern unsigned long get_blkio_cgroup_id(struct bio *bio);
+extern unsigned long get_blkio_cgroup_id_page(struct page *page);
 extern struct cgroup *blkio_cgroup_lookup(int id);
 
 #else	/* CONFIG_CGROUP_BIO */
@@ -92,6 +93,11 @@ static inline unsigned long get_blkio_cgroup_id(struct bio *bio)
 	return 0;
 }
 
+static inline unsigned long get_blkio_cgroup_id_page(struct page *page)
+{
+	return 0;
+}
+
 #endif	/* CONFIG_CGROUP_BLKIO */
 
 #endif /* _LINUX_BIOTRACK_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7fd7d33..45e4cb7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -880,6 +880,11 @@ static inline void blk_set_queue_congested(struct request_queue *q, int rw)
 	set_bdi_congested(&q->backing_dev_info, rw);
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int blk_queue_io_group_congested(struct backing_dev_info *bdi,
+					int bdi_bits, struct page *page);
+#endif
+
 extern void blk_start_queue(struct request_queue *q);
 extern void blk_stop_queue(struct request_queue *q);
 extern void blk_sync_queue(struct request_queue *q);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..cef038d 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -7,6 +7,7 @@
 #include <linux/module.h>
 #include <linux/writeback.h>
 #include <linux/device.h>
+#include "../block/elevator-fq.h"
 
 void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 {
@@ -328,3 +329,64 @@ long congestion_wait(int rw, long timeout)
 }
 EXPORT_SYMBOL(congestion_wait);
 
+/*
+ * With group IO scheduling, there are request descriptors per io group per
+ * queue. So the generic notion of whether a queue is congested or not is
+ * not very accurate. The queue might not be congested, but the io group
+ * the request will go into might actually be congested.
+ *
+ * Hence, to get a correct idea of the congestion level, one should query
+ * the io group congestion status on the queue. Pass in the page the IO
+ * will be done for; it is used to determine the io group, and the
+ * congestion status is reported accordingly.
+ *
+ * If no page is passed, the io group is determined from the current task
+ * context.
+ */
+#ifdef CONFIG_GROUP_IOSCHED
+int bdi_congested_group(struct backing_dev_info *bdi, int bdi_bits,
+				struct page *page)
+{
+	if (bdi->congested_fn)
+		return bdi->congested_fn(bdi->congested_data, bdi_bits, page, 1);
+
+	return blk_queue_io_group_congested(bdi, bdi_bits, page);
+}
+EXPORT_SYMBOL(bdi_congested_group);
+
+int bdi_read_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, 1 << BDI_sync_congested, page);
+}
+EXPORT_SYMBOL(bdi_read_congested_group);
+
+/* Checks if either bdi or associated group is read congested */
+int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi) || bdi_read_congested_group(bdi, page);
+}
+EXPORT_SYMBOL(bdi_or_group_read_congested);
+
+int bdi_write_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, 1 << BDI_async_congested, page);
+}
+EXPORT_SYMBOL(bdi_write_congested_group);
+
+/* Checks if either bdi or associated group is write congested */
+int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi) || bdi_write_congested_group(bdi, page);
+}
+EXPORT_SYMBOL(bdi_or_group_write_congested);
+
+int bdi_rw_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, (1 << BDI_sync_congested) |
+				  (1 << BDI_async_congested), page);
+}
+EXPORT_SYMBOL(bdi_rw_congested_group);
+
+#endif /* CONFIG_GROUP_IOSCHED */
diff --git a/mm/biotrack.c b/mm/biotrack.c
index 2baf1f0..f7d8efb 100644
--- a/mm/biotrack.c
+++ b/mm/biotrack.c
@@ -212,6 +212,27 @@ unsigned long get_blkio_cgroup_id(struct bio *bio)
 }
 
 /**
+ * get_blkio_cgroup_id_page() - determine the blkio-cgroup ID
+ * @page:	the &struct page which describes the I/O
+ *
+ * Returns the blkio-cgroup ID of a given page. A return value of zero
+ * means that the page associated with the IO belongs to default_blkio_cgroup.
+ */
+unsigned long get_blkio_cgroup_id_page(struct page *page)
+{
+	struct page_cgroup *pc;
+	unsigned long id = 0;
+
+	pc = lookup_page_cgroup(page);
+	if (pc) {
+		lock_page_cgroup(pc);
+		id = page_cgroup_get_id(pc);
+		unlock_page_cgroup(pc);
+	}
+	return id;
+}
+
+/**
  * get_blkio_cgroup_iocontext() - determine the blkio-cgroup iocontext
  * @bio:	the &struct bio which describe the I/O
  *
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3604c35..26b9e0a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -981,6 +981,17 @@ retry:
 		if (nr_pages == 0)
 			break;
 
+		/*
+		 * If the io group the page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 133b6d5..acd9c57 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -240,7 +240,7 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 			pgoff_t offset, unsigned long nr_to_read)
 {
-	if (bdi_read_congested(mapping->backing_dev_info))
+	if (bdi_or_group_read_congested(mapping->backing_dev_info, NULL))
 		return -1;
 
 	return __do_page_cache_readahead(mapping, filp, offset, nr_to_read, 0);
@@ -485,7 +485,7 @@ page_cache_async_readahead(struct address_space *mapping,
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
 	 */
-	if (bdi_read_congested(mapping->backing_dev_info))
+	if (bdi_or_group_read_congested(mapping->backing_dev_info, NULL))
 		return;
 
 	/* do read-ahead */
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 17/20] io-controller: Per io group bdi congestion interface
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o So far there used to be only one pair of request descriptor queues (one for
  sync and one for async) per device, and the number of requests allocated was
  used to decide whether the associated bdi is congested or not.

  Now, with the per io group request descriptor infrastructure, there is a
  pair of request descriptor queues per io group per device. So it might
  happen that the overall request queue is not congested but the particular
  io group a bio belongs to is congested.

  Or, it could be the other way around: the group is not congested but the
  overall queue is. This can happen if the user has not set the request
  descriptor limits for the queue and the groups consistently.
  (q->nr_requests < nr_groups * q->nr_group_requests)

  Hence there is a need for a new interface which can query device congestion
  status per group. The group is determined from the "struct page" the IO will
  be done for. If the page is NULL, the group is determined from the current
  task context.

o This patch introduces a new set of functions, bdi_*_congested_group(), which
  take "struct page" as an additional argument. These functions call into the
  block layer and in turn the elevator to find out whether the io group the
  page will go into is congested or not.

o Currently I have introduced the core functions and migrated most of the
  users, but there might still be some left. This is an ongoing TODO item.

o There are some io_get_io_group() related changes which should be pushed into
  higher patches. I am still testing this patch and will push these changes up
  in the next posting.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-core.c            |   21 +++++
 block/cfq-iosched.c         |    6 +-
 block/elevator-fq.c         |  179 ++++++++++++++++++++++++++++++++++---------
 block/elevator-fq.h         |   10 ++-
 drivers/md/dm-table.c       |   11 ++-
 drivers/md/dm.c             |    5 +-
 drivers/md/dm.h             |    3 +-
 drivers/md/linear.c         |    7 +-
 drivers/md/multipath.c      |    7 +-
 drivers/md/raid0.c          |    6 +-
 drivers/md/raid1.c          |    9 ++-
 drivers/md/raid10.c         |    6 +-
 drivers/md/raid5.c          |    2 +-
 fs/afs/write.c              |    8 ++-
 fs/btrfs/disk-io.c          |    6 +-
 fs/btrfs/extent_io.c        |   12 +++
 fs/btrfs/volumes.c          |    8 ++-
 fs/cifs/file.c              |   11 +++
 fs/ext2/ialloc.c            |    2 +-
 fs/gfs2/ops_address.c       |   12 +++
 fs/nilfs2/segbuf.c          |    3 +-
 fs/xfs/linux-2.6/xfs_aops.c |    2 +-
 fs/xfs/linux-2.6/xfs_buf.c  |    2 +-
 include/linux/backing-dev.h |   61 ++++++++++++++-
 include/linux/biotrack.h    |    6 ++
 include/linux/blkdev.h      |    5 +
 mm/backing-dev.c            |   62 +++++++++++++++
 mm/biotrack.c               |   21 +++++
 mm/page-writeback.c         |   11 +++
 mm/readahead.c              |    4 +-
 30 files changed, 435 insertions(+), 73 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 35e3725..5f16f4a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -99,6 +99,27 @@ void blk_queue_congestion_threshold(struct request_queue *q)
 	q->nr_congestion_off = nr;
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+int blk_queue_io_group_congested(struct backing_dev_info *bdi, int bdi_bits,
+					struct page *page)
+{
+	int ret = 0;
+	struct request_queue *q = bdi->unplug_io_data;
+
+	if (!q || !q->elevator)
+		return bdi_congested(bdi, bdi_bits);
+
+	/* Do we need to hold queue lock? */
+	if (bdi_bits & (1 << BDI_sync_congested))
+		ret |= elv_io_group_congested(q, page, 1);
+
+	if (bdi_bits & (1 << BDI_async_congested))
+		ret |= elv_io_group_congested(q, page, 0);
+
+	return ret;
+}
+#endif
+
 /**
  * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
  * @bdev:	device
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 77bbe6c..b02acf2 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -195,7 +195,7 @@ static struct cfq_queue *cic_bio_to_cfqq(struct cfq_data *cfqd,
 		 * async bio tracking is enabled and we are not caching
 		 * async queue pointer in cic.
 		 */
-		iog = io_get_io_group(cfqd->queue, bio, 0);
+		iog = io_get_io_group_bio(cfqd->queue, bio, 0);
 		if (!iog) {
 			/*
 			 * May be this is first rq/bio and io group has not
@@ -1334,7 +1334,7 @@ cfq_find_alloc_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 	struct io_queue *ioq = NULL, *new_ioq = NULL;
 	struct io_group *iog = NULL;
 retry:
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 
 	cic = cfq_cic_lookup(cfqd, ioc);
 	/* cic always exists here */
@@ -1452,7 +1452,7 @@ cfq_get_queue(struct cfq_data *cfqd, struct bio *bio, int is_sync,
 	const int ioprio_class = task_ioprio_class(ioc);
 	struct cfq_queue *async_cfqq = NULL;
 	struct cfq_queue *cfqq = NULL;
-	struct io_group *iog = io_get_io_group(cfqd->queue, bio, 1);
+	struct io_group *iog = io_get_io_group_bio(cfqd->queue, bio, 1);
 
 	if (!is_sync) {
 		async_cfqq = io_group_async_queue_prio(iog, ioprio_class,
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 16f75ad..13c8161 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -42,7 +42,6 @@ struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
 void elv_release_ioq(struct elevator_queue *eq, struct io_queue **ioq_ptr);
 int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
 					int force);
-
 static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
 					unsigned short prio)
 {
@@ -1087,11 +1086,69 @@ struct request_list *io_group_get_request_list(struct request_queue *q,
 {
 	struct io_group *iog;
 
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 	BUG_ON(!iog);
 	return &iog->rl;
 }
 
+/* Set io group congestion on and off thresholds */
+void elv_io_group_congestion_threshold(struct request_queue *q,
+						struct io_group *iog)
+{
+	int nr;
+
+	nr = q->nr_group_requests - (q->nr_group_requests / 8) + 1;
+	if (nr > q->nr_group_requests)
+		nr = q->nr_group_requests;
+	iog->nr_congestion_on = nr;
+
+	nr = q->nr_group_requests - (q->nr_group_requests / 8)
+			- (q->nr_group_requests / 16) - 1;
+	if (nr < 1)
+		nr = 1;
+	iog->nr_congestion_off = nr;
+}
+
+static inline int elv_is_iog_congested(struct request_queue *q,
+					struct io_group *iog, int sync)
+{
+	if (iog->rl.count[sync] >= iog->nr_congestion_on)
+		return 1;
+	return 0;
+}
+
+/* Determine whether the io group the page maps to is congested or not */
+int elv_io_group_congested(struct request_queue *q, struct page *page, int sync)
+{
+	struct io_group *iog;
+	int ret = 0;
+
+	rcu_read_lock();
+
+	iog = io_get_io_group(q, page, 0);
+
+	if (!iog) {
+		/*
+		 * Either cgroup got deleted or this is first request in the
+		 * group and associated io group object has not been created
+		 * yet. Map it to root group.
+		 *
+		 * TODO: Fix the case of group not created yet.
+		 */
+		iog = q->elevator->efqd.root_group;
+	}
+
+	ret = elv_is_iog_congested(q, iog, sync);
+	if (ret)
+		elv_log_iog(&q->elevator->efqd, iog, "iog congested=%d sync=%d"
+			" rl.count[sync]=%d nr_group_requests=%d",
+			ret, sync, iog->rl.count[sync], q->nr_group_requests);
+
+	rcu_read_unlock();
+	return ret;
+}
+
+
 /*
  * Search the bfq_group for bfqd into the hash table (by now only a list)
  * of bgrp.  Must be called under rcu_read_lock().
@@ -1265,11 +1322,13 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
  * to the root has already an allocated group on @bfqd.
  */
 struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
-					struct cgroup *cgroup, struct bio *bio)
+					struct cgroup *cgroup)
 {
 	struct io_cgroup *iocg;
 	struct io_group *iog, *leaf = NULL, *prev = NULL;
 	gfp_t flags = GFP_ATOMIC |  __GFP_ZERO;
+	unsigned int major, minor;
+	struct backing_dev_info *bdi = &q->backing_dev_info;
 
 	for (; cgroup != NULL; cgroup = cgroup->parent) {
 		iocg = cgroup_to_io_cgroup(cgroup);
@@ -1308,6 +1367,7 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		elv_get_iog(iog);
 
 		blk_init_request_list(&iog->rl);
+		elv_io_group_congestion_threshold(q, iog);
 
 		if (leaf == NULL) {
 			leaf = iog;
@@ -1412,7 +1472,7 @@ void io_group_chain_link(struct request_queue *q, void *key,
  */
 struct io_group *io_find_alloc_group(struct request_queue *q,
 			struct cgroup *cgroup, struct elv_fq_data *efqd,
-			int create, struct bio *bio)
+			int create)
 {
 	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);
 	struct io_group *iog = NULL;
@@ -1431,7 +1491,7 @@ struct io_group *io_find_alloc_group(struct request_queue *q,
 	if (iog != NULL || !create)
 		goto end;
 
-	iog = io_group_chain_alloc(q, key, cgroup, bio);
+	iog = io_group_chain_alloc(q, key, cgroup);
 	if (iog != NULL)
 		io_group_chain_link(q, key, cgroup, iog, efqd);
 
@@ -1440,46 +1500,60 @@ end:
 	return iog;
 }
 
-/* Map a bio to respective cgroup. Null return means, map it to root cgroup */
-static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
+/* Map a page to respective cgroup. Null return means, map it to root cgroup */
+static inline struct cgroup *get_cgroup_from_page(struct page *page)
 {
 	unsigned long bio_cgroup_id;
 	struct cgroup *cgroup;
 
-	/* blk_get_request can reach here without passing a bio */
-	if (!bio)
+	bio_cgroup_id = get_blkio_cgroup_id_page(page);
+
+	if (!bio_cgroup_id)
 		return NULL;
 
+	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
+	return cgroup;
+}
+
+
+struct io_group *io_get_io_group_bio(struct request_queue *q, struct bio *bio,
+					int create)
+{
+	struct page *page = NULL;
+
+	/*
+	 * Determine the group from task context. Even calls from
+	 * blk_get_request() which don't have any bio info will be mapped
+	 * to the task's group
+	 */
+	if (!bio)
+		goto sync;
+
 	if (bio_barrier(bio)) {
 		/*
 		 * Map barrier requests to root group. May be more special
 		 * bio cases should come here
 		 */
-		return NULL;
+		return q->elevator->efqd.root_group;
 	}
 
-#ifdef CONFIG_TRACK_ASYNC_CONTEXT
-	if (elv_bio_sync(bio)) {
-		/* sync io. Determine cgroup from submitting task context. */
-		cgroup = task_cgroup(current, io_subsys_id);
-		return cgroup;
-	}
+	/* Map the sync bio to the right group using task context */
+	if (elv_bio_sync(bio))
+		goto sync;
 
-	/* Async io. Determine cgroup from with cgroup id stored in page */
-	bio_cgroup_id = get_blkio_cgroup_id(bio);
-
-	if (!bio_cgroup_id)
-		return NULL;
-
-	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
-#else
-	cgroup = task_cgroup(current, io_subsys_id);
+#ifndef CONFIG_TRACK_ASYNC_CONTEXT
+	goto sync;
 #endif
-	return cgroup;
+	/* Determine the group from info stored in page */
+	page = bio_iovec_idx(bio, 0)->bv_page;
+	return io_get_io_group(q, page, create);
+sync:
+	return io_get_io_group(q, NULL, create);
 }
+EXPORT_SYMBOL(io_get_io_group_bio);
 
 /*
- * Find the io group bio belongs to.
+ * Find the io group page belongs to.
  * If "create" is set, io group is created if it is not already present.
  *
  * Note: This function should be called with queue lock held. It returns
@@ -1488,22 +1562,27 @@ static inline struct cgroup *get_cgroup_from_bio(struct bio *bio)
  * needs to get hold of queue lock). So if somebody needs to use group
  * pointer even after dropping queue lock, take a reference to the group
  * before dropping queue lock.
+ *
+ * One can call it without queue lock with rcu read lock held for browsing
+ * through the groups.
  */
-struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+struct io_group *io_get_io_group(struct request_queue *q, struct page *page,
 					int create)
 {
 	struct cgroup *cgroup;
 	struct io_group *iog;
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 
-	assert_spin_locked(q->queue_lock);
+
+	if (create)
+		assert_spin_locked(q->queue_lock);
 
 	rcu_read_lock();
 
-	if (!bio)
+	if (!page)
 		cgroup = task_cgroup(current, io_subsys_id);
 	else
-		cgroup = get_cgroup_from_bio(bio);
+		cgroup = get_cgroup_from_page(page);
 
 	if (!cgroup) {
 		if (create)
@@ -1518,7 +1597,7 @@ struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
 		goto out;
 	}
 
-	iog = io_find_alloc_group(q, cgroup, efqd, create, bio);
+	iog = io_find_alloc_group(q, cgroup, efqd, create);
 	if (!iog) {
 		if (create)
 			iog = efqd->root_group;
@@ -1570,6 +1649,7 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
 
 	blk_init_request_list(&iog->rl);
+	elv_io_group_congestion_threshold(q, iog);
 
 	iocg = &io_root_cgroup;
 	spin_lock_irq(&iocg->lock);
@@ -1578,6 +1658,10 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	iog->iocg_id = css_id(&iocg->css);
 	spin_unlock_irq(&iocg->lock);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	io_group_path(iog, iog->path, sizeof(iog->path));
+#endif
+
 	return iog;
 }
 
@@ -1670,6 +1754,14 @@ void iocg_attach(struct cgroup_subsys *subsys, struct cgroup *cgroup,
 	task_unlock(tsk);
 }
 
+static void io_group_free_rcu(struct rcu_head *head)
+{
+	struct io_group *iog;
+
+	iog = container_of(head, struct io_group, rcu_head);
+	kfree(iog);
+}
+
 /*
  * This cleanup function does the last bit of things to destroy cgroup.
  * It should only get called after io_destroy_group has been invoked.
@@ -1693,7 +1785,13 @@ void io_group_cleanup(struct io_group *iog)
 	BUG_ON(entity != NULL && entity->tree != NULL);
 
 	iog->iocg_id = 0;
-	kfree(iog);
+
+	/*
+	 * Wait for any rcu readers to exit before freeing up the group.
+	 * Primarily useful when io_get_io_group() is called without queue
+	 * lock to access some group data from bdi_congested_group() path.
+	 */
+	call_rcu(&iog->rcu_head, io_group_free_rcu);
 }
 
 void elv_put_iog(struct io_group *iog)
@@ -1933,7 +2031,7 @@ int io_group_allow_merge(struct request *rq, struct bio *bio)
 		return 1;
 
 	/* Determine the io group of the bio submitting task */
-	iog = io_get_io_group(q, bio, 0);
+	iog = io_get_io_group_bio(q, bio, 0);
 	if (!iog) {
 		/* May be task belongs to a differet cgroup for which io
 		 * group has not been setup yet. */
@@ -1973,7 +2071,7 @@ int elv_fq_set_request_ioq(struct request_queue *q, struct request *rq,
 
 retry:
 	/* Determine the io group request belongs to */
-	iog = io_get_io_group(q, bio, 1);
+	iog = io_get_io_group_bio(q, bio, 1);
 	BUG_ON(!iog);
 
 	/* Get the iosched queue */
@@ -2066,7 +2164,7 @@ struct io_queue *elv_lookup_ioq_bio(struct request_queue *q, struct bio *bio)
 	struct io_group *iog;
 
 	/* Determine the io group and io queue of the bio submitting task */
-	iog = io_get_io_group(q, bio, 0);
+	iog = io_get_io_group_bio(q, bio, 0);
 	if (!iog) {
 		/* May be bio belongs to a cgroup for which io group has
 		 * not been setup yet. */
@@ -2133,7 +2231,14 @@ void io_free_root_group(struct elevator_queue *e)
 	kfree(iog);
 }
 
-struct io_group *io_get_io_group(struct request_queue *q, struct bio *bio,
+struct io_group *io_get_io_group_bio(struct request_queue *q, struct bio *bio,
+					int create)
+{
+	return q->elevator->efqd.root_group;
+}
+EXPORT_SYMBOL(io_get_io_group_bio);
+
+struct io_group *io_get_io_group(struct request_queue *q, struct page *page,
 						int create)
 {
 	return q->elevator->efqd.root_group;
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index c2f71d7..d60105f 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -258,8 +258,13 @@ struct io_group {
 	/* Single ioq per group, used for noop, deadline, anticipatory */
 	struct io_queue *ioq;
 
+	/* io group congestion on and off threshold for request descriptors */
+	unsigned int nr_congestion_on;
+	unsigned int nr_congestion_off;
+
 	/* request list associated with the group */
 	struct request_list rl;
+	struct rcu_head rcu_head;
 };
 
 /**
@@ -540,7 +545,8 @@ extern struct io_queue *elv_lookup_ioq_bio(struct request_queue *q,
 						struct bio *bio);
 extern struct request_list *io_group_get_request_list(struct request_queue *q,
 						struct bio *bio);
-
+extern int elv_io_group_congested(struct request_queue *q, struct page *page,
+					int sync);
 /* Returns single ioq associated with the io group. */
 static inline struct io_queue *io_group_ioq(struct io_group *iog)
 {
@@ -672,6 +678,8 @@ extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
 extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
 					int ioprio, struct io_queue *ioq);
 extern struct io_group *io_get_io_group(struct request_queue *q,
+					struct page *page, int create);
+extern struct io_group *io_get_io_group_bio(struct request_queue *q,
 					struct bio *bio, int create);
 extern int elv_nr_busy_ioq(struct elevator_queue *e);
 extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 429b50b..8fe04f1 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1000,7 +1000,8 @@ int dm_table_resume_targets(struct dm_table *t)
 	return 0;
 }
 
-int dm_table_any_congested(struct dm_table *t, int bdi_bits)
+int dm_table_any_congested(struct dm_table *t, int bdi_bits, struct page *page,
+				int group)
 {
 	struct dm_dev_internal *dd;
 	struct list_head *devices = dm_table_get_devices(t);
@@ -1010,9 +1011,11 @@ int dm_table_any_congested(struct dm_table *t, int bdi_bits)
 		struct request_queue *q = bdev_get_queue(dd->dm_dev.bdev);
 		char b[BDEVNAME_SIZE];
 
-		if (likely(q))
-			r |= bdi_congested(&q->backing_dev_info, bdi_bits);
-		else
+		if (likely(q)) {
+			struct backing_dev_info *bdi = &q->backing_dev_info;
+			r |= group ? bdi_congested_group(bdi, bdi_bits, page)
+				: bdi_congested(bdi, bdi_bits);
+		} else
 			DMWARN_LIMIT("%s: any_congested: nonexistent device %s",
 				     dm_device_name(t->md),
 				     bdevname(dd->dm_dev.bdev, b));
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 424f7b0..ef12cee 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -994,7 +994,8 @@ static void dm_unplug_all(struct request_queue *q)
 	}
 }
 
-static int dm_any_congested(void *congested_data, int bdi_bits)
+static int dm_any_congested(void *congested_data, int bdi_bits,
+					struct page *page, int group)
 {
 	int r = bdi_bits;
 	struct mapped_device *md = congested_data;
@@ -1003,7 +1004,7 @@ static int dm_any_congested(void *congested_data, int bdi_bits)
 	if (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
 		map = dm_get_table(md);
 		if (map) {
-			r = dm_table_any_congested(map, bdi_bits);
+			r = dm_table_any_congested(map, bdi_bits, page, group);
 			dm_table_put(map);
 		}
 	}
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index a31506d..7efe4b4 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -46,7 +46,8 @@ struct list_head *dm_table_get_devices(struct dm_table *t);
 void dm_table_presuspend_targets(struct dm_table *t);
 void dm_table_postsuspend_targets(struct dm_table *t);
 int dm_table_resume_targets(struct dm_table *t);
-int dm_table_any_congested(struct dm_table *t, int bdi_bits);
+int dm_table_any_congested(struct dm_table *t, int bdi_bits, struct page *page,
+				int group);
 
 /*
  * To check the return value from dm_table_find_target().
diff --git a/drivers/md/linear.c b/drivers/md/linear.c
index 7a36e38..ddf43dd 100644
--- a/drivers/md/linear.c
+++ b/drivers/md/linear.c
@@ -88,7 +88,7 @@ static void linear_unplug(struct request_queue *q)
 	}
 }
 
-static int linear_congested(void *data, int bits)
+static int linear_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	linear_conf_t *conf = mddev_to_conf(mddev);
@@ -96,7 +96,10 @@ static int linear_congested(void *data, int bits)
 
 	for (i = 0; i < mddev->raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);
-		ret |= bdi_congested(&q->backing_dev_info, bits);
+		struct backing_dev_info *bdi = &q->backing_dev_info;
+
+		ret |= group ? bdi_congested_group(bdi, bits, page) :
+			bdi_congested(bdi, bits);
 	}
 	return ret;
 }
diff --git a/drivers/md/multipath.c b/drivers/md/multipath.c
index 41ced0c..9f25b21 100644
--- a/drivers/md/multipath.c
+++ b/drivers/md/multipath.c
@@ -192,7 +192,8 @@ static void multipath_status (struct seq_file *seq, mddev_t *mddev)
 	seq_printf (seq, "]");
 }
 
-static int multipath_congested(void *data, int bits)
+static int multipath_congested(void *data, int bits, struct page *page,
+					int group)
 {
 	mddev_t *mddev = data;
 	multipath_conf_t *conf = mddev_to_conf(mddev);
@@ -203,8 +204,10 @@ static int multipath_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
-			ret |= bdi_congested(&q->backing_dev_info, bits);
+			ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 			/* Just like multipath_map, we just check the
 			 * first available device
 			 */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index c08d755..eb1d33a 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -37,7 +37,7 @@ static void raid0_unplug(struct request_queue *q)
 	}
 }
 
-static int raid0_congested(void *data, int bits)
+static int raid0_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	raid0_conf_t *conf = mddev_to_conf(mddev);
@@ -46,8 +46,10 @@ static int raid0_congested(void *data, int bits)
 
 	for (i = 0; i < mddev->raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(devlist[i]->bdev);
+		struct backing_dev_info *bdi = &q->backing_dev_info;
 
-		ret |= bdi_congested(&q->backing_dev_info, bits);
+		ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 	}
 	return ret;
 }
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 36df910..cdd268e 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -570,7 +570,7 @@ static void raid1_unplug(struct request_queue *q)
 	md_wakeup_thread(mddev->thread);
 }
 
-static int raid1_congested(void *data, int bits)
+static int raid1_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	conf_t *conf = mddev_to_conf(mddev);
@@ -581,14 +581,17 @@ static int raid1_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
 			/* Note the '|| 1' - when read_balance prefers
 			 * non-congested targets, it can be removed
 			 */
 			if ((bits & (1<<BDI_async_congested)) || 1)
-				ret |= bdi_congested(&q->backing_dev_info, bits);
+				ret |= group ? bdi_congested_group(bdi, bits,
+					page) : bdi_congested(bdi, bits);
 			else
-				ret &= bdi_congested(&q->backing_dev_info, bits);
+				ret &= group ? bdi_congested_group(bdi, bits,
+					page) : bdi_congested(bdi, bits);
 		}
 	}
 	rcu_read_unlock();
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 499620a..49f41e3 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -625,7 +625,7 @@ static void raid10_unplug(struct request_queue *q)
 	md_wakeup_thread(mddev->thread);
 }
 
-static int raid10_congested(void *data, int bits)
+static int raid10_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	conf_t *conf = mddev_to_conf(mddev);
@@ -636,8 +636,10 @@ static int raid10_congested(void *data, int bits)
 		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev && !test_bit(Faulty, &rdev->flags)) {
 			struct request_queue *q = bdev_get_queue(rdev->bdev);
+			struct backing_dev_info *bdi = &q->backing_dev_info;
 
-			ret |= bdi_congested(&q->backing_dev_info, bits);
+			ret |= group ? bdi_congested_group(bdi, bits, page)
+				: bdi_congested(bdi, bits);
 		}
 	}
 	rcu_read_unlock();
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index bb37fb1..40f76a4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3324,7 +3324,7 @@ static void raid5_unplug_device(struct request_queue *q)
 	unplug_slaves(mddev);
 }
 
-static int raid5_congested(void *data, int bits)
+static int raid5_congested(void *data, int bits, struct page *page, int group)
 {
 	mddev_t *mddev = data;
 	raid5_conf_t *conf = mddev_to_conf(mddev);
diff --git a/fs/afs/write.c b/fs/afs/write.c
index c2e7a7f..aa8b359 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -455,7 +455,7 @@ int afs_writepage(struct page *page, struct writeback_control *wbc)
 	}
 
 	wbc->nr_to_write -= ret;
-	if (wbc->nonblocking && bdi_write_congested(bdi))
+	if (wbc->nonblocking && bdi_or_group_write_congested(bdi, page))
 		wbc->encountered_congestion = 1;
 
 	_leave(" = 0");
@@ -491,6 +491,12 @@ static int afs_writepages_region(struct address_space *mapping,
 			return 0;
 		}
 
+		if (wbc->nonblocking && bdi_write_congested_group(bdi, page)) {
+			wbc->encountered_congestion = 1;
+			page_cache_release(page);
+			break;
+		}
+
 		/* at this point we hold neither mapping->tree_lock nor lock on
 		 * the page itself: the page may be truncated or invalidated
 		 * (changing page->mapping to NULL), or even swizzled back from
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..245d8f4 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1250,7 +1250,8 @@ struct btrfs_root *btrfs_read_fs_root(struct btrfs_fs_info *fs_info,
 	return root;
 }
 
-static int btrfs_congested_fn(void *congested_data, int bdi_bits)
+static int btrfs_congested_fn(void *congested_data, int bdi_bits,
+					struct page *page, int group)
 {
 	struct btrfs_fs_info *info = (struct btrfs_fs_info *)congested_data;
 	int ret = 0;
@@ -1261,7 +1262,8 @@ static int btrfs_congested_fn(void *congested_data, int bdi_bits)
 		if (!device->bdev)
 			continue;
 		bdi = blk_get_backing_dev_info(device->bdev);
-		if (bdi && bdi_congested(bdi, bdi_bits)) {
+		if (bdi && (group ? bdi_congested_group(bdi, bdi_bits, page) :
+		    bdi_congested(bdi, bdi_bits))) {
 			ret = 1;
 			break;
 		}
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index fe9eb99..fac4299 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2358,6 +2358,18 @@ retry:
 		unsigned i;
 
 		scanned = 1;
+
+		/*
+		 * If the io group page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a6d35b0..5b19141 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -163,6 +163,7 @@ static noinline int run_scheduled_bios(struct btrfs_device *device)
 	unsigned long num_sync_run;
 	unsigned long limit;
 	unsigned long last_waited = 0;
+	struct page *page;
 
 	bdi = blk_get_backing_dev_info(device->bdev);
 	fs_info = device->dev_root->fs_info;
@@ -265,8 +266,11 @@ loop_lock:
 		 * is now congested.  Back off and let other work structs
 		 * run instead
 		 */
-		if (pending && bdi_write_congested(bdi) && num_run > 16 &&
-		    fs_info->fs_devices->open_devices > 1) {
+		if (pending)
+			page = bio_iovec_idx(pending, 0)->bv_page;
+
+		if (pending && bdi_or_group_write_congested(bdi, page) &&
+		    num_run > 16 && fs_info->fs_devices->open_devices > 1) {
 			struct io_context *ioc;
 
 			ioc = current->io_context;
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 302ea15..71d3fb5 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1466,6 +1466,17 @@ retry:
 		n_iov = 0;
 		bytes_to_write = 0;
 
+		/*
+		 * If the io group page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking &&
+		    bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			page = pvec.pages[i];
 			/*
diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
index 15387c9..090a961 100644
--- a/fs/ext2/ialloc.c
+++ b/fs/ext2/ialloc.c
@@ -179,7 +179,7 @@ static void ext2_preread_inode(struct inode *inode)
 	struct backing_dev_info *bdi;
 
 	bdi = inode->i_mapping->backing_dev_info;
-	if (bdi_read_congested(bdi))
+	if (bdi_or_group_read_congested(bdi, NULL))
 		return;
 	if (bdi_write_congested(bdi))
 		return;
diff --git a/fs/gfs2/ops_address.c b/fs/gfs2/ops_address.c
index a6dde17..b352f19 100644
--- a/fs/gfs2/ops_address.c
+++ b/fs/gfs2/ops_address.c
@@ -372,6 +372,18 @@ retry:
 					       PAGECACHE_TAG_DIRTY,
 					       min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
 		scanned = 1;
+
+		/*
+		 * If io group page belongs to is congested. bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, end);
 		if (ret)
 			done = 1;
diff --git a/fs/nilfs2/segbuf.c b/fs/nilfs2/segbuf.c
index 1e68821..abcb161 100644
--- a/fs/nilfs2/segbuf.c
+++ b/fs/nilfs2/segbuf.c
@@ -267,8 +267,9 @@ static int nilfs_submit_seg_bio(struct nilfs_write_info *wi, int mode)
 {
 	struct bio *bio = wi->bio;
 	int err;
+	struct page *page = bio_iovec_idx(bio, 0)->bv_page;
 
-	if (wi->nbio > 0 && bdi_write_congested(wi->bdi)) {
+	if (wi->nbio > 0 && bdi_or_group_write_congested(wi->bdi, page)) {
 		wait_for_completion(&wi->bio_event);
 		wi->nbio--;
 		if (unlikely(atomic_read(&wi->err))) {
diff --git a/fs/xfs/linux-2.6/xfs_aops.c b/fs/xfs/linux-2.6/xfs_aops.c
index 7ec89fc..2a515ab 100644
--- a/fs/xfs/linux-2.6/xfs_aops.c
+++ b/fs/xfs/linux-2.6/xfs_aops.c
@@ -891,7 +891,7 @@ xfs_convert_page(
 
 			bdi = inode->i_mapping->backing_dev_info;
 			wbc->nr_to_write--;
-			if (bdi_write_congested(bdi)) {
+			if (bdi_or_group_write_congested(bdi, page)) {
 				wbc->encountered_congestion = 1;
 				done = 1;
 			} else if (wbc->nr_to_write <= 0) {
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index e28800a..9e000f4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -714,7 +714,7 @@ xfs_buf_readahead(
 	struct backing_dev_info *bdi;
 
 	bdi = target->bt_mapping->backing_dev_info;
-	if (bdi_read_congested(bdi))
+	if (bdi_or_group_read_congested(bdi, NULL))
 		return;
 
 	flags |= (XBF_TRYLOCK|XBF_ASYNC|XBF_READ_AHEAD);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..f06fdbf 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -29,7 +29,7 @@ enum bdi_state {
 	BDI_unused,		/* Available bits start here */
 };
 
-typedef int (congested_fn)(void *, int);
+typedef int (congested_fn)(void *, int, struct page *, int);
 
 enum bdi_stat_item {
 	BDI_RECLAIMABLE,
@@ -209,7 +209,7 @@ int writeback_in_progress(struct backing_dev_info *bdi);
 static inline int bdi_congested(struct backing_dev_info *bdi, int bdi_bits)
 {
 	if (bdi->congested_fn)
-		return bdi->congested_fn(bdi->congested_data, bdi_bits);
+		return bdi->congested_fn(bdi->congested_data, bdi_bits, NULL, 0);
 	return (bdi->state & bdi_bits);
 }
 
@@ -229,6 +229,63 @@ static inline int bdi_rw_congested(struct backing_dev_info *bdi)
 				  (1 << BDI_async_congested));
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int bdi_congested_group(struct backing_dev_info *bdi, int bdi_bits,
+				struct page *page);
+
+extern int bdi_read_congested_group(struct backing_dev_info *bdi,
+						struct page *page);
+
+extern int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_write_congested_group(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+					struct page *page);
+
+extern int bdi_rw_congested_group(struct backing_dev_info *bdi,
+					struct page *page);
+#else /* CONFIG_GROUP_IOSCHED */
+static inline int bdi_congested_group(struct backing_dev_info *bdi,
+					int bdi_bits, struct page *page)
+{
+	return bdi_congested(bdi, bdi_bits);
+}
+
+static inline int bdi_read_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi);
+}
+
+static inline int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi);
+}
+
+static inline int bdi_write_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi);
+}
+
+static inline int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi);
+}
+
+static inline int bdi_rw_congested_group(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_rw_congested(bdi);
+}
+
+#endif /* CONFIG_GROUP_IOSCHED */
+
 void clear_bdi_congested(struct backing_dev_info *bdi, int rw);
 void set_bdi_congested(struct backing_dev_info *bdi, int rw);
 long congestion_wait(int rw, long timeout);
diff --git a/include/linux/biotrack.h b/include/linux/biotrack.h
index 741a8b5..0b4491a 100644
--- a/include/linux/biotrack.h
+++ b/include/linux/biotrack.h
@@ -49,6 +49,7 @@ extern void blkio_cgroup_copy_owner(struct page *page, struct page *opage);
 
 extern struct io_context *get_blkio_cgroup_iocontext(struct bio *bio);
 extern unsigned long get_blkio_cgroup_id(struct bio *bio);
+extern unsigned long get_blkio_cgroup_id_page(struct page *page);
 extern struct cgroup *blkio_cgroup_lookup(int id);
 
 #else	/* CONFIG_CGROUP_BIO */
@@ -92,6 +93,11 @@ static inline unsigned long get_blkio_cgroup_id(struct bio *bio)
 	return 0;
 }
 
+static inline unsigned long get_blkio_cgroup_id_page(struct page *page)
+{
+	return 0;
+}
+
 #endif	/* CONFIG_CGROUP_BLKIO */
 
 #endif /* _LINUX_BIOTRACK_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7fd7d33..45e4cb7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -880,6 +880,11 @@ static inline void blk_set_queue_congested(struct request_queue *q, int rw)
 	set_bdi_congested(&q->backing_dev_info, rw);
 }
 
+#ifdef CONFIG_GROUP_IOSCHED
+extern int blk_queue_io_group_congested(struct backing_dev_info *bdi,
+					int bdi_bits, struct page *page);
+#endif
+
 extern void blk_start_queue(struct request_queue *q);
 extern void blk_stop_queue(struct request_queue *q);
 extern void blk_sync_queue(struct request_queue *q);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..cef038d 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -7,6 +7,7 @@
 #include <linux/module.h>
 #include <linux/writeback.h>
 #include <linux/device.h>
+#include "../block/elevator-fq.h"
 
 void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 {
@@ -328,3 +329,64 @@ long congestion_wait(int rw, long timeout)
 }
 EXPORT_SYMBOL(congestion_wait);
 
+/*
+ * With group IO scheduling, there are request descriptors per io group per
+ * queue. So generic notion of whether queue is congested or not is not
+ * very accurate. Queue might not be congested but the io group in which
+ * request will go might actually be congested.
+ *
+ * Hence to get the correct idea about congestion level, one should query
+ * the io group congestion status on the queue. Pass in the page information
+ * which can be used to determine the io group of the page and congestion
+ * status can be determined accordingly.
+ *
+ * If page info is not passed, io group is determined from the current task
+ * context.
+ */
+#ifdef CONFIG_GROUP_IOSCHED
+int bdi_congested_group(struct backing_dev_info *bdi, int bdi_bits,
+				struct page *page)
+{
+	if (bdi->congested_fn)
+		return bdi->congested_fn(bdi->congested_data, bdi_bits, page, 1);
+
+	return blk_queue_io_group_congested(bdi, bdi_bits, page);
+}
+EXPORT_SYMBOL(bdi_congested_group);
+
+int bdi_read_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, 1 << BDI_sync_congested, page);
+}
+EXPORT_SYMBOL(bdi_read_congested_group);
+
+/* Checks if either bdi or associated group is read congested */
+int bdi_or_group_read_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_read_congested(bdi) || bdi_read_congested_group(bdi, page);
+}
+EXPORT_SYMBOL(bdi_or_group_read_congested);
+
+int bdi_write_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, 1 << BDI_async_congested, page);
+}
+EXPORT_SYMBOL(bdi_write_congested_group);
+
+/* Checks if either bdi or associated group is write congested */
+int bdi_or_group_write_congested(struct backing_dev_info *bdi,
+						struct page *page)
+{
+	return bdi_write_congested(bdi) || bdi_write_congested_group(bdi, page);
+}
+EXPORT_SYMBOL(bdi_or_group_write_congested);
+
+int bdi_rw_congested_group(struct backing_dev_info *bdi, struct page *page)
+{
+	return bdi_congested_group(bdi, (1 << BDI_sync_congested) |
+				  (1 << BDI_async_congested), page);
+}
+EXPORT_SYMBOL(bdi_rw_congested_group);
+
+#endif /* CONFIG_GROUP_IOSCHED */
diff --git a/mm/biotrack.c b/mm/biotrack.c
index 2baf1f0..f7d8efb 100644
--- a/mm/biotrack.c
+++ b/mm/biotrack.c
@@ -212,6 +212,27 @@ unsigned long get_blkio_cgroup_id(struct bio *bio)
 }
 
 /**
+ * get_blkio_cgroup_id_page() - determine the blkio-cgroup ID
+ * @page:	the &struct page which describes the I/O
+ *
+ * Returns the blkio-cgroup ID of a given page. A return value zero
+ * means that the page associated with the IO belongs to default_blkio_cgroup.
+ */
+unsigned long get_blkio_cgroup_id_page(struct page *page)
+{
+	struct page_cgroup *pc;
+	unsigned long id = 0;
+
+	pc = lookup_page_cgroup(page);
+	if (pc) {
+		lock_page_cgroup(pc);
+		id = page_cgroup_get_id(pc);
+		unlock_page_cgroup(pc);
+	}
+	return id;
+}
+
+/**
  * get_blkio_cgroup_iocontext() - determine the blkio-cgroup iocontext
  * @bio:	the &struct bio which describe the I/O
  *
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3604c35..26b9e0a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -981,6 +981,17 @@ retry:
 		if (nr_pages == 0)
 			break;
 
+		/*
+		 * If the io group page will go into is congested, bail out.
+		 */
+		if (wbc->nonblocking
+		    && bdi_write_congested_group(bdi, pvec.pages[0])) {
+			wbc->encountered_congestion = 1;
+			done = 1;
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 133b6d5..acd9c57 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -240,7 +240,7 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 			pgoff_t offset, unsigned long nr_to_read)
 {
-	if (bdi_read_congested(mapping->backing_dev_info))
+	if (bdi_or_group_read_congested(mapping->backing_dev_info, NULL))
 		return -1;
 
 	return __do_page_cache_readahead(mapping, filp, offset, nr_to_read, 0);
@@ -485,7 +485,7 @@ page_cache_async_readahead(struct address_space *mapping,
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
 	 */
-	if (bdi_read_congested(mapping->backing_dev_info))
+	if (bdi_or_group_read_congested(mapping->backing_dev_info, NULL))
 		return;
 
 	/* do read-ahead */
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 18/20] io-controller: Support per cgroup per device weights and io class
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (16 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 17/20] io-controller: Per io group bdi congestion interface Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 19/20] io-controller: Debug hierarchical IO scheduling Vivek Goyal
                     ` (3 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

This patch enables per-cgroup per-device weight and ioprio_class handling.
A new cgroup interface "policy" is introduced. You can make use of this
file to configure weight and ioprio_class for each device in a given cgroup.
The original "weight" and "ioprio_class" files are still available. If you
don't do special configuration for a particular device, "weight" and
"ioprio_class" are used as default values in this device.

You can use the following format to play with the new interface.
# echo DEV:weight:ioprio_class > /path/to/cgroup/policy
weight=0 means removing the policy for DEV.

Examples:
Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
# echo /dev/hdb:300:2 > io.policy
# cat io.policy
dev weight class
/dev/hdb 300 2

Configure weight=500 ioprio_class=1 on /dev/hda in this cgroup
# echo /dev/hda:500:1 > io.policy
# cat io.policy
dev weight class
/dev/hda 500 1
/dev/hdb 300 2

Remove the policy for /dev/hda in this cgroup
# echo /dev/hda:0:1 > io.policy
# cat io.policy
dev weight class
/dev/hdb 300 2
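
As a rough end-to-end illustration, a per-device policy takes precedence over
the cgroup-wide "weight"/"ioprio_class" files. The session below is only a
sketch; the concrete numbers are assumed values, not output produced by this
patch:

# cat io.weight
500
# echo /dev/hdb:300:2 > io.policy
# echo 800 > io.weight
# cat io.policy
dev weight class
/dev/hdb 300 2

Here io.weight becomes the new default (800) for devices without a policy,
while /dev/hdb keeps weight 300 and class 2 from io.policy.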

Changelog (v1 -> v2)
- Rename some structures
- Use the spin_lock_irqsave()/spin_unlock_irqrestore() variants to avoid
  enabling interrupts unconditionally.
- Fix policy setup bug when switching to another io scheduler.
- If a policy is available for a specific device, don't update its weight and
  io class when writing "weight" and "ioprio_class".
- Fix a bug when parsing the policy string.

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/elevator-fq.c |  243 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h |   11 +++
 2 files changed, 250 insertions(+), 4 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 13c8161..326f955 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -15,6 +15,7 @@
 #include <linux/blktrace_api.h>
 #include <linux/seq_file.h>
 #include <linux/biotrack.h>
+#include <linux/genhd.h>
 
 /* Values taken from cfq */
 const int elv_slice_sync = HZ / 10;
@@ -1168,12 +1169,31 @@ struct io_group *io_cgroup_lookup_group(struct io_cgroup *iocg, void *key)
 	return NULL;
 }
 
-void io_group_init_entity(struct io_cgroup *iocg, struct io_group *iog)
+static struct io_policy_node *policy_search_node(const struct io_cgroup *iocg,
+						 dev_t dev);
+
+void io_group_init_entity(struct io_cgroup *iocg, struct io_group *iog,
+			  dev_t dev)
 {
 	struct io_entity *entity = &iog->entity;
+	struct io_policy_node *pn;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iocg->lock, flags);
+	pn = policy_search_node(iocg, dev);
+	if (pn) {
+		entity->weight = pn->weight;
+		entity->new_weight = pn->weight;
+		entity->ioprio_class = pn->ioprio_class;
+		entity->new_ioprio_class = pn->ioprio_class;
+	} else {
+		entity->weight = iocg->weight;
+		entity->new_weight = iocg->weight;
+		entity->ioprio_class = iocg->ioprio_class;
+		entity->new_ioprio_class = iocg->ioprio_class;
+	}
+	spin_unlock_irqrestore(&iocg->lock, flags);
 
-	entity->weight = entity->new_weight = iocg->weight;
-	entity->ioprio_class = entity->new_ioprio_class = iocg->ioprio_class;
 	entity->ioprio_changed = 1;
 	entity->my_sched_data = &iog->sched_data;
 }
@@ -1225,6 +1245,7 @@ static int io_cgroup_##__VAR##_write(struct cgroup *cgroup,		\
 	struct io_cgroup *iocg;					\
 	struct io_group *iog;						\
 	struct hlist_node *n;						\
+	struct io_policy_node *pn;					\
 									\
 	if (val < (__MIN) || val > (__MAX))				\
 		return -EINVAL;						\
@@ -1237,6 +1258,9 @@ static int io_cgroup_##__VAR##_write(struct cgroup *cgroup,		\
 	spin_lock_irq(&iocg->lock);					\
 	iocg->__VAR = (unsigned long)val;				\
 	hlist_for_each_entry(iog, n, &iocg->group_data, group_node) {	\
+		pn = policy_search_node(iocg, iog->dev);		\
+		if (pn)							\
+			continue;					\
 		iog->entity.new_##__VAR = (unsigned long)val;		\
 		smp_wmb();						\
 		iog->entity.ioprio_changed = 1;				\
@@ -1352,7 +1376,7 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
 		iog->dev = MKDEV(major, minor);
 
-		io_group_init_entity(iocg, iog);
+		io_group_init_entity(iocg, iog, iog->dev);
 		iog->my_entity = &iog->entity;
 
 		atomic_set(&iog->ref, 0);
@@ -1665,8 +1689,212 @@ struct io_group *io_alloc_root_group(struct request_queue *q,
 	return iog;
 }
 
+static int io_cgroup_policy_read(struct cgroup *cgrp, struct cftype *cft,
+				  struct seq_file *m)
+{
+	struct io_cgroup *iocg;
+	struct io_policy_node *pn;
+
+	iocg = cgroup_to_io_cgroup(cgrp);
+
+	if (list_empty(&iocg->policy_list))
+		goto out;
+
+	seq_printf(m, "dev weight class\n");
+
+	spin_lock_irq(&iocg->lock);
+	list_for_each_entry(pn, &iocg->policy_list, node) {
+		seq_printf(m, "%s %lu %lu\n", pn->dev_name,
+			   pn->weight, pn->ioprio_class);
+	}
+	spin_unlock_irq(&iocg->lock);
+out:
+	return 0;
+}
+
+static inline void policy_insert_node(struct io_cgroup *iocg,
+					  struct io_policy_node *pn)
+{
+	list_add(&pn->node, &iocg->policy_list);
+}
+
+/* Must be called with iocg->lock held */
+static inline void policy_delete_node(struct io_policy_node *pn)
+{
+	list_del(&pn->node);
+}
+
+/* Must be called with iocg->lock held */
+static struct io_policy_node *policy_search_node(const struct io_cgroup *iocg,
+						 dev_t dev)
+{
+	struct io_policy_node *pn;
+
+	if (list_empty(&iocg->policy_list))
+		return NULL;
+
+	list_for_each_entry(pn, &iocg->policy_list, node) {
+		if (pn->dev == dev)
+			return pn;
+	}
+
+	return NULL;
+}
+
+static int devname_to_devnum(const char *buf, dev_t *dev)
+{
+	struct block_device *bdev;
+	struct gendisk *disk;
+	int part;
+
+	bdev = lookup_bdev(buf);
+	if (IS_ERR(bdev))
+		return -ENODEV;
+
+	disk = get_gendisk(bdev->bd_dev, &part);
+	if (part)
+		return -EINVAL;
+
+	*dev = MKDEV(disk->major, disk->first_minor);
+	bdput(bdev);
+
+	return 0;
+}
+
+static int policy_parse_and_set(char *buf, struct io_policy_node *newpn)
+{
+	char *s[3], *p;
+	int ret;
+	int i = 0;
+
+	memset(s, 0, sizeof(s));
+	while ((p = strsep(&buf, ":")) != NULL) {
+		if (!*p)
+			continue;
+		s[i++] = p;
+	}
+
+	ret = devname_to_devnum(s[0], &newpn->dev);
+	if (ret)
+		return ret;
+
+	strcpy(newpn->dev_name, s[0]);
+
+	if (s[1] == NULL)
+		return -EINVAL;
+
+	ret = strict_strtoul(s[1], 10, &newpn->weight);
+	if (ret || newpn->weight > WEIGHT_MAX)
+		return -EINVAL;
+
+	if (s[2] == NULL)
+		return -EINVAL;
+
+	ret = strict_strtoul(s[2], 10, &newpn->ioprio_class);
+	if (ret || newpn->ioprio_class < IOPRIO_CLASS_RT ||
+	    newpn->ioprio_class > IOPRIO_CLASS_IDLE)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int io_cgroup_policy_write(struct cgroup *cgrp, struct cftype *cft,
+			    const char *buffer)
+{
+	struct io_cgroup *iocg;
+	struct io_policy_node *newpn, *pn;
+	char *buf;
+	int ret = 0;
+	int keep_newpn = 0;
+	struct hlist_node *n;
+	struct io_group *iog;
+
+	buf = kstrdup(buffer, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	newpn = kzalloc(sizeof(*newpn), GFP_KERNEL);
+	if (!newpn) {
+		ret = -ENOMEM;
+		goto free_buf;
+	}
+
+	ret = policy_parse_and_set(buf, newpn);
+	if (ret)
+		goto free_newpn;
+
+	if (!cgroup_lock_live_group(cgrp)) {
+		ret = -ENODEV;
+		goto free_newpn;
+	}
+
+	iocg = cgroup_to_io_cgroup(cgrp);
+	spin_lock_irq(&iocg->lock);
+
+	pn = policy_search_node(iocg, newpn->dev);
+	if (!pn) {
+		if (newpn->weight != 0) {
+			policy_insert_node(iocg, newpn);
+			keep_newpn = 1;
+		}
+		goto update_io_group;
+	}
+
+	if (newpn->weight == 0) {
+		/* weight == 0 means deleteing a policy */
+		policy_delete_node(pn);
+		goto update_io_group;
+	}
+
+	pn->weight = newpn->weight;
+	pn->ioprio_class = newpn->ioprio_class;
+
+update_io_group:
+	hlist_for_each_entry(iog, n, &iocg->group_data, group_node) {
+		if (iog->dev == newpn->dev) {
+			if (newpn->weight) {
+				iog->entity.new_weight = newpn->weight;
+				iog->entity.new_ioprio_class =
+					newpn->ioprio_class;
+				/*
+				 * iog weight and ioprio_class updating
+				 * actually happens if ioprio_changed is set.
+				 * So ensure ioprio_changed is not set until
+				 * new weight and new ioprio_class are updated.
+				 */
+				smp_wmb();
+				iog->entity.ioprio_changed = 1;
+			} else {
+				iog->entity.new_weight = iocg->weight;
+				iog->entity.new_ioprio_class =
+					iocg->ioprio_class;
+
+				/* The same as above */
+				smp_wmb();
+				iog->entity.ioprio_changed = 1;
+			}
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+
+	cgroup_unlock();
+
+free_newpn:
+	if (!keep_newpn)
+		kfree(newpn);
+free_buf:
+	kfree(buf);
+	return ret;
+}
+
 struct cftype bfqio_files[] = {
 	{
+		.name = "policy",
+		.read_seq_string = io_cgroup_policy_read,
+		.write_string = io_cgroup_policy_write,
+		.max_write_len = 256,
+	},
+	{
 		.name = "weight",
 		.read_u64 = io_cgroup_weight_read,
 		.write_u64 = io_cgroup_weight_write,
@@ -1708,6 +1936,7 @@ struct cgroup_subsys_state *iocg_create(struct cgroup_subsys *subsys,
 	INIT_HLIST_HEAD(&iocg->group_data);
 	iocg->weight = IO_DEFAULT_GRP_WEIGHT;
 	iocg->ioprio_class = IO_DEFAULT_GRP_CLASS;
+	INIT_LIST_HEAD(&iocg->policy_list);
 
 	return &iocg->css;
 }
@@ -1911,6 +2140,7 @@ void iocg_destroy(struct cgroup_subsys *subsys, struct cgroup *cgroup)
 	struct io_group *iog;
 	struct elv_fq_data *efqd;
 	unsigned long uninitialized_var(flags);
+	struct io_policy_node *pn, *pntmp;
 
 	/*
 	 * io groups are linked in two lists. One list is maintained
@@ -1949,6 +2179,11 @@ remove_entry:
 	goto remove_entry;
 
 done:
+	list_for_each_entry_safe(pn, pntmp, &iocg->policy_list, node) {
+		policy_delete_node(pn);
+		kfree(pn);
+	}
+
 	free_css_id(&io_subsys, &iocg->css);
 	rcu_read_unlock();
 	BUG_ON(!hlist_empty(&iocg->group_data));
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index d60105f..7102455 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -267,6 +267,14 @@ struct io_group {
 	struct rcu_head rcu_head;
 };
 
+struct io_policy_node {
+	struct list_head node;
+	char dev_name[32];
+	dev_t dev;
+	unsigned long weight;
+	unsigned long ioprio_class;
+};
+
 /**
  * struct bfqio_cgroup - bfq cgroup data structure.
  * @css: subsystem state for bfq in the containing cgroup.
@@ -283,6 +291,9 @@ struct io_cgroup {
 
 	unsigned long weight, ioprio_class;
 
+	/* list of io_policy_node */
+	struct list_head policy_list;
+
 	spinlock_t lock;
 	struct hlist_head group_data;
 };
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 19/20] io-controller: Debug hierarchical IO scheduling
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (17 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 18/20] io-controller: Support per cgroup per device weights and io class Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-19 20:37   ` [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry Vivek Goyal
                     ` (2 subsequent siblings)
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o Little debugging aid for hierarchical IO scheduling.

o Enabled under CONFIG_DEBUG_GROUP_IOSCHED

o Currently it emits extra debug messages in the blktrace output, which helps
  a great deal when debugging a hierarchical setup. It also creates the
  additional cgroup interfaces io.disk_queue and io.disk_dequeue to export
  some more debugging data. A rough usage sketch follows below.
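
o Usage sketch (the cgroup mount point, device name and group name below are
  assumptions for illustration, not defined by this patch):

  # mount -t cgroup -o io none /cgroup
  # mkdir /cgroup/test1
  # blktrace -d /dev/sdb -o - | blkparse -i -
    (look for the extra "as <group path> ..." messages added by this patch)
  # cat /cgroup/test1/io.disk_queue
  # cat /cgroup/test1/io.disk_dequeue
    (per-device queue/dequeue statistics for this group)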

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/Kconfig.iosched |   10 ++-
 block/as-iosched.c    |   50 ++++++---
 block/elevator-fq.c   |  277 ++++++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h   |   36 +++++++
 4 files changed, 351 insertions(+), 22 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 0677099..79f188c 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -140,6 +140,14 @@ config TRACK_ASYNC_CONTEXT
 	  request, original owner of the bio is decided by using io tracking
 	  patches otherwise we continue to attribute the request to the
 	  submitting thread.
-endmenu
 
+config DEBUG_GROUP_IOSCHED
+	bool "Debug Hierarchical Scheduling support"
+	depends on CGROUPS && GROUP_IOSCHED
+	default n
+	---help---
+	  Enable some debugging hooks for hierarchical scheduling support.
+	  Currently it just outputs more information in blktrace output.
+
+endmenu
 endif
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 68200b3..42dee4c 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -78,6 +78,7 @@ enum anticipation_status {
 };
 
 struct as_queue {
+	struct io_queue *ioq;
 	/*
 	 * requests (as_rq s) are present on both sort_list and fifo_list
 	 */
@@ -162,6 +163,17 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+#define as_log_asq(ad, asq, fmt, args...)				\
+{									\
+	blk_add_trace_msg((ad)->q, "as %s " fmt,			\
+			ioq_to_io_group((asq)->ioq)->path, ##args);	\
+}
+#else
+#define as_log_asq(ad, asq, fmt, args...) \
+	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
+#endif
+
 #define as_log(ad, fmt, args...)        \
 	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
 
@@ -225,7 +237,7 @@ static void as_save_batch_context(struct as_data *ad, struct as_queue *asq)
 	}
 
 out:
-	as_log(ad, "save batch: dir=%c time_left=%d changed_batch=%d"
+	as_log_asq(ad, asq, "save batch: dir=%c time_left=%d changed_batch=%d"
 			" new_batch=%d, antic_status=%d",
 			ad->batch_data_dir ? 'R' : 'W',
 			asq->current_batch_time_left,
@@ -247,8 +259,8 @@ static void as_restore_batch_context(struct as_data *ad, struct as_queue *asq)
 						asq->current_batch_time_left;
 	/* restore asq batch_data_dir info */
 	ad->batch_data_dir = asq->saved_batch_data_dir;
-	as_log(ad, "restore batch: dir=%c time=%d reads_q=%d writes_q=%d"
-			" ad->antic_status=%d",
+	as_log_asq(ad, asq, "restore batch: dir=%c time=%d reads_q=%d"
+			" writes_q=%d ad->antic_status=%d",
 			ad->batch_data_dir ? 'R' : 'W',
 			asq->current_batch_time_left,
 			asq->nr_queued[1], asq->nr_queued[0],
@@ -277,8 +289,8 @@ static int as_expire_ioq(struct request_queue *q, void *sched_queue,
 	int status = ad->antic_status;
 	struct as_queue *asq = sched_queue;
 
-	as_log(ad, "as_expire_ioq slice_expired=%d, force=%d", slice_expired,
-		force);
+	as_log_asq(ad, asq, "as_expire_ioq slice_expired=%d, force=%d",
+			slice_expired, force);
 
 	/* Forced expiry. We don't have a choice */
 	if (force) {
@@ -1019,9 +1031,10 @@ static void update_write_batch(struct as_data *ad, struct request *rq)
 	if (write_time < 0)
 		write_time = 0;
 
-	as_log(ad, "upd write: write_time=%d batch=%d write_batch_idled=%d"
-			" current_write_count=%d", write_time, batch,
-			asq->write_batch_idled, asq->current_write_count);
+	as_log_asq(ad, asq, "upd write: write_time=%d batch=%d"
+			" write_batch_idled=%d current_write_count=%d",
+			write_time, batch, asq->write_batch_idled,
+			asq->current_write_count);
 
 	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
@@ -1038,7 +1051,7 @@ static void update_write_batch(struct as_data *ad, struct request *rq)
 	if (asq->write_batch_count < 1)
 		asq->write_batch_count = 1;
 
-	as_log(ad, "upd write count=%d", asq->write_batch_count);
+	as_log_asq(ad, asq, "upd write count=%d", asq->write_batch_count);
 }
 
 /*
@@ -1057,7 +1070,7 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 		goto out;
 	}
 
-	as_log(ad, "complete: reads_q=%d writes_q=%d changed_batch=%d"
+	as_log_asq(ad, asq, "complete: reads_q=%d writes_q=%d changed_batch=%d"
 		" new_batch=%d switch_queue=%d, dir=%c",
 		asq->nr_queued[1], asq->nr_queued[0], ad->changed_batch,
 		ad->new_batch, ad->switch_queue,
@@ -1251,7 +1264,7 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 	if (RQ_IOC(rq) && RQ_IOC(rq)->aic)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 	ad->nr_dispatched++;
-	as_log(ad, "dispatch req dir=%c nr_dispatched = %d",
+	as_log_asq(ad, asq, "dispatch req dir=%c nr_dispatched = %d",
 			data_dir ? 'R' : 'W', ad->nr_dispatched);
 }
 
@@ -1300,7 +1313,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		}
 		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
-		as_log(ad, "forced dispatch");
+		as_log_asq(ad, asq, "forced dispatch");
 		return dispatched;
 	}
 
@@ -1314,7 +1327,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		|| ad->antic_status == ANTIC_WAIT_REQ
 		|| ad->antic_status == ANTIC_WAIT_NEXT
 		|| ad->changed_batch) {
-		as_log(ad, "no dispatch. read_q=%d, writes_q=%d"
+		as_log_asq(ad, asq, "no dispatch. read_q=%d, writes_q=%d"
 			" ad->antic_status=%d, changed_batch=%d,"
 			" switch_queue=%d new_batch=%d", asq->nr_queued[1],
 			asq->nr_queued[0], ad->antic_status, ad->changed_batch,
@@ -1333,7 +1346,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
-				as_log(ad, "can_anticipate = 1");
+				as_log_asq(ad, asq, "can_anticipate = 1");
 				as_antic_waitreq(ad);
 				return 0;
 			}
@@ -1353,7 +1366,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
-	as_log(ad, "select a fresh batch and request");
+	as_log_asq(ad, asq, "select a fresh batch and request");
 
 	if (reads) {
 		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
@@ -1369,7 +1382,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
-		as_log(ad, "new batch dir is sync");
+		as_log_asq(ad, asq, "new batch dir is sync");
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
 		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
@@ -1394,7 +1407,7 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
-		as_log(ad, "new batch dir is async");
+		as_log_asq(ad, asq, "new batch dir is async");
 		asq->current_write_count = asq->write_batch_count;
 		asq->write_batch_idled = 0;
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
@@ -1457,7 +1470,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 	rq->elevator_private = as_get_io_context(q->node);
 
 	asq->nr_queued[data_dir]++;
-	as_log(ad, "add a %c request read_q=%d write_q=%d",
+	as_log_asq(ad, asq, "add a %c request read_q=%d write_q=%d",
 			data_dir ? 'R' : 'W', asq->nr_queued[1],
 			asq->nr_queued[0]);
 
@@ -1616,6 +1629,7 @@ static void *as_alloc_as_queue(struct request_queue *q,
 
 	if (asq->write_batch_count < 2)
 		asq->write_batch_count = 2;
+	asq->ioq = ioq;
 out:
 	return asq;
 }
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 326f955..baa45c6 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -173,6 +173,119 @@ static void bfq_find_matching_entity(struct io_entity **entity,
 	}
 }
 
+static inline struct io_group *io_entity_to_iog(struct io_entity *entity)
+{
+	struct io_group *iog = NULL;
+
+	BUG_ON(entity == NULL);
+	if (entity->my_sched_data != NULL)
+		iog = container_of(entity, struct io_group, entity);
+	return iog;
+}
+
+/* Returns parent group of io group */
+static inline struct io_group *iog_parent(struct io_group *iog)
+{
+	struct io_group *piog;
+
+	if (!iog->entity.sched_data)
+		return NULL;
+
+	/*
+	 * Not following entity->parent pointer as for top level groups
+	 * this pointer is NULL.
+	 */
+	piog = container_of(iog->entity.sched_data, struct io_group,
+					sched_data);
+	return piog;
+}
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+static void io_group_path(struct io_group *iog, char *buf, int buflen)
+{
+	unsigned short id = iog->iocg_id;
+	struct cgroup_subsys_state *css;
+
+	rcu_read_lock();
+
+	if (!id)
+		goto out;
+
+	css = css_lookup(&io_subsys, id);
+	if (!css)
+		goto out;
+
+	if (!css_tryget(css))
+		goto out;
+
+	cgroup_path(css->cgroup, buf, buflen);
+
+	css_put(css);
+
+	rcu_read_unlock();
+	return;
+out:
+	rcu_read_unlock();
+	buf[0] = '\0';
+	return;
+}
+
+/*
+ * An entity has been freshly added to active tree. Either it came from
+ * idle tree or it was not on any of the trees. Do the accounting.
+ */
+static inline void bfq_account_for_entity_addition(struct io_entity *entity)
+{
+	struct io_group *iog = io_entity_to_iog(entity);
+
+	if (iog) {
+		struct elv_fq_data *efqd;
+
+		/*
+		 * Keep track of how many times a group has been added
+		 * to active tree.
+		 */
+		iog->queue++;
+		iog->queue_start = jiffies;
+
+		/* Log group addition event */
+		rcu_read_lock();
+		efqd = rcu_dereference(iog->key);
+		if (efqd)
+			elv_log_iog(efqd, iog, "add group weight=%ld",
+					iog->entity.weight);
+		rcu_read_unlock();
+	}
+}
+
+/*
+ * An entity got removed from active tree and either went to idle tree or
+ * not is on any of the tree. Do the accouting
+ */
+static inline void bfq_account_for_entity_deletion(struct io_entity *entity)
+{
+	struct io_group *iog = io_entity_to_iog(entity);
+
+	if (iog) {
+		struct elv_fq_data *efqd;
+
+		iog->dequeue++;
+		/* Keep a track of how long group was on active tree */
+		iog->queue_duration += jiffies_to_msecs(jiffies -
+						iog->queue_start);
+		iog->queue_start = 0;
+
+		/* Log group deletion event */
+		rcu_read_lock();
+		efqd = rcu_dereference(iog->key);
+		if (efqd)
+			elv_log_iog(efqd, iog, "del group weight=%ld",
+					iog->entity.weight);
+		rcu_read_unlock();
+	}
+}
+#endif
+
 #else /* GROUP_IOSCHED */
 #define for_each_entity(entity)	\
 	for (; entity != NULL; entity = NULL)
@@ -200,6 +313,12 @@ static void bfq_find_matching_entity(struct io_entity **entity,
 					struct io_entity **new_entity)
 {
 }
+
+static inline struct io_group *io_entity_to_iog(struct io_entity *entity)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 /*
@@ -633,6 +752,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 {
 	struct io_sched_data *sd = entity->sched_data;
 	struct io_service_tree *st = io_entity_service_tree(entity);
+	int newly_added = 0;
 
 	if (entity == sd->active_entity) {
 		BUG_ON(entity->tree != NULL);
@@ -659,6 +779,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 		bfq_idle_extract(st, entity);
 		entity->start = bfq_gt(st->vtime, entity->finish) ?
 				       st->vtime : entity->finish;
+		newly_added = 1;
 	} else {
 		/*
 		 * The finish time of the entity may be invalid, and
@@ -671,6 +792,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 
 		BUG_ON(entity->on_st);
 		entity->on_st = 1;
+		newly_added = 1;
 	}
 
 	st = __bfq_entity_update_prio(st, entity);
@@ -708,6 +830,10 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 		bfq_calc_finish(entity, entity->budget);
 	}
 	bfq_active_insert(st, entity);
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	if (newly_added)
+		bfq_account_for_entity_addition(entity);
+#endif
 }
 
 /**
@@ -778,6 +904,9 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
 	BUG_ON(sd->active_entity == entity);
 	BUG_ON(sd->next_active == entity);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	bfq_account_for_entity_deletion(entity);
+#endif
 	return ret;
 }
 
@@ -1336,6 +1465,67 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 	return 0;
 }
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
+			struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg = NULL;
+	struct io_group *iog = NULL;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+	spin_lock_irq(&iocg->lock);
+	/* Loop through all the io groups and print statistics */
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and
+		 * waiting to be reclaimed upon cgoup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev), iog->queue,
+					iog->queue_duration);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+
+static int io_cgroup_disk_dequeue_read(struct cgroup *cgroup,
+			struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg = NULL;
+	struct io_group *iog = NULL;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+	spin_lock_irq(&iocg->lock);
+	/* Loop through all the io groups and print statistics */
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and
+		 * waiting to be reclaimed upon cgoup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev), iog->dequeue);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+#endif
+
 /**
  * bfq_group_chain_alloc - allocate a chain of groups.
  * @bfqd: queue descriptor.
@@ -1393,6 +1583,10 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		blk_init_request_list(&iog->rl);
 		elv_io_group_congestion_threshold(q, iog);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+		io_group_path(iog, iog->path, sizeof(iog->path));
+#endif
+
 		if (leaf == NULL) {
 			leaf = iog;
 			prev = leaf;
@@ -1912,6 +2106,16 @@ struct cftype bfqio_files[] = {
 		.name = "disk_sectors",
 		.read_seq_string = io_cgroup_disk_sectors_read,
 	},
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		.name = "disk_queue",
+		.read_seq_string = io_cgroup_disk_queue_read,
+	},
+	{
+		.name = "disk_dequeue",
+		.read_seq_string = io_cgroup_disk_dequeue_read,
+	},
+#endif
 };
 
 int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
@@ -2250,6 +2454,7 @@ struct cgroup_subsys io_subsys = {
 	.destroy = iocg_destroy,
 	.populate = iocg_populate,
 	.subsys_id = io_subsys_id,
+	.use_id = 1,
 };
 
 /*
@@ -2541,6 +2746,22 @@ EXPORT_SYMBOL(elv_get_slice_idle);
 void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
 {
 	entity_served(&ioq->entity, served, ioq->nr_sectors);
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+		{
+			struct elv_fq_data *efqd = ioq->efqd;
+			struct io_group *iog = ioq_to_io_group(ioq);
+			elv_log_ioq(efqd, ioq, "ioq served: QSt=0x%lx QSs=0x%lx"
+				" QTt=0x%lx QTs=0x%lx GTt=0x%lx "
+				" GTs=0x%lx rq_queued=%d",
+				served, ioq->nr_sectors,
+				ioq->entity.total_service,
+				ioq->entity.total_sector_service,
+				iog->entity.total_service,
+				iog->entity.total_sector_service,
+				ioq->nr_queued);
+		}
+#endif
 }
 
 /* Tells whether ioq is queued in root group or not */
@@ -2918,10 +3139,30 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 	if (ioq) {
 		struct io_group *iog = ioq_to_io_group(ioq);
 		elv_log_ioq(efqd, ioq, "set_active, busy=%d ioprio=%d"
-				" weight=%ld group_weight=%ld",
+				" weight=%ld rq_queued=%d group_weight=%ld",
 				efqd->busy_queues,
 				ioq->entity.ioprio, ioq->entity.weight,
-				iog_weight(iog));
+				ioq->nr_queued, iog_weight(iog));
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+			{
+				int nr_active = 0;
+				struct io_group *parent = NULL;
+
+				parent = iog_parent(iog);
+				if (parent)
+					nr_active = elv_iog_nr_active(parent);
+
+				elv_log_ioq(efqd, ioq, "set_active, ioq"
+				" nrgrps=%d QTt=0x%lx QTs=0x%lx GTt=0x%lx "
+				" GTs=0x%lx rq_queued=%d", nr_active,
+				ioq->entity.total_service,
+				ioq->entity.total_sector_service,
+				iog->entity.total_service,
+				iog->entity.total_sector_service,
+				ioq->nr_queued);
+			}
+#endif
 		ioq->slice_end = 0;
 
 		elv_clear_ioq_wait_request(ioq);
@@ -3002,6 +3243,21 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
 		struct io_group *iog = ioq_to_io_group(ioq);
 		iog->busy_rt_queues++;
 	}
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "add to busy: QTt=0x%lx QTs=0x%lx"
+			" GTt=0x%lx GTs=0x%lx rq_queued=%d",
+			ioq->entity.total_service,
+			ioq->entity.total_sector_service,
+			iog->entity.total_service,
+			iog->entity.total_sector_service,
+			ioq->nr_queued);
+	}
+#else
+	elv_log_ioq(efqd, ioq, "add to busy");
+#endif
 }
 
 void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
@@ -3011,7 +3267,21 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 
 	BUG_ON(!elv_ioq_busy(ioq));
 	BUG_ON(ioq->nr_queued);
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "del from busy: QTt=0x%lx "
+			"QTs=0x%lx ioq GTt=0x%lx GTs=0x%lx "
+			"rq_queued=%d",
+			ioq->entity.total_service,
+			ioq->entity.total_sector_service,
+			iog->entity.total_service,
+			iog->entity.total_sector_service,
+			ioq->nr_queued);
+	}
+#else
 	elv_log_ioq(efqd, ioq, "del from busy");
+#endif
 	elv_clear_ioq_busy(ioq);
 	BUG_ON(efqd->busy_queues == 0);
 	efqd->busy_queues--;
@@ -3258,6 +3528,7 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 
 	elv_ioq_update_io_thinktime(ioq);
 	elv_ioq_update_idle_window(q->elevator, ioq, rq);
+	elv_log_ioq(efqd, ioq, "add rq: rq_queued=%d", ioq->nr_queued);
 
 	if (ioq == elv_active_ioq(q->elevator)) {
 		/*
@@ -3492,7 +3763,7 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	}
 
 	/* We are waiting for this queue to become busy before it expires.*/
-	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
+	if (elv_ioq_wait_busy(ioq)) {
 		ioq = NULL;
 		goto keep_queue;
 	}
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 7102455..f7d6092 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -265,6 +265,23 @@ struct io_group {
 	/* request list associated with the group */
 	struct request_list rl;
 	struct rcu_head rcu_head;
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	/* How many times this group has been added to active tree */
+	unsigned long queue;
+
+	/* How long this group remained on active tree, in ms */
+	unsigned long queue_duration;
+
+	/* When was this group added to active tree */
+	unsigned long queue_start;
+
+	/* How many times this group has been removed from active tree */
+	unsigned long dequeue;
+
+	/* Store cgroup path */
+	char path[128];
+#endif
 };
 
 struct io_policy_node {
@@ -367,10 +384,29 @@ extern int elv_slice_idle;
 extern int elv_slice_async;
 
 /* Logging facilities. */
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+#define elv_log_ioq(efqd, ioq, fmt, args...) \
+{								\
+	blk_add_trace_msg((efqd)->queue, "elv%d%c %s " fmt, (ioq)->pid,	\
+			elv_ioq_sync(ioq) ? 'S' : 'A', \
+			ioq_to_io_group(ioq)->path, ##args); \
+}
+
+#define elv_log_iog(efqd, iog, fmt, args...) \
+{                                                                      \
+	blk_add_trace_msg((efqd)->queue, "elv %s " fmt, (iog)->path, ##args); \
+}
+
+#else
 #define elv_log_ioq(efqd, ioq, fmt, args...) \
 	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
 				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
 
+#define elv_log_iog(efqd, iog, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
+
+#endif
+
 #define elv_log(efqd, fmt, args...) \
 	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
 
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 19/20] io-controller: Debug hierarchical IO scheduling
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o A small debugging aid for hierarchical IO scheduling.

o Enabled under CONFIG_DEBUG_GROUP_IOSCHED.

o Currently it emits additional debug messages in the blktrace output, which
  helps a great deal when debugging a hierarchical setup. It also creates the
  additional cgroup interfaces io.disk_queue and io.disk_dequeue to export
  some more debugging data.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched |   10 ++-
 block/as-iosched.c    |   50 ++++++---
 block/elevator-fq.c   |  277 ++++++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h   |   36 +++++++
 4 files changed, 351 insertions(+), 22 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 0677099..79f188c 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -140,6 +140,14 @@ config TRACK_ASYNC_CONTEXT
 	  request, original owner of the bio is decided by using io tracking
 	  patches otherwise we continue to attribute the request to the
 	  submitting thread.
-endmenu
 
+config DEBUG_GROUP_IOSCHED
+	bool "Debug Hierarchical Scheduling support"
+	depends on CGROUPS && GROUP_IOSCHED
+	default n
+	---help---
+	  Enable some debugging hooks for hierarchical scheduling support.
+	  Currently it just outputs more information in blktrace output.
+
+endmenu
 endif
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 68200b3..42dee4c 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -78,6 +78,7 @@ enum anticipation_status {
 };
 
 struct as_queue {
+	struct io_queue *ioq;
 	/*
 	 * requests (as_rq s) are present on both sort_list and fifo_list
 	 */
@@ -162,6 +163,17 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+#define as_log_asq(ad, asq, fmt, args...)				\
+{									\
+	blk_add_trace_msg((ad)->q, "as %s " fmt,			\
+			ioq_to_io_group((asq)->ioq)->path, ##args);	\
+}
+#else
+#define as_log_asq(ad, asq, fmt, args...) \
+	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
+#endif
+
 #define as_log(ad, fmt, args...)        \
 	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
 
@@ -225,7 +237,7 @@ static void as_save_batch_context(struct as_data *ad, struct as_queue *asq)
 	}
 
 out:
-	as_log(ad, "save batch: dir=%c time_left=%d changed_batch=%d"
+	as_log_asq(ad, asq, "save batch: dir=%c time_left=%d changed_batch=%d"
 			" new_batch=%d, antic_status=%d",
 			ad->batch_data_dir ? 'R' : 'W',
 			asq->current_batch_time_left,
@@ -247,8 +259,8 @@ static void as_restore_batch_context(struct as_data *ad, struct as_queue *asq)
 						asq->current_batch_time_left;
 	/* restore asq batch_data_dir info */
 	ad->batch_data_dir = asq->saved_batch_data_dir;
-	as_log(ad, "restore batch: dir=%c time=%d reads_q=%d writes_q=%d"
-			" ad->antic_status=%d",
+	as_log_asq(ad, asq, "restore batch: dir=%c time=%d reads_q=%d"
+			" writes_q=%d ad->antic_status=%d",
 			ad->batch_data_dir ? 'R' : 'W',
 			asq->current_batch_time_left,
 			asq->nr_queued[1], asq->nr_queued[0],
@@ -277,8 +289,8 @@ static int as_expire_ioq(struct request_queue *q, void *sched_queue,
 	int status = ad->antic_status;
 	struct as_queue *asq = sched_queue;
 
-	as_log(ad, "as_expire_ioq slice_expired=%d, force=%d", slice_expired,
-		force);
+	as_log_asq(ad, asq, "as_expire_ioq slice_expired=%d, force=%d",
+			slice_expired, force);
 
 	/* Forced expiry. We don't have a choice */
 	if (force) {
@@ -1019,9 +1031,10 @@ static void update_write_batch(struct as_data *ad, struct request *rq)
 	if (write_time < 0)
 		write_time = 0;
 
-	as_log(ad, "upd write: write_time=%d batch=%d write_batch_idled=%d"
-			" current_write_count=%d", write_time, batch,
-			asq->write_batch_idled, asq->current_write_count);
+	as_log_asq(ad, asq, "upd write: write_time=%d batch=%d"
+			" write_batch_idled=%d current_write_count=%d",
+			write_time, batch, asq->write_batch_idled,
+			asq->current_write_count);
 
 	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
@@ -1038,7 +1051,7 @@ static void update_write_batch(struct as_data *ad, struct request *rq)
 	if (asq->write_batch_count < 1)
 		asq->write_batch_count = 1;
 
-	as_log(ad, "upd write count=%d", asq->write_batch_count);
+	as_log_asq(ad, asq, "upd write count=%d", asq->write_batch_count);
 }
 
 /*
@@ -1057,7 +1070,7 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 		goto out;
 	}
 
-	as_log(ad, "complete: reads_q=%d writes_q=%d changed_batch=%d"
+	as_log_asq(ad, asq, "complete: reads_q=%d writes_q=%d changed_batch=%d"
 		" new_batch=%d switch_queue=%d, dir=%c",
 		asq->nr_queued[1], asq->nr_queued[0], ad->changed_batch,
 		ad->new_batch, ad->switch_queue,
@@ -1251,7 +1264,7 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 	if (RQ_IOC(rq) && RQ_IOC(rq)->aic)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 	ad->nr_dispatched++;
-	as_log(ad, "dispatch req dir=%c nr_dispatched = %d",
+	as_log_asq(ad, asq, "dispatch req dir=%c nr_dispatched = %d",
 			data_dir ? 'R' : 'W', ad->nr_dispatched);
 }
 
@@ -1300,7 +1313,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		}
 		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
-		as_log(ad, "forced dispatch");
+		as_log_asq(ad, asq, "forced dispatch");
 		return dispatched;
 	}
 
@@ -1314,7 +1327,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		|| ad->antic_status == ANTIC_WAIT_REQ
 		|| ad->antic_status == ANTIC_WAIT_NEXT
 		|| ad->changed_batch) {
-		as_log(ad, "no dispatch. read_q=%d, writes_q=%d"
+		as_log_asq(ad, asq, "no dispatch. read_q=%d, writes_q=%d"
 			" ad->antic_status=%d, changed_batch=%d,"
 			" switch_queue=%d new_batch=%d", asq->nr_queued[1],
 			asq->nr_queued[0], ad->antic_status, ad->changed_batch,
@@ -1333,7 +1346,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
-				as_log(ad, "can_anticipate = 1");
+				as_log_asq(ad, asq, "can_anticipate = 1");
 				as_antic_waitreq(ad);
 				return 0;
 			}
@@ -1353,7 +1366,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
-	as_log(ad, "select a fresh batch and request");
+	as_log_asq(ad, asq, "select a fresh batch and request");
 
 	if (reads) {
 		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
@@ -1369,7 +1382,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
-		as_log(ad, "new batch dir is sync");
+		as_log_asq(ad, asq, "new batch dir is sync");
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
 		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
@@ -1394,7 +1407,7 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
-		as_log(ad, "new batch dir is async");
+		as_log_asq(ad, asq, "new batch dir is async");
 		asq->current_write_count = asq->write_batch_count;
 		asq->write_batch_idled = 0;
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
@@ -1457,7 +1470,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 	rq->elevator_private = as_get_io_context(q->node);
 
 	asq->nr_queued[data_dir]++;
-	as_log(ad, "add a %c request read_q=%d write_q=%d",
+	as_log_asq(ad, asq, "add a %c request read_q=%d write_q=%d",
 			data_dir ? 'R' : 'W', asq->nr_queued[1],
 			asq->nr_queued[0]);
 
@@ -1616,6 +1629,7 @@ static void *as_alloc_as_queue(struct request_queue *q,
 
 	if (asq->write_batch_count < 2)
 		asq->write_batch_count = 2;
+	asq->ioq = ioq;
 out:
 	return asq;
 }
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 326f955..baa45c6 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -173,6 +173,119 @@ static void bfq_find_matching_entity(struct io_entity **entity,
 	}
 }
 
+static inline struct io_group *io_entity_to_iog(struct io_entity *entity)
+{
+	struct io_group *iog = NULL;
+
+	BUG_ON(entity == NULL);
+	if (entity->my_sched_data != NULL)
+		iog = container_of(entity, struct io_group, entity);
+	return iog;
+}
+
+/* Returns parent group of io group */
+static inline struct io_group *iog_parent(struct io_group *iog)
+{
+	struct io_group *piog;
+
+	if (!iog->entity.sched_data)
+		return NULL;
+
+	/*
+	 * Not following entity->parent pointer as for top level groups
+	 * this pointer is NULL.
+	 */
+	piog = container_of(iog->entity.sched_data, struct io_group,
+					sched_data);
+	return piog;
+}
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+static void io_group_path(struct io_group *iog, char *buf, int buflen)
+{
+	unsigned short id = iog->iocg_id;
+	struct cgroup_subsys_state *css;
+
+	rcu_read_lock();
+
+	if (!id)
+		goto out;
+
+	css = css_lookup(&io_subsys, id);
+	if (!css)
+		goto out;
+
+	if (!css_tryget(css))
+		goto out;
+
+	cgroup_path(css->cgroup, buf, buflen);
+
+	css_put(css);
+
+	rcu_read_unlock();
+	return;
+out:
+	rcu_read_unlock();
+	buf[0] = '\0';
+	return;
+}
+
+/*
+ * An entity has been freshly added to active tree. Either it came from
+ * idle tree or it was not on any of the trees. Do the accounting.
+ */
+static inline void bfq_account_for_entity_addition(struct io_entity *entity)
+{
+	struct io_group *iog = io_entity_to_iog(entity);
+
+	if (iog) {
+		struct elv_fq_data *efqd;
+
+		/*
+		 * Keep track of how many times a group has been added
+		 * to active tree.
+		 */
+		iog->queue++;
+		iog->queue_start = jiffies;
+
+		/* Log group addition event */
+		rcu_read_lock();
+		efqd = rcu_dereference(iog->key);
+		if (efqd)
+			elv_log_iog(efqd, iog, "add group weight=%ld",
+					iog->entity.weight);
+		rcu_read_unlock();
+	}
+}
+
+/*
+ * An entity got removed from active tree and either went to idle tree or
+ * not is on any of the tree. Do the accouting
+ */
+static inline void bfq_account_for_entity_deletion(struct io_entity *entity)
+{
+	struct io_group *iog = io_entity_to_iog(entity);
+
+	if (iog) {
+		struct elv_fq_data *efqd;
+
+		iog->dequeue++;
+		/* Keep a track of how long group was on active tree */
+		iog->queue_duration += jiffies_to_msecs(jiffies -
+						iog->queue_start);
+		iog->queue_start = 0;
+
+		/* Log group deletion event */
+		rcu_read_lock();
+		efqd = rcu_dereference(iog->key);
+		if (efqd)
+			elv_log_iog(efqd, iog, "del group weight=%ld",
+					iog->entity.weight);
+		rcu_read_unlock();
+	}
+}
+#endif
+
 #else /* GROUP_IOSCHED */
 #define for_each_entity(entity)	\
 	for (; entity != NULL; entity = NULL)
@@ -200,6 +313,12 @@ static void bfq_find_matching_entity(struct io_entity **entity,
 					struct io_entity **new_entity)
 {
 }
+
+static inline struct io_group *io_entity_to_iog(struct io_entity *entity)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 /*
@@ -633,6 +752,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 {
 	struct io_sched_data *sd = entity->sched_data;
 	struct io_service_tree *st = io_entity_service_tree(entity);
+	int newly_added = 0;
 
 	if (entity == sd->active_entity) {
 		BUG_ON(entity->tree != NULL);
@@ -659,6 +779,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 		bfq_idle_extract(st, entity);
 		entity->start = bfq_gt(st->vtime, entity->finish) ?
 				       st->vtime : entity->finish;
+		newly_added = 1;
 	} else {
 		/*
 		 * The finish time of the entity may be invalid, and
@@ -671,6 +792,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 
 		BUG_ON(entity->on_st);
 		entity->on_st = 1;
+		newly_added = 1;
 	}
 
 	st = __bfq_entity_update_prio(st, entity);
@@ -708,6 +830,10 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 		bfq_calc_finish(entity, entity->budget);
 	}
 	bfq_active_insert(st, entity);
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	if (newly_added)
+		bfq_account_for_entity_addition(entity);
+#endif
 }
 
 /**
@@ -778,6 +904,9 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
 	BUG_ON(sd->active_entity == entity);
 	BUG_ON(sd->next_active == entity);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	bfq_account_for_entity_deletion(entity);
+#endif
 	return ret;
 }
 
@@ -1336,6 +1465,67 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 	return 0;
 }
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
+			struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg = NULL;
+	struct io_group *iog = NULL;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+	spin_lock_irq(&iocg->lock);
+	/* Loop through all the io groups and print statistics */
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and
+		 * waiting to be reclaimed upon cgoup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev), iog->queue,
+					iog->queue_duration);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+
+static int io_cgroup_disk_dequeue_read(struct cgroup *cgroup,
+			struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg = NULL;
+	struct io_group *iog = NULL;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+	spin_lock_irq(&iocg->lock);
+	/* Loop through all the io groups and print statistics */
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and
+		 * waiting to be reclaimed upon cgoup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev), iog->dequeue);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+#endif
+
 /**
  * bfq_group_chain_alloc - allocate a chain of groups.
  * @bfqd: queue descriptor.
@@ -1393,6 +1583,10 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		blk_init_request_list(&iog->rl);
 		elv_io_group_congestion_threshold(q, iog);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+		io_group_path(iog, iog->path, sizeof(iog->path));
+#endif
+
 		if (leaf == NULL) {
 			leaf = iog;
 			prev = leaf;
@@ -1912,6 +2106,16 @@ struct cftype bfqio_files[] = {
 		.name = "disk_sectors",
 		.read_seq_string = io_cgroup_disk_sectors_read,
 	},
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		.name = "disk_queue",
+		.read_seq_string = io_cgroup_disk_queue_read,
+	},
+	{
+		.name = "disk_dequeue",
+		.read_seq_string = io_cgroup_disk_dequeue_read,
+	},
+#endif
 };
 
 int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
@@ -2250,6 +2454,7 @@ struct cgroup_subsys io_subsys = {
 	.destroy = iocg_destroy,
 	.populate = iocg_populate,
 	.subsys_id = io_subsys_id,
+	.use_id = 1,
 };
 
 /*
@@ -2541,6 +2746,22 @@ EXPORT_SYMBOL(elv_get_slice_idle);
 void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
 {
 	entity_served(&ioq->entity, served, ioq->nr_sectors);
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+		{
+			struct elv_fq_data *efqd = ioq->efqd;
+			struct io_group *iog = ioq_to_io_group(ioq);
+			elv_log_ioq(efqd, ioq, "ioq served: QSt=0x%lx QSs=0x%lx"
+				" QTt=0x%lx QTs=0x%lx GTt=0x%lx "
+				" GTs=0x%lx rq_queued=%d",
+				served, ioq->nr_sectors,
+				ioq->entity.total_service,
+				ioq->entity.total_sector_service,
+				iog->entity.total_service,
+				iog->entity.total_sector_service,
+				ioq->nr_queued);
+		}
+#endif
 }
 
 /* Tells whether ioq is queued in root group or not */
@@ -2918,10 +3139,30 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 	if (ioq) {
 		struct io_group *iog = ioq_to_io_group(ioq);
 		elv_log_ioq(efqd, ioq, "set_active, busy=%d ioprio=%d"
-				" weight=%ld group_weight=%ld",
+				" weight=%ld rq_queued=%d group_weight=%ld",
 				efqd->busy_queues,
 				ioq->entity.ioprio, ioq->entity.weight,
-				iog_weight(iog));
+				ioq->nr_queued, iog_weight(iog));
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+			{
+				int nr_active = 0;
+				struct io_group *parent = NULL;
+
+				parent = iog_parent(iog);
+				if (parent)
+					nr_active = elv_iog_nr_active(parent);
+
+				elv_log_ioq(efqd, ioq, "set_active, ioq"
+				" nrgrps=%d QTt=0x%lx QTs=0x%lx GTt=0x%lx "
+				" GTs=0x%lx rq_queued=%d", nr_active,
+				ioq->entity.total_service,
+				ioq->entity.total_sector_service,
+				iog->entity.total_service,
+				iog->entity.total_sector_service,
+				ioq->nr_queued);
+			}
+#endif
 		ioq->slice_end = 0;
 
 		elv_clear_ioq_wait_request(ioq);
@@ -3002,6 +3243,21 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
 		struct io_group *iog = ioq_to_io_group(ioq);
 		iog->busy_rt_queues++;
 	}
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "add to busy: QTt=0x%lx QTs=0x%lx"
+			" GTt=0x%lx GTs=0x%lx rq_queued=%d",
+			ioq->entity.total_service,
+			ioq->entity.total_sector_service,
+			iog->entity.total_service,
+			iog->entity.total_sector_service,
+			ioq->nr_queued);
+	}
+#else
+	elv_log_ioq(efqd, ioq, "add to busy");
+#endif
 }
 
 void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
@@ -3011,7 +3267,21 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 
 	BUG_ON(!elv_ioq_busy(ioq));
 	BUG_ON(ioq->nr_queued);
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "del from busy: QTt=0x%lx "
+			"QTs=0x%lx ioq GTt=0x%lx GTs=0x%lx "
+			"rq_queued=%d",
+			ioq->entity.total_service,
+			ioq->entity.total_sector_service,
+			iog->entity.total_service,
+			iog->entity.total_sector_service,
+			ioq->nr_queued);
+	}
+#else
 	elv_log_ioq(efqd, ioq, "del from busy");
+#endif
 	elv_clear_ioq_busy(ioq);
 	BUG_ON(efqd->busy_queues == 0);
 	efqd->busy_queues--;
@@ -3258,6 +3528,7 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 
 	elv_ioq_update_io_thinktime(ioq);
 	elv_ioq_update_idle_window(q->elevator, ioq, rq);
+	elv_log_ioq(efqd, ioq, "add rq: rq_queued=%d", ioq->nr_queued);
 
 	if (ioq == elv_active_ioq(q->elevator)) {
 		/*
@@ -3492,7 +3763,7 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	}
 
 	/* We are waiting for this queue to become busy before it expires.*/
-	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
+	if (elv_ioq_wait_busy(ioq)) {
 		ioq = NULL;
 		goto keep_queue;
 	}
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 7102455..f7d6092 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -265,6 +265,23 @@ struct io_group {
 	/* request list associated with the group */
 	struct request_list rl;
 	struct rcu_head rcu_head;
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	/* How many times this group has been added to active tree */
+	unsigned long queue;
+
+	/* How long this group remained on active tree, in ms */
+	unsigned long queue_duration;
+
+	/* When was this group added to active tree */
+	unsigned long queue_start;
+
+	/* How many times this group has been removed from active tree */
+	unsigned long dequeue;
+
+	/* Store cgroup path */
+	char path[128];
+#endif
 };
 
 struct io_policy_node {
@@ -367,10 +384,29 @@ extern int elv_slice_idle;
 extern int elv_slice_async;
 
 /* Logging facilities. */
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+#define elv_log_ioq(efqd, ioq, fmt, args...) \
+{								\
+	blk_add_trace_msg((efqd)->queue, "elv%d%c %s " fmt, (ioq)->pid,	\
+			elv_ioq_sync(ioq) ? 'S' : 'A', \
+			ioq_to_io_group(ioq)->path, ##args); \
+}
+
+#define elv_log_iog(efqd, iog, fmt, args...) \
+{                                                                      \
+	blk_add_trace_msg((efqd)->queue, "elv %s " fmt, (iog)->path, ##args); \
+}
+
+#else
 #define elv_log_ioq(efqd, ioq, fmt, args...) \
 	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
 				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
 
+#define elv_log_iog(efqd, iog, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
+
+#endif
+
 #define elv_log(efqd, fmt, args...) \
 	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
 
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 19/20] io-controller: Debug hierarchical IO scheduling
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o A small debugging aid for hierarchical IO scheduling.

o Enabled under CONFIG_DEBUG_GROUP_IOSCHED.

o Currently it emits additional debug messages in the blktrace output, which
  helps a great deal when debugging a hierarchical setup. It also creates the
  additional cgroup interfaces io.disk_queue and io.disk_dequeue to export
  some more debugging data.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/Kconfig.iosched |   10 ++-
 block/as-iosched.c    |   50 ++++++---
 block/elevator-fq.c   |  277 ++++++++++++++++++++++++++++++++++++++++++++++++-
 block/elevator-fq.h   |   36 +++++++
 4 files changed, 351 insertions(+), 22 deletions(-)

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 0677099..79f188c 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -140,6 +140,14 @@ config TRACK_ASYNC_CONTEXT
 	  request, original owner of the bio is decided by using io tracking
 	  patches otherwise we continue to attribute the request to the
 	  submitting thread.
-endmenu
 
+config DEBUG_GROUP_IOSCHED
+	bool "Debug Hierarchical Scheduling support"
+	depends on CGROUPS && GROUP_IOSCHED
+	default n
+	---help---
+	  Enable some debugging hooks for hierarchical scheduling support.
+	  Currently it just outputs more information in blktrace output.
+
+endmenu
 endif
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 68200b3..42dee4c 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -78,6 +78,7 @@ enum anticipation_status {
 };
 
 struct as_queue {
+	struct io_queue *ioq;
 	/*
 	 * requests (as_rq s) are present on both sort_list and fifo_list
 	 */
@@ -162,6 +163,17 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+#define as_log_asq(ad, asq, fmt, args...)				\
+{									\
+	blk_add_trace_msg((ad)->q, "as %s " fmt,			\
+			ioq_to_io_group((asq)->ioq)->path, ##args);	\
+}
+#else
+#define as_log_asq(ad, asq, fmt, args...) \
+	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
+#endif
+
 #define as_log(ad, fmt, args...)        \
 	blk_add_trace_msg((ad)->q, "as " fmt, ##args)
 
@@ -225,7 +237,7 @@ static void as_save_batch_context(struct as_data *ad, struct as_queue *asq)
 	}
 
 out:
-	as_log(ad, "save batch: dir=%c time_left=%d changed_batch=%d"
+	as_log_asq(ad, asq, "save batch: dir=%c time_left=%d changed_batch=%d"
 			" new_batch=%d, antic_status=%d",
 			ad->batch_data_dir ? 'R' : 'W',
 			asq->current_batch_time_left,
@@ -247,8 +259,8 @@ static void as_restore_batch_context(struct as_data *ad, struct as_queue *asq)
 						asq->current_batch_time_left;
 	/* restore asq batch_data_dir info */
 	ad->batch_data_dir = asq->saved_batch_data_dir;
-	as_log(ad, "restore batch: dir=%c time=%d reads_q=%d writes_q=%d"
-			" ad->antic_status=%d",
+	as_log_asq(ad, asq, "restore batch: dir=%c time=%d reads_q=%d"
+			" writes_q=%d ad->antic_status=%d",
 			ad->batch_data_dir ? 'R' : 'W',
 			asq->current_batch_time_left,
 			asq->nr_queued[1], asq->nr_queued[0],
@@ -277,8 +289,8 @@ static int as_expire_ioq(struct request_queue *q, void *sched_queue,
 	int status = ad->antic_status;
 	struct as_queue *asq = sched_queue;
 
-	as_log(ad, "as_expire_ioq slice_expired=%d, force=%d", slice_expired,
-		force);
+	as_log_asq(ad, asq, "as_expire_ioq slice_expired=%d, force=%d",
+			slice_expired, force);
 
 	/* Forced expiry. We don't have a choice */
 	if (force) {
@@ -1019,9 +1031,10 @@ static void update_write_batch(struct as_data *ad, struct request *rq)
 	if (write_time < 0)
 		write_time = 0;
 
-	as_log(ad, "upd write: write_time=%d batch=%d write_batch_idled=%d"
-			" current_write_count=%d", write_time, batch,
-			asq->write_batch_idled, asq->current_write_count);
+	as_log_asq(ad, asq, "upd write: write_time=%d batch=%d"
+			" write_batch_idled=%d current_write_count=%d",
+			write_time, batch, asq->write_batch_idled,
+			asq->current_write_count);
 
 	if (write_time > batch && !asq->write_batch_idled) {
 		if (write_time > batch * 3)
@@ -1038,7 +1051,7 @@ static void update_write_batch(struct as_data *ad, struct request *rq)
 	if (asq->write_batch_count < 1)
 		asq->write_batch_count = 1;
 
-	as_log(ad, "upd write count=%d", asq->write_batch_count);
+	as_log_asq(ad, asq, "upd write count=%d", asq->write_batch_count);
 }
 
 /*
@@ -1057,7 +1070,7 @@ static void as_completed_request(struct request_queue *q, struct request *rq)
 		goto out;
 	}
 
-	as_log(ad, "complete: reads_q=%d writes_q=%d changed_batch=%d"
+	as_log_asq(ad, asq, "complete: reads_q=%d writes_q=%d changed_batch=%d"
 		" new_batch=%d switch_queue=%d, dir=%c",
 		asq->nr_queued[1], asq->nr_queued[0], ad->changed_batch,
 		ad->new_batch, ad->switch_queue,
@@ -1251,7 +1264,7 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 	if (RQ_IOC(rq) && RQ_IOC(rq)->aic)
 		atomic_inc(&RQ_IOC(rq)->aic->nr_dispatched);
 	ad->nr_dispatched++;
-	as_log(ad, "dispatch req dir=%c nr_dispatched = %d",
+	as_log_asq(ad, asq, "dispatch req dir=%c nr_dispatched = %d",
 			data_dir ? 'R' : 'W', ad->nr_dispatched);
 }
 
@@ -1300,7 +1313,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		}
 		asq->last_check_fifo[BLK_RW_ASYNC] = jiffies;
 
-		as_log(ad, "forced dispatch");
+		as_log_asq(ad, asq, "forced dispatch");
 		return dispatched;
 	}
 
@@ -1314,7 +1327,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 		|| ad->antic_status == ANTIC_WAIT_REQ
 		|| ad->antic_status == ANTIC_WAIT_NEXT
 		|| ad->changed_batch) {
-		as_log(ad, "no dispatch. read_q=%d, writes_q=%d"
+		as_log_asq(ad, asq, "no dispatch. read_q=%d, writes_q=%d"
 			" ad->antic_status=%d, changed_batch=%d,"
 			" switch_queue=%d new_batch=%d", asq->nr_queued[1],
 			asq->nr_queued[0], ad->antic_status, ad->changed_batch,
@@ -1333,7 +1346,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 				goto fifo_expired;
 
 			if (as_can_anticipate(ad, rq)) {
-				as_log(ad, "can_anticipate = 1");
+				as_log_asq(ad, asq, "can_anticipate = 1");
 				as_antic_waitreq(ad);
 				return 0;
 			}
@@ -1353,7 +1366,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
-	as_log(ad, "select a fresh batch and request");
+	as_log_asq(ad, asq, "select a fresh batch and request");
 
 	if (reads) {
 		BUG_ON(RB_EMPTY_ROOT(&asq->sort_list[BLK_RW_SYNC]));
@@ -1369,7 +1382,7 @@ static int as_dispatch_request(struct request_queue *q, int force)
 			ad->changed_batch = 1;
 		}
 		ad->batch_data_dir = BLK_RW_SYNC;
-		as_log(ad, "new batch dir is sync");
+		as_log_asq(ad, asq, "new batch dir is sync");
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_SYNC].next);
 		asq->last_check_fifo[ad->batch_data_dir] = jiffies;
 		goto dispatch_request;
@@ -1394,7 +1407,7 @@ dispatch_writes:
 			ad->new_batch = 0;
 		}
 		ad->batch_data_dir = BLK_RW_ASYNC;
-		as_log(ad, "new batch dir is async");
+		as_log_asq(ad, asq, "new batch dir is async");
 		asq->current_write_count = asq->write_batch_count;
 		asq->write_batch_idled = 0;
 		rq = rq_entry_fifo(asq->fifo_list[BLK_RW_ASYNC].next);
@@ -1457,7 +1470,7 @@ static void as_add_request(struct request_queue *q, struct request *rq)
 	rq->elevator_private = as_get_io_context(q->node);
 
 	asq->nr_queued[data_dir]++;
-	as_log(ad, "add a %c request read_q=%d write_q=%d",
+	as_log_asq(ad, asq, "add a %c request read_q=%d write_q=%d",
 			data_dir ? 'R' : 'W', asq->nr_queued[1],
 			asq->nr_queued[0]);
 
@@ -1616,6 +1629,7 @@ static void *as_alloc_as_queue(struct request_queue *q,
 
 	if (asq->write_batch_count < 2)
 		asq->write_batch_count = 2;
+	asq->ioq = ioq;
 out:
 	return asq;
 }
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 326f955..baa45c6 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -173,6 +173,119 @@ static void bfq_find_matching_entity(struct io_entity **entity,
 	}
 }
 
+static inline struct io_group *io_entity_to_iog(struct io_entity *entity)
+{
+	struct io_group *iog = NULL;
+
+	BUG_ON(entity == NULL);
+	if (entity->my_sched_data != NULL)
+		iog = container_of(entity, struct io_group, entity);
+	return iog;
+}
+
+/* Returns parent group of io group */
+static inline struct io_group *iog_parent(struct io_group *iog)
+{
+	struct io_group *piog;
+
+	if (!iog->entity.sched_data)
+		return NULL;
+
+	/*
+	 * Not following entity->parent pointer as for top level groups
+	 * this pointer is NULL.
+	 */
+	piog = container_of(iog->entity.sched_data, struct io_group,
+					sched_data);
+	return piog;
+}
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+static void io_group_path(struct io_group *iog, char *buf, int buflen)
+{
+	unsigned short id = iog->iocg_id;
+	struct cgroup_subsys_state *css;
+
+	rcu_read_lock();
+
+	if (!id)
+		goto out;
+
+	css = css_lookup(&io_subsys, id);
+	if (!css)
+		goto out;
+
+	if (!css_tryget(css))
+		goto out;
+
+	cgroup_path(css->cgroup, buf, buflen);
+
+	css_put(css);
+
+	rcu_read_unlock();
+	return;
+out:
+	rcu_read_unlock();
+	buf[0] = '\0';
+	return;
+}
+
+/*
+ * An entity has been freshly added to active tree. Either it came from
+ * idle tree or it was not on any of the trees. Do the accounting.
+ */
+static inline void bfq_account_for_entity_addition(struct io_entity *entity)
+{
+	struct io_group *iog = io_entity_to_iog(entity);
+
+	if (iog) {
+		struct elv_fq_data *efqd;
+
+		/*
+		 * Keep track of how many times a group has been added
+		 * to active tree.
+		 */
+		iog->queue++;
+		iog->queue_start = jiffies;
+
+		/* Log group addition event */
+		rcu_read_lock();
+		efqd = rcu_dereference(iog->key);
+		if (efqd)
+			elv_log_iog(efqd, iog, "add group weight=%ld",
+					iog->entity.weight);
+		rcu_read_unlock();
+	}
+}
+
+/*
+ * An entity got removed from active tree and either went to idle tree or
+ * not is on any of the tree. Do the accouting
+ */
+static inline void bfq_account_for_entity_deletion(struct io_entity *entity)
+{
+	struct io_group *iog = io_entity_to_iog(entity);
+
+	if (iog) {
+		struct elv_fq_data *efqd;
+
+		iog->dequeue++;
+		/* Keep a track of how long group was on active tree */
+		iog->queue_duration += jiffies_to_msecs(jiffies -
+						iog->queue_start);
+		iog->queue_start = 0;
+
+		/* Log group deletion event */
+		rcu_read_lock();
+		efqd = rcu_dereference(iog->key);
+		if (efqd)
+			elv_log_iog(efqd, iog, "del group weight=%ld",
+					iog->entity.weight);
+		rcu_read_unlock();
+	}
+}
+#endif
+
 #else /* GROUP_IOSCHED */
 #define for_each_entity(entity)	\
 	for (; entity != NULL; entity = NULL)
@@ -200,6 +313,12 @@ static void bfq_find_matching_entity(struct io_entity **entity,
 					struct io_entity **new_entity)
 {
 }
+
+static inline struct io_group *io_entity_to_iog(struct io_entity *entity)
+{
+	return NULL;
+}
+
 #endif /* GROUP_IOSCHED */
 
 /*
@@ -633,6 +752,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 {
 	struct io_sched_data *sd = entity->sched_data;
 	struct io_service_tree *st = io_entity_service_tree(entity);
+	int newly_added = 0;
 
 	if (entity == sd->active_entity) {
 		BUG_ON(entity->tree != NULL);
@@ -659,6 +779,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 		bfq_idle_extract(st, entity);
 		entity->start = bfq_gt(st->vtime, entity->finish) ?
 				       st->vtime : entity->finish;
+		newly_added = 1;
 	} else {
 		/*
 		 * The finish time of the entity may be invalid, and
@@ -671,6 +792,7 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 
 		BUG_ON(entity->on_st);
 		entity->on_st = 1;
+		newly_added = 1;
 	}
 
 	st = __bfq_entity_update_prio(st, entity);
@@ -708,6 +830,10 @@ static void __bfq_activate_entity(struct io_entity *entity, int add_front)
 		bfq_calc_finish(entity, entity->budget);
 	}
 	bfq_active_insert(st, entity);
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	if (newly_added)
+		bfq_account_for_entity_addition(entity);
+#endif
 }
 
 /**
@@ -778,6 +904,9 @@ int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
 	BUG_ON(sd->active_entity == entity);
 	BUG_ON(sd->next_active == entity);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	bfq_account_for_entity_deletion(entity);
+#endif
 	return ret;
 }
 
@@ -1336,6 +1465,67 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 	return 0;
 }
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
+			struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg = NULL;
+	struct io_group *iog = NULL;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+	spin_lock_irq(&iocg->lock);
+	/* Loop through all the io groups and print statistics */
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and
+		 * waiting to be reclaimed upon cgoup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev), iog->queue,
+					iog->queue_duration);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+
+static int io_cgroup_disk_dequeue_read(struct cgroup *cgroup,
+			struct cftype *cftype, struct seq_file *m)
+{
+	struct io_cgroup *iocg = NULL;
+	struct io_group *iog = NULL;
+	struct hlist_node *n;
+
+	if (!cgroup_lock_live_group(cgroup))
+		return -ENODEV;
+
+	iocg = cgroup_to_io_cgroup(cgroup);
+	spin_lock_irq(&iocg->lock);
+	/* Loop through all the io groups and print statistics */
+	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
+		/*
+		 * There might be groups which are not functional and
+		 * waiting to be reclaimed upon cgoup deletion.
+		 */
+		if (iog->key) {
+			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
+					MINOR(iog->dev), iog->dequeue);
+		}
+	}
+	spin_unlock_irq(&iocg->lock);
+	cgroup_unlock();
+
+	return 0;
+}
+#endif
+
 /**
  * bfq_group_chain_alloc - allocate a chain of groups.
  * @bfqd: queue descriptor.
@@ -1393,6 +1583,10 @@ struct io_group *io_group_chain_alloc(struct request_queue *q, void *key,
 		blk_init_request_list(&iog->rl);
 		elv_io_group_congestion_threshold(q, iog);
 
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+		io_group_path(iog, iog->path, sizeof(iog->path));
+#endif
+
 		if (leaf == NULL) {
 			leaf = iog;
 			prev = leaf;
@@ -1912,6 +2106,16 @@ struct cftype bfqio_files[] = {
 		.name = "disk_sectors",
 		.read_seq_string = io_cgroup_disk_sectors_read,
 	},
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		.name = "disk_queue",
+		.read_seq_string = io_cgroup_disk_queue_read,
+	},
+	{
+		.name = "disk_dequeue",
+		.read_seq_string = io_cgroup_disk_dequeue_read,
+	},
+#endif
 };
 
 int iocg_populate(struct cgroup_subsys *subsys, struct cgroup *cgroup)
@@ -2250,6 +2454,7 @@ struct cgroup_subsys io_subsys = {
 	.destroy = iocg_destroy,
 	.populate = iocg_populate,
 	.subsys_id = io_subsys_id,
+	.use_id = 1,
 };
 
 /*
@@ -2541,6 +2746,22 @@ EXPORT_SYMBOL(elv_get_slice_idle);
 void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
 {
 	entity_served(&ioq->entity, served, ioq->nr_sectors);
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+		{
+			struct elv_fq_data *efqd = ioq->efqd;
+			struct io_group *iog = ioq_to_io_group(ioq);
+			elv_log_ioq(efqd, ioq, "ioq served: QSt=0x%lx QSs=0x%lx"
+				" QTt=0x%lx QTs=0x%lx GTt=0x%lx "
+				" GTs=0x%lx rq_queued=%d",
+				served, ioq->nr_sectors,
+				ioq->entity.total_service,
+				ioq->entity.total_sector_service,
+				iog->entity.total_service,
+				iog->entity.total_sector_service,
+				ioq->nr_queued);
+		}
+#endif
 }
 
 /* Tells whether ioq is queued in root group or not */
@@ -2918,10 +3139,30 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 	if (ioq) {
 		struct io_group *iog = ioq_to_io_group(ioq);
 		elv_log_ioq(efqd, ioq, "set_active, busy=%d ioprio=%d"
-				" weight=%ld group_weight=%ld",
+				" weight=%ld rq_queued=%d group_weight=%ld",
 				efqd->busy_queues,
 				ioq->entity.ioprio, ioq->entity.weight,
-				iog_weight(iog));
+				ioq->nr_queued, iog_weight(iog));
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+			{
+				int nr_active = 0;
+				struct io_group *parent = NULL;
+
+				parent = iog_parent(iog);
+				if (parent)
+					nr_active = elv_iog_nr_active(parent);
+
+				elv_log_ioq(efqd, ioq, "set_active, ioq"
+				" nrgrps=%d QTt=0x%lx QTs=0x%lx GTt=0x%lx "
+				" GTs=0x%lx rq_queued=%d", nr_active,
+				ioq->entity.total_service,
+				ioq->entity.total_sector_service,
+				iog->entity.total_service,
+				iog->entity.total_sector_service,
+				ioq->nr_queued);
+			}
+#endif
 		ioq->slice_end = 0;
 
 		elv_clear_ioq_wait_request(ioq);
@@ -3002,6 +3243,21 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
 		struct io_group *iog = ioq_to_io_group(ioq);
 		iog->busy_rt_queues++;
 	}
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "add to busy: QTt=0x%lx QTs=0x%lx"
+			" GTt=0x%lx GTs=0x%lx rq_queued=%d",
+			ioq->entity.total_service,
+			ioq->entity.total_sector_service,
+			iog->entity.total_service,
+			iog->entity.total_sector_service,
+			ioq->nr_queued);
+	}
+#else
+	elv_log_ioq(efqd, ioq, "add to busy");
+#endif
 }
 
 void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
@@ -3011,7 +3267,21 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 
 	BUG_ON(!elv_ioq_busy(ioq));
 	BUG_ON(ioq->nr_queued);
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	{
+		struct io_group *iog = ioq_to_io_group(ioq);
+		elv_log_ioq(efqd, ioq, "del from busy: QTt=0x%lx "
+			"QTs=0x%lx ioq GTt=0x%lx GTs=0x%lx "
+			"rq_queued=%d",
+			ioq->entity.total_service,
+			ioq->entity.total_sector_service,
+			iog->entity.total_service,
+			iog->entity.total_sector_service,
+			ioq->nr_queued);
+	}
+#else
 	elv_log_ioq(efqd, ioq, "del from busy");
+#endif
 	elv_clear_ioq_busy(ioq);
 	BUG_ON(efqd->busy_queues == 0);
 	efqd->busy_queues--;
@@ -3258,6 +3528,7 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 
 	elv_ioq_update_io_thinktime(ioq);
 	elv_ioq_update_idle_window(q->elevator, ioq, rq);
+	elv_log_ioq(efqd, ioq, "add rq: rq_queued=%d", ioq->nr_queued);
 
 	if (ioq == elv_active_ioq(q->elevator)) {
 		/*
@@ -3492,7 +3763,7 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	}
 
 	/* We are waiting for this queue to become busy before it expires.*/
-	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
+	if (elv_ioq_wait_busy(ioq)) {
 		ioq = NULL;
 		goto keep_queue;
 	}
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 7102455..f7d6092 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -265,6 +265,23 @@ struct io_group {
 	/* request list associated with the group */
 	struct request_list rl;
 	struct rcu_head rcu_head;
+
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+	/* How many times this group has been added to active tree */
+	unsigned long queue;
+
+	/* How long this group remained on active tree, in ms */
+	unsigned long queue_duration;
+
+	/* When was this group added to active tree */
+	unsigned long queue_start;
+
+	/* How many times this group has been removed from active tree */
+	unsigned long dequeue;
+
+	/* Store cgroup path */
+	char path[128];
+#endif
 };
 
 struct io_policy_node {
@@ -367,10 +384,29 @@ extern int elv_slice_idle;
 extern int elv_slice_async;
 
 /* Logging facilities. */
+#ifdef CONFIG_DEBUG_GROUP_IOSCHED
+#define elv_log_ioq(efqd, ioq, fmt, args...) \
+{								\
+	blk_add_trace_msg((efqd)->queue, "elv%d%c %s " fmt, (ioq)->pid,	\
+			elv_ioq_sync(ioq) ? 'S' : 'A', \
+			ioq_to_io_group(ioq)->path, ##args); \
+}
+
+#define elv_log_iog(efqd, iog, fmt, args...) \
+{                                                                      \
+	blk_add_trace_msg((efqd)->queue, "elv %s " fmt, (iog)->path, ##args); \
+}
+
+#else
 #define elv_log_ioq(efqd, ioq, fmt, args...) \
 	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
 				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
 
+#define elv_log_iog(efqd, iog, fmt, args...) \
+	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
+
+#endif
+
 #define elv_log(efqd, fmt, args...) \
 	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
 
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (18 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 19/20] io-controller: Debug hierarchical IO scheduling Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  2009-06-21 15:21   ` [RFC] IO scheduler based io controller (V5) Balbir Singh
  2009-06-29 16:04   ` Vladislav Bolkhovitin
  21 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA

o A debug patch which waits for the next IO from an async queue once the
  queue becomes empty.

o For async writes, the traffic seen by the IO scheduler is not in
  proportion to the weight of the cgroup the task/page belongs to. So if
  two processes are doing heavy writeouts in two cgroups with weights 1000
  and 500 respectively, the IO scheduler does not see more traffic/IO from
  the higher weight cgroup even if it tries to give that cgroup more disk
  time. Effectively, the async queue belonging to the higher weight cgroup
  becomes empty, drops out of contention for the disk, and the lower weight
  cgroup gets to use the disk, giving the impression in user space that the
  higher weight cgroup did not get more disk time.

o This is more of a problem at the page cache level, where a higher weight
  process might be writing out the pages of a lower weight process, and it
  should be fixed there.

o While those issues are being fixed, this debug patch allows one to idle
  on an async queue (tunable via /sys/block/<disk>/queue/async_slice_idle)
  so that once a higher weight queue becomes empty, instead of expiring it
  we wait for the next request from that queue, hence giving it more disk
  time. A higher value of async_slice_idle, around 300ms, gives me
  reasonable numbers on my setup. Note: more disk time does not necessarily
  translate into more IO done, as the higher weight group is not pushing
  enough IO to the io scheduler. This is just a debugging aid to prove
  correctness of the IO controller by providing more disk time to the
  higher weight cgroup.
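
For reference, a minimal user-space sketch (not part of the patch) of setting
the tunable described above. The sysfs path follows this changelog and only
exists with the debug patch applied; on some trees elevator attributes sit
under queue/iosched/ instead, and "sdb" is just a placeholder device.

#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/sdb/queue/async_slice_idle";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	/* ~300ms of idling on an empty async queue, as suggested above */
	fprintf(f, "300\n");
	fclose(f);
	return 0;
}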

Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 block/cfq-iosched.c |    1 +
 block/elevator-fq.c |   43 +++++++++++++++++++++++++++++++++++++++----
 block/elevator-fq.h |    5 +++++
 3 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index b02acf2..959e10a 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -2093,6 +2093,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	ELV_ATTR(slice_sync),
 	ELV_ATTR(slice_async),
 	ELV_ATTR(fairness),
+	ELV_ATTR(async_slice_idle),
 	__ATTR_NULL
 };
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index baa45c6..2ad40eb 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -22,6 +22,7 @@ const int elv_slice_sync = HZ / 10;
 int elv_slice_async = HZ / 25;
 const int elv_slice_async_rq = 2;
 int elv_slice_idle = HZ / 125;
+int elv_async_slice_idle = 0;
 static struct kmem_cache *elv_ioq_pool;
 
 /* Maximum Window length for updating average disk rate */
@@ -2808,6 +2809,8 @@ SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
 EXPORT_SYMBOL(elv_slice_async_show);
 SHOW_FUNCTION(elv_fairness_show, efqd->fairness, 0);
 EXPORT_SYMBOL(elv_fairness_show);
+SHOW_FUNCTION(elv_async_slice_idle_show, efqd->elv_async_slice_idle, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_show);
 #undef SHOW_FUNCTION
 
 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
@@ -2834,6 +2837,8 @@ STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_async_store);
 STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 1, 0);
 EXPORT_SYMBOL(elv_fairness_store);
+STORE_FUNCTION(elv_async_slice_idle_store, &efqd->elv_async_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_store);
 #undef STORE_FUNCTION
 
 void elv_schedule_dispatch(struct request_queue *q)
@@ -3008,8 +3013,8 @@ int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
 		ioq->pid = current->pid;
 
 	ioq->sched_queue = sched_queue;
-	if (is_sync && !elv_ioq_class_idle(ioq))
-		elv_mark_ioq_idle_window(ioq);
+	if (!elv_ioq_class_idle(ioq) && (is_sync || efqd->fairness))
+			elv_mark_ioq_idle_window(ioq);
 	bfq_init_entity(&ioq->entity, iog);
 	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
 	if (is_sync)
@@ -3643,7 +3648,12 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	/*
 	 * idle is disabled, either manually or by past process history
 	 */
-	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+	if ((elv_ioq_sync(ioq) && !efqd->elv_slice_idle) ||
+			!elv_ioq_idle_window(ioq))
+		return;
+
+	/* If this is async queue and async_slice_idle is disabled, return */
+	if (!elv_ioq_sync(ioq) && !efqd->elv_async_slice_idle)
 		return;
 
 	/*
@@ -3652,7 +3662,10 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	 */
 	if (wait_for_busy) {
 		elv_mark_ioq_wait_busy(ioq);
-		sl = efqd->elv_slice_idle;
+		if (elv_ioq_sync(ioq))
+			sl = efqd->elv_slice_idle;
+		else
+			sl = efqd->elv_async_slice_idle;
 		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
 		elv_log_ioq(efqd, ioq, "arm idle: %lu wait busy=1", sl);
 		return;
@@ -3798,6 +3811,8 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
 	 * cfqq.
+	 *
+	 * TODO: This does not seem right across the io groups. Fix it.
 	 */
 	iog = ioq_to_io_group(ioq);
 
@@ -3840,6 +3855,18 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto keep_queue;
 	}
 
+	/*
+	 * If this is an async queue which has time slice left but not
+	 * requests. Wait busy is also not on (may be because when last
+	 * request completed, ioq was not empty). Wait for the request
+	 * completion. May be completion will turn wait busy on.
+	 */
+	if (efqd->fairness && efqd->elv_async_slice_idle && !elv_ioq_sync(ioq)
+	    && elv_ioq_nr_dispatched(ioq)) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
 	slice_expired = 0;
 expire:
 	if (elv_iosched_expire_ioq(q, slice_expired, force))
@@ -4038,6 +4065,13 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			goto done;
 		}
 
+		/* For async queue try to do wait busy */
+		if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
+		    && (elv_iog_nr_active(iog) <= 1)) {
+			elv_ioq_arm_slice_timer(q, 1);
+			goto done;
+		}
+
 		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
@@ -4166,6 +4200,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
 	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->elv_async_slice_idle = elv_async_slice_idle;
 	efqd->hw_tag = 1;
 
 	/* For the time being keep fairness enabled by default */
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index f7d6092..b3193f8 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -359,6 +359,8 @@ struct elv_fq_data {
 	 * users of this functionality.
 	 */
 	unsigned int elv_slice_idle;
+	/* idle slice for async queue */
+	unsigned int elv_async_slice_idle;
 	struct timer_list idle_slice_timer;
 	struct work_struct unplug_work;
 
@@ -685,6 +687,9 @@ extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
 extern ssize_t elv_fairness_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_fairness_store(struct elevator_queue *q, const char *name,
 						size_t count);
+extern ssize_t elv_async_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_async_slice_idle_store(struct elevator_queue *q,
+					const char *name, size_t count);
 
 /* Functions used by elevator.c */
 extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry
  2009-06-19 20:37 ` Vivek Goyal
@ 2009-06-19 20:37   ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron
  Cc: agk, snitzer, vgoyal, akpm, peterz

o A debug patch which waits for the next IO from an async queue once the
  queue becomes empty.

o For async writes, the traffic seen by the IO scheduler is not in
  proportion to the weight of the cgroup the task/page belongs to. So if
  two processes are doing heavy writeouts in two cgroups with weights 1000
  and 500 respectively, the IO scheduler does not see more traffic/IO from
  the higher weight cgroup even if it tries to give that cgroup more disk
  time. Effectively, the async queue belonging to the higher weight cgroup
  becomes empty, drops out of contention for the disk, and the lower weight
  cgroup gets to use the disk, giving the impression in user space that the
  higher weight cgroup did not get more disk time.

o This is more of a problem at the page cache level, where a higher weight
  process might be writing out the pages of a lower weight process, and it
  should be fixed there.

o While those issues are being fixed, this debug patch allows one to idle
  on an async queue (tunable via /sys/block/<disk>/queue/async_slice_idle)
  so that once a higher weight queue becomes empty, instead of expiring it
  we wait for the next request from that queue, hence giving it more disk
  time. A higher value of async_slice_idle, around 300ms, gives me
  reasonable numbers on my setup. Note: more disk time does not necessarily
  translate into more IO done, as the higher weight group is not pushing
  enough IO to the io scheduler. This is just a debugging aid to prove
  correctness of the IO controller by providing more disk time to the
  higher weight cgroup.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/cfq-iosched.c |    1 +
 block/elevator-fq.c |   43 +++++++++++++++++++++++++++++++++++++++----
 block/elevator-fq.h |    5 +++++
 3 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index b02acf2..959e10a 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -2093,6 +2093,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	ELV_ATTR(slice_sync),
 	ELV_ATTR(slice_async),
 	ELV_ATTR(fairness),
+	ELV_ATTR(async_slice_idle),
 	__ATTR_NULL
 };
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index baa45c6..2ad40eb 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -22,6 +22,7 @@ const int elv_slice_sync = HZ / 10;
 int elv_slice_async = HZ / 25;
 const int elv_slice_async_rq = 2;
 int elv_slice_idle = HZ / 125;
+int elv_async_slice_idle = 0;
 static struct kmem_cache *elv_ioq_pool;
 
 /* Maximum Window length for updating average disk rate */
@@ -2808,6 +2809,8 @@ SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
 EXPORT_SYMBOL(elv_slice_async_show);
 SHOW_FUNCTION(elv_fairness_show, efqd->fairness, 0);
 EXPORT_SYMBOL(elv_fairness_show);
+SHOW_FUNCTION(elv_async_slice_idle_show, efqd->elv_async_slice_idle, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_show);
 #undef SHOW_FUNCTION
 
 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
@@ -2834,6 +2837,8 @@ STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_async_store);
 STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 1, 0);
 EXPORT_SYMBOL(elv_fairness_store);
+STORE_FUNCTION(elv_async_slice_idle_store, &efqd->elv_async_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_store);
 #undef STORE_FUNCTION
 
 void elv_schedule_dispatch(struct request_queue *q)
@@ -3008,8 +3013,8 @@ int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
 		ioq->pid = current->pid;
 
 	ioq->sched_queue = sched_queue;
-	if (is_sync && !elv_ioq_class_idle(ioq))
-		elv_mark_ioq_idle_window(ioq);
+	if (!elv_ioq_class_idle(ioq) && (is_sync || efqd->fairness))
+			elv_mark_ioq_idle_window(ioq);
 	bfq_init_entity(&ioq->entity, iog);
 	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
 	if (is_sync)
@@ -3643,7 +3648,12 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	/*
 	 * idle is disabled, either manually or by past process history
 	 */
-	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+	if ((elv_ioq_sync(ioq) && !efqd->elv_slice_idle) ||
+			!elv_ioq_idle_window(ioq))
+		return;
+
+	/* If this is async queue and async_slice_idle is disabled, return */
+	if (!elv_ioq_sync(ioq) && !efqd->elv_async_slice_idle)
 		return;
 
 	/*
@@ -3652,7 +3662,10 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	 */
 	if (wait_for_busy) {
 		elv_mark_ioq_wait_busy(ioq);
-		sl = efqd->elv_slice_idle;
+		if (elv_ioq_sync(ioq))
+			sl = efqd->elv_slice_idle;
+		else
+			sl = efqd->elv_async_slice_idle;
 		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
 		elv_log_ioq(efqd, ioq, "arm idle: %lu wait busy=1", sl);
 		return;
@@ -3798,6 +3811,8 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
 	 * cfqq.
+	 *
+	 * TODO: This does not seem right across the io groups. Fix it.
 	 */
 	iog = ioq_to_io_group(ioq);
 
@@ -3840,6 +3855,18 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto keep_queue;
 	}
 
+	/*
+	 * If this is an async queue which has time slice left but not
+	 * requests. Wait busy is also not on (may be because when last
+	 * request completed, ioq was not empty). Wait for the request
+	 * completion. May be completion will turn wait busy on.
+	 */
+	if (efqd->fairness && efqd->elv_async_slice_idle && !elv_ioq_sync(ioq)
+	    && elv_ioq_nr_dispatched(ioq)) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
 	slice_expired = 0;
 expire:
 	if (elv_iosched_expire_ioq(q, slice_expired, force))
@@ -4038,6 +4065,13 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			goto done;
 		}
 
+		/* For async queue try to do wait busy */
+		if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
+		    && (elv_iog_nr_active(iog) <= 1)) {
+			elv_ioq_arm_slice_timer(q, 1);
+			goto done;
+		}
+
 		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
@@ -4166,6 +4200,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
 	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->elv_async_slice_idle = elv_async_slice_idle;
 	efqd->hw_tag = 1;
 
 	/* For the time being keep fairness enabled by default */
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index f7d6092..b3193f8 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -359,6 +359,8 @@ struct elv_fq_data {
 	 * users of this functionality.
 	 */
 	unsigned int elv_slice_idle;
+	/* idle slice for async queue */
+	unsigned int elv_async_slice_idle;
 	struct timer_list idle_slice_timer;
 	struct work_struct unplug_work;
 
@@ -685,6 +687,9 @@ extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
 extern ssize_t elv_fairness_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_fairness_store(struct elevator_queue *q, const char *name,
 						size_t count);
+extern ssize_t elv_async_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_async_slice_idle_store(struct elevator_queue *q,
+					const char *name, size_t count);
 
 /* Functions used by elevator.c */
 extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry
@ 2009-06-19 20:37   ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew
  Cc: peterz, akpm, snitzer, agk, vgoyal

o A debug patch which waits for the next IO from an async queue once the
  queue becomes empty.

o For async writes, the traffic seen by the IO scheduler is not in
  proportion to the weight of the cgroup the task/page belongs to. So if
  two processes are doing heavy writeouts in two cgroups with weights 1000
  and 500 respectively, the IO scheduler does not see more traffic/IO from
  the higher weight cgroup even if it tries to give that cgroup more disk
  time. Effectively, the async queue belonging to the higher weight cgroup
  becomes empty, drops out of contention for the disk, and the lower weight
  cgroup gets to use the disk, giving the impression in user space that the
  higher weight cgroup did not get more disk time.

o This is more of a problem at the page cache level, where a higher weight
  process might be writing out the pages of a lower weight process, and it
  should be fixed there.

o While those issues are being fixed, this debug patch allows one to idle
  on an async queue (tunable via /sys/block/<disk>/queue/async_slice_idle)
  so that once a higher weight queue becomes empty, instead of expiring it
  we wait for the next request from that queue, hence giving it more disk
  time. A higher value of async_slice_idle, around 300ms, gives me
  reasonable numbers on my setup. Note: more disk time does not necessarily
  translate into more IO done, as the higher weight group is not pushing
  enough IO to the io scheduler. This is just a debugging aid to prove
  correctness of the IO controller by providing more disk time to the
  higher weight cgroup.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/cfq-iosched.c |    1 +
 block/elevator-fq.c |   43 +++++++++++++++++++++++++++++++++++++++----
 block/elevator-fq.h |    5 +++++
 3 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index b02acf2..959e10a 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -2093,6 +2093,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	ELV_ATTR(slice_sync),
 	ELV_ATTR(slice_async),
 	ELV_ATTR(fairness),
+	ELV_ATTR(async_slice_idle),
 	__ATTR_NULL
 };
 
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index baa45c6..2ad40eb 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -22,6 +22,7 @@ const int elv_slice_sync = HZ / 10;
 int elv_slice_async = HZ / 25;
 const int elv_slice_async_rq = 2;
 int elv_slice_idle = HZ / 125;
+int elv_async_slice_idle = 0;
 static struct kmem_cache *elv_ioq_pool;
 
 /* Maximum Window length for updating average disk rate */
@@ -2808,6 +2809,8 @@ SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
 EXPORT_SYMBOL(elv_slice_async_show);
 SHOW_FUNCTION(elv_fairness_show, efqd->fairness, 0);
 EXPORT_SYMBOL(elv_fairness_show);
+SHOW_FUNCTION(elv_async_slice_idle_show, efqd->elv_async_slice_idle, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_show);
 #undef SHOW_FUNCTION
 
 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
@@ -2834,6 +2837,8 @@ STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_async_store);
 STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 1, 0);
 EXPORT_SYMBOL(elv_fairness_store);
+STORE_FUNCTION(elv_async_slice_idle_store, &efqd->elv_async_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_store);
 #undef STORE_FUNCTION
 
 void elv_schedule_dispatch(struct request_queue *q)
@@ -3008,8 +3013,8 @@ int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
 		ioq->pid = current->pid;
 
 	ioq->sched_queue = sched_queue;
-	if (is_sync && !elv_ioq_class_idle(ioq))
-		elv_mark_ioq_idle_window(ioq);
+	if (!elv_ioq_class_idle(ioq) && (is_sync || efqd->fairness))
+			elv_mark_ioq_idle_window(ioq);
 	bfq_init_entity(&ioq->entity, iog);
 	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
 	if (is_sync)
@@ -3643,7 +3648,12 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	/*
 	 * idle is disabled, either manually or by past process history
 	 */
-	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+	if ((elv_ioq_sync(ioq) && !efqd->elv_slice_idle) ||
+			!elv_ioq_idle_window(ioq))
+		return;
+
+	/* If this is async queue and async_slice_idle is disabled, return */
+	if (!elv_ioq_sync(ioq) && !efqd->elv_async_slice_idle)
 		return;
 
 	/*
@@ -3652,7 +3662,10 @@ void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	 */
 	if (wait_for_busy) {
 		elv_mark_ioq_wait_busy(ioq);
-		sl = efqd->elv_slice_idle;
+		if (elv_ioq_sync(ioq))
+			sl = efqd->elv_slice_idle;
+		else
+			sl = efqd->elv_async_slice_idle;
 		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
 		elv_log_ioq(efqd, ioq, "arm idle: %lu wait busy=1", sl);
 		return;
@@ -3798,6 +3811,8 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
 	 * cfqq.
+	 *
+	 * TODO: This does not seem right across the io groups. Fix it.
 	 */
 	iog = ioq_to_io_group(ioq);
 
@@ -3840,6 +3855,18 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto keep_queue;
 	}
 
+	/*
+	 * If this is an async queue which has time slice left but not
+	 * requests. Wait busy is also not on (may be because when last
+	 * request completed, ioq was not empty). Wait for the request
+	 * completion. May be completion will turn wait busy on.
+	 */
+	if (efqd->fairness && efqd->elv_async_slice_idle && !elv_ioq_sync(ioq)
+	    && elv_ioq_nr_dispatched(ioq)) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
 	slice_expired = 0;
 expire:
 	if (elv_iosched_expire_ioq(q, slice_expired, force))
@@ -4038,6 +4065,13 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			goto done;
 		}
 
+		/* For async queue try to do wait busy */
+		if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
+		    && (elv_iog_nr_active(iog) <= 1)) {
+			elv_ioq_arm_slice_timer(q, 1);
+			goto done;
+		}
+
 		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
@@ -4166,6 +4200,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
 	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->elv_async_slice_idle = elv_async_slice_idle;
 	efqd->hw_tag = 1;
 
 	/* For the time being keep fairness enabled by default */
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index f7d6092..b3193f8 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -359,6 +359,8 @@ struct elv_fq_data {
 	 * users of this functionality.
 	 */
 	unsigned int elv_slice_idle;
+	/* idle slice for async queue */
+	unsigned int elv_async_slice_idle;
 	struct timer_list idle_slice_timer;
 	struct work_struct unplug_work;
 
@@ -685,6 +687,9 @@ extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
 extern ssize_t elv_fairness_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_fairness_store(struct elevator_queue *q, const char *name,
 						size_t count);
+extern ssize_t elv_async_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_async_slice_idle_store(struct elevator_queue *q,
+					const char *name, size_t count);
 
 /* Functions used by elevator.c */
 extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
-- 
1.6.0.6

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (19 preceding siblings ...)
  2009-06-19 20:37   ` [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry Vivek Goyal
@ 2009-06-21 15:21   ` Balbir Singh
  2009-06-29 16:04   ` Vladislav Bolkhovitin
  21 siblings, 0 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-21 15:21 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

* Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:18]:

> 
> Hi All,
> 
> Here is the V5 of the IO controller patches generated on top of 2.6.30.
[snip]

> Testing
> =======
>

[snip]

I've not been reading through the discussions in complete detail, but
I see no reference to async reads or aio. In the case of aio, the
submission presumes the context of the user space process. Could you
elaborate on any testing you've done with these cases?
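
For reference, a minimal illustration (not taken from the patch set) of the
kind of aio read being asked about; the file name, sizes and error handling
are placeholders, and it needs -laio to link:

#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd;

	fd = open("/mnt/test/datafile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (io_setup(1, &ctx) < 0 || posix_memalign(&buf, 512, 4096)) {
		fprintf(stderr, "aio setup failed\n");
		return 1;
	}

	/* The submission happens in the context of this task, which is what
	 * the question about attribution of async reads/aio is getting at. */
	io_prep_pread(&cb, fd, buf, 4096, 0);
	if (io_submit(ctx, 1, cbs) != 1) {
		fprintf(stderr, "io_submit failed\n");
		return 1;
	}
	io_getevents(ctx, 1, 1, &ev, NULL);
	io_destroy(ctx);
	close(fd);
	return 0;
}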

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
  2009-06-19 20:37 ` Vivek Goyal
                   ` (21 preceding siblings ...)
  (?)
@ 2009-06-21 15:21 ` Balbir Singh
  2009-06-22 15:30     ` Vivek Goyal
       [not found]   ` <20090621152116.GC3728-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
  -1 siblings, 2 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-21 15:21 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, righi.andrea, m-ikeda, jbaron,
	agk, snitzer, akpm, peterz

* Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:18]:

> 
> Hi All,
> 
> Here is the V5 of the IO controller patches generated on top of 2.6.30.
[snip]

> Testing
> =======
>

[snip]

I've not been reading through the discussions in complete detail, but
I see no reference to async reads or aio. In the case of aio, the
submission presumes the context of the user space process. Could you
elaborate on any testing you've done with these cases?

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 15/20] io-controller: map async requests to appropriate cgroup
       [not found]   ` <1245443858-8487-16-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-22  1:45     ` Gui Jianfeng
  0 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-22  1:45 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Vivek Goyal wrote:
> o So far we were assuming that a bio/rq belongs to the task who is submitting
>   it. It did not hold good in case of async writes. This patch makes use of
>   blkio_cgroup pataches to attribute the aysnc writes to right group instead
>   of task submitting the bio.
> 
> o For sync requests, we continue to assume that io belongs to the task
>   submitting it. Only in case of async requests, we make use of io tracking
>   patches to track the owner cgroup.
> 
> o So far cfq always caches the async queue pointer. With async requests now
>   not necessarily being tied to submitting task io context, caching the
>   pointer will not help for async queues. This patch introduces a new config
>   option CONFIG_TRACK_ASYNC_CONTEXT. If this option is not set, cfq retains
>   old behavior where async queue pointer is cached in task context. If it
>   is not set, async queue pointer is not cached and we take help of bio

Here "If it is not set" should be "If it is set".


-- 
Regards
Gui Jianfeng

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 15/20] io-controller: map async requests to appropriate cgroup
  2009-06-19 20:37   ` Vivek Goyal
  (?)
  (?)
@ 2009-06-22  1:45   ` Gui Jianfeng
       [not found]     ` <4A3EE245.7030409-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
  2009-06-22 15:39       ` Vivek Goyal
  -1 siblings, 2 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-22  1:45 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

Vivek Goyal wrote:
> o So far we were assuming that a bio/rq belongs to the task who is submitting
>   it. It did not hold good in case of async writes. This patch makes use of
>   blkio_cgroup pataches to attribute the aysnc writes to right group instead
>   of task submitting the bio.
> 
> o For sync requests, we continue to assume that io belongs to the task
>   submitting it. Only in case of async requests, we make use of io tracking
>   patches to track the owner cgroup.
> 
> o So far cfq always caches the async queue pointer. With async requests now
>   not necessarily being tied to submitting task io context, caching the
>   pointer will not help for async queues. This patch introduces a new config
>   option CONFIG_TRACK_ASYNC_CONTEXT. If this option is not set, cfq retains
>   old behavior where async queue pointer is cached in task context. If it
>   is not set, async queue pointer is not cached and we take help of bio

Here "If it is not set" should be "If it is set".


-- 
Regards
Gui Jianfeng


^ permalink raw reply	[flat|nested] 176+ messages in thread

* [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups
       [not found]   ` <1245443858-8487-21-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-22  7:44     ` Gui Jianfeng
  0 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-22  7:44 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Preempt the ongoing non-rt ioq if there are rt ioqs waiting to be dispatched
in ancestor or sibling groups. This gives another group's rt ioq a chance
to dispatch ASAP.

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
 block/elevator-fq.c |   44 +++++++++++++++++++++++++++++++++++++++-----
 block/elevator-fq.h |    1 +
 2 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 2ad40eb..80526fd 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -3245,8 +3245,16 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
 	elv_mark_ioq_busy(ioq);
 	efqd->busy_queues++;
 	if (elv_ioq_class_rt(ioq)) {
+		struct io_entity *entity;
 		struct io_group *iog = ioq_to_io_group(ioq);
+
 		iog->busy_rt_queues++;
+		entity = iog->entity.parent;
+
+		for_each_entity(entity) {
+			iog = io_entity_to_iog(entity);
+			iog->sub_busy_rt_queues++;
+		}
 	}
 
 #ifdef CONFIG_DEBUG_GROUP_IOSCHED
@@ -3290,9 +3298,18 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 	elv_clear_ioq_busy(ioq);
 	BUG_ON(efqd->busy_queues == 0);
 	efqd->busy_queues--;
+
 	if (elv_ioq_class_rt(ioq)) {
+		struct io_entity *entity;
 		struct io_group *iog = ioq_to_io_group(ioq);
+
 		iog->busy_rt_queues--;
+		entity = iog->entity.parent;
+
+		for_each_entity(entity) {
+			iog = io_entity_to_iog(entity);
+			iog->sub_busy_rt_queues--;
+		}
 	}
 
 	elv_deactivate_ioq(efqd, ioq, requeue);
@@ -3735,12 +3752,32 @@ int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
 	return ret;
 }
 
+static int check_rt_queue(struct io_queue *ioq)
+{
+	struct io_group *iog;
+	struct io_entity *entity;
+
+	iog = ioq_to_io_group(ioq);
+
+	if (iog->busy_rt_queues)
+		return 1;
+
+	entity = iog->entity.parent;
+
+	for_each_entity(entity) {
+		iog = io_entity_to_iog(entity);
+		if (iog->sub_busy_rt_queues)
+			return 1;
+	}
+
+	return 0;
+}
+
 /* Common layer function to select the next queue to dispatch from */
 void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
-	struct io_group *iog;
 	int slice_expired = 1;
 
 	if (!elv_nr_busy_ioq(q->elevator))
@@ -3811,12 +3848,9 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
 	 * cfqq.
-	 *
-	 * TODO: This does not seem right across the io groups. Fix it.
 	 */
-	iog = ioq_to_io_group(ioq);
 
-	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
+	if (!elv_ioq_class_rt(ioq) && check_rt_queue(ioq)) {
 		/*
 		 * We simulate this as cfqq timed out so that it gets to bank
 		 * the remaining of its time slice.
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index b3193f8..be6c1af 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -248,6 +248,7 @@ struct io_group {
 	 * non-RT cfqq in service when this value is non-zero.
 	 */
 	unsigned int busy_rt_queues;
+	unsigned int sub_busy_rt_queues;
 
 	int deleting;
 	unsigned short iocg_id;
-- 
1.5.4.rc3

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups
  2009-06-19 20:37   ` Vivek Goyal
  (?)
  (?)
@ 2009-06-22  7:44   ` Gui Jianfeng
  2009-06-22 17:21       ` Vivek Goyal
       [not found]     ` <4A3F3648.7080007-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
  -1 siblings, 2 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-22  7:44 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

Preempt the ongoing non-rt ioq if there are rt ioqs waiting to be dispatched
in ancestor or sibling groups. This gives another group's rt ioq a chance
to dispatch ASAP.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
---
 block/elevator-fq.c |   44 +++++++++++++++++++++++++++++++++++++++-----
 block/elevator-fq.h |    1 +
 2 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 2ad40eb..80526fd 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -3245,8 +3245,16 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
 	elv_mark_ioq_busy(ioq);
 	efqd->busy_queues++;
 	if (elv_ioq_class_rt(ioq)) {
+		struct io_entity *entity;
 		struct io_group *iog = ioq_to_io_group(ioq);
+
 		iog->busy_rt_queues++;
+		entity = iog->entity.parent;
+
+		for_each_entity(entity) {
+			iog = io_entity_to_iog(entity);
+			iog->sub_busy_rt_queues++;
+		}
 	}
 
 #ifdef CONFIG_DEBUG_GROUP_IOSCHED
@@ -3290,9 +3298,18 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 	elv_clear_ioq_busy(ioq);
 	BUG_ON(efqd->busy_queues == 0);
 	efqd->busy_queues--;
+
 	if (elv_ioq_class_rt(ioq)) {
+		struct io_entity *entity;
 		struct io_group *iog = ioq_to_io_group(ioq);
+
 		iog->busy_rt_queues--;
+		entity = iog->entity.parent;
+
+		for_each_entity(entity) {
+			iog = io_entity_to_iog(entity);
+			iog->sub_busy_rt_queues--;
+		}
 	}
 
 	elv_deactivate_ioq(efqd, ioq, requeue);
@@ -3735,12 +3752,32 @@ int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
 	return ret;
 }
 
+static int check_rt_queue(struct io_queue *ioq)
+{
+	struct io_group *iog;
+	struct io_entity *entity;
+
+	iog = ioq_to_io_group(ioq);
+
+	if (iog->busy_rt_queues)
+		return 1;
+
+	entity = iog->entity.parent;
+
+	for_each_entity(entity) {
+		iog = io_entity_to_iog(entity);
+		if (iog->sub_busy_rt_queues)
+			return 1;
+	}
+
+	return 0;
+}
+
 /* Common layer function to select the next queue to dispatch from */
 void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
-	struct io_group *iog;
 	int slice_expired = 1;
 
 	if (!elv_nr_busy_ioq(q->elevator))
@@ -3811,12 +3848,9 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
 	 * cfqq.
-	 *
-	 * TODO: This does not seem right across the io groups. Fix it.
 	 */
-	iog = ioq_to_io_group(ioq);
 
-	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
+	if (!elv_ioq_class_rt(ioq) && check_rt_queue(ioq)) {
 		/*
 		 * We simulate this as cfqq timed out so that it gets to bank
 		 * the remaining of its time slice.
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index b3193f8..be6c1af 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -248,6 +248,7 @@ struct io_group {
 	 * non-RT cfqq in service when this value is non-zero.
 	 */
 	unsigned int busy_rt_queues;
+	unsigned int sub_busy_rt_queues;
 
 	int deleting;
 	unsigned short iocg_id;
-- 
1.5.4.rc3



^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]   ` <1245443858-8487-3-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-22  8:46     ` Balbir Singh
  2009-06-30  6:40     ` Gui Jianfeng
  2009-07-01  9:24     ` Gui Jianfeng
  2 siblings, 0 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-22  8:46 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

* Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:20]:

> This is common fair queuing code in elevator layer. This is controlled by
> config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> flat fair queuing support where there is only one group, "root group" and all
> the tasks belong to root group.
> 
> This elevator layer changes are backward compatible. That means any ioscheduler
> using old interfaces will continue to work.
> 
> This code is essentially the CFQ code for fair queuing. The primary difference
> is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
>

The patch is quite long and, to be honest, requires a long time to
review, which I don't mind. I suspect my frequently diverted mind is
likely to miss a lot in a big patch like this. Could you consider
splitting it further if possible? I think you'll notice the number
of reviews will also increase.
 
> Signed-off-by: Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> Signed-off-by: Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
> Signed-off-by: Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
> Signed-off-by: Aristeu Rozanski <aris-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> Signed-off-by: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---
>  block/Kconfig.iosched    |   13 +
>  block/Makefile           |    1 +
>  block/elevator-fq.c      | 2015 ++++++++++++++++++++++++++++++++++++++++++++++
>  block/elevator-fq.h      |  473 +++++++++++
>  block/elevator.c         |   46 +-
>  include/linux/blkdev.h   |    5 +
>  include/linux/elevator.h |   51 ++
>  7 files changed, 2593 insertions(+), 11 deletions(-)
>  create mode 100644 block/elevator-fq.c
>  create mode 100644 block/elevator-fq.h
> 
> diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
> index 7e803fc..3398134 100644
> --- a/block/Kconfig.iosched
> +++ b/block/Kconfig.iosched
> @@ -2,6 +2,19 @@ if BLOCK
> 
>  menu "IO Schedulers"
> 
> +config ELV_FAIR_QUEUING
> +	bool "Elevator Fair Queuing Support"
> +	default n
> +	---help---
> +	  Traditionally only cfq had notion of multiple queues and it did
> +	  fair queuing at its own. With the cgroups and need of controlling
> +	  IO, now even the simple io schedulers like noop, deadline, as will
> +	  have one queue per cgroup and will need hierarchical fair queuing.
> +	  Instead of every io scheduler implementing its own fair queuing
> +	  logic, this option enables fair queuing in elevator layer so that
> +	  other ioschedulers can make use of it.
> +	  If unsure, say N.
> +
>  config IOSCHED_NOOP
>  	bool
>  	default y
> diff --git a/block/Makefile b/block/Makefile
> index e9fa4dd..94bfc6e 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -15,3 +15,4 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
> 
>  obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
>  obj-$(CONFIG_BLK_DEV_INTEGRITY)	+= blk-integrity.o
> +obj-$(CONFIG_ELV_FAIR_QUEUING)	+= elevator-fq.o
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> new file mode 100644
> index 0000000..9357fb0
> --- /dev/null
> +++ b/block/elevator-fq.c
> @@ -0,0 +1,2015 @@
> +/*
> + * BFQ: Hierarchical B-WF2Q+ scheduler.
> + *
> + * Based on ideas and code from CFQ:
> + * Copyright (C) 2003 Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>
> + *
> + * Copyright (C) 2008 Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
> + *		      Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
> + * Copyright (C) 2009 Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> + * 	              Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> + */
> +
> +#include <linux/blkdev.h>
> +#include "elevator-fq.h"
> +#include <linux/blktrace_api.h>
> +
> +/* Values taken from cfq */
> +const int elv_slice_sync = HZ / 10;
> +int elv_slice_async = HZ / 25;
> +const int elv_slice_async_rq = 2;
> +int elv_slice_idle = HZ / 125;
> +static struct kmem_cache *elv_ioq_pool;
> +
> +#define ELV_SLICE_SCALE		(5)
> +#define ELV_HW_QUEUE_MIN	(5)
> +#define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
> +				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
> +
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe);
> +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> +						 int extract);
> +
> +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> +					unsigned short prio)

Why is the return type int and not unsigned int or unsigned long? Can
the return value ever be negative?

> +{
> +	const int base_slice = efqd->elv_slice[sync];
> +
> +	WARN_ON(prio >= IOPRIO_BE_NR);
> +
> +	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
> +}
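
(Worked numbers for the formula above, assuming HZ == 1000: base_slice for a
sync queue is elv_slice_sync = HZ/10 = 100 jiffies, base_slice/ELV_SLICE_SCALE
is 20, and the slice is 100 + 20 * (4 - prio), i.e. 180 jiffies at prio 0,
100 at prio 4 and 40 at prio 7. The (4 - prio) term goes negative above
prio 4, but the result stays positive for the default slice values.)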
> +
> +static inline int
> +elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
> +{
> +	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
> +}
> +
> +/* Mainly the BFQ scheduling code Follows */
> +
> +/*
> + * Shift for timestamp calculations.  This actually limits the maximum
> + * service allowed in one timestamp delta (small shift values increase it),
> + * the maximum total weight that can be used for the queues in the system
> + * (big shift values increase it), and the period of virtual time wraparounds.
> + */
> +#define WFQ_SERVICE_SHIFT	22
> +
> +/**
> + * bfq_gt - compare two timestamps.
> + * @a: first ts.
> + * @b: second ts.
> + *
> + * Return @a > @b, dealing with wrapping correctly.
> + */
> +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> +{
> +	return (s64)(a - b) > 0;
> +}
> +

a and b are of type u64, but cast to s64 to deal with wrapping?
Correct?
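
(Worked example of the wrap handling: with a == 1 just after a timestamp
wraparound and b == 2^64 - 1 just before it, the unsigned subtraction a - b
wraps to 2, and (s64)2 > 0, so bfq_gt(a, b) correctly treats a as the later
timestamp.)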

> +/**
> + * bfq_delta - map service into the virtual time domain.
> + * @service: amount of service.
> + * @weight: scale factor.
> + */
> +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> +					bfq_weight_t weight)
> +{
> +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> +

Why is the cast required? Does the compiler complain? service is
already of the correct type.

> +	do_div(d, weight);

On a 64 bit system both d and weight are 64 bit, but on a 32 bit system
weight is 32 bits. do_div expects a 64 bit dividend and 32 bit divisor
- no?
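
For reference (standard kernel semantics, not something added by this patch):
do_div(n, base) takes a 64-bit lvalue n and a 32-bit base, leaves the
quotient in n and evaluates to the 32-bit remainder, e.g.

	u64 n = 10;
	u32 rem = do_div(n, 3);	/* afterwards n == 3, rem == 1 */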

> +	return d;
> +}
> +
> +/**
> + * bfq_calc_finish - assign the finish time to an entity.
> + * @entity: the entity to act upon.
> + * @service: the service to be charged to the entity.
> + */
> +static inline void bfq_calc_finish(struct io_entity *entity,
> +				   bfq_service_t service)
> +{
> +	BUG_ON(entity->weight == 0);
> +
> +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> +}

Should we BUG_ON(entity->finish == entity->start)? Or is that
expected when the entity has no service time left?

> +
> +static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	BUG_ON(entity == NULL);
> +	if (entity->my_sched_data == NULL)
> +		ioq = container_of(entity, struct io_queue, entity);
> +	return ioq;
> +}
> +
> +/**
> + * bfq_entity_of - get an entity from a node.
> + * @node: the node field of the entity.
> + *
> + * Convert a node pointer to the relative entity.  This is used only
> + * to simplify the logic of some functions and not as the generic
> + * conversion mechanism because, e.g., in the tree walking functions,
> + * the check for a %NULL value would be redundant.
> + */
> +static inline struct io_entity *bfq_entity_of(struct rb_node *node)
> +{
> +	struct io_entity *entity = NULL;
> +
> +	if (node != NULL)
> +		entity = rb_entry(node, struct io_entity, rb_node);
> +
> +	return entity;
> +}
> +
> +/**
> + * bfq_extract - remove an entity from a tree.
> + * @root: the tree root.
> + * @entity: the entity to remove.
> + */
> +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> +{

Extract is not common terminology, why not use bfq_remove()?

> +	BUG_ON(entity->tree != root);
> +
> +	entity->tree = NULL;
> +	rb_erase(&entity->rb_node, root);

Don't you want to make entity->tree = NULL after rb_erase?

> +}
> +
> +/**
> + * bfq_idle_extract - extract an entity from the idle tree.
> + * @st: the service tree of the owning @entity.
> + * @entity: the entity being removed.
> + */
> +static void bfq_idle_extract(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct rb_node *next;
> +
> +	BUG_ON(entity->tree != &st->idle);
> +
> +	if (entity == st->first_idle) {
> +		next = rb_next(&entity->rb_node);

What happens if next is NULL?

> +		st->first_idle = bfq_entity_of(next);
> +	}
> +
> +	if (entity == st->last_idle) {
> +		next = rb_prev(&entity->rb_node);

What happens if next is NULL?

> +		st->last_idle = bfq_entity_of(next);
> +	}
> +
> +	bfq_extract(&st->idle, entity);
> +}
> +
> +/**
> + * bfq_insert - generic tree insertion.
> + * @root: tree root.
> + * @entity: entity to insert.
> + *
> + * This is used for the idle and the active tree, since they are both
> + * ordered by finish time.
> + */
> +static void bfq_insert(struct rb_root *root, struct io_entity *entity)
> +{
> +	struct io_entity *entry;
> +	struct rb_node **node = &root->rb_node;
> +	struct rb_node *parent = NULL;
> +
> +	BUG_ON(entity->tree != NULL);
> +
> +	while (*node != NULL) {
> +		parent = *node;
> +		entry = rb_entry(parent, struct io_entity, rb_node);
> +
> +		if (bfq_gt(entry->finish, entity->finish))
> +			node = &parent->rb_left;
> +		else
> +			node = &parent->rb_right;
> +	}
> +
> +	rb_link_node(&entity->rb_node, parent, node);
> +	rb_insert_color(&entity->rb_node, root);
> +
> +	entity->tree = root;
> +}
> +
> +/**
> + * bfq_update_min - update the min_start field of a entity.
> + * @entity: the entity to update.
> + * @node: one of its children.
> + *
> + * This function is called when @entity may store an invalid value for
> + * min_start due to updates to the active tree.  The function  assumes
> + * that the subtree rooted at @node (which may be its left or its right
> + * child) has a valid min_start value.
> + */
> +static inline void bfq_update_min(struct io_entity *entity,
> +					struct rb_node *node)
> +{
> +	struct io_entity *child;
> +
> +	if (node != NULL) {
> +		child = rb_entry(node, struct io_entity, rb_node);
> +		if (bfq_gt(entity->min_start, child->min_start))
> +			entity->min_start = child->min_start;
> +	}
> +}

So... we check whether the child's min_start is less than the node
entity's min_start and set it to the minimum of the two?
Can you use min_t here?
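
For illustration only, the min_t() form being asked about might look like
the sketch below (not part of the patch); note that min_t() is a plain
comparison, so it would drop the wraparound-aware ordering that bfq_gt()
provides:

	if (node != NULL) {
		child = rb_entry(node, struct io_entity, rb_node);
		entity->min_start = min_t(bfq_timestamp_t,
					  entity->min_start, child->min_start);
	}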

> +
> +/**
> + * bfq_update_active_node - recalculate min_start.
> + * @node: the node to update.
> + *
> + * @node may have changed position or one of its children may have moved,
> + * this function updates its min_start value.  The left and right subtrees
> + * are assumed to hold a correct min_start value.
> + */
> +static inline void bfq_update_active_node(struct rb_node *node)
> +{
> +	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
> +
> +	entity->min_start = entity->start;
> +	bfq_update_min(entity, node->rb_right);
> +	bfq_update_min(entity, node->rb_left);
> +}

I don't like this very much; we set min_start twice. This can be
easily optimized to look at both the left and right child and pick the
minimum.
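
One possible shape of that optimization (an untested sketch reusing the types
from the patch), writing min_start only once:

static inline void bfq_update_active_node(struct rb_node *node)
{
	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
	bfq_timestamp_t min_start = entity->start;
	struct io_entity *child;

	if (node->rb_left != NULL) {
		child = rb_entry(node->rb_left, struct io_entity, rb_node);
		if (bfq_gt(min_start, child->min_start))
			min_start = child->min_start;
	}
	if (node->rb_right != NULL) {
		child = rb_entry(node->rb_right, struct io_entity, rb_node);
		if (bfq_gt(min_start, child->min_start))
			min_start = child->min_start;
	}
	entity->min_start = min_start;
}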

> +
> +/**
> + * bfq_update_active_tree - update min_start for the whole active tree.
> + * @node: the starting node.
> + *
> + * @node must be the deepest modified node after an update.  This function
> + * updates its min_start using the values held by its children, assuming
> + * that they did not change, and then updates all the nodes that may have
> + * changed in the path to the root.  The only nodes that may have changed
> + * are the ones in the path or their siblings.
> + */
> +static void bfq_update_active_tree(struct rb_node *node)
> +{
> +	struct rb_node *parent;
> +
> +up:
> +	bfq_update_active_node(node);
> +
> +	parent = rb_parent(node);
> +	if (parent == NULL)
> +		return;
> +
> +	if (node == parent->rb_left && parent->rb_right != NULL)
> +		bfq_update_active_node(parent->rb_right);
> +	else if (parent->rb_left != NULL)
> +		bfq_update_active_node(parent->rb_left);
> +
> +	node = parent;
> +	goto up;
> +}
> +

For these functions, take a look at the walk function in the group
scheduler that does update_shares

> +/**
> + * bfq_active_insert - insert an entity in the active tree of its group/device.
> + * @st: the service tree of the entity.
> + * @entity: the entity being inserted.
> + *
> + * The active tree is ordered by finish time, but an extra key is kept
> + * per each node, containing the minimum value for the start times of
> + * its children (and the node itself), so it's possible to search for
> + * the eligible node with the lowest finish time in logarithmic time.
> + */
> +static void bfq_active_insert(struct io_service_tree *st,
> +					struct io_entity *entity)
> +{
> +	struct rb_node *node = &entity->rb_node;
> +
> +	bfq_insert(&st->active, entity);
> +
> +	if (node->rb_left != NULL)
> +		node = node->rb_left;
> +	else if (node->rb_right != NULL)
> +		node = node->rb_right;
> +
> +	bfq_update_active_tree(node);
> +}
> +
> +/**
> + * bfq_ioprio_to_weight - calc a weight from an ioprio.
> + * @ioprio: the ioprio value to convert.
> + */
> +static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
> +{
> +	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
> +	return IOPRIO_BE_NR - ioprio;
> +}
> +
> +void bfq_get_entity(struct io_entity *entity)
> +{
> +	struct io_queue *ioq = io_entity_to_ioq(entity);
> +
> +	if (ioq)
> +		elv_get_ioq(ioq);
> +}
> +
> +void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
> +{
> +	entity->ioprio = entity->new_ioprio;
> +	entity->ioprio_class = entity->new_ioprio_class;
> +	entity->sched_data = &iog->sched_data;
> +}
> +
> +/**
> + * bfq_find_deepest - find the deepest node that an extraction can modify.
> + * @node: the node being removed.
> + *
> + * Do the first step of an extraction in an rb tree, looking for the
> + * node that will replace @node, and returning the deepest node that
> + * the following modifications to the tree can touch.  If @node is the
> + * last node in the tree return %NULL.
> + */
> +static struct rb_node *bfq_find_deepest(struct rb_node *node)
> +{
> +	struct rb_node *deepest;
> +
> +	if (node->rb_right == NULL && node->rb_left == NULL)
> +		deepest = rb_parent(node);

Why is the parent the deepest? Shouldn't node be the deepest?

> +	else if (node->rb_right == NULL)
> +		deepest = node->rb_left;
> +	else if (node->rb_left == NULL)
> +		deepest = node->rb_right;
> +	else {
> +		deepest = rb_next(node);
> +		if (deepest->rb_right != NULL)
> +			deepest = deepest->rb_right;
> +		else if (rb_parent(deepest) != node)
> +			deepest = rb_parent(deepest);
> +	}
> +
> +	return deepest;
> +}

The function is not clear; could you please define more precisely what
"deepest node" means here?

> +
> +/**
> + * bfq_active_extract - remove an entity from the active tree.
> + * @st: the service_tree containing the tree.
> + * @entity: the entity being removed.
> + */
> +static void bfq_active_extract(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct rb_node *node;
> +
> +	node = bfq_find_deepest(&entity->rb_node);
> +	bfq_extract(&st->active, entity);
> +
> +	if (node != NULL)
> +		bfq_update_active_tree(node);
> +}
> +

Just to check my understanding: every time an active node is added or
removed, we recompute min_start along the whole path from the modified
node up to the root, right?

> +/**
> + * bfq_idle_insert - insert an entity into the idle tree.
> + * @st: the service tree containing the tree.
> + * @entity: the entity to insert.
> + */
> +static void bfq_idle_insert(struct io_service_tree *st,
> +					struct io_entity *entity)
> +{
> +	struct io_entity *first_idle = st->first_idle;
> +	struct io_entity *last_idle = st->last_idle;
> +
> +	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
> +		st->first_idle = entity;
> +	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
> +		st->last_idle = entity;
> +
> +	bfq_insert(&st->idle, entity);
> +}
> +
> +/**
> + * bfq_forget_entity - remove an entity from the wfq trees.
> + * @st: the service tree.
> + * @entity: the entity being removed.
> + *
> + * Update the device status and forget everything about @entity, putting
> + * the device reference to it, if it is a queue.  Entities belonging to
> + * groups are not refcounted.
> + */
> +static void bfq_forget_entity(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	BUG_ON(!entity->on_st);
> +	entity->on_st = 0;
> +	st->wsum -= entity->weight;
> +	ioq = io_entity_to_ioq(entity);
> +	if (!ioq)
> +		return;
> +	elv_put_ioq(ioq);
> +}
> +
> +/**
> + * bfq_put_idle_entity - release the idle tree ref of an entity.
> + * @st: service tree for the entity.
> + * @entity: the entity being released.
> + */
> +void bfq_put_idle_entity(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	bfq_idle_extract(st, entity);
> +	bfq_forget_entity(st, entity);
> +}
> +
> +/**
> + * bfq_forget_idle - update the idle tree if necessary.
> + * @st: the service tree to act upon.
> + *
> + * To preserve the global O(log N) complexity we only remove one entry here;
> + * as the idle tree will not grow indefinitely this can be done safely.
> + */
> +void bfq_forget_idle(struct io_service_tree *st)
> +{
> +	struct io_entity *first_idle = st->first_idle;
> +	struct io_entity *last_idle = st->last_idle;
> +
> +	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
> +	    !bfq_gt(last_idle->finish, st->vtime)) {
> +		/*
> +		 * Active tree is empty. Pull back vtime to finish time of
> +		 * last idle entity on idle tree.
> +		 * Rationale seems to be that it reduces the possibility of
> +		 * vtime wraparound (bfq_gt(V-F) < 0).
> +		 */
> +		st->vtime = last_idle->finish;
> +	}
> +
> +	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
> +		bfq_put_idle_entity(st, first_idle);
> +}
> +
> +
> +static struct io_service_tree *
> +__bfq_entity_update_prio(struct io_service_tree *old_st,
> +				struct io_entity *entity)
> +{
> +	struct io_service_tree *new_st = old_st;
> +	struct io_queue *ioq = io_entity_to_ioq(entity);
> +
> +	if (entity->ioprio_changed) {
> +		entity->ioprio = entity->new_ioprio;
> +		entity->ioprio_class = entity->new_ioprio_class;
> +		entity->ioprio_changed = 0;
> +
> +		/*
> +		 * Also update the scaled budget for ioq. Group will get the
> +		 * updated budget once ioq is selected to run next.
> +		 */
> +		if (ioq) {
> +			struct elv_fq_data *efqd = ioq->efqd;
> +			entity->budget = elv_prio_to_slice(efqd, ioq);
> +		}
> +
> +		old_st->wsum -= entity->weight;
> +		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
> +
> +		/*
> +		 * NOTE: here we may be changing the weight too early,
> +		 * this will cause unfairness.  The correct approach
> +		 * would have required additional complexity to defer
> +		 * weight changes to the proper time instants (i.e.,
> +		 * when entity->finish <= old_st->vtime).
> +		 */
> +		new_st = io_entity_service_tree(entity);
> +		new_st->wsum += entity->weight;
> +
> +		if (new_st != old_st)
> +			entity->start = new_st->vtime;
> +	}
> +
> +	return new_st;
> +}
> +
> +/**
> + * __bfq_activate_entity - activate an entity.
> + * @entity: the entity being activated.
> + *
> + * Called whenever an entity is activated, i.e., it is not active and one
> + * of its children receives a new request, or has to be reactivated due to
> + * budget exhaustion.  It uses the current budget of the entity (and the
> + * service received if @entity is active) of the queue to calculate its
> + * timestamps.
> + */
> +static void __bfq_activate_entity(struct io_entity *entity, int add_front)
> +{
> +	struct io_sched_data *sd = entity->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +
> +	if (entity == sd->active_entity) {
> +		BUG_ON(entity->tree != NULL);
> +		/*
> +		 * If we are requeueing the current entity we have
> +		 * to take care of not charging to it service it has
> +		 * not received.
> +		 */
> +		bfq_calc_finish(entity, entity->service);
> +		entity->start = entity->finish;
> +		sd->active_entity = NULL;
> +	} else if (entity->tree == &st->active) {
> +		/*
> +		 * Requeueing an entity due to a change of some
> +		 * next_active entity below it.  We reuse the old
> +		 * start time.
> +		 */
> +		bfq_active_extract(st, entity);
> +	} else if (entity->tree == &st->idle) {
> +		/*
> +		 * Must be on the idle tree, bfq_idle_extract() will
> +		 * check for that.
> +		 */
> +		bfq_idle_extract(st, entity);
> +		entity->start = bfq_gt(st->vtime, entity->finish) ?
> +				       st->vtime : entity->finish;
> +	} else {
> +		/*
> +		 * The finish time of the entity may be invalid, and
> +		 * it is in the past for sure, otherwise the queue
> +		 * would have been on the idle tree.
> +		 */
> +		entity->start = st->vtime;
> +		st->wsum += entity->weight;
> +		bfq_get_entity(entity);
> +
> +		BUG_ON(entity->on_st);
> +		entity->on_st = 1;
> +	}
> +
> +	st = __bfq_entity_update_prio(st, entity);
> +	/*
> +	 * This is to emulate cfq like functionality where preemption can
> +	 * happen with-in same class, like sync queue preempting async queue
> +	 * Maybe this is not a very good idea from a fairness point of view
> +	 * as preempting queue gains share. Keeping it for now.
> +	 */
> +	if (add_front) {
> +		struct io_entity *next_entity;
> +
> +		/*
> +		 * Determine the entity which will be dispatched next
> +		 * Use sd->next_active once hierarchical patch is applied
> +		 */
> +		next_entity = bfq_lookup_next_entity(sd, 0);
> +
> +		if (next_entity && next_entity != entity) {
> +			struct io_service_tree *new_st;
> +			bfq_timestamp_t delta;
> +
> +			new_st = io_entity_service_tree(next_entity);
> +
> +			/*
> +			 * At this point, both entities should belong to
> +			 * same service tree as cross service tree preemption
> +			 * is automatically taken care by algorithm
> +			 */
> +			BUG_ON(new_st != st);
> +			entity->finish = next_entity->finish - 1;
> +			delta = bfq_delta(entity->budget, entity->weight);
> +			entity->start = entity->finish - delta;
> +			if (bfq_gt(entity->start, st->vtime))
> +				entity->start = st->vtime;
> +		}
> +	} else {
> +		bfq_calc_finish(entity, entity->budget);
> +	}
> +	bfq_active_insert(st, entity);
> +}
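
If I read the add_front path right: the preempting entity is given
finish = next_entity->finish - 1 so that it sorts just ahead of the
queue it preempts, and start = finish - (roughly budget/weight in
virtual time), clamped so that it is never after the current vtime and
is therefore immediately eligible. A comment spelling this out would
help.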
> +
> +/**
> + * bfq_activate_entity - activate an entity.
> + * @entity: the entity to activate.
> + */
> +void bfq_activate_entity(struct io_entity *entity, int add_front)
> +{
> +	__bfq_activate_entity(entity, add_front);
> +}
> +
> +/**
> + * __bfq_deactivate_entity - deactivate an entity from its service tree.
> + * @entity: the entity to deactivate.
> + * @requeue: if false, the entity will not be put into the idle tree.
> + *
> + * Deactivate an entity, independently from its previous state.  If the
> + * entity was not on a service tree just return, otherwise if it is on
> + * any scheduler tree, extract it from that tree, and if necessary
> + * and if the caller did not specify @requeue, put it on the idle tree.
> + *
> + */
> +int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
> +{
> +	struct io_sched_data *sd = entity->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +	int was_active = entity == sd->active_entity;
> +	int ret = 0;
> +
> +	if (!entity->on_st)
> +		return 0;
> +
> +	BUG_ON(was_active && entity->tree != NULL);
> +
> +	if (was_active) {
> +		bfq_calc_finish(entity, entity->service);
> +		sd->active_entity = NULL;
> +	} else if (entity->tree == &st->active)
> +		bfq_active_extract(st, entity);
> +	else if (entity->tree == &st->idle)
> +		bfq_idle_extract(st, entity);
> +	else if (entity->tree != NULL)
> +		BUG();
> +
> +	if (!requeue || !bfq_gt(entity->finish, st->vtime))
> +		bfq_forget_entity(st, entity);
> +	else
> +		bfq_idle_insert(st, entity);
> +
> +	BUG_ON(sd->active_entity == entity);
> +
> +	return ret;
> +}
> +
> +/**
> + * bfq_deactivate_entity - deactivate an entity.
> + * @entity: the entity to deactivate.
> + * @requeue: true if the entity can be put on the idle tree
> + */
> +void bfq_deactivate_entity(struct io_entity *entity, int requeue)
> +{
> +	__bfq_deactivate_entity(entity, requeue);
> +}
> +
> +/**
> + * bfq_update_vtime - update vtime if necessary.
> + * @st: the service tree to act upon.
> + *
> + * If necessary update the service tree vtime to have at least one
> + * eligible entity, skipping to its start time.  Assumes that the
> + * active tree of the device is not empty.
> + *
> + * NOTE: this hierarchical implementation updates vtimes quite often,
> + * we may end up with reactivated tasks getting timestamps after a
> + * vtime skip done because we needed a ->first_active entity on some
> + * intermediate node.
> + */
> +static void bfq_update_vtime(struct io_service_tree *st)
> +{
> +	struct io_entity *entry;
> +	struct rb_node *node = st->active.rb_node;
> +
> +	entry = rb_entry(node, struct io_entity, rb_node);
> +	if (bfq_gt(entry->min_start, st->vtime)) {
> +		st->vtime = entry->min_start;
> +		bfq_forget_idle(st);
> +	}
> +}
> +
> +/**
> + * bfq_first_active - find the eligible entity with the smallest finish time
> + * @st: the service tree to select from.
> + *
> + * This function searches the first schedulable entity, starting from the
> + * root of the tree and going on the left every time on this side there is
> + * a subtree with at least one eligible (start <= vtime) entity.  The path
> + * on the right is followed only if a) the left subtree contains no eligible
> + * entities and b) no eligible entity has been found yet.
> + */
> +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> +{
> +	struct io_entity *entry, *first = NULL;
> +	struct rb_node *node = st->active.rb_node;
> +
> +	while (node != NULL) {
> +		entry = rb_entry(node, struct io_entity, rb_node);
> +left:
> +		if (!bfq_gt(entry->start, st->vtime))
> +			first = entry;
> +
> +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> +
> +		if (node->rb_left != NULL) {
> +			entry = rb_entry(node->rb_left,
> +					 struct io_entity, rb_node);
> +			if (!bfq_gt(entry->min_start, st->vtime)) {
> +				node = node->rb_left;
> +				goto left;
> +			}
> +		}
> +		if (first != NULL)
> +			break;
> +		node = node->rb_right;

Please help me understand this: the tree is sorted by finish time, but
here we search by start time against the vtime. Couldn't the worst case
easily be O(N)?

> +	}
> +
> +	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
> +	return first;
> +}
> +
> +/**
> + * __bfq_lookup_next_entity - return the first eligible entity in @st.
> + * @st: the service tree.
> + *
> + * Update the virtual time in @st and return the first eligible entity
> + * it contains.
> + */
> +static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
> +{
> +	struct io_entity *entity;
> +
> +	if (RB_EMPTY_ROOT(&st->active))
> +		return NULL;
> +
> +	bfq_update_vtime(st);
> +	entity = bfq_first_active_entity(st);
> +	BUG_ON(bfq_gt(entity->start, st->vtime));
> +
> +	return entity;
> +}
> +
> +/**
> + * bfq_lookup_next_entity - return the first eligible entity in @sd.
> + * @sd: the sched_data.
> + * @extract: if true the returned entity will be also extracted from @sd.
> + *
> + * NOTE: since we cache the next_active entity at each level of the
> + * hierarchy, the complexity of the lookup can be decreased with
> + * absolutely no effort just returning the cached next_active value;
> + * we prefer to do full lookups to test the consistency of * the data
> + * structures.
> + */
> +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> +						 int extract)
> +{
> +	struct io_service_tree *st = sd->service_tree;
> +	struct io_entity *entity;
> +	int i;
> +
> +	/*
> +	 * We should not call lookup when an entity is active, as doing lookup
> +	 * can result in an erroneous vtime jump.
> +	 */
> +	BUG_ON(sd->active_entity != NULL);
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
> +		entity = __bfq_lookup_next_entity(st);
> +		if (entity != NULL) {
> +			if (extract) {
> +				bfq_active_extract(st, entity);
> +				sd->active_entity = entity;
> +			}
> +			break;
> +		}
> +	}
> +
> +	return entity;
> +}
> +
> +void entity_served(struct io_entity *entity, bfq_service_t served)
> +{
> +	struct io_service_tree *st;
> +
> +	st = io_entity_service_tree(entity);
> +	entity->service += served;
> +	BUG_ON(st->wsum == 0);
> +	st->vtime += bfq_delta(served, st->wsum);
> +	bfq_forget_idle(st);

bfq_forget_idle() checks whether st->vtime has moved past
first_idle->finish and, if so, releases that entity so first_idle ends
up pointing at a later one, right?

> +}
> +
> +/**
> + * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
> + * @st: the service tree being flushed.
> + */
> +void io_flush_idle_tree(struct io_service_tree *st)
> +{
> +	struct io_entity *entity = st->first_idle;
> +
> +	for (; entity != NULL; entity = st->first_idle)
> +		__bfq_deactivate_entity(entity, 0);
> +}
> +
> +/* Elevator fair queuing function */
> +struct io_queue *rq_ioq(struct request *rq)
> +{
> +	return rq->ioq;
> +}
> +
> +static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
> +{
> +	return e->efqd.active_queue;
> +}
> +
> +void *elv_active_sched_queue(struct elevator_queue *e)
> +{
> +	return ioq_sched_queue(elv_active_ioq(e));
> +}
> +EXPORT_SYMBOL(elv_active_sched_queue);
> +
> +int elv_nr_busy_ioq(struct elevator_queue *e)
> +{
> +	return e->efqd.busy_queues;
> +}
> +EXPORT_SYMBOL(elv_nr_busy_ioq);
> +
> +int elv_hw_tag(struct elevator_queue *e)
> +{
> +	return e->efqd.hw_tag;
> +}
> +EXPORT_SYMBOL(elv_hw_tag);
> +
> +/* Helper functions for operating on elevator idle slice timer */
> +int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +
> +	return mod_timer(&efqd->idle_slice_timer, expires);
> +}
> +EXPORT_SYMBOL(elv_mod_idle_slice_timer);
> +
> +int elv_del_idle_slice_timer(struct elevator_queue *eq)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +
> +	return del_timer(&efqd->idle_slice_timer);
> +}
> +EXPORT_SYMBOL(elv_del_idle_slice_timer);
> +
> +unsigned int elv_get_slice_idle(struct elevator_queue *eq)
> +{
> +	return eq->efqd.elv_slice_idle;
> +}
> +EXPORT_SYMBOL(elv_get_slice_idle);
> +
> +void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
> +{
> +	entity_served(&ioq->entity, served);
> +}
> +
> +/* Tells whether ioq is queued in root group or not */
> +static inline int is_root_group_ioq(struct request_queue *q,
> +					struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
> +}
> +
> +/*
> + * sysfs parts below -->
> + */
> +static ssize_t
> +elv_var_show(unsigned int var, char *page)
> +{
> +	return sprintf(page, "%d\n", var);
> +}
> +
> +static ssize_t
> +elv_var_store(unsigned int *var, const char *page, size_t count)
> +{
> +	char *p = (char *) page;
> +
> +	*var = simple_strtoul(p, &p, 10);
> +	return count;
> +}
> +
> +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
> +ssize_t __FUNC(struct elevator_queue *e, char *page)		\
> +{									\
> +	struct elv_fq_data *efqd = &e->efqd;				\
> +	unsigned int __data = __VAR;					\
> +	if (__CONV)							\
> +		__data = jiffies_to_msecs(__data);			\
> +	return elv_var_show(__data, (page));				\
> +}
> +SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
> +EXPORT_SYMBOL(elv_slice_idle_show);
> +SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
> +EXPORT_SYMBOL(elv_slice_sync_show);
> +SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
> +EXPORT_SYMBOL(elv_slice_async_show);
> +#undef SHOW_FUNCTION
> +
> +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
> +ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
> +{									\
> +	struct elv_fq_data *efqd = &e->efqd;				\
> +	unsigned int __data;						\
> +	int ret = elv_var_store(&__data, (page), count);		\
> +	if (__data < (MIN))						\
> +		__data = (MIN);						\
> +	else if (__data > (MAX))					\
> +		__data = (MAX);						\
> +	if (__CONV)							\
> +		*(__PTR) = msecs_to_jiffies(__data);			\
> +	else								\
> +		*(__PTR) = __data;					\
> +	return ret;							\
> +}
> +STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_idle_store);
> +STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_sync_store);
> +STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_async_store);
> +#undef STORE_FUNCTION
> +
> +void elv_schedule_dispatch(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (elv_nr_busy_ioq(q->elevator)) {
> +		elv_log(efqd, "schedule dispatch");
> +		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
> +	}
> +}
> +EXPORT_SYMBOL(elv_schedule_dispatch);
> +
> +void elv_kick_queue(struct work_struct *work)
> +{
> +	struct elv_fq_data *efqd =
> +		container_of(work, struct elv_fq_data, unplug_work);
> +	struct request_queue *q = efqd->queue;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(q->queue_lock, flags);
> +	blk_start_queueing(q);
> +	spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
> +void elv_shutdown_timer_wq(struct elevator_queue *e)
> +{
> +	del_timer_sync(&e->efqd.idle_slice_timer);
> +	cancel_work_sync(&e->efqd.unplug_work);
> +}
> +EXPORT_SYMBOL(elv_shutdown_timer_wq);
> +
> +void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	ioq->slice_end = jiffies + ioq->entity.budget;
> +	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
> +}
> +
> +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = ioq->efqd;
> +	unsigned long elapsed = jiffies - ioq->last_end_request;
> +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> +
> +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> +}

I'm not sure I understand the magic constants 7, 8, 2, 128 and 256
here. Please help me understand the algorithm.
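
For what it's worth, my reading (purely my interpretation, the patch
does not say) is the usual fixed-point exponentially decaying average,
with new samples weighted 1/8 and everything scaled by 256 to keep
precision in integer arithmetic:

	samples <- (7*samples + 256)/8       /* 7/8 decay + 32, converges to 256   */
	total   <- (7*total + 256*ttime)/8   /* same 7/8 decay on the weighted sum */
	mean    =  (total + 128)/samples     /* +128 ~ samples/2, rounds the divide */

The 2UL * elv_slice_idle clamp then just caps a single outlier sample,
and ioq_sample_valid()'s "> 80" means the mean is only trusted once a
handful of samples have accumulated. If that is the intent, a comment
saying so would help.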

> +
> +/*
> + * Disable idle window if the process thinks too long.
> + * This idle flag can also be updated by io scheduler.
> + */
> +static void elv_ioq_update_idle_window(struct elevator_queue *eq,
> +				struct io_queue *ioq, struct request *rq)
> +{
> +	int old_idle, enable_idle;
> +	struct elv_fq_data *efqd = ioq->efqd;
> +
> +	/*
> +	 * Don't idle for async or idle io prio class
> +	 */
> +	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
> +		return;
> +
> +	enable_idle = old_idle = elv_ioq_idle_window(ioq);
> +
> +	if (!efqd->elv_slice_idle)
> +		enable_idle = 0;
> +	else if (ioq_sample_valid(ioq->ttime_samples)) {
> +		if (ioq->ttime_mean > efqd->elv_slice_idle)
> +			enable_idle = 0;
> +		else
> +			enable_idle = 1;
> +	}
> +
> +	/*
> +	 * From think time perspective idle should be enabled. Check with
> +	 * io scheduler if it wants to disable idling based on additional
> +	 * considerations like seek pattern.
> +	 */
> +	if (enable_idle) {
> +		if (eq->ops->elevator_update_idle_window_fn)
> +			enable_idle = eq->ops->elevator_update_idle_window_fn(
> +						eq, ioq->sched_queue, rq);
> +		if (!enable_idle)
> +			elv_log_ioq(efqd, ioq, "iosched disabled idle");
> +	}
> +
> +	if (old_idle != enable_idle) {
> +		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
> +		if (enable_idle)
> +			elv_mark_ioq_idle_window(ioq);
> +		else
> +			elv_clear_ioq_idle_window(ioq);
> +	}
> +}
> +
> +struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
> +	return ioq;
> +}
> +EXPORT_SYMBOL(elv_alloc_ioq);
> +
> +void elv_free_ioq(struct io_queue *ioq)
> +{
> +	kmem_cache_free(elv_ioq_pool, ioq);
> +}
> +EXPORT_SYMBOL(elv_free_ioq);
> +
> +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> +			void *sched_queue, int ioprio_class, int ioprio,
> +			int is_sync)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> +
> +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> +	atomic_set(&ioq->ref, 0);
> +	ioq->efqd = efqd;
> +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> +	elv_ioq_set_ioprio(ioq, ioprio);
> +	ioq->pid = current->pid;

Is the pid used for cgroup association later? I don't see why we save
it otherwise. If that is the reason, why not store the cgroup of
current instead of its pid?

> +	ioq->sched_queue = sched_queue;
> +	if (is_sync && !elv_ioq_class_idle(ioq))
> +		elv_mark_ioq_idle_window(ioq);
> +	bfq_init_entity(&ioq->entity, iog);
> +	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
> +	if (is_sync)
> +		ioq->last_end_request = jiffies;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(elv_init_ioq);
> +
> +void elv_put_ioq(struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = ioq->efqd;
> +	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
> +						efqd);
> +
> +	BUG_ON(atomic_read(&ioq->ref) <= 0);
> +	if (!atomic_dec_and_test(&ioq->ref))
> +		return;
> +	BUG_ON(ioq->nr_queued);
> +	BUG_ON(ioq->entity.tree != NULL);
> +	BUG_ON(elv_ioq_busy(ioq));
> +	BUG_ON(efqd->active_queue == ioq);
> +
> +	/* Can be called by outgoing elevator. Don't use q */
> +	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
> +
> +	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
> +	elv_log_ioq(efqd, ioq, "put_queue");
> +	elv_free_ioq(ioq);
> +}
> +EXPORT_SYMBOL(elv_put_ioq);
> +
> +void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
> +{
> +	struct io_queue *ioq = *ioq_ptr;
> +
> +	if (ioq != NULL) {
> +		/* Drop the reference taken by the io group */
> +		elv_put_ioq(ioq);
> +		*ioq_ptr = NULL;
> +	}
> +}
> +
> +/*
> + * Normally next io queue to be served is selected from the service tree.
> + * This function allows one to choose a specific io queue to run next
> + * out of order. This is primarily to accommodate the close_cooperator
> + * feature of cfq.
> + *
> + * Currently it is done only for root level as to begin with supporting
> + * close cooperator feature only for root group to make sure default
> + * cfq behavior in flat hierarchy is not changed.
> + */
> +void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	struct io_sched_data *sd = &efqd->root_group->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +
> +	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
> +	BUG_ON(!efqd->busy_queues);
> +	BUG_ON(sd != entity->sched_data);
> +	BUG_ON(!st);
> +
> +	bfq_update_vtime(st);
> +	bfq_active_extract(st, entity);
> +	sd->active_entity = entity;
> +	entity->service = 0;
> +	elv_log_ioq(efqd, ioq, "set_next_ioq");
> +}
> +
> +/* Get next queue for service. */
> +struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = NULL;
> +	struct io_queue *ioq = NULL;
> +	struct io_sched_data *sd;
> +
> +	/*
> +	 * We should not call lookup when an entity is active, as doing
> +	 * lookup can result in an erroneous vtime jump.
> +	 */
> +	BUG_ON(efqd->active_queue != NULL);
> +
> +	if (!efqd->busy_queues)
> +		return NULL;
> +
> +	sd = &efqd->root_group->sched_data;
> +	entity = bfq_lookup_next_entity(sd, 1);
> +
> +	BUG_ON(!entity);
> +	if (extract)
> +		entity->service = 0;
> +	ioq = io_entity_to_ioq(entity);
> +
> +	return ioq;
> +}
> +
> +/*
> + * coop tells that io scheduler selected a queue for us and we did not

What does "coop" stand for here — close cooperator?

> + * select the next queue based on fairness.
> + */
> +static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> +					int coop)
> +{
> +	struct request_queue *q = efqd->queue;
> +
> +	if (ioq) {
> +		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
> +							efqd->busy_queues);
> +		ioq->slice_end = 0;
> +
> +		elv_clear_ioq_wait_request(ioq);
> +		elv_clear_ioq_must_dispatch(ioq);
> +		elv_mark_ioq_slice_new(ioq);
> +
> +		del_timer(&efqd->idle_slice_timer);
> +	}
> +
> +	efqd->active_queue = ioq;
> +
> +	/* Let iosched know if it wants to take some action */
> +	if (ioq) {
> +		if (q->elevator->ops->elevator_active_ioq_set_fn)
> +			q->elevator->ops->elevator_active_ioq_set_fn(q,
> +							ioq->sched_queue, coop);
> +	}
> +}
> +
> +/* Get and set a new active queue for service. */
> +struct io_queue *elv_set_active_ioq(struct request_queue *q,
> +						struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	int coop = 0;
> +
> +	if (!ioq)
> +		ioq = elv_get_next_ioq(q, 1);
> +	else {
> +		elv_set_next_ioq(q, ioq);
> +		/*
> +		 * io scheduler selected the next queue for us. Pass this
> +		 * info back to the io scheduler. cfq currently uses it
> +		 * to reset coop flag on the queue.
> +		 */
> +		coop = 1;
> +	}
> +	__elv_set_active_ioq(efqd, ioq, coop);
> +	return ioq;
> +}
> +
> +void elv_reset_active_ioq(struct elv_fq_data *efqd)
> +{
> +	struct request_queue *q = efqd->queue;
> +	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
> +
> +	if (q->elevator->ops->elevator_active_ioq_reset_fn)
> +		q->elevator->ops->elevator_active_ioq_reset_fn(q,
> +							ioq->sched_queue);
> +	efqd->active_queue = NULL;
> +	del_timer(&efqd->idle_slice_timer);
> +}
> +
> +void elv_activate_ioq(struct io_queue *ioq, int add_front)
> +{
> +	bfq_activate_entity(&ioq->entity, add_front);
> +}
> +
> +void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> +					int requeue)
> +{
> +	bfq_deactivate_entity(&ioq->entity, requeue);
> +}
> +
> +/* Called when an inactive queue receives a new request. */
> +void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
> +{
> +	BUG_ON(elv_ioq_busy(ioq));
> +	BUG_ON(ioq == efqd->active_queue);
> +	elv_log_ioq(efqd, ioq, "add to busy");
> +	elv_activate_ioq(ioq, 0);
> +	elv_mark_ioq_busy(ioq);
> +	efqd->busy_queues++;
> +	if (elv_ioq_class_rt(ioq)) {
> +		struct io_group *iog = ioq_to_io_group(ioq);
> +		iog->busy_rt_queues++;
> +	}
> +}
> +
> +void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
> +					int requeue)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	BUG_ON(!elv_ioq_busy(ioq));
> +	BUG_ON(ioq->nr_queued);
> +	elv_log_ioq(efqd, ioq, "del from busy");
> +	elv_clear_ioq_busy(ioq);
> +	BUG_ON(efqd->busy_queues == 0);
> +	efqd->busy_queues--;
> +	if (elv_ioq_class_rt(ioq)) {
> +		struct io_group *iog = ioq_to_io_group(ioq);
> +		iog->busy_rt_queues--;
> +	}
> +
> +	elv_deactivate_ioq(efqd, ioq, requeue);
> +}
> +
> +/*
> + * Do the accounting. Determine how much service (in terms of time slices)
> + * current queue used and adjust the start, finish time of queue and vtime
> + * of the tree accordingly.
> + *
> + * Determining the service used in terms of time is tricky in certain
> + * situations. Especially when underlying device supports command queuing
> + * and requests from multiple queues can be there at same time, then it
> + * is not clear which queue consumed how much of disk time.
> + *
> + * To mitigate this problem, cfq starts the time slice of the queue only
> + * after first request from the queue has completed. This does not work
> + * very well if we expire the queue before we wait for first and more
> + * request to finish from the queue. For seeky queues, we will expire the
> + * queue after dispatching few requests without waiting and start dispatching
> + * from next queue.
> + *
> + * Not sure how to determine the time consumed by queue in such scenarios.
> + * Currently as a crude approximation, we are charging 25% of time slice
> + * for such cases. A better mechanism is needed for accurate accounting.
> + */
> +void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
> +
> +	assert_spin_locked(q->queue_lock);
> +	elv_log_ioq(efqd, ioq, "slice expired");
> +
> +	if (elv_ioq_wait_request(ioq))
> +		del_timer(&efqd->idle_slice_timer);
> +
> +	elv_clear_ioq_wait_request(ioq);
> +
> +	/*
> +	 * if ioq->slice_end = 0, that means a queue was expired before first
> +	 * request from the queue got completed. Of course we are not planning
> +	 * to idle on the queue otherwise we would not have expired it.
> +	 *
> +	 * Charge for the 25% slice in such cases. This is not the best thing
> +	 * to do but at the same time not very sure what's the next best
> +	 * thing to do.
> +	 *
> +	 * This arises from that fact that we don't have the notion of
> +	 * one queue being operational at one time. io scheduler can dispatch
> +	 * requests from multiple queues in one dispatch round. Ideally for
> +	 * more accurate accounting of exact disk time used by disk, one
> +	 * should dispatch requests from only one queue and wait for all
> +	 * the requests to finish. But this will reduce throughput.
> +	 */
> +	if (!ioq->slice_end)
> +		slice_used = entity->budget/4;
> +	else {
> +		if (time_after(ioq->slice_end, jiffies)) {
> +			slice_unused = ioq->slice_end - jiffies;
> +			if (slice_unused == entity->budget) {
> +				/*
> +				 * queue got expired immediately after
> +				 * completing first request. Charge 25% of
> +				 * slice.
> +				 */
> +				slice_used = entity->budget/4;
> +			} else
> +				slice_used = entity->budget - slice_unused;
> +		} else {
> +			slice_overshoot = jiffies - ioq->slice_end;
> +			slice_used = entity->budget + slice_overshoot;
> +		}
> +	}
> +
> +	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
> +			jiffies);
> +	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
> +				slice_used, entity->budget, slice_overshoot);
> +	elv_ioq_served(ioq, slice_used);
> +
> +	BUG_ON(ioq != efqd->active_queue);
> +	elv_reset_active_ioq(efqd);
> +
> +	if (!ioq->nr_queued)
> +		elv_del_ioq_busy(q->elevator, ioq, 1);
> +	else
> +		elv_activate_ioq(ioq, 0);
> +}
> +EXPORT_SYMBOL(__elv_ioq_slice_expired);
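
Just to make sure I follow the accounting above, with a made-up budget
of 100ms:

  - queue expired before its first request completed (slice_end == 0):
    charge 25ms;
  - expired right after the first completion with the whole slice still
    unused (slice_unused == entity->budget): again charge 25ms;
  - expired with 40ms of the slice unused: charge 60ms;
  - expired 20ms past slice_end: charge 120ms.

Is that the intended behaviour?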
> +
> +/*
> + *  Expire the ioq.
> + */
> +void elv_ioq_slice_expired(struct request_queue *q)
> +{
> +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> +
> +	if (ioq)
> +		__elv_ioq_slice_expired(q, ioq);
> +}
> +
> +/*
> + * Check if new_cfqq should preempt the currently active queue. Return 0 for
> + * no or if we aren't sure, a 1 will cause a preemption attempt.
> + */
> +int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
> +			struct request *rq)
> +{
> +	struct io_queue *ioq;
> +	struct elevator_queue *eq = q->elevator;
> +	struct io_entity *entity, *new_entity;
> +
> +	ioq = elv_active_ioq(eq);
> +
> +	if (!ioq)
> +		return 0;
> +
> +	entity = &ioq->entity;
> +	new_entity = &new_ioq->entity;
> +
> +	/*
> +	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
> +	 */
> +
> +	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
> +	    && entity->ioprio_class != IOPRIO_CLASS_RT)
> +		return 1;
> +	/*
> +	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
> +	 */
> +
> +	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
> +	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
> +		return 1;
> +
> +	/*
> +	 * Check with io scheduler if it has additional criterion based on
> +	 * which it wants to preempt existing queue.
> +	 */
> +	if (eq->ops->elevator_should_preempt_fn)
> +		return eq->ops->elevator_should_preempt_fn(q,
> +						ioq_sched_queue(new_ioq), rq);
> +
> +	return 0;
> +}
> +
> +static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
> +{
> +	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
> +	elv_ioq_slice_expired(q);
> +
> +	/*
> +	 * Put the new queue at the front of the of the current list,
> +	 * so we know that it will be selected next.
> +	 */
> +
> +	elv_activate_ioq(ioq, 1);
> +	elv_ioq_set_slice_end(ioq, 0);
> +	elv_mark_ioq_slice_new(ioq);
> +}
> +
> +void elv_ioq_request_add(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *ioq = rq->ioq;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	BUG_ON(!efqd);
> +	BUG_ON(!ioq);
> +	efqd->rq_queued++;
> +	ioq->nr_queued++;
> +
> +	if (!elv_ioq_busy(ioq))
> +		elv_add_ioq_busy(efqd, ioq);
> +
> +	elv_ioq_update_io_thinktime(ioq);
> +	elv_ioq_update_idle_window(q->elevator, ioq, rq);
> +
> +	if (ioq == elv_active_ioq(q->elevator)) {
> +		/*
> +		 * Remember that we saw a request from this process, but
> +		 * don't start queuing just yet. Otherwise we risk seeing lots
> +		 * of tiny requests, because we disrupt the normal plugging
> +		 * and merging. If the request is already larger than a single
> +		 * page, let it rip immediately. For that case we assume that
> +		 * merging is already done. Ditto for a busy system that
> +		 * has other work pending, don't risk delaying until the
> +		 * idle timer unplug to continue working.
> +		 */
> +		if (elv_ioq_wait_request(ioq)) {
> +			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
> +			    efqd->busy_queues > 1) {
> +				del_timer(&efqd->idle_slice_timer);
> +				blk_start_queueing(q);
> +			}
> +			elv_mark_ioq_must_dispatch(ioq);
> +		}
> +	} else if (elv_should_preempt(q, ioq, rq)) {
> +		/*
> +		 * not the active queue - expire current slice if it is
> +		 * idle and has expired its mean thinktime or this new queue
> +		 * has some old slice time left and is of higher priority or
> +		 * this new queue is RT and the current one is BE
> +		 */
> +		elv_preempt_queue(q, ioq);
> +		blk_start_queueing(q);
> +	}
> +}
> +
> +void elv_idle_slice_timer(unsigned long data)
> +{
> +	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
> +	struct io_queue *ioq;
> +	unsigned long flags;
> +	struct request_queue *q = efqd->queue;
> +
> +	elv_log(efqd, "idle timer fired");
> +
> +	spin_lock_irqsave(q->queue_lock, flags);
> +
> +	ioq = efqd->active_queue;
> +
> +	if (ioq) {
> +
> +		/*
> +		 * We saw a request before the queue expired, let it through
> +		 */
> +		if (elv_ioq_must_dispatch(ioq))
> +			goto out_kick;
> +
> +		/*
> +		 * expired
> +		 */
> +		if (elv_ioq_slice_used(ioq))
> +			goto expire;
> +
> +		/*
> +		 * only expire and reinvoke request handler, if there are
> +		 * other queues with pending requests
> +		 */
> +		if (!elv_nr_busy_ioq(q->elevator))
> +			goto out_cont;
> +
> +		/*
> +		 * not expired and it has a request pending, let it dispatch
> +		 */
> +		if (ioq->nr_queued)
> +			goto out_kick;
> +	}
> +expire:
> +	elv_ioq_slice_expired(q);
> +out_kick:
> +	elv_schedule_dispatch(q);
> +out_cont:
> +	spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
> +void elv_ioq_arm_slice_timer(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> +	unsigned long sl;
> +
> +	BUG_ON(!ioq);
> +
> +	/*
> +	 * SSD device without seek penalty, disable idling. But only do so
> +	 * for devices that support queuing, otherwise we still have a problem
> +	 * with sync vs async workloads.
> +	 */
> +	if (blk_queue_nonrot(q) && efqd->hw_tag)
> +		return;
> +
> +	/*
> +	 * still requests with the driver, don't idle
> +	 */
> +	if (efqd->rq_in_driver)
> +		return;
> +
> +	/*
> +	 * idle is disabled, either manually or by past process history
> +	 */
> +	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
> +		return;
> +
> +	/*
> +	 * may be iosched got its own idling logic. In that case io
> +	 * scheduler will take care of arming the timer, if need be.
> +	 */
> +	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
> +		q->elevator->ops->elevator_arm_slice_timer_fn(q,
> +						ioq->sched_queue);
> +	} else {
> +		elv_mark_ioq_wait_request(ioq);
> +		sl = efqd->elv_slice_idle;
> +		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
> +		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
> +	}
> +}
> +
> +/* Common layer function to select the next queue to dispatch from */
> +void *elv_fq_select_ioq(struct request_queue *q, int force)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
> +	struct io_group *iog;
> +
> +	if (!elv_nr_busy_ioq(q->elevator))
> +		return NULL;
> +
> +	if (ioq == NULL)
> +		goto new_queue;
> +
> +	/*
> +	 * Force dispatch. Continue to dispatch from current queue as long
> +	 * as it has requests.
> +	 */
> +	if (unlikely(force)) {
> +		if (ioq->nr_queued)
> +			goto keep_queue;
> +		else
> +			goto expire;
> +	}
> +
> +	/*
> +	 * The active queue has run out of time, expire it and select new.
> +	 */
> +	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
> +		goto expire;
> +
> +	/*
> +	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
> +	 * cfqq.
> +	 */
> +	iog = ioq_to_io_group(ioq);
> +
> +	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> +		/*
> +		 * We simulate this as cfqq timed out so that it gets to bank
> +		 * the remaining of its time slice.
> +		 */
> +		elv_log_ioq(efqd, ioq, "preempt");
> +		goto expire;
> +	}
> +
> +	/*
> +	 * The active queue has requests and isn't expired, allow it to
> +	 * dispatch.
> +	 */
> +
> +	if (ioq->nr_queued)
> +		goto keep_queue;
> +
> +	/*
> +	 * If another queue has a request waiting within our mean seek
> +	 * distance, let it run.  The expire code will check for close
> +	 * cooperators and put the close queue at the front of the service
> +	 * tree.
> +	 */
> +	new_ioq = elv_close_cooperator(q, ioq, 0);
> +	if (new_ioq)
> +		goto expire;
> +
> +	/*
> +	 * No requests pending. If the active queue still has requests in
> +	 * flight or is idling for a new request, allow either of these
> +	 * conditions to happen (or time out) before selecting a new queue.
> +	 */
> +
> +	if (timer_pending(&efqd->idle_slice_timer) ||
> +	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
> +		ioq = NULL;
> +		goto keep_queue;
> +	}
> +
> +expire:
> +	elv_ioq_slice_expired(q);
> +new_queue:
> +	ioq = elv_set_active_ioq(q, new_ioq);
> +keep_queue:
> +	return ioq;
> +}
> +
> +/* A request got removed from io_queue. Do the accounting */
> +void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
> +{
> +	struct io_queue *ioq;
> +	struct elv_fq_data *efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	ioq = rq->ioq;
> +	BUG_ON(!ioq);
> +	ioq->nr_queued--;
> +
> +	efqd = ioq->efqd;
> +	BUG_ON(!efqd);
> +	efqd->rq_queued--;
> +
> +	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
> +		elv_del_ioq_busy(e, ioq, 1);
> +}
> +
> +/* A request got dispatched. Do the accounting. */
> +void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
> +{
> +	struct io_queue *ioq = rq->ioq;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	BUG_ON(!ioq);
> +	elv_ioq_request_dispatched(ioq);
> +	elv_ioq_request_removed(e, rq);
> +	elv_clear_ioq_must_dispatch(ioq);
> +}
> +
> +void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	efqd->rq_in_driver++;
> +	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
> +						efqd->rq_in_driver);
> +}
> +
> +void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	WARN_ON(!efqd->rq_in_driver);
> +	efqd->rq_in_driver--;
> +	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
> +						efqd->rq_in_driver);
> +}
> +
> +/*
> + * Update hw_tag based on peak queue depth over 50 samples under
> + * sufficient load.
> + */
> +static void elv_update_hw_tag(struct elv_fq_data *efqd)
> +{
> +	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
> +		efqd->rq_in_driver_peak = efqd->rq_in_driver;
> +
> +	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
> +	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
> +		return;
> +
> +	if (efqd->hw_tag_samples++ < 50)
> +		return;
> +
> +	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
> +		efqd->hw_tag = 1;
> +	else
> +		efqd->hw_tag = 0;
> +
> +	efqd->hw_tag_samples = 0;
> +	efqd->rq_in_driver_peak = 0;
> +}
> +
> +/*
> + * If ioscheduler has functionality of keeping track of close cooperator, check
> + * with it if it has got a closely co-operating queue.
> + */
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe)
> +{
> +	struct elevator_queue *e = q->elevator;
> +	struct io_queue *new_ioq = NULL;
> +
> +	/*
> +	 * Currently this feature is supported only for flat hierarchy or
> +	 * root group queues so that default cfq behavior is not changed.
> +	 */
> +	if (!is_root_group_ioq(q, ioq))
> +		return NULL;
> +
> +	if (q->elevator->ops->elevator_close_cooperator_fn)
> +		new_ioq = e->ops->elevator_close_cooperator_fn(q,
> +						ioq->sched_queue, probe);
> +
> +	/* Only select co-operating queue if it belongs to root group */
> +	if (new_ioq && !is_root_group_ioq(q, new_ioq))
> +		return NULL;
> +
> +	return new_ioq;
> +}
> +
> +/* A request got completed from io_queue. Do the accounting. */
> +void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> +{
> +	const int sync = rq_is_sync(rq);
> +	struct io_queue *ioq;
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	ioq = rq->ioq;
> +
> +	elv_log_ioq(efqd, ioq, "complete");
> +
> +	elv_update_hw_tag(efqd);
> +
> +	WARN_ON(!efqd->rq_in_driver);
> +	WARN_ON(!ioq->dispatched);
> +	efqd->rq_in_driver--;
> +	ioq->dispatched--;
> +
> +	if (sync)
> +		ioq->last_end_request = jiffies;
> +
> +	/*
> +	 * If this is the active queue, check if it needs to be expired,
> +	 * or if we want to idle in case it has no pending requests.
> +	 */
> +
> +	if (elv_active_ioq(q->elevator) == ioq) {
> +		if (elv_ioq_slice_new(ioq)) {
> +			elv_ioq_set_prio_slice(q, ioq);
> +			elv_clear_ioq_slice_new(ioq);
> +		}
> +		/*
> +		 * If there are no requests waiting in this queue, and
> +		 * there are other queues ready to issue requests, AND
> +		 * those other queues are issuing requests within our
> +		 * mean seek distance, give them a chance to run instead
> +		 * of idling.
> +		 */
> +		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
> +			elv_ioq_slice_expired(q);
> +		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
> +			 && sync && !rq_noidle(rq))
> +			elv_ioq_arm_slice_timer(q);
> +	}
> +
> +	if (!efqd->rq_in_driver)
> +		elv_schedule_dispatch(q);
> +}
> +
> +struct io_group *io_lookup_io_group_current(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	return efqd->root_group;
> +}
> +EXPORT_SYMBOL(io_lookup_io_group_current);
> +
> +void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> +					int ioprio)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	switch (ioprio_class) {
> +	case IOPRIO_CLASS_RT:
> +		ioq = iog->async_queue[0][ioprio];
> +		break;
> +	case IOPRIO_CLASS_BE:
> +		ioq = iog->async_queue[1][ioprio];
> +		break;
> +	case IOPRIO_CLASS_IDLE:
> +		ioq = iog->async_idle_queue;
> +		break;
> +	default:
> +		BUG();
> +	}
> +
> +	if (ioq)
> +		return ioq->sched_queue;
> +	return NULL;
> +}
> +EXPORT_SYMBOL(io_group_async_queue_prio);
> +
> +void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> +					int ioprio, struct io_queue *ioq)
> +{
> +	switch (ioprio_class) {
> +	case IOPRIO_CLASS_RT:
> +		iog->async_queue[0][ioprio] = ioq;
> +		break;
> +	case IOPRIO_CLASS_BE:
> +		iog->async_queue[1][ioprio] = ioq;
> +		break;
> +	case IOPRIO_CLASS_IDLE:
> +		iog->async_idle_queue = ioq;
> +		break;
> +	default:
> +		BUG();
> +	}
> +
> +	/*
> +	 * Take the group reference and pin the queue. Group exit will
> +	 * clean it up
> +	 */
> +	elv_get_ioq(ioq);
> +}
> +EXPORT_SYMBOL(io_group_set_async_queue);
> +
> +/*
> + * Release all the io group references to its async queues.
> + */
> +void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
> +{
> +	int i, j;
> +
> +	for (i = 0; i < 2; i++)
> +		for (j = 0; j < IOPRIO_BE_NR; j++)
> +			elv_release_ioq(e, &iog->async_queue[i][j]);
> +
> +	/* Free up async idle queue */
> +	elv_release_ioq(e, &iog->async_idle_queue);
> +}
> +
> +struct io_group *io_alloc_root_group(struct request_queue *q,
> +					struct elevator_queue *e, void *key)
> +{
> +	struct io_group *iog;
> +	int i;
> +
> +	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
> +	if (iog == NULL)
> +		return NULL;
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
> +		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
> +
> +	return iog;
> +}
> +
> +void io_free_root_group(struct elevator_queue *e)
> +{
> +	struct io_group *iog = e->efqd.root_group;
> +	struct io_service_tree *st;
> +	int i;
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
> +		st = iog->sched_data.service_tree + i;
> +		io_flush_idle_tree(st);
> +	}
> +
> +	io_put_io_group_queues(e, iog);
> +	kfree(iog);
> +}
> +
> +static void elv_slab_kill(void)
> +{
> +	/*
> +	 * Caller already ensured that pending RCU callbacks are completed,
> +	 * so we should have no busy allocations at this point.
> +	 */
> +	if (elv_ioq_pool)
> +		kmem_cache_destroy(elv_ioq_pool);
> +}
> +
> +static int __init elv_slab_setup(void)
> +{
> +	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
> +	if (!elv_ioq_pool)
> +		goto fail;
> +
> +	return 0;
> +fail:
> +	elv_slab_kill();
> +	return -ENOMEM;
> +}
> +
> +/* Initialize fair queueing data associated with elevator */
> +int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
> +{
> +	struct io_group *iog;
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return 0;
> +
> +	iog = io_alloc_root_group(q, e, efqd);
> +	if (iog == NULL)
> +		return 1;
> +
> +	efqd->root_group = iog;
> +	efqd->queue = q;
> +
> +	init_timer(&efqd->idle_slice_timer);
> +	efqd->idle_slice_timer.function = elv_idle_slice_timer;
> +	efqd->idle_slice_timer.data = (unsigned long) efqd;
> +
> +	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
> +
> +	efqd->elv_slice[0] = elv_slice_async;
> +	efqd->elv_slice[1] = elv_slice_sync;
> +	efqd->elv_slice_idle = elv_slice_idle;
> +	efqd->hw_tag = 1;
> +
> +	return 0;
> +}
> +
> +/*
> + * elv_exit_fq_data is called before we call elevator_exit_fn. Before
> + * we ask elevator to cleanup its queues, we do the cleanup here so
> + * that all the group and idle tree references to ioq are dropped. Later
> + * during elevator cleanup, ioc reference will be dropped which will lead
> + * to removal of ioscheduler queue as well as associated ioq object.
> + */
> +void elv_exit_fq_data(struct elevator_queue *e)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	elv_shutdown_timer_wq(e);
> +
> +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> +	io_free_root_group(e);
> +}
> +
> +/*
> + * This is called after the io scheduler has cleaned up its data structures.
> + * I don't think that this function is required. Right now just keeping it
> + * because cfq cleans up timer and work queue again after freeing up
> + * io contexts. To me io scheduler has already been drained out, and all
> + * the active queue have already been expired so time and work queue should
> + * not been activated during cleanup process.
> + *
> + * Keeping it here for the time being. Will get rid of it later.
> + */
> +void elv_exit_fq_data_post(struct elevator_queue *e)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	elv_shutdown_timer_wq(e);
> +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> +}
> +
> +
> +static int __init elv_fq_init(void)
> +{
> +	if (elv_slab_setup())
> +		return -ENOMEM;
> +
> +	/* could be 0 on HZ < 1000 setups */
> +
> +	if (!elv_slice_async)
> +		elv_slice_async = 1;
> +
> +	if (!elv_slice_idle)
> +		elv_slice_idle = 1;
> +
> +	return 0;
> +}
> +
> +module_init(elv_fq_init);
> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> new file mode 100644
> index 0000000..5b6c1cc
> --- /dev/null
> +++ b/block/elevator-fq.h
> @@ -0,0 +1,473 @@
> +/*
> + * BFQ: data structures and common functions prototypes.
> + *
> + * Based on ideas and code from CFQ:
> + * Copyright (C) 2003 Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>
> + *
> + * Copyright (C) 2008 Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
> + *		      Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
> + * Copyright (C) 2009 Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> + * 	              Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> + */
> +
> +#include <linux/blkdev.h>
> +
> +#ifndef _BFQ_SCHED_H
> +#define _BFQ_SCHED_H
> +
> +#define IO_IOPRIO_CLASSES	3
> +
> +typedef u64 bfq_timestamp_t;
> +typedef unsigned long bfq_weight_t;
> +typedef unsigned long bfq_service_t;

Does this abstraction really provide any benefit? Why not directly use
the standard C types, make the code easier to read.

> +struct io_entity;
> +struct io_queue;
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +
> +#define ELV_ATTR(name) \
> +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> +
> +/**
> + * struct bfq_service_tree - per ioprio_class service tree.

Comment is old, does not reflect the newer name

> + * @active: tree for active entities (i.e., those backlogged).
> + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> + * @first_idle: idle entity with minimum F_i.
> + * @last_idle: idle entity with maximum F_i.
> + * @vtime: scheduler virtual time.
> + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> + *
> + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> + * ioprio_class has its own independent scheduler, and so its own
> + * bfq_service_tree.  All the fields are protected by the queue lock
> + * of the containing efqd.
> + */
> +struct io_service_tree {
> +	struct rb_root active;
> +	struct rb_root idle;
> +
> +	struct io_entity *first_idle;
> +	struct io_entity *last_idle;
> +
> +	bfq_timestamp_t vtime;
> +	bfq_weight_t wsum;
> +};
> +
> +/**
> + * struct bfq_sched_data - multi-class scheduler.

Again the naming convention is broken, you need to change several
bfq's to io's :)

> + * @active_entity: entity under service.
> + * @next_active: head-of-the-line entity in the scheduler.
> + * @service_tree: array of service trees, one per ioprio_class.
> + *
> + * bfq_sched_data is the basic scheduler queue.  It supports three
> + * ioprio_classes, and can be used either as a toplevel queue or as
> + * an intermediate queue on a hierarchical setup.
> + * @next_active points to the active entity of the sched_data service
> + * trees that will be scheduled next.
> + *
> + * The supported ioprio_classes are the same as in CFQ, in descending
> + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
> + * Requests from higher priority queues are served before all the
> + * requests from lower priority queues; among requests of the same
> + * queue requests are served according to B-WF2Q+.
> + * All the fields are protected by the queue lock of the containing bfqd.
> + */
> +struct io_sched_data {
> +	struct io_entity *active_entity;
> +	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
> +};
> +
> +/**
> + * struct bfq_entity - schedulable entity.
> + * @rb_node: service_tree member.
> + * @on_st: flag, true if the entity is on a tree (either the active or
> + *         the idle one of its service_tree).
> + * @finish: B-WF2Q+ finish timestamp (aka F_i).
> + * @start: B-WF2Q+ start timestamp (aka S_i).

Could you mention what key is used in the rb_tree? start, finish
sounds like a range, so my suspicion is that start is used.

> + * @tree: tree the entity is enqueued into; %NULL if not on a tree.
> + * @min_start: minimum start time of the (active) subtree rooted at
> + *             this entity; used for O(log N) lookups into active trees.

Used for O(log N) makes no sense to me, RBTree has a worst case
lookup time of O(log N), but what is the comment saying?

> + * @service: service received during the last round of service.
> + * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
> + * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
> + * @parent: parent entity, for hierarchical scheduling.
> + * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
> + *                 associated scheduler queue, %NULL on leaf nodes.
> + * @sched_data: the scheduler queue this entity belongs to.
> + * @ioprio: the ioprio in use.
> + * @new_ioprio: when an ioprio change is requested, the new ioprio value
> + * @ioprio_class: the ioprio_class in use.
> + * @new_ioprio_class: when an ioprio_class change is requested, the new
> + *                    ioprio_class value.
> + * @ioprio_changed: flag, true when the user requested an ioprio or
> + *                  ioprio_class change.
> + *
> + * A bfq_entity is used to represent either a bfq_queue (leaf node in the
> + * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
> + * entity belongs to the sched_data of the parent group in the cgroup
> + * hierarchy.  Non-leaf entities have also their own sched_data, stored
> + * in @my_sched_data.
> + *
> + * Each entity stores independently its priority values; this would allow
> + * different weights on different devices, but this functionality is not
> + * exported to userspace by now.  Priorities are updated lazily, first
> + * storing the new values into the new_* fields, then setting the
> + * @ioprio_changed flag.  As soon as there is a transition in the entity
> + * state that allows the priority update to take place the effective and
> + * the requested priority values are synchronized.
> + *
> + * The weight value is calculated from the ioprio to export the same
> + * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
> + * queues that do not spend too much time to consume their budget and
> + * have true sequential behavior, and when there are no external factors
> + * breaking anticipation) the relative weights at each level of the
> + * cgroups hierarchy should be guaranteed.
> + * All the fields are protected by the queue lock of the containing bfqd.
> + */
> +struct io_entity {
> +	struct rb_node rb_node;
> +
> +	int on_st;
> +
> +	bfq_timestamp_t finish;
> +	bfq_timestamp_t start;
> +
> +	struct rb_root *tree;
> +
> +	bfq_timestamp_t min_start;
> +
> +	bfq_service_t service, budget;
> +	bfq_weight_t weight;
> +
> +	struct io_entity *parent;
> +
> +	struct io_sched_data *my_sched_data;
> +	struct io_sched_data *sched_data;
> +
> +	unsigned short ioprio, new_ioprio;
> +	unsigned short ioprio_class, new_ioprio_class;
> +
> +	int ioprio_changed;
> +};
> +
> +/*
> + * A common structure embedded by every io scheduler into their respective
> + * queue structure.
> + */
> +struct io_queue {
> +	struct io_entity entity;

So the io_queue has an abstract entity called io_entity that contains
its QoS parameters? Correct?

> +	atomic_t ref;
> +	unsigned int flags;
> +
> +	/* Pointer to generic elevator data structure */
> +	struct elv_fq_data *efqd;
> +	pid_t pid;

Why do we store the pid?

> +
> +	/* Number of requests queued on this io queue */
> +	unsigned long nr_queued;
> +
> +	/* Requests dispatched from this queue */
> +	int dispatched;
> +
> +	/* Keep a track of think time of processes in this queue */
> +	unsigned long last_end_request;
> +	unsigned long ttime_total;
> +	unsigned long ttime_samples;
> +	unsigned long ttime_mean;
> +
> +	unsigned long slice_end;
> +
> +	/* Pointer to io scheduler's queue */
> +	void *sched_queue;
> +};
> +
> +struct io_group {
> +	struct io_sched_data sched_data;
> +
> +	/* async_queue and idle_queue are used only for cfq */
> +	struct io_queue *async_queue[2][IOPRIO_BE_NR];

Again, the bare 2 is confusing.

> +	struct io_queue *async_idle_queue;
> +
> +	/*
> +	 * Used to track any pending rt requests so we can pre-empt current
> +	 * non-RT cfqq in service when this value is non-zero.
> +	 */
> +	unsigned int busy_rt_queues;
> +};
> +
> +struct elv_fq_data {

What does fq stand for?

> +	struct io_group *root_group;
> +
> +	struct request_queue *queue;
> +	unsigned int busy_queues;
> +
> +	/* Number of requests queued */
> +	int rq_queued;
> +
> +	/* Pointer to the ioscheduler queue being served */
> +	void *active_queue;
> +
> +	int rq_in_driver;
> +	int hw_tag;
> +	int hw_tag_samples;
> +	int rq_in_driver_peak;

Some comments on rq_in_driver and rq_in_driver_peak would be nice.

> +
> +	/*
> +	 * elevator fair queuing layer has the capability to provide idling
> +	 * for ensuring fairness for processes doing dependent reads.
> +	 * This might be needed to ensure fairness among two processes doing
> +	 * synchronous reads in two different cgroups. noop and deadline don't
> +	 * have any notion of anticipation/idling. As of now, these are the
> +	 * users of this functionality.
> +	 */
> +	unsigned int elv_slice_idle;
> +	struct timer_list idle_slice_timer;
> +	struct work_struct unplug_work;
> +
> +	unsigned int elv_slice[2];

Why [2]? It makes the code harder to read.
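
If the index is just the sync flag (elv_prio_slice() seems to use it
as efqd->elv_slice[sync], and the sysfs hooks treat [1] as sync and
[0] as async), a pair of named constants would read better. Only a
sketch, the names below are made up:

	enum {
		ELV_ASYNC = 0,	/* made-up names, just to illustrate */
		ELV_SYNC  = 1,
	};

	unsigned int elv_slice[2];	/* indexed by ELV_ASYNC/ELV_SYNC */

and then efqd->elv_slice[ELV_SYNC] / efqd->elv_slice[ELV_ASYNC] at the
use sites instead of the bare 0/1.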

> +};
> +
> +extern int elv_slice_idle;
> +extern int elv_slice_async;
> +
> +/* Logging facilities. */
> +#define elv_log_ioq(efqd, ioq, fmt, args...) \
> +	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
> +				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
> +
> +#define elv_log(efqd, fmt, args...) \
> +	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
> +
> +#define ioq_sample_valid(samples)   ((samples) > 80)
> +
> +/* Some shared queue flag manipulation functions among elevators */
> +
> +enum elv_queue_state_flags {
> +	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
> +	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
> +	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
> +	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
> +	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
> +	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
> +	ELV_QUEUE_FLAG_NR,
> +};
> +
> +#define ELV_IO_QUEUE_FLAG_FNS(name)					\
> +static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
> +{                                                                       \
> +	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
> +}                                                                       \
> +static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
> +{                                                                       \
> +	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
> +}                                                                       \
> +static inline int elv_ioq_##name(struct io_queue *ioq)         		\
> +{                                                                       \
> +	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
> +}
> +
> +ELV_IO_QUEUE_FLAG_FNS(busy)
> +ELV_IO_QUEUE_FLAG_FNS(sync)
> +ELV_IO_QUEUE_FLAG_FNS(wait_request)
> +ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
> +ELV_IO_QUEUE_FLAG_FNS(idle_window)
> +ELV_IO_QUEUE_FLAG_FNS(slice_new)
> +
> +static inline struct io_service_tree *
> +io_entity_service_tree(struct io_entity *entity)
> +{
> +	struct io_sched_data *sched_data = entity->sched_data;
> +	unsigned int idx = entity->ioprio_class - 1;
> +
> +	BUG_ON(idx >= IO_IOPRIO_CLASSES);
> +	BUG_ON(sched_data == NULL);
> +
> +	return sched_data->service_tree + idx;
> +}
> +
> +/* A request got dispatched from the io_queue. Do the accounting. */
> +static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
> +{
> +	ioq->dispatched++;
> +}
> +
> +static inline int elv_ioq_slice_used(struct io_queue *ioq)
> +{
> +	if (elv_ioq_slice_new(ioq))
> +		return 0;
> +	if (time_before(jiffies, ioq->slice_end))
> +		return 0;
> +
> +	return 1;
> +}
> +
> +/* How many request are currently dispatched from the queue */
> +static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
> +{
> +	return ioq->dispatched;
> +}
> +
> +/* How many request are currently queued in the queue */
> +static inline int elv_ioq_nr_queued(struct io_queue *ioq)
> +{
> +	return ioq->nr_queued;
> +}
> +
> +static inline void elv_get_ioq(struct io_queue *ioq)
> +{
> +	atomic_inc(&ioq->ref);
> +}
> +
> +static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
> +						unsigned long slice_end)
> +{
> +	ioq->slice_end = slice_end;
> +}
> +
> +static inline int elv_ioq_class_idle(struct io_queue *ioq)
> +{
> +	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
> +}
> +
> +static inline int elv_ioq_class_rt(struct io_queue *ioq)
> +{
> +	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
> +}
> +
> +static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
> +{
> +	return ioq->entity.new_ioprio_class;
> +}
> +
> +static inline int elv_ioq_ioprio(struct io_queue *ioq)
> +{
> +	return ioq->entity.new_ioprio;
> +}
> +
> +static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
> +						int ioprio_class)
> +{
> +	ioq->entity.new_ioprio_class = ioprio_class;
> +	ioq->entity.ioprio_changed = 1;
> +}
> +
> +static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
> +{
> +	ioq->entity.new_ioprio = ioprio;
> +	ioq->entity.ioprio_changed = 1;
> +}
> +
> +static inline void *ioq_sched_queue(struct io_queue *ioq)
> +{
> +	if (ioq)
> +		return ioq->sched_queue;
> +	return NULL;
> +}
> +
> +static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
> +{
> +	return container_of(ioq->entity.sched_data, struct io_group,
> +						sched_data);
> +}
> +
> +extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +
> +/* Functions used by elevator.c */
> +extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
> +extern void elv_exit_fq_data(struct elevator_queue *e);
> +extern void elv_exit_fq_data_post(struct elevator_queue *e);
> +
> +extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
> +extern void elv_ioq_request_removed(struct elevator_queue *e,
> +					struct request *rq);
> +extern void elv_fq_dispatched_request(struct elevator_queue *e,
> +					struct request *rq);
> +
> +extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
> +extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
> +
> +extern void elv_ioq_completed_request(struct request_queue *q,
> +				struct request *rq);
> +
> +extern void *elv_fq_select_ioq(struct request_queue *q, int force);
> +extern struct io_queue *rq_ioq(struct request *rq);
> +
> +/* Functions used by io schedulers */
> +extern void elv_put_ioq(struct io_queue *ioq);
> +extern void __elv_ioq_slice_expired(struct request_queue *q,
> +					struct io_queue *ioq);
> +extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> +		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
> +extern void elv_schedule_dispatch(struct request_queue *q);
> +extern int elv_hw_tag(struct elevator_queue *e);
> +extern void *elv_active_sched_queue(struct elevator_queue *e);
> +extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
> +					unsigned long expires);
> +extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
> +extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
> +extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> +					int ioprio);
> +extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> +					int ioprio, struct io_queue *ioq);
> +extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
> +extern int elv_nr_busy_ioq(struct elevator_queue *e);
> +extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
> +extern void elv_free_ioq(struct io_queue *ioq);
> +
> +#else /* CONFIG_ELV_FAIR_QUEUING */
> +
> +static inline int elv_init_fq_data(struct request_queue *q,
> +					struct elevator_queue *e)
> +{
> +	return 0;
> +}
> +
> +static inline void elv_exit_fq_data(struct elevator_queue *e) {}
> +static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
> +
> +static inline void elv_fq_activate_rq(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_fq_deactivate_rq(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_fq_dispatched_request(struct elevator_queue *e,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_request_removed(struct elevator_queue *e,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_request_add(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_completed_request(struct request_queue *q,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
> +static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
> +static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
> +{
> +	return NULL;
> +}
> +#endif /* CONFIG_ELV_FAIR_QUEUING */
> +#endif /* _BFQ_SCHED_H */
> diff --git a/block/elevator.c b/block/elevator.c
> index 7073a90..c2f07f5 100644
> --- a/block/elevator.c
> +++ b/block/elevator.c
> @@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
>  	for (i = 0; i < ELV_HASH_ENTRIES; i++)
>  		INIT_HLIST_HEAD(&eq->hash[i]);
> 
> +	if (elv_init_fq_data(q, eq))
> +		goto err;
> +
>  	return eq;
>  err:
>  	kfree(eq);
> @@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
>  void elevator_exit(struct elevator_queue *e)
>  {
>  	mutex_lock(&e->sysfs_lock);
> +	elv_exit_fq_data(e);
>  	if (e->ops->elevator_exit_fn)
>  		e->ops->elevator_exit_fn(e);
>  	e->ops = NULL;
> +	elv_exit_fq_data_post(e);
>  	mutex_unlock(&e->sysfs_lock);
> 
>  	kobject_put(&e->kobj);
> @@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
>  {
>  	struct elevator_queue *e = q->elevator;
> 
> +	elv_fq_activate_rq(q, rq);
> +
>  	if (e->ops->elevator_activate_req_fn)
>  		e->ops->elevator_activate_req_fn(q, rq);
>  }
> @@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
>  {
>  	struct elevator_queue *e = q->elevator;
> 
> +	elv_fq_deactivate_rq(q, rq);
> +
>  	if (e->ops->elevator_deactivate_req_fn)
>  		e->ops->elevator_deactivate_req_fn(q, rq);
>  }
> @@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
>  	elv_rqhash_del(q, rq);
> 
>  	q->nr_sorted--;
> +	elv_fq_dispatched_request(q->elevator, rq);
> 
>  	boundary = q->end_sector;
>  	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
> @@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
>  	elv_rqhash_del(q, rq);
> 
>  	q->nr_sorted--;
> +	elv_fq_dispatched_request(q->elevator, rq);
> 
>  	q->end_sector = rq_end_sector(rq);
>  	q->boundary_rq = rq;
> @@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
>  	elv_rqhash_del(q, next);
> 
>  	q->nr_sorted--;
> +	elv_ioq_request_removed(e, next);
>  	q->last_merge = rq;
>  }
> 
> @@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
>  				q->last_merge = rq;
>  		}
> 
> -		/*
> -		 * Some ioscheds (cfq) run q->request_fn directly, so
> -		 * rq cannot be accessed after calling
> -		 * elevator_add_req_fn.
> -		 */
>  		q->elevator->ops->elevator_add_req_fn(q, rq);
> +		elv_ioq_request_add(q, rq);
>  		break;
> 
>  	case ELEVATOR_INSERT_REQUEUE:
> @@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
> 
>  int elv_queue_empty(struct request_queue *q)
>  {
> -	struct elevator_queue *e = q->elevator;
> -
>  	if (!list_empty(&q->queue_head))
>  		return 0;
> 
> -	if (e->ops->elevator_queue_empty_fn)
> -		return e->ops->elevator_queue_empty_fn(q);
> +	/* Hopefully nr_sorted works and no need to call queue_empty_fn */
> +	if (q->nr_sorted)
> +		return 0;
> 
>  	return 1;
>  }
> @@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
>  	 */
>  	if (blk_account_rq(rq)) {
>  		q->in_flight--;
> -		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
> -			e->ops->elevator_completed_req_fn(q, rq);
> +		if (blk_sorted_rq(rq)) {
> +			if (e->ops->elevator_completed_req_fn)
> +				e->ops->elevator_completed_req_fn(q, rq);
> +			elv_ioq_completed_request(q, rq);
> +		}
>  	}
> 
>  	/*
> @@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
>  	return NULL;
>  }
>  EXPORT_SYMBOL(elv_rb_latter_request);
> +
> +/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
> +void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
> +{
> +	return ioq_sched_queue(rq_ioq(rq));
> +}
> +EXPORT_SYMBOL(elv_get_sched_queue);
> +
> +/* Select an ioscheduler queue to dispatch request from. */
> +void *elv_select_sched_queue(struct request_queue *q, int force)
> +{
> +	return ioq_sched_queue(elv_fq_select_ioq(q, force));
> +}
> +EXPORT_SYMBOL(elv_select_sched_queue);
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index b4f71f1..96a94c9 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -245,6 +245,11 @@ struct request {
> 
>  	/* for bidi */
>  	struct request *next_rq;
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	/* io queue request belongs to */
> +	struct io_queue *ioq;
> +#endif
>  };
> 
>  static inline unsigned short req_get_ioprio(struct request *req)
> diff --git a/include/linux/elevator.h b/include/linux/elevator.h
> index c59b769..679c149 100644
> --- a/include/linux/elevator.h
> +++ b/include/linux/elevator.h
> @@ -2,6 +2,7 @@
>  #define _LINUX_ELEVATOR_H
> 
>  #include <linux/percpu.h>
> +#include "../../block/elevator-fq.h"
> 
>  #ifdef CONFIG_BLOCK
> 
> @@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
> 
>  typedef void *(elevator_init_fn) (struct request_queue *);
>  typedef void (elevator_exit_fn) (struct elevator_queue *);
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
> +typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
> +typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
> +typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
> +typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
> +						struct request*);
> +typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
> +						struct request*);
> +typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
> +						void*, int probe);
> +#endif
> 
>  struct elevator_ops
>  {
> @@ -56,6 +69,17 @@ struct elevator_ops
>  	elevator_init_fn *elevator_init_fn;
>  	elevator_exit_fn *elevator_exit_fn;
>  	void (*trim)(struct io_context *);
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
> +	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
> +	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
> +
> +	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
> +	elevator_should_preempt_fn *elevator_should_preempt_fn;
> +	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
> +	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
> +#endif
>  };
> 
>  #define ELV_NAME_MAX	(16)
> @@ -76,6 +100,9 @@ struct elevator_type
>  	struct elv_fs_entry *elevator_attrs;
>  	char elevator_name[ELV_NAME_MAX];
>  	struct module *elevator_owner;
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	int elevator_features;
> +#endif
>  };
> 
>  /*
> @@ -89,6 +116,10 @@ struct elevator_queue
>  	struct elevator_type *elevator_type;
>  	struct mutex sysfs_lock;
>  	struct hlist_head *hash;
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	/* fair queuing data */
> +	struct elv_fq_data efqd;
> +#endif
>  };
> 
>  /*
> @@ -209,5 +240,25 @@ enum {
>  	__val;							\
>  })
> 
> +/* iosched can let elevator know their feature set/capability */
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +
> +/* iosched wants to use fq logic of elevator layer */
> +#define	ELV_IOSCHED_NEED_FQ	1
> +
> +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> +{
> +	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
> +}
> +
> +#else /* ELV_IOSCHED_FAIR_QUEUING */
> +
> +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> +{
> +	return 0;
> +}
> +#endif /* ELV_IOSCHED_FAIR_QUEUING */
> +extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
> +extern void *elv_select_sched_queue(struct request_queue *q, int force);
>  #endif /* CONFIG_BLOCK */
>  #endif
> -- 
> 1.6.0.6
> 

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-19 20:37   ` Vivek Goyal
@ 2009-06-22  8:46     ` Balbir Singh
  -1 siblings, 0 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-22  8:46 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, righi.andrea, m-ikeda, jbaron,
	agk, snitzer, akpm, peterz

* Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:20]:

> This is common fair queuing code in elevator layer. This is controlled by
> config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> flat fair queuing support where there is only one group, "root group" and all
> the tasks belong to root group.
> 
> This elevator layer changes are backward compatible. That means any ioscheduler
> using old interfaces will continue to work.
> 
> This code is essentially the CFQ code for fair queuing. The primary difference
> is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
>

The patch is quite long and, to be honest, takes a long time to
review, which I don't mind. I suspect my frequently diverted mind is
likely to miss a lot in a patch this big. Could you consider splitting
it further if possible? I think you'll notice the number of reviews
will also increase.
 
> Signed-off-by: Nauman Rafique <nauman@google.com>
> Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
> Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
> Signed-off-by: Aristeu Rozanski <aris@redhat.com>
> Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
> Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> ---
>  block/Kconfig.iosched    |   13 +
>  block/Makefile           |    1 +
>  block/elevator-fq.c      | 2015 ++++++++++++++++++++++++++++++++++++++++++++++
>  block/elevator-fq.h      |  473 +++++++++++
>  block/elevator.c         |   46 +-
>  include/linux/blkdev.h   |    5 +
>  include/linux/elevator.h |   51 ++
>  7 files changed, 2593 insertions(+), 11 deletions(-)
>  create mode 100644 block/elevator-fq.c
>  create mode 100644 block/elevator-fq.h
> 
> diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
> index 7e803fc..3398134 100644
> --- a/block/Kconfig.iosched
> +++ b/block/Kconfig.iosched
> @@ -2,6 +2,19 @@ if BLOCK
> 
>  menu "IO Schedulers"
> 
> +config ELV_FAIR_QUEUING
> +	bool "Elevator Fair Queuing Support"
> +	default n
> +	---help---
> +	  Traditionally only cfq had notion of multiple queues and it did
> +	  fair queuing at its own. With the cgroups and need of controlling
> +	  IO, now even the simple io schedulers like noop, deadline, as will
> +	  have one queue per cgroup and will need hierarchical fair queuing.
> +	  Instead of every io scheduler implementing its own fair queuing
> +	  logic, this option enables fair queuing in elevator layer so that
> +	  other ioschedulers can make use of it.
> +	  If unsure, say N.
> +
>  config IOSCHED_NOOP
>  	bool
>  	default y
> diff --git a/block/Makefile b/block/Makefile
> index e9fa4dd..94bfc6e 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -15,3 +15,4 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
> 
>  obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
>  obj-$(CONFIG_BLK_DEV_INTEGRITY)	+= blk-integrity.o
> +obj-$(CONFIG_ELV_FAIR_QUEUING)	+= elevator-fq.o
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> new file mode 100644
> index 0000000..9357fb0
> --- /dev/null
> +++ b/block/elevator-fq.c
> @@ -0,0 +1,2015 @@
> +/*
> + * BFQ: Hierarchical B-WF2Q+ scheduler.
> + *
> + * Based on ideas and code from CFQ:
> + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> + *
> + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> + *		      Paolo Valente <paolo.valente@unimore.it>
> + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> + * 	              Nauman Rafique <nauman@google.com>
> + */
> +
> +#include <linux/blkdev.h>
> +#include "elevator-fq.h"
> +#include <linux/blktrace_api.h>
> +
> +/* Values taken from cfq */
> +const int elv_slice_sync = HZ / 10;
> +int elv_slice_async = HZ / 25;
> +const int elv_slice_async_rq = 2;
> +int elv_slice_idle = HZ / 125;
> +static struct kmem_cache *elv_ioq_pool;
> +
> +#define ELV_SLICE_SCALE		(5)
> +#define ELV_HW_QUEUE_MIN	(5)
> +#define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
> +				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
> +
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe);
> +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> +						 int extract);
> +
> +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> +					unsigned short prio)

Why is the return type int and not unsigned int or unsigned long? Can
the return value ever be negative?

> +{
> +	const int base_slice = efqd->elv_slice[sync];
> +
> +	WARN_ON(prio >= IOPRIO_BE_NR);
> +
> +	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
> +}
> +
> +static inline int
> +elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
> +{
> +	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
> +}
> +
> +/* Mainly the BFQ scheduling code Follows */
> +
> +/*
> + * Shift for timestamp calculations.  This actually limits the maximum
> + * service allowed in one timestamp delta (small shift values increase it),
> + * the maximum total weight that can be used for the queues in the system
> + * (big shift values increase it), and the period of virtual time wraparounds.
> + */
> +#define WFQ_SERVICE_SHIFT	22
> +
> +/**
> + * bfq_gt - compare two timestamps.
> + * @a: first ts.
> + * @b: second ts.
> + *
> + * Return @a > @b, dealing with wrapping correctly.
> + */
> +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> +{
> +	return (s64)(a - b) > 0;
> +}
> +

a and b are of type u64, but cast to s64 to deal with wrapping?
Correct?
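
If so, it is the same idiom as time_after(): the unsigned subtraction
wraps around and the signed comparison then gives the right answer as
long as the two timestamps are within half the wrap period of each
other. A made-up example of what that buys:

	u64 a = 10, b = (u64)-5;	/* b wrapped around, a is logically later */

	/* plain compare:  a > b            -> false, wrong            */
	/* bfq_gt idiom:   (s64)(a - b) > 0 -> a - b == 15, true, right */

Might be worth a one-line comment on bfq_gt() saying exactly that.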

> +/**
> + * bfq_delta - map service into the virtual time domain.
> + * @service: amount of service.
> + * @weight: scale factor.
> + */
> +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> +					bfq_weight_t weight)
> +{
> +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> +

Why is the cast required? Does the compiler complain? service is
already of the correct type.

> +	do_div(d, weight);

On a 64-bit system both d and weight are 64 bits, but on a 32-bit
system weight is 32 bits. do_div() expects a 64-bit dividend and a
32-bit divisor, no?
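
As far as I can tell the generic do_div() silently truncates the
divisor to 32 bits, so if bfq_weight_t can ever be wider than that on
64-bit, div64_u64() from linux/math64.h would be the safer spelling.
Just a sketch of what I mean (untested):

	#include <linux/math64.h>

	static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
						bfq_weight_t weight)
	{
		bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;

		/* full 64/64 division, no silent truncation of weight */
		return div64_u64(d, weight);
	}

With the current leaf weights (IOPRIO_BE_NR - ioprio) this probably
never matters in practice; it is only about the do_div() contract.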

> +	return d;
> +}
> +
> +/**
> + * bfq_calc_finish - assign the finish time to an entity.
> + * @entity: the entity to act upon.
> + * @service: the service to be charged to the entity.
> + */
> +static inline void bfq_calc_finish(struct io_entity *entity,
> +				   bfq_service_t service)
> +{
> +	BUG_ON(entity->weight == 0);
> +
> +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> +}

Should we BUG_ON(entity->finish == entity->start)? Or is that
expected when the entity has no service time left?

> +
> +static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	BUG_ON(entity == NULL);
> +	if (entity->my_sched_data == NULL)
> +		ioq = container_of(entity, struct io_queue, entity);
> +	return ioq;
> +}
> +
> +/**
> + * bfq_entity_of - get an entity from a node.
> + * @node: the node field of the entity.
> + *
> + * Convert a node pointer to the relative entity.  This is used only
> + * to simplify the logic of some functions and not as the generic
> + * conversion mechanism because, e.g., in the tree walking functions,
> + * the check for a %NULL value would be redundant.
> + */
> +static inline struct io_entity *bfq_entity_of(struct rb_node *node)
> +{
> +	struct io_entity *entity = NULL;
> +
> +	if (node != NULL)
> +		entity = rb_entry(node, struct io_entity, rb_node);
> +
> +	return entity;
> +}
> +
> +/**
> + * bfq_extract - remove an entity from a tree.
> + * @root: the tree root.
> + * @entity: the entity to remove.
> + */
> +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> +{

Extract is not common terminology, why not use bfq_remove()?

> +	BUG_ON(entity->tree != root);
> +
> +	entity->tree = NULL;
> +	rb_erase(&entity->rb_node, root);

Don't you want to set entity->tree = NULL after rb_erase()?

> +}
> +
> +/**
> + * bfq_idle_extract - extract an entity from the idle tree.
> + * @st: the service tree of the owning @entity.
> + * @entity: the entity being removed.
> + */
> +static void bfq_idle_extract(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct rb_node *next;
> +
> +	BUG_ON(entity->tree != &st->idle);
> +
> +	if (entity == st->first_idle) {
> +		next = rb_next(&entity->rb_node);

What happens if next is NULL?

> +		st->first_idle = bfq_entity_of(next);
> +	}
> +
> +	if (entity == st->last_idle) {
> +		next = rb_prev(&entity->rb_node);

What happens if next is NULL?

> +		st->last_idle = bfq_entity_of(next);
> +	}
> +
> +	bfq_extract(&st->idle, entity);
> +}
> +
> +/**
> + * bfq_insert - generic tree insertion.
> + * @root: tree root.
> + * @entity: entity to insert.
> + *
> + * This is used for the idle and the active tree, since they are both
> + * ordered by finish time.
> + */
> +static void bfq_insert(struct rb_root *root, struct io_entity *entity)
> +{
> +	struct io_entity *entry;
> +	struct rb_node **node = &root->rb_node;
> +	struct rb_node *parent = NULL;
> +
> +	BUG_ON(entity->tree != NULL);
> +
> +	while (*node != NULL) {
> +		parent = *node;
> +		entry = rb_entry(parent, struct io_entity, rb_node);
> +
> +		if (bfq_gt(entry->finish, entity->finish))
> +			node = &parent->rb_left;
> +		else
> +			node = &parent->rb_right;
> +	}
> +
> +	rb_link_node(&entity->rb_node, parent, node);
> +	rb_insert_color(&entity->rb_node, root);
> +
> +	entity->tree = root;
> +}
> +
> +/**
> + * bfq_update_min - update the min_start field of a entity.
> + * @entity: the entity to update.
> + * @node: one of its children.
> + *
> + * This function is called when @entity may store an invalid value for
> + * min_start due to updates to the active tree.  The function  assumes
> + * that the subtree rooted at @node (which may be its left or its right
> + * child) has a valid min_start value.
> + */
> +static inline void bfq_update_min(struct io_entity *entity,
> +					struct rb_node *node)
> +{
> +	struct io_entity *child;
> +
> +	if (node != NULL) {
> +		child = rb_entry(node, struct io_entity, rb_node);
> +		if (bfq_gt(entity->min_start, child->min_start))
> +			entity->min_start = child->min_start;
> +	}
> +}

So we check whether the child's min_start is less than the node (or
root) entity's, and set the entity's min_start to the minimum of the
two? Can you use min_t() here?

> +
> +/**
> + * bfq_update_active_node - recalculate min_start.
> + * @node: the node to update.
> + *
> + * @node may have changed position or one of its children may have moved,
> + * this function updates its min_start value.  The left and right subtrees
> + * are assumed to hold a correct min_start value.
> + */
> +static inline void bfq_update_active_node(struct rb_node *node)
> +{
> +	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
> +
> +	entity->min_start = entity->start;
> +	bfq_update_min(entity, node->rb_right);
> +	bfq_update_min(entity, node->rb_left);
> +}

I don't like this very much; we set min_start twice. This could
easily be optimized to look at both the left and right child and pick
the minimum.

> +
> +/**
> + * bfq_update_active_tree - update min_start for the whole active tree.
> + * @node: the starting node.
> + *
> + * @node must be the deepest modified node after an update.  This function
> + * updates its min_start using the values held by its children, assuming
> + * that they did not change, and then updates all the nodes that may have
> + * changed in the path to the root.  The only nodes that may have changed
> + * are the ones in the path or their siblings.
> + */
> +static void bfq_update_active_tree(struct rb_node *node)
> +{
> +	struct rb_node *parent;
> +
> +up:
> +	bfq_update_active_node(node);
> +
> +	parent = rb_parent(node);
> +	if (parent == NULL)
> +		return;
> +
> +	if (node == parent->rb_left && parent->rb_right != NULL)
> +		bfq_update_active_node(parent->rb_right);
> +	else if (parent->rb_left != NULL)
> +		bfq_update_active_node(parent->rb_left);
> +
> +	node = parent;
> +	goto up;
> +}
> +

For these functions, take a look at the walk function in the group
scheduler that does update_shares().

> +/**
> + * bfq_active_insert - insert an entity in the active tree of its group/device.
> + * @st: the service tree of the entity.
> + * @entity: the entity being inserted.
> + *
> + * The active tree is ordered by finish time, but an extra key is kept
> + * per each node, containing the minimum value for the start times of
> + * its children (and the node itself), so it's possible to search for
> + * the eligible node with the lowest finish time in logarithmic time.
> + */
> +static void bfq_active_insert(struct io_service_tree *st,
> +					struct io_entity *entity)
> +{
> +	struct rb_node *node = &entity->rb_node;
> +
> +	bfq_insert(&st->active, entity);
> +
> +	if (node->rb_left != NULL)
> +		node = node->rb_left;
> +	else if (node->rb_right != NULL)
> +		node = node->rb_right;
> +
> +	bfq_update_active_tree(node);
> +}
> +
> +/**
> + * bfq_ioprio_to_weight - calc a weight from an ioprio.
> + * @ioprio: the ioprio value to convert.
> + */
> +static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
> +{
> +	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
> +	return IOPRIO_BE_NR - ioprio;
> +}
> +
> +void bfq_get_entity(struct io_entity *entity)
> +{
> +	struct io_queue *ioq = io_entity_to_ioq(entity);
> +
> +	if (ioq)
> +		elv_get_ioq(ioq);
> +}
> +
> +void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
> +{
> +	entity->ioprio = entity->new_ioprio;
> +	entity->ioprio_class = entity->new_ioprio_class;
> +	entity->sched_data = &iog->sched_data;
> +}
> +
> +/**
> + * bfq_find_deepest - find the deepest node that an extraction can modify.
> + * @node: the node being removed.
> + *
> + * Do the first step of an extraction in an rb tree, looking for the
> + * node that will replace @node, and returning the deepest node that
> + * the following modifications to the tree can touch.  If @node is the
> + * last node in the tree return %NULL.
> + */
> +static struct rb_node *bfq_find_deepest(struct rb_node *node)
> +{
> +	struct rb_node *deepest;
> +
> +	if (node->rb_right == NULL && node->rb_left == NULL)
> +		deepest = rb_parent(node);

Why is the parent the deepest? Shouldn't node be the deepest?

> +	else if (node->rb_right == NULL)
> +		deepest = node->rb_left;
> +	else if (node->rb_left == NULL)
> +		deepest = node->rb_right;
> +	else {
> +		deepest = rb_next(node);
> +		if (deepest->rb_right != NULL)
> +			deepest = deepest->rb_right;
> +		else if (rb_parent(deepest) != node)
> +			deepest = rb_parent(deepest);
> +	}
> +
> +	return deepest;
> +}

The function is not clear; could you please define "deepest node"
better?

> +
> +/**
> + * bfq_active_extract - remove an entity from the active tree.
> + * @st: the service_tree containing the tree.
> + * @entity: the entity being removed.
> + */
> +static void bfq_active_extract(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct rb_node *node;
> +
> +	node = bfq_find_deepest(&entity->rb_node);
> +	bfq_extract(&st->active, entity);
> +
> +	if (node != NULL)
> +		bfq_update_active_tree(node);
> +}
> +

Just to check my understanding: every time an active node is
added/removed, we update the min_start of the entire tree?

> +/**
> + * bfq_idle_insert - insert an entity into the idle tree.
> + * @st: the service tree containing the tree.
> + * @entity: the entity to insert.
> + */
> +static void bfq_idle_insert(struct io_service_tree *st,
> +					struct io_entity *entity)
> +{
> +	struct io_entity *first_idle = st->first_idle;
> +	struct io_entity *last_idle = st->last_idle;
> +
> +	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
> +		st->first_idle = entity;
> +	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
> +		st->last_idle = entity;
> +
> +	bfq_insert(&st->idle, entity);
> +}
> +
> +/**
> + * bfq_forget_entity - remove an entity from the wfq trees.
> + * @st: the service tree.
> + * @entity: the entity being removed.
> + *
> + * Update the device status and forget everything about @entity, putting
> + * the device reference to it, if it is a queue.  Entities belonging to
> + * groups are not refcounted.
> + */
> +static void bfq_forget_entity(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	BUG_ON(!entity->on_st);
> +	entity->on_st = 0;
> +	st->wsum -= entity->weight;
> +	ioq = io_entity_to_ioq(entity);
> +	if (!ioq)
> +		return;
> +	elv_put_ioq(ioq);
> +}
> +
> +/**
> + * bfq_put_idle_entity - release the idle tree ref of an entity.
> + * @st: service tree for the entity.
> + * @entity: the entity being released.
> + */
> +void bfq_put_idle_entity(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	bfq_idle_extract(st, entity);
> +	bfq_forget_entity(st, entity);
> +}
> +
> +/**
> + * bfq_forget_idle - update the idle tree if necessary.
> + * @st: the service tree to act upon.
> + *
> + * To preserve the global O(log N) complexity we only remove one entry here;
> + * as the idle tree will not grow indefinitely this can be done safely.
> + */
> +void bfq_forget_idle(struct io_service_tree *st)
> +{
> +	struct io_entity *first_idle = st->first_idle;
> +	struct io_entity *last_idle = st->last_idle;
> +
> +	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
> +	    !bfq_gt(last_idle->finish, st->vtime)) {
> +		/*
> +		 * Active tree is empty. Pull back vtime to finish time of
> +		 * last idle entity on idle tree.
> +		 * Rational seems to be that it reduces the possibility of
> +		 * vtime wraparound (bfq_gt(V-F) < 0).
> +		 */
> +		st->vtime = last_idle->finish;
> +	}
> +
> +	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
> +		bfq_put_idle_entity(st, first_idle);
> +}
> +
> +
> +static struct io_service_tree *
> +__bfq_entity_update_prio(struct io_service_tree *old_st,
> +				struct io_entity *entity)
> +{
> +	struct io_service_tree *new_st = old_st;
> +	struct io_queue *ioq = io_entity_to_ioq(entity);
> +
> +	if (entity->ioprio_changed) {
> +		entity->ioprio = entity->new_ioprio;
> +		entity->ioprio_class = entity->new_ioprio_class;
> +		entity->ioprio_changed = 0;
> +
> +		/*
> +		 * Also update the scaled budget for ioq. Group will get the
> +		 * updated budget once ioq is selected to run next.
> +		 */
> +		if (ioq) {
> +			struct elv_fq_data *efqd = ioq->efqd;
> +			entity->budget = elv_prio_to_slice(efqd, ioq);
> +		}
> +
> +		old_st->wsum -= entity->weight;
> +		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
> +
> +		/*
> +		 * NOTE: here we may be changing the weight too early,
> +		 * this will cause unfairness.  The correct approach
> +		 * would have required additional complexity to defer
> +		 * weight changes to the proper time instants (i.e.,
> +		 * when entity->finish <= old_st->vtime).
> +		 */
> +		new_st = io_entity_service_tree(entity);
> +		new_st->wsum += entity->weight;
> +
> +		if (new_st != old_st)
> +			entity->start = new_st->vtime;
> +	}
> +
> +	return new_st;
> +}
> +
> +/**
> + * __bfq_activate_entity - activate an entity.
> + * @entity: the entity being activated.
> + *
> + * Called whenever an entity is activated, i.e., it is not active and one
> + * of its children receives a new request, or has to be reactivated due to
> + * budget exhaustion.  It uses the current budget of the entity (and the
> + * service received if @entity is active) of the queue to calculate its
> + * timestamps.
> + */
> +static void __bfq_activate_entity(struct io_entity *entity, int add_front)
> +{
> +	struct io_sched_data *sd = entity->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +
> +	if (entity == sd->active_entity) {
> +		BUG_ON(entity->tree != NULL);
> +		/*
> +		 * If we are requeueing the current entity we have
> +		 * to take care of not charging to it service it has
> +		 * not received.
> +		 */
> +		bfq_calc_finish(entity, entity->service);
> +		entity->start = entity->finish;
> +		sd->active_entity = NULL;
> +	} else if (entity->tree == &st->active) {
> +		/*
> +		 * Requeueing an entity due to a change of some
> +		 * next_active entity below it.  We reuse the old
> +		 * start time.
> +		 */
> +		bfq_active_extract(st, entity);
> +	} else if (entity->tree == &st->idle) {
> +		/*
> +		 * Must be on the idle tree, bfq_idle_extract() will
> +		 * check for that.
> +		 */
> +		bfq_idle_extract(st, entity);
> +		entity->start = bfq_gt(st->vtime, entity->finish) ?
> +				       st->vtime : entity->finish;
> +	} else {
> +		/*
> +		 * The finish time of the entity may be invalid, and
> +		 * it is in the past for sure, otherwise the queue
> +		 * would have been on the idle tree.
> +		 */
> +		entity->start = st->vtime;
> +		st->wsum += entity->weight;
> +		bfq_get_entity(entity);
> +
> +		BUG_ON(entity->on_st);
> +		entity->on_st = 1;
> +	}
> +
> +	st = __bfq_entity_update_prio(st, entity);
> +	/*
> +	 * This is to emulate cfq like functionality where preemption can
> +	 * happen with-in same class, like sync queue preempting async queue
> +	 * May be this is not a very good idea from fairness point of view
> +	 * as preempting queue gains share. Keeping it for now.
> +	 */
> +	if (add_front) {
> +		struct io_entity *next_entity;
> +
> +		/*
> +		 * Determine the entity which will be dispatched next
> +		 * Use sd->next_active once hierarchical patch is applied
> +		 */
> +		next_entity = bfq_lookup_next_entity(sd, 0);
> +
> +		if (next_entity && next_entity != entity) {
> +			struct io_service_tree *new_st;
> +			bfq_timestamp_t delta;
> +
> +			new_st = io_entity_service_tree(next_entity);
> +
> +			/*
> +			 * At this point, both entities should belong to
> +			 * same service tree as cross service tree preemption
> +			 * is automatically taken care by algorithm
> +			 */
> +			BUG_ON(new_st != st);
> +			entity->finish = next_entity->finish - 1;
> +			delta = bfq_delta(entity->budget, entity->weight);
> +			entity->start = entity->finish - delta;
> +			if (bfq_gt(entity->start, st->vtime))
> +				entity->start = st->vtime;
> +		}
> +	} else {
> +		bfq_calc_finish(entity, entity->budget);
> +	}
> +	bfq_active_insert(st, entity);
> +}
> +
> +/**
> + * bfq_activate_entity - activate an entity.
> + * @entity: the entity to activate.
> + */
> +void bfq_activate_entity(struct io_entity *entity, int add_front)
> +{
> +	__bfq_activate_entity(entity, add_front);
> +}
> +
> +/**
> + * __bfq_deactivate_entity - deactivate an entity from its service tree.
> + * @entity: the entity to deactivate.
> + * @requeue: if false, the entity will not be put into the idle tree.
> + *
> + * Deactivate an entity, independently from its previous state.  If the
> + * entity was not on a service tree just return, otherwise if it is on
> + * any scheduler tree, extract it from that tree, and if necessary
> + * and if the caller did not specify @requeue, put it on the idle tree.
> + *
> + */
> +int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
> +{
> +	struct io_sched_data *sd = entity->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +	int was_active = entity == sd->active_entity;
> +	int ret = 0;
> +
> +	if (!entity->on_st)
> +		return 0;
> +
> +	BUG_ON(was_active && entity->tree != NULL);
> +
> +	if (was_active) {
> +		bfq_calc_finish(entity, entity->service);
> +		sd->active_entity = NULL;
> +	} else if (entity->tree == &st->active)
> +		bfq_active_extract(st, entity);
> +	else if (entity->tree == &st->idle)
> +		bfq_idle_extract(st, entity);
> +	else if (entity->tree != NULL)
> +		BUG();
> +
> +	if (!requeue || !bfq_gt(entity->finish, st->vtime))
> +		bfq_forget_entity(st, entity);
> +	else
> +		bfq_idle_insert(st, entity);
> +
> +	BUG_ON(sd->active_entity == entity);
> +
> +	return ret;
> +}
> +
> +/**
> + * bfq_deactivate_entity - deactivate an entity.
> + * @entity: the entity to deactivate.
> + * @requeue: true if the entity can be put on the idle tree
> + */
> +void bfq_deactivate_entity(struct io_entity *entity, int requeue)
> +{
> +	__bfq_deactivate_entity(entity, requeue);
> +}
> +
> +/**
> + * bfq_update_vtime - update vtime if necessary.
> + * @st: the service tree to act upon.
> + *
> + * If necessary update the service tree vtime to have at least one
> + * eligible entity, skipping to its start time.  Assumes that the
> + * active tree of the device is not empty.
> + *
> + * NOTE: this hierarchical implementation updates vtimes quite often,
> + * we may end up with reactivated tasks getting timestamps after a
> + * vtime skip done because we needed a ->first_active entity on some
> + * intermediate node.
> + */
> +static void bfq_update_vtime(struct io_service_tree *st)
> +{
> +	struct io_entity *entry;
> +	struct rb_node *node = st->active.rb_node;
> +
> +	entry = rb_entry(node, struct io_entity, rb_node);
> +	if (bfq_gt(entry->min_start, st->vtime)) {
> +		st->vtime = entry->min_start;
> +		bfq_forget_idle(st);
> +	}
> +}
> +
> +/**
> + * bfq_first_active - find the eligible entity with the smallest finish time
> + * @st: the service tree to select from.
> + *
> + * This function searches the first schedulable entity, starting from the
> + * root of the tree and going on the left every time on this side there is
> + * a subtree with at least one eligible (start <= vtime) entity.  The path
> + * on the right is followed only if a) the left subtree contains no eligible
> + * entities and b) no eligible entity has been found yet.
> + */
> +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> +{
> +	struct io_entity *entry, *first = NULL;
> +	struct rb_node *node = st->active.rb_node;
> +
> +	while (node != NULL) {
> +		entry = rb_entry(node, struct io_entity, rb_node);
> +left:
> +		if (!bfq_gt(entry->start, st->vtime))
> +			first = entry;
> +
> +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> +
> +		if (node->rb_left != NULL) {
> +			entry = rb_entry(node->rb_left,
> +					 struct io_entity, rb_node);
> +			if (!bfq_gt(entry->min_start, st->vtime)) {
> +				node = node->rb_left;
> +				goto left;
> +			}
> +		}
> +		if (first != NULL)
> +			break;
> +		node = node->rb_right;

Please help me understand this: we sort the tree by finish time, but
search by vtime/start time. The worst case could easily be O(N),
right?

> +	}
> +
> +	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
> +	return first;
> +}
> +
> +/**
> + * __bfq_lookup_next_entity - return the first eligible entity in @st.
> + * @st: the service tree.
> + *
> + * Update the virtual time in @st and return the first eligible entity
> + * it contains.
> + */
> +static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
> +{
> +	struct io_entity *entity;
> +
> +	if (RB_EMPTY_ROOT(&st->active))
> +		return NULL;
> +
> +	bfq_update_vtime(st);
> +	entity = bfq_first_active_entity(st);
> +	BUG_ON(bfq_gt(entity->start, st->vtime));
> +
> +	return entity;
> +}
> +
> +/**
> + * bfq_lookup_next_entity - return the first eligible entity in @sd.
> + * @sd: the sched_data.
> + * @extract: if true the returned entity will be also extracted from @sd.
> + *
> + * NOTE: since we cache the next_active entity at each level of the
> + * hierarchy, the complexity of the lookup can be decreased with
> + * absolutely no effort just returning the cached next_active value;
> + * we prefer to do full lookups to test the consistency of * the data
> + * structures.
> + */
> +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> +						 int extract)
> +{
> +	struct io_service_tree *st = sd->service_tree;
> +	struct io_entity *entity;
> +	int i;
> +
> +	/*
> +	 * We should not call lookup when an entity is active, as doing lookup
> +	 * can result in an erroneous vtime jump.
> +	 */
> +	BUG_ON(sd->active_entity != NULL);
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
> +		entity = __bfq_lookup_next_entity(st);
> +		if (entity != NULL) {
> +			if (extract) {
> +				bfq_active_extract(st, entity);
> +				sd->active_entity = entity;
> +			}
> +			break;
> +		}
> +	}
> +
> +	return entity;
> +}
> +
> +void entity_served(struct io_entity *entity, bfq_service_t served)
> +{
> +	struct io_service_tree *st;
> +
> +	st = io_entity_service_tree(entity);
> +	entity->service += served;
> +	BUG_ON(st->wsum == 0);
> +	st->vtime += bfq_delta(served, st->wsum);
> +	bfq_forget_idle(st);

bfq_forget_idle() checks whether st->vtime > first_idle->finish; if
so it pushes first_idle to a later entity, right?

> +}
> +
> +/**
> + * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
> + * @st: the service tree being flushed.
> + */
> +void io_flush_idle_tree(struct io_service_tree *st)
> +{
> +	struct io_entity *entity = st->first_idle;
> +
> +	for (; entity != NULL; entity = st->first_idle)
> +		__bfq_deactivate_entity(entity, 0);
> +}
> +
> +/* Elevator fair queuing function */
> +struct io_queue *rq_ioq(struct request *rq)
> +{
> +	return rq->ioq;
> +}
> +
> +static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
> +{
> +	return e->efqd.active_queue;
> +}
> +
> +void *elv_active_sched_queue(struct elevator_queue *e)
> +{
> +	return ioq_sched_queue(elv_active_ioq(e));
> +}
> +EXPORT_SYMBOL(elv_active_sched_queue);
> +
> +int elv_nr_busy_ioq(struct elevator_queue *e)
> +{
> +	return e->efqd.busy_queues;
> +}
> +EXPORT_SYMBOL(elv_nr_busy_ioq);
> +
> +int elv_hw_tag(struct elevator_queue *e)
> +{
> +	return e->efqd.hw_tag;
> +}
> +EXPORT_SYMBOL(elv_hw_tag);
> +
> +/* Helper functions for operating on elevator idle slice timer */
> +int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +
> +	return mod_timer(&efqd->idle_slice_timer, expires);
> +}
> +EXPORT_SYMBOL(elv_mod_idle_slice_timer);
> +
> +int elv_del_idle_slice_timer(struct elevator_queue *eq)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +
> +	return del_timer(&efqd->idle_slice_timer);
> +}
> +EXPORT_SYMBOL(elv_del_idle_slice_timer);
> +
> +unsigned int elv_get_slice_idle(struct elevator_queue *eq)
> +{
> +	return eq->efqd.elv_slice_idle;
> +}
> +EXPORT_SYMBOL(elv_get_slice_idle);
> +
> +void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
> +{
> +	entity_served(&ioq->entity, served);
> +}
> +
> +/* Tells whether ioq is queued in root group or not */
> +static inline int is_root_group_ioq(struct request_queue *q,
> +					struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
> +}
> +
> +/*
> + * sysfs parts below -->
> + */
> +static ssize_t
> +elv_var_show(unsigned int var, char *page)
> +{
> +	return sprintf(page, "%d\n", var);
> +}
> +
> +static ssize_t
> +elv_var_store(unsigned int *var, const char *page, size_t count)
> +{
> +	char *p = (char *) page;
> +
> +	*var = simple_strtoul(p, &p, 10);
> +	return count;
> +}
> +
> +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
> +ssize_t __FUNC(struct elevator_queue *e, char *page)		\
> +{									\
> +	struct elv_fq_data *efqd = &e->efqd;				\
> +	unsigned int __data = __VAR;					\
> +	if (__CONV)							\
> +		__data = jiffies_to_msecs(__data);			\
> +	return elv_var_show(__data, (page));				\
> +}
> +SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
> +EXPORT_SYMBOL(elv_slice_idle_show);
> +SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
> +EXPORT_SYMBOL(elv_slice_sync_show);
> +SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
> +EXPORT_SYMBOL(elv_slice_async_show);
> +#undef SHOW_FUNCTION
> +
> +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
> +ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
> +{									\
> +	struct elv_fq_data *efqd = &e->efqd;				\
> +	unsigned int __data;						\
> +	int ret = elv_var_store(&__data, (page), count);		\
> +	if (__data < (MIN))						\
> +		__data = (MIN);						\
> +	else if (__data > (MAX))					\
> +		__data = (MAX);						\
> +	if (__CONV)							\
> +		*(__PTR) = msecs_to_jiffies(__data);			\
> +	else								\
> +		*(__PTR) = __data;					\
> +	return ret;							\
> +}
> +STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_idle_store);
> +STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_sync_store);
> +STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_async_store);
> +#undef STORE_FUNCTION
> +
> +void elv_schedule_dispatch(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (elv_nr_busy_ioq(q->elevator)) {
> +		elv_log(efqd, "schedule dispatch");
> +		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
> +	}
> +}
> +EXPORT_SYMBOL(elv_schedule_dispatch);
> +
> +void elv_kick_queue(struct work_struct *work)
> +{
> +	struct elv_fq_data *efqd =
> +		container_of(work, struct elv_fq_data, unplug_work);
> +	struct request_queue *q = efqd->queue;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(q->queue_lock, flags);
> +	blk_start_queueing(q);
> +	spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
> +void elv_shutdown_timer_wq(struct elevator_queue *e)
> +{
> +	del_timer_sync(&e->efqd.idle_slice_timer);
> +	cancel_work_sync(&e->efqd.unplug_work);
> +}
> +EXPORT_SYMBOL(elv_shutdown_timer_wq);
> +
> +void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	ioq->slice_end = jiffies + ioq->entity.budget;
> +	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
> +}
> +
> +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = ioq->efqd;
> +	unsigned long elapsed = jiffies - ioq->last_end_request;
> +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> +
> +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> +}

Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
understand the algorithm.
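
My reading, for what it's worth (this looks lifted straight from
CFQ's think-time tracking): 7/8 is the decay factor of an
exponentially decaying average, 256 is the fixed-point scale (so
ttime_samples saturates at 256 and ioq_sample_valid()'s "> 80" just
means "enough history"), 128 rounds the mean, and the 2 merely caps a
single sample at twice elv_slice_idle. Annotated, same statements as
above with my interpretation in the comments:

	/* effective sample count: decay by 7/8, add one sample scaled
	 * by 256; converges to 256 under a steady stream of requests */
	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;

	/* decaying sum of think times, in the same x256 fixed point */
	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;

	/* the x256 scale cancels out; +128 roughly rounds to nearest */
	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;

If that is right, a short comment to this effect in the code would
save the next reader some head-scratching.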

> +
> +/*
> + * Disable idle window if the process thinks too long.
> + * This idle flag can also be updated by io scheduler.
> + */
> +static void elv_ioq_update_idle_window(struct elevator_queue *eq,
> +				struct io_queue *ioq, struct request *rq)
> +{
> +	int old_idle, enable_idle;
> +	struct elv_fq_data *efqd = ioq->efqd;
> +
> +	/*
> +	 * Don't idle for async or idle io prio class
> +	 */
> +	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
> +		return;
> +
> +	enable_idle = old_idle = elv_ioq_idle_window(ioq);
> +
> +	if (!efqd->elv_slice_idle)
> +		enable_idle = 0;
> +	else if (ioq_sample_valid(ioq->ttime_samples)) {
> +		if (ioq->ttime_mean > efqd->elv_slice_idle)
> +			enable_idle = 0;
> +		else
> +			enable_idle = 1;
> +	}
> +
> +	/*
> +	 * From think time perspective idle should be enabled. Check with
> +	 * io scheduler if it wants to disable idling based on additional
> +	 * considrations like seek pattern.
> +	 */
> +	if (enable_idle) {
> +		if (eq->ops->elevator_update_idle_window_fn)
> +			enable_idle = eq->ops->elevator_update_idle_window_fn(
> +						eq, ioq->sched_queue, rq);
> +		if (!enable_idle)
> +			elv_log_ioq(efqd, ioq, "iosched disabled idle");
> +	}
> +
> +	if (old_idle != enable_idle) {
> +		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
> +		if (enable_idle)
> +			elv_mark_ioq_idle_window(ioq);
> +		else
> +			elv_clear_ioq_idle_window(ioq);
> +	}
> +}
> +
> +struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
> +	return ioq;
> +}
> +EXPORT_SYMBOL(elv_alloc_ioq);
> +
> +void elv_free_ioq(struct io_queue *ioq)
> +{
> +	kmem_cache_free(elv_ioq_pool, ioq);
> +}
> +EXPORT_SYMBOL(elv_free_ioq);
> +
> +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> +			void *sched_queue, int ioprio_class, int ioprio,
> +			int is_sync)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> +
> +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> +	atomic_set(&ioq->ref, 0);
> +	ioq->efqd = efqd;
> +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> +	elv_ioq_set_ioprio(ioq, ioprio);
> +	ioq->pid = current->pid;

Is the pid used for cgroup association later? I don't see why we save
the pid otherwise. If yes, why not store the cgroup of current
instead?

> +	ioq->sched_queue = sched_queue;
> +	if (is_sync && !elv_ioq_class_idle(ioq))
> +		elv_mark_ioq_idle_window(ioq);
> +	bfq_init_entity(&ioq->entity, iog);
> +	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
> +	if (is_sync)
> +		ioq->last_end_request = jiffies;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(elv_init_ioq);
> +
> +void elv_put_ioq(struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = ioq->efqd;
> +	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
> +						efqd);
> +
> +	BUG_ON(atomic_read(&ioq->ref) <= 0);
> +	if (!atomic_dec_and_test(&ioq->ref))
> +		return;
> +	BUG_ON(ioq->nr_queued);
> +	BUG_ON(ioq->entity.tree != NULL);
> +	BUG_ON(elv_ioq_busy(ioq));
> +	BUG_ON(efqd->active_queue == ioq);
> +
> +	/* Can be called by outgoing elevator. Don't use q */
> +	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
> +
> +	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
> +	elv_log_ioq(efqd, ioq, "put_queue");
> +	elv_free_ioq(ioq);
> +}
> +EXPORT_SYMBOL(elv_put_ioq);
> +
> +void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
> +{
> +	struct io_queue *ioq = *ioq_ptr;
> +
> +	if (ioq != NULL) {
> +		/* Drop the reference taken by the io group */
> +		elv_put_ioq(ioq);
> +		*ioq_ptr = NULL;
> +	}
> +}
> +
> +/*
> + * Normally next io queue to be served is selected from the service tree.
> + * This function allows one to choose a specific io queue to run next
> + * out of order. This is primarily to accommodate the close_cooperator
> + * feature of cfq.
> + *
> + * Currently it is done only for root level as to begin with supporting
> + * close cooperator feature only for root group to make sure default
> + * cfq behavior in flat hierarchy is not changed.
> + */
> +void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	struct io_sched_data *sd = &efqd->root_group->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +
> +	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
> +	BUG_ON(!efqd->busy_queues);
> +	BUG_ON(sd != entity->sched_data);
> +	BUG_ON(!st);
> +
> +	bfq_update_vtime(st);
> +	bfq_active_extract(st, entity);
> +	sd->active_entity = entity;
> +	entity->service = 0;
> +	elv_log_ioq(efqd, ioq, "set_next_ioq");
> +}
> +
> +/* Get next queue for service. */
> +struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = NULL;
> +	struct io_queue *ioq = NULL;
> +	struct io_sched_data *sd;
> +
> +	/*
> +	 * We should not call lookup when an entity is active, as doing
> +	 * lookup can result in an erroneous vtime jump.
> +	 */
> +	BUG_ON(efqd->active_queue != NULL);
> +
> +	if (!efqd->busy_queues)
> +		return NULL;
> +
> +	sd = &efqd->root_group->sched_data;
> +	entity = bfq_lookup_next_entity(sd, 1);
> +
> +	BUG_ON(!entity);
> +	if (extract)
> +		entity->service = 0;
> +	ioq = io_entity_to_ioq(entity);
> +
> +	return ioq;
> +}
> +
> +/*
> + * coop tells that io scheduler selected a queue for us and we did not

coop?

> + * select the next queue based on fairness.
> + */
> +static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> +					int coop)
> +{
> +	struct request_queue *q = efqd->queue;
> +
> +	if (ioq) {
> +		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
> +							efqd->busy_queues);
> +		ioq->slice_end = 0;
> +
> +		elv_clear_ioq_wait_request(ioq);
> +		elv_clear_ioq_must_dispatch(ioq);
> +		elv_mark_ioq_slice_new(ioq);
> +
> +		del_timer(&efqd->idle_slice_timer);
> +	}
> +
> +	efqd->active_queue = ioq;
> +
> +	/* Let iosched know if it wants to take some action */
> +	if (ioq) {
> +		if (q->elevator->ops->elevator_active_ioq_set_fn)
> +			q->elevator->ops->elevator_active_ioq_set_fn(q,
> +							ioq->sched_queue, coop);
> +	}
> +}
> +
> +/* Get and set a new active queue for service. */
> +struct io_queue *elv_set_active_ioq(struct request_queue *q,
> +						struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	int coop = 0;
> +
> +	if (!ioq)
> +		ioq = elv_get_next_ioq(q, 1);
> +	else {
> +		elv_set_next_ioq(q, ioq);
> +		/*
> +		 * io scheduler selected the next queue for us. Pass this
> +		 * info back to the io scheduler. cfq currently uses it
> +		 * to reset coop flag on the queue.
> +		 */
> +		coop = 1;
> +	}
> +	__elv_set_active_ioq(efqd, ioq, coop);
> +	return ioq;
> +}
> +
> +void elv_reset_active_ioq(struct elv_fq_data *efqd)
> +{
> +	struct request_queue *q = efqd->queue;
> +	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
> +
> +	if (q->elevator->ops->elevator_active_ioq_reset_fn)
> +		q->elevator->ops->elevator_active_ioq_reset_fn(q,
> +							ioq->sched_queue);
> +	efqd->active_queue = NULL;
> +	del_timer(&efqd->idle_slice_timer);
> +}
> +
> +void elv_activate_ioq(struct io_queue *ioq, int add_front)
> +{
> +	bfq_activate_entity(&ioq->entity, add_front);
> +}
> +
> +void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> +					int requeue)
> +{
> +	bfq_deactivate_entity(&ioq->entity, requeue);
> +}
> +
> +/* Called when an inactive queue receives a new request. */
> +void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
> +{
> +	BUG_ON(elv_ioq_busy(ioq));
> +	BUG_ON(ioq == efqd->active_queue);
> +	elv_log_ioq(efqd, ioq, "add to busy");
> +	elv_activate_ioq(ioq, 0);
> +	elv_mark_ioq_busy(ioq);
> +	efqd->busy_queues++;
> +	if (elv_ioq_class_rt(ioq)) {
> +		struct io_group *iog = ioq_to_io_group(ioq);
> +		iog->busy_rt_queues++;
> +	}
> +}
> +
> +void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
> +					int requeue)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	BUG_ON(!elv_ioq_busy(ioq));
> +	BUG_ON(ioq->nr_queued);
> +	elv_log_ioq(efqd, ioq, "del from busy");
> +	elv_clear_ioq_busy(ioq);
> +	BUG_ON(efqd->busy_queues == 0);
> +	efqd->busy_queues--;
> +	if (elv_ioq_class_rt(ioq)) {
> +		struct io_group *iog = ioq_to_io_group(ioq);
> +		iog->busy_rt_queues--;
> +	}
> +
> +	elv_deactivate_ioq(efqd, ioq, requeue);
> +}
> +
> +/*
> + * Do the accounting. Determine how much service (in terms of time slices)
> + * current queue used and adjust the start, finish time of queue and vtime
> + * of the tree accordingly.
> + *
> + * Determining the service used in terms of time is tricky in certain
> + * situations. Especially when underlying device supports command queuing
> + * and requests from multiple queues can be there at same time, then it
> + * is not clear which queue consumed how much of disk time.
> + *
> + * To mitigate this problem, cfq starts the time slice of the queue only
> + * after first request from the queue has completed. This does not work
> + * very well if we expire the queue before we wait for first and more
> + * request to finish from the queue. For seeky queues, we will expire the
> + * queue after dispatching few requests without waiting and start dispatching
> + * from next queue.
> + *
> + * Not sure how to determine the time consumed by queue in such scenarios.
> + * Currently as a crude approximation, we are charging 25% of time slice
> + * for such cases. A better mechanism is needed for accurate accounting.
> + */
> +void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
> +
> +	assert_spin_locked(q->queue_lock);
> +	elv_log_ioq(efqd, ioq, "slice expired");
> +
> +	if (elv_ioq_wait_request(ioq))
> +		del_timer(&efqd->idle_slice_timer);
> +
> +	elv_clear_ioq_wait_request(ioq);
> +
> +	/*
> +	 * if ioq->slice_end = 0, that means a queue was expired before first
> +	 * request from the queue got completed. Of course we are not planning
> +	 * to idle on the queue otherwise we would not have expired it.
> +	 *
> +	 * Charge for the 25% slice in such cases. This is not the best thing
> +	 * to do but at the same time not very sure what's the next best
> +	 * thing to do.
> +	 *
> +	 * This arises from the fact that we don't have the notion of
> +	 * one queue being operational at one time. io scheduler can dispatch
> +	 * requests from multiple queues in one dispatch round. Ideally for
> +	 * more accurate accounting of exact disk time used by disk, one
> +	 * should dispatch requests from only one queue and wait for all
> +	 * the requests to finish. But this will reduce throughput.
> +	 */
> +	if (!ioq->slice_end)
> +		slice_used = entity->budget/4;
> +	else {
> +		if (time_after(ioq->slice_end, jiffies)) {
> +			slice_unused = ioq->slice_end - jiffies;
> +			if (slice_unused == entity->budget) {
> +				/*
> +				 * queue got expired immediately after
> +				 * completing first request. Charge 25% of
> +				 * slice.
> +				 */
> +				slice_used = entity->budget/4;
> +			} else
> +				slice_used = entity->budget - slice_unused;
> +		} else {
> +			slice_overshoot = jiffies - ioq->slice_end;
> +			slice_used = entity->budget + slice_overshoot;
> +		}
> +	}
> +
> +	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
> +			jiffies);
> +	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
> +				slice_used, entity->budget, slice_overshoot);
> +	elv_ioq_served(ioq, slice_used);
> +
> +	BUG_ON(ioq != efqd->active_queue);
> +	elv_reset_active_ioq(efqd);
> +
> +	if (!ioq->nr_queued)
> +		elv_del_ioq_busy(q->elevator, ioq, 1);
> +	else
> +		elv_activate_ioq(ioq, 0);
> +}
> +EXPORT_SYMBOL(__elv_ioq_slice_expired);
> +
> +/*
> + *  Expire the ioq.
> + */
> +void elv_ioq_slice_expired(struct request_queue *q)
> +{
> +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> +
> +	if (ioq)
> +		__elv_ioq_slice_expired(q, ioq);
> +}
> +
> +/*
> + * Check if new_cfqq should preempt the currently active queue. Return 0 for
> + * no or if we aren't sure, a 1 will cause a preemption attempt.
> + */
> +int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
> +			struct request *rq)
> +{
> +	struct io_queue *ioq;
> +	struct elevator_queue *eq = q->elevator;
> +	struct io_entity *entity, *new_entity;
> +
> +	ioq = elv_active_ioq(eq);
> +
> +	if (!ioq)
> +		return 0;
> +
> +	entity = &ioq->entity;
> +	new_entity = &new_ioq->entity;
> +
> +	/*
> +	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
> +	 */
> +
> +	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
> +	    && entity->ioprio_class != IOPRIO_CLASS_RT)
> +		return 1;
> +	/*
> +	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
> +	 */
> +
> +	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
> +	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
> +		return 1;
> +
> +	/*
> +	 * Check with io scheduler if it has additional criterion based on
> +	 * which it wants to preempt existing queue.
> +	 */
> +	if (eq->ops->elevator_should_preempt_fn)
> +		return eq->ops->elevator_should_preempt_fn(q,
> +						ioq_sched_queue(new_ioq), rq);
> +
> +	return 0;
> +}
> +
> +static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
> +{
> +	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
> +	elv_ioq_slice_expired(q);
> +
> +	/*
> +	 * Put the new queue at the front of the current list,
> +	 * so we know that it will be selected next.
> +	 */
> +
> +	elv_activate_ioq(ioq, 1);
> +	elv_ioq_set_slice_end(ioq, 0);
> +	elv_mark_ioq_slice_new(ioq);
> +}
> +
> +void elv_ioq_request_add(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *ioq = rq->ioq;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	BUG_ON(!efqd);
> +	BUG_ON(!ioq);
> +	efqd->rq_queued++;
> +	ioq->nr_queued++;
> +
> +	if (!elv_ioq_busy(ioq))
> +		elv_add_ioq_busy(efqd, ioq);
> +
> +	elv_ioq_update_io_thinktime(ioq);
> +	elv_ioq_update_idle_window(q->elevator, ioq, rq);
> +
> +	if (ioq == elv_active_ioq(q->elevator)) {
> +		/*
> +		 * Remember that we saw a request from this process, but
> +		 * don't start queuing just yet. Otherwise we risk seeing lots
> +		 * of tiny requests, because we disrupt the normal plugging
> +		 * and merging. If the request is already larger than a single
> +		 * page, let it rip immediately. For that case we assume that
> +		 * merging is already done. Ditto for a busy system that
> +		 * has other work pending, don't risk delaying until the
> +		 * idle timer unplug to continue working.
> +		 */
> +		if (elv_ioq_wait_request(ioq)) {
> +			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
> +			    efqd->busy_queues > 1) {
> +				del_timer(&efqd->idle_slice_timer);
> +				blk_start_queueing(q);
> +			}
> +			elv_mark_ioq_must_dispatch(ioq);
> +		}
> +	} else if (elv_should_preempt(q, ioq, rq)) {
> +		/*
> +		 * not the active queue - expire current slice if it is
> +		 * idle and has expired it's mean thinktime or this new queue
> +		 * has some old slice time left and is of higher priority or
> +		 * this new queue is RT and the current one is BE
> +		 */
> +		elv_preempt_queue(q, ioq);
> +		blk_start_queueing(q);
> +	}
> +}
> +
> +void elv_idle_slice_timer(unsigned long data)
> +{
> +	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
> +	struct io_queue *ioq;
> +	unsigned long flags;
> +	struct request_queue *q = efqd->queue;
> +
> +	elv_log(efqd, "idle timer fired");
> +
> +	spin_lock_irqsave(q->queue_lock, flags);
> +
> +	ioq = efqd->active_queue;
> +
> +	if (ioq) {
> +
> +		/*
> +		 * We saw a request before the queue expired, let it through
> +		 */
> +		if (elv_ioq_must_dispatch(ioq))
> +			goto out_kick;
> +
> +		/*
> +		 * expired
> +		 */
> +		if (elv_ioq_slice_used(ioq))
> +			goto expire;
> +
> +		/*
> +		 * only expire and reinvoke request handler, if there are
> +		 * other queues with pending requests
> +		 */
> +		if (!elv_nr_busy_ioq(q->elevator))
> +			goto out_cont;
> +
> +		/*
> +		 * not expired and it has a request pending, let it dispatch
> +		 */
> +		if (ioq->nr_queued)
> +			goto out_kick;
> +	}
> +expire:
> +	elv_ioq_slice_expired(q);
> +out_kick:
> +	elv_schedule_dispatch(q);
> +out_cont:
> +	spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
> +void elv_ioq_arm_slice_timer(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> +	unsigned long sl;
> +
> +	BUG_ON(!ioq);
> +
> +	/*
> +	 * SSD device without seek penalty, disable idling. But only do so
> +	 * for devices that support queuing, otherwise we still have a problem
> +	 * with sync vs async workloads.
> +	 */
> +	if (blk_queue_nonrot(q) && efqd->hw_tag)
> +		return;
> +
> +	/*
> +	 * still requests with the driver, don't idle
> +	 */
> +	if (efqd->rq_in_driver)
> +		return;
> +
> +	/*
> +	 * idle is disabled, either manually or by past process history
> +	 */
> +	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
> +		return;
> +
> +	/*
> +	 * Maybe the iosched has its own idling logic. In that case the io
> +	 * scheduler will take care of arming the timer, if need be.
> +	 */
> +	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
> +		q->elevator->ops->elevator_arm_slice_timer_fn(q,
> +						ioq->sched_queue);
> +	} else {
> +		elv_mark_ioq_wait_request(ioq);
> +		sl = efqd->elv_slice_idle;
> +		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
> +		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
> +	}
> +}
> +
> +/* Common layer function to select the next queue to dispatch from */
> +void *elv_fq_select_ioq(struct request_queue *q, int force)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
> +	struct io_group *iog;
> +
> +	if (!elv_nr_busy_ioq(q->elevator))
> +		return NULL;
> +
> +	if (ioq == NULL)
> +		goto new_queue;
> +
> +	/*
> +	 * Force dispatch. Continue to dispatch from current queue as long
> +	 * as it has requests.
> +	 */
> +	if (unlikely(force)) {
> +		if (ioq->nr_queued)
> +			goto keep_queue;
> +		else
> +			goto expire;
> +	}
> +
> +	/*
> +	 * The active queue has run out of time, expire it and select new.
> +	 */
> +	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
> +		goto expire;
> +
> +	/*
> +	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
> +	 * cfqq.
> +	 */
> +	iog = ioq_to_io_group(ioq);
> +
> +	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> +		/*
> +		 * We simulate this as cfqq timed out so that it gets to bank
> +		 * the remaining of its time slice.
> +		 */
> +		elv_log_ioq(efqd, ioq, "preempt");
> +		goto expire;
> +	}
> +
> +	/*
> +	 * The active queue has requests and isn't expired, allow it to
> +	 * dispatch.
> +	 */
> +
> +	if (ioq->nr_queued)
> +		goto keep_queue;
> +
> +	/*
> +	 * If another queue has a request waiting within our mean seek
> +	 * distance, let it run.  The expire code will check for close
> +	 * cooperators and put the close queue at the front of the service
> +	 * tree.
> +	 */
> +	new_ioq = elv_close_cooperator(q, ioq, 0);
> +	if (new_ioq)
> +		goto expire;
> +
> +	/*
> +	 * No requests pending. If the active queue still has requests in
> +	 * flight or is idling for a new request, allow either of these
> +	 * conditions to happen (or time out) before selecting a new queue.
> +	 */
> +
> +	if (timer_pending(&efqd->idle_slice_timer) ||
> +	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
> +		ioq = NULL;
> +		goto keep_queue;
> +	}
> +
> +expire:
> +	elv_ioq_slice_expired(q);
> +new_queue:
> +	ioq = elv_set_active_ioq(q, new_ioq);
> +keep_queue:
> +	return ioq;
> +}
> +
> +/* A request got removed from io_queue. Do the accounting */
> +void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
> +{
> +	struct io_queue *ioq;
> +	struct elv_fq_data *efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	ioq = rq->ioq;
> +	BUG_ON(!ioq);
> +	ioq->nr_queued--;
> +
> +	efqd = ioq->efqd;
> +	BUG_ON(!efqd);
> +	efqd->rq_queued--;
> +
> +	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
> +		elv_del_ioq_busy(e, ioq, 1);
> +}
> +
> +/* A request got dispatched. Do the accounting. */
> +void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
> +{
> +	struct io_queue *ioq = rq->ioq;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	BUG_ON(!ioq);
> +	elv_ioq_request_dispatched(ioq);
> +	elv_ioq_request_removed(e, rq);
> +	elv_clear_ioq_must_dispatch(ioq);
> +}
> +
> +void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	efqd->rq_in_driver++;
> +	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
> +						efqd->rq_in_driver);
> +}
> +
> +void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	WARN_ON(!efqd->rq_in_driver);
> +	efqd->rq_in_driver--;
> +	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
> +						efqd->rq_in_driver);
> +}
> +
> +/*
> + * Update hw_tag based on peak queue depth over 50 samples under
> + * sufficient load.
> + */
> +static void elv_update_hw_tag(struct elv_fq_data *efqd)
> +{
> +	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
> +		efqd->rq_in_driver_peak = efqd->rq_in_driver;
> +
> +	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
> +	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
> +		return;
> +
> +	if (efqd->hw_tag_samples++ < 50)
> +		return;
> +
> +	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
> +		efqd->hw_tag = 1;
> +	else
> +		efqd->hw_tag = 0;
> +
> +	efqd->hw_tag_samples = 0;
> +	efqd->rq_in_driver_peak = 0;
> +}
> +
> +/*
> + * If ioscheduler has functionality of keeping track of close cooperator, check
> + * with it if it has got a closely co-operating queue.
> + */
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe)
> +{
> +	struct elevator_queue *e = q->elevator;
> +	struct io_queue *new_ioq = NULL;
> +
> +	/*
> +	 * Currently this feature is supported only for flat hierarchy or
> +	 * root group queues so that default cfq behavior is not changed.
> +	 */
> +	if (!is_root_group_ioq(q, ioq))
> +		return NULL;
> +
> +	if (q->elevator->ops->elevator_close_cooperator_fn)
> +		new_ioq = e->ops->elevator_close_cooperator_fn(q,
> +						ioq->sched_queue, probe);
> +
> +	/* Only select co-operating queue if it belongs to root group */
> +	if (new_ioq && !is_root_group_ioq(q, new_ioq))
> +		return NULL;
> +
> +	return new_ioq;
> +}
> +
> +/* A request got completed from io_queue. Do the accounting. */
> +void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> +{
> +	const int sync = rq_is_sync(rq);
> +	struct io_queue *ioq;
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	ioq = rq->ioq;
> +
> +	elv_log_ioq(efqd, ioq, "complete");
> +
> +	elv_update_hw_tag(efqd);
> +
> +	WARN_ON(!efqd->rq_in_driver);
> +	WARN_ON(!ioq->dispatched);
> +	efqd->rq_in_driver--;
> +	ioq->dispatched--;
> +
> +	if (sync)
> +		ioq->last_end_request = jiffies;
> +
> +	/*
> +	 * If this is the active queue, check if it needs to be expired,
> +	 * or if we want to idle in case it has no pending requests.
> +	 */
> +
> +	if (elv_active_ioq(q->elevator) == ioq) {
> +		if (elv_ioq_slice_new(ioq)) {
> +			elv_ioq_set_prio_slice(q, ioq);
> +			elv_clear_ioq_slice_new(ioq);
> +		}
> +		/*
> +		 * If there are no requests waiting in this queue, and
> +		 * there are other queues ready to issue requests, AND
> +		 * those other queues are issuing requests within our
> +		 * mean seek distance, give them a chance to run instead
> +		 * of idling.
> +		 */
> +		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
> +			elv_ioq_slice_expired(q);
> +		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
> +			 && sync && !rq_noidle(rq))
> +			elv_ioq_arm_slice_timer(q);
> +	}
> +
> +	if (!efqd->rq_in_driver)
> +		elv_schedule_dispatch(q);
> +}
> +
> +struct io_group *io_lookup_io_group_current(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	return efqd->root_group;
> +}
> +EXPORT_SYMBOL(io_lookup_io_group_current);
> +
> +void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> +					int ioprio)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	switch (ioprio_class) {
> +	case IOPRIO_CLASS_RT:
> +		ioq = iog->async_queue[0][ioprio];
> +		break;
> +	case IOPRIO_CLASS_BE:
> +		ioq = iog->async_queue[1][ioprio];
> +		break;
> +	case IOPRIO_CLASS_IDLE:
> +		ioq = iog->async_idle_queue;
> +		break;
> +	default:
> +		BUG();
> +	}
> +
> +	if (ioq)
> +		return ioq->sched_queue;
> +	return NULL;
> +}
> +EXPORT_SYMBOL(io_group_async_queue_prio);
> +
> +void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> +					int ioprio, struct io_queue *ioq)
> +{
> +	switch (ioprio_class) {
> +	case IOPRIO_CLASS_RT:
> +		iog->async_queue[0][ioprio] = ioq;
> +		break;
> +	case IOPRIO_CLASS_BE:
> +		iog->async_queue[1][ioprio] = ioq;
> +		break;
> +	case IOPRIO_CLASS_IDLE:
> +		iog->async_idle_queue = ioq;
> +		break;
> +	default:
> +		BUG();
> +	}
> +
> +	/*
> +	 * Take the group reference and pin the queue. Group exit will
> +	 * clean it up
> +	 */
> +	elv_get_ioq(ioq);
> +}
> +EXPORT_SYMBOL(io_group_set_async_queue);
> +
> +/*
> + * Release all the io group references to its async queues.
> + */
> +void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
> +{
> +	int i, j;
> +
> +	for (i = 0; i < 2; i++)
> +		for (j = 0; j < IOPRIO_BE_NR; j++)
> +			elv_release_ioq(e, &iog->async_queue[i][j]);
> +
> +	/* Free up async idle queue */
> +	elv_release_ioq(e, &iog->async_idle_queue);
> +}
> +
> +struct io_group *io_alloc_root_group(struct request_queue *q,
> +					struct elevator_queue *e, void *key)
> +{
> +	struct io_group *iog;
> +	int i;
> +
> +	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
> +	if (iog == NULL)
> +		return NULL;
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
> +		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
> +
> +	return iog;
> +}
> +
> +void io_free_root_group(struct elevator_queue *e)
> +{
> +	struct io_group *iog = e->efqd.root_group;
> +	struct io_service_tree *st;
> +	int i;
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
> +		st = iog->sched_data.service_tree + i;
> +		io_flush_idle_tree(st);
> +	}
> +
> +	io_put_io_group_queues(e, iog);
> +	kfree(iog);
> +}
> +
> +static void elv_slab_kill(void)
> +{
> +	/*
> +	 * Caller already ensured that pending RCU callbacks are completed,
> +	 * so we should have no busy allocations at this point.
> +	 */
> +	if (elv_ioq_pool)
> +		kmem_cache_destroy(elv_ioq_pool);
> +}
> +
> +static int __init elv_slab_setup(void)
> +{
> +	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
> +	if (!elv_ioq_pool)
> +		goto fail;
> +
> +	return 0;
> +fail:
> +	elv_slab_kill();
> +	return -ENOMEM;
> +}
> +
> +/* Initialize fair queueing data associated with elevator */
> +int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
> +{
> +	struct io_group *iog;
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return 0;
> +
> +	iog = io_alloc_root_group(q, e, efqd);
> +	if (iog == NULL)
> +		return 1;
> +
> +	efqd->root_group = iog;
> +	efqd->queue = q;
> +
> +	init_timer(&efqd->idle_slice_timer);
> +	efqd->idle_slice_timer.function = elv_idle_slice_timer;
> +	efqd->idle_slice_timer.data = (unsigned long) efqd;
> +
> +	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
> +
> +	efqd->elv_slice[0] = elv_slice_async;
> +	efqd->elv_slice[1] = elv_slice_sync;
> +	efqd->elv_slice_idle = elv_slice_idle;
> +	efqd->hw_tag = 1;
> +
> +	return 0;
> +}
> +
> +/*
> + * elv_exit_fq_data is called before we call elevator_exit_fn. Before
> + * we ask elevator to cleanup its queues, we do the cleanup here so
> + * that all the group and idle tree references to ioq are dropped. Later
> + * during elevator cleanup, ioc reference will be dropped which will lead
> + * to removal of ioscheduler queue as well as associated ioq object.
> + */
> +void elv_exit_fq_data(struct elevator_queue *e)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	elv_shutdown_timer_wq(e);
> +
> +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> +	io_free_root_group(e);
> +}
> +
> +/*
> + * This is called after the io scheduler has cleaned up its data structures.
> + * I don't think that this function is required. Right now just keeping it
> + * because cfq cleans up timer and work queue again after freeing up
> + * io contexts. To me the io scheduler has already been drained out, and
> + * all the active queues have already been expired, so the timer and work
> + * queue should not have been activated during the cleanup process.
> + *
> + * Keeping it here for the time being. Will get rid of it later.
> + */
> +void elv_exit_fq_data_post(struct elevator_queue *e)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	elv_shutdown_timer_wq(e);
> +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> +}
> +
> +
> +static int __init elv_fq_init(void)
> +{
> +	if (elv_slab_setup())
> +		return -ENOMEM;
> +
> +	/* could be 0 on HZ < 1000 setups */
> +
> +	if (!elv_slice_async)
> +		elv_slice_async = 1;
> +
> +	if (!elv_slice_idle)
> +		elv_slice_idle = 1;
> +
> +	return 0;
> +}
> +
> +module_init(elv_fq_init);
> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> new file mode 100644
> index 0000000..5b6c1cc
> --- /dev/null
> +++ b/block/elevator-fq.h
> @@ -0,0 +1,473 @@
> +/*
> + * BFQ: data structures and common functions prototypes.
> + *
> + * Based on ideas and code from CFQ:
> + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> + *
> + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> + *		      Paolo Valente <paolo.valente@unimore.it>
> + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> + * 	              Nauman Rafique <nauman@google.com>
> + */
> +
> +#include <linux/blkdev.h>
> +
> +#ifndef _BFQ_SCHED_H
> +#define _BFQ_SCHED_H
> +
> +#define IO_IOPRIO_CLASSES	3
> +
> +typedef u64 bfq_timestamp_t;
> +typedef unsigned long bfq_weight_t;
> +typedef unsigned long bfq_service_t;

Does this abstraction really provide any benefit? Why not use the standard
C types directly and make the code easier to read?
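
For example, simply:

	/* what the typedefs expand to today, as far as I can tell */
	u64 vtime;			/* instead of bfq_timestamp_t */
	unsigned long weight;		/* instead of bfq_weight_t */
	unsigned long service;		/* instead of bfq_service_t */

would read just as well, with one less indirection to chase.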

> +struct io_entity;
> +struct io_queue;
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +
> +#define ELV_ATTR(name) \
> +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> +
> +/**
> + * struct bfq_service_tree - per ioprio_class service tree.

The comment is stale; it still says bfq_service_tree rather than the newer
io_service_tree name.

> + * @active: tree for active entities (i.e., those backlogged).
> + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> + * @first_idle: idle entity with minimum F_i.
> + * @last_idle: idle entity with maximum F_i.
> + * @vtime: scheduler virtual time.
> + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> + *
> + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> + * ioprio_class has its own independent scheduler, and so its own
> + * bfq_service_tree.  All the fields are protected by the queue lock
> + * of the containing efqd.
> + */
> +struct io_service_tree {
> +	struct rb_root active;
> +	struct rb_root idle;
> +
> +	struct io_entity *first_idle;
> +	struct io_entity *last_idle;
> +
> +	bfq_timestamp_t vtime;
> +	bfq_weight_t wsum;
> +};
> +
> +/**
> + * struct bfq_sched_data - multi-class scheduler.

Again the naming convention is inconsistent; several bfq_ references need
to become io_ :)

> + * @active_entity: entity under service.
> + * @next_active: head-of-the-line entity in the scheduler.
> + * @service_tree: array of service trees, one per ioprio_class.
> + *
> + * bfq_sched_data is the basic scheduler queue.  It supports three
> + * ioprio_classes, and can be used either as a toplevel queue or as
> + * an intermediate queue on a hierarchical setup.
> + * @next_active points to the active entity of the sched_data service
> + * trees that will be scheduled next.
> + *
> + * The supported ioprio_classes are the same as in CFQ, in descending
> + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
> + * Requests from higher priority queues are served before all the
> + * requests from lower priority queues; among requests of the same
> + * queue requests are served according to B-WF2Q+.
> + * All the fields are protected by the queue lock of the containing bfqd.
> + */
> +struct io_sched_data {
> +	struct io_entity *active_entity;
> +	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
> +};
> +
> +/**
> + * struct bfq_entity - schedulable entity.
> + * @rb_node: service_tree member.
> + * @on_st: flag, true if the entity is on a tree (either the active or
> + *         the idle one of its service_tree).
> + * @finish: B-WF2Q+ finish timestamp (aka F_i).
> + * @start: B-WF2Q+ start timestamp (aka S_i).

Could you mention which key the rb_tree is ordered on? start/finish sound
like a range, so my suspicion is that start is used.

> + * @tree: tree the entity is enqueued into; %NULL if not on a tree.
> + * @min_start: minimum start time of the (active) subtree rooted at
> + *             this entity; used for O(log N) lookups into active trees.

"Used for O(log N) lookups" does not quite parse for me; an rbtree already
has O(log N) worst-case lookup, so what extra does min_start buy?
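
My guess from the field name (please correct me if the lookup code does it
differently): this is the usual subtree augmentation. Each active node
caches the smallest start time in its subtree, so the lookup can skip any
subtree whose min_start is ahead of the virtual time and find an eligible
entity (start <= vtime) in a single root-to-leaf descent -- that is the
O(log N) the comment is referring to. Maintenance would look roughly like:

	/* sketch only, ignoring timestamp wrap: refresh a node's min_start */
	static void update_min_start(struct io_entity *entity)
	{
		struct rb_node *child;
		u64 min = entity->start;

		child = entity->rb_node.rb_left;
		if (child)
			min = min_t(u64, min, rb_entry(child, struct io_entity,
						       rb_node)->min_start);
		child = entity->rb_node.rb_right;
		if (child)
			min = min_t(u64, min, rb_entry(child, struct io_entity,
						       rb_node)->min_start);
		entity->min_start = min;
	}

If that is right, saying something like "allows skipping subtrees with no
eligible entities during an O(log N) lookup" would be much clearer.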

> + * @service: service received during the last round of service.
> + * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
> + * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
> + * @parent: parent entity, for hierarchical scheduling.
> + * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
> + *                 associated scheduler queue, %NULL on leaf nodes.
> + * @sched_data: the scheduler queue this entity belongs to.
> + * @ioprio: the ioprio in use.
> + * @new_ioprio: when an ioprio change is requested, the new ioprio value
> + * @ioprio_class: the ioprio_class in use.
> + * @new_ioprio_class: when an ioprio_class change is requested, the new
> + *                    ioprio_class value.
> + * @ioprio_changed: flag, true when the user requested an ioprio or
> + *                  ioprio_class change.
> + *
> + * A bfq_entity is used to represent either a bfq_queue (leaf node in the
> + * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
> + * entity belongs to the sched_data of the parent group in the cgroup
> + * hierarchy.  Non-leaf entities have also their own sched_data, stored
> + * in @my_sched_data.
> + *
> + * Each entity stores independently its priority values; this would allow
> + * different weights on different devices, but this functionality is not
> + * exported to userspace by now.  Priorities are updated lazily, first
> + * storing the new values into the new_* fields, then setting the
> + * @ioprio_changed flag.  As soon as there is a transition in the entity
> + * state that allows the priority update to take place the effective and
> + * the requested priority values are synchronized.
> + *
> + * The weight value is calculated from the ioprio to export the same
> + * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
> + * queues that do not spend too much time to consume their budget and
> + * have true sequential behavior, and when there are no external factors
> + * breaking anticipation) the relative weights at each level of the
> + * cgroups hierarchy should be guaranteed.
> + * All the fields are protected by the queue lock of the containing bfqd.
> + */
> +struct io_entity {
> +	struct rb_node rb_node;
> +
> +	int on_st;
> +
> +	bfq_timestamp_t finish;
> +	bfq_timestamp_t start;
> +
> +	struct rb_root *tree;
> +
> +	bfq_timestamp_t min_start;
> +
> +	bfq_service_t service, budget;
> +	bfq_weight_t weight;
> +
> +	struct io_entity *parent;
> +
> +	struct io_sched_data *my_sched_data;
> +	struct io_sched_data *sched_data;
> +
> +	unsigned short ioprio, new_ioprio;
> +	unsigned short ioprio_class, new_ioprio_class;
> +
> +	int ioprio_changed;
> +};
> +
> +/*
> + * A common structure embedded by every io scheduler into their respective
> + * queue structure.
> + */
> +struct io_queue {
> +	struct io_entity entity;

So the io_queue embeds an abstract entity, io_entity, that contains its
QoS parameters? Correct?

> +	atomic_t ref;
> +	unsigned int flags;
> +
> +	/* Pointer to generic elevator data structure */
> +	struct elv_fq_data *efqd;
> +	pid_t pid;

Why do we store the pid?

> +
> +	/* Number of requests queued on this io queue */
> +	unsigned long nr_queued;
> +
> +	/* Requests dispatched from this queue */
> +	int dispatched;
> +
> +	/* Keep a track of think time of processes in this queue */
> +	unsigned long last_end_request;
> +	unsigned long ttime_total;
> +	unsigned long ttime_samples;
> +	unsigned long ttime_mean;
> +
> +	unsigned long slice_end;
> +
> +	/* Pointer to io scheduler's queue */
> +	void *sched_queue;
> +};
> +
> +struct io_group {
> +	struct io_sched_data sched_data;
> +
> +	/* async_queue and idle_queue are used only for cfq */
> +	struct io_queue *async_queue[2][IOPRIO_BE_NR];

Again, the bare 2 here is confusing.
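
From io_group_async_queue_prio() above, slot 0 is IOPRIO_CLASS_RT and slot 1
is IOPRIO_CLASS_BE. If that is the intent, a named index would make it
self-documenting, e.g.:

	/* suggestion only: name the RT/BE index instead of a bare 2 */
	enum { IO_ASYNC_RT = 0, IO_ASYNC_BE = 1, IO_ASYNC_NR };

	struct io_queue *async_queue[IO_ASYNC_NR][IOPRIO_BE_NR];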

> +	struct io_queue *async_idle_queue;
> +
> +	/*
> +	 * Used to track any pending rt requests so we can pre-empt current
> +	 * non-RT cfqq in service when this value is non-zero.
> +	 */
> +	unsigned int busy_rt_queues;
> +};
> +
> +struct elv_fq_data {

What does fq stand for?

> +	struct io_group *root_group;
> +
> +	struct request_queue *queue;
> +	unsigned int busy_queues;
> +
> +	/* Number of requests queued */
> +	int rq_queued;
> +
> +	/* Pointer to the ioscheduler queue being served */
> +	void *active_queue;
> +
> +	int rq_in_driver;
> +	int hw_tag;
> +	int hw_tag_samples;
> +	int rq_in_driver_peak;

Some comments on rq_in_driver and rq_in_driver_peak would be nice.

> +
> +	/*
> +	 * elevator fair queuing layer has the capability to provide idling
> +	 * for ensuring fairness for processes doing dependent reads.
> +	 * This might be needed to ensure fairness among two processes doing
> +	 * synchronous reads in two different cgroups. noop and deadline don't
> +	 * have any notion of anticipation/idling. As of now, these are the
> +	 * users of this functionality.
> +	 */
> +	unsigned int elv_slice_idle;
> +	struct timer_list idle_slice_timer;
> +	struct work_struct unplug_work;
> +
> +	unsigned int elv_slice[2];

Why [2]? It makes the code harder to read; a named index would help here
too, since elv_init_fq_data() fills slot 0 with the async slice and slot 1
with the sync slice.

> +};
> +
> +extern int elv_slice_idle;
> +extern int elv_slice_async;
> +
> +/* Logging facilities. */
> +#define elv_log_ioq(efqd, ioq, fmt, args...) \
> +	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
> +				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
> +
> +#define elv_log(efqd, fmt, args...) \
> +	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
> +
> +#define ioq_sample_valid(samples)   ((samples) > 80)
> +
> +/* Some shared queue flag manipulation functions among elevators */
> +
> +enum elv_queue_state_flags {
> +	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
> +	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
> +	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
> +	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
> +	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
> +	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
> +	ELV_QUEUE_FLAG_NR,
> +};
> +
> +#define ELV_IO_QUEUE_FLAG_FNS(name)					\
> +static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
> +{                                                                       \
> +	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
> +}                                                                       \
> +static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
> +{                                                                       \
> +	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
> +}                                                                       \
> +static inline int elv_ioq_##name(struct io_queue *ioq)         		\
> +{                                                                       \
> +	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
> +}
> +
> +ELV_IO_QUEUE_FLAG_FNS(busy)
> +ELV_IO_QUEUE_FLAG_FNS(sync)
> +ELV_IO_QUEUE_FLAG_FNS(wait_request)
> +ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
> +ELV_IO_QUEUE_FLAG_FNS(idle_window)
> +ELV_IO_QUEUE_FLAG_FNS(slice_new)
> +
> +static inline struct io_service_tree *
> +io_entity_service_tree(struct io_entity *entity)
> +{
> +	struct io_sched_data *sched_data = entity->sched_data;
> +	unsigned int idx = entity->ioprio_class - 1;
> +
> +	BUG_ON(idx >= IO_IOPRIO_CLASSES);
> +	BUG_ON(sched_data == NULL);
> +
> +	return sched_data->service_tree + idx;
> +}
> +
> +/* A request got dispatched from the io_queue. Do the accounting. */
> +static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
> +{
> +	ioq->dispatched++;
> +}
> +
> +static inline int elv_ioq_slice_used(struct io_queue *ioq)
> +{
> +	if (elv_ioq_slice_new(ioq))
> +		return 0;
> +	if (time_before(jiffies, ioq->slice_end))
> +		return 0;
> +
> +	return 1;
> +}
> +
> +/* How many request are currently dispatched from the queue */
> +static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
> +{
> +	return ioq->dispatched;
> +}
> +
> +/* How many request are currently queued in the queue */
> +static inline int elv_ioq_nr_queued(struct io_queue *ioq)
> +{
> +	return ioq->nr_queued;
> +}
> +
> +static inline void elv_get_ioq(struct io_queue *ioq)
> +{
> +	atomic_inc(&ioq->ref);
> +}
> +
> +static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
> +						unsigned long slice_end)
> +{
> +	ioq->slice_end = slice_end;
> +}
> +
> +static inline int elv_ioq_class_idle(struct io_queue *ioq)
> +{
> +	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
> +}
> +
> +static inline int elv_ioq_class_rt(struct io_queue *ioq)
> +{
> +	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
> +}
> +
> +static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
> +{
> +	return ioq->entity.new_ioprio_class;
> +}
> +
> +static inline int elv_ioq_ioprio(struct io_queue *ioq)
> +{
> +	return ioq->entity.new_ioprio;
> +}
> +
> +static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
> +						int ioprio_class)
> +{
> +	ioq->entity.new_ioprio_class = ioprio_class;
> +	ioq->entity.ioprio_changed = 1;
> +}
> +
> +static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
> +{
> +	ioq->entity.new_ioprio = ioprio;
> +	ioq->entity.ioprio_changed = 1;
> +}
> +
> +static inline void *ioq_sched_queue(struct io_queue *ioq)
> +{
> +	if (ioq)
> +		return ioq->sched_queue;
> +	return NULL;
> +}
> +
> +static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
> +{
> +	return container_of(ioq->entity.sched_data, struct io_group,
> +						sched_data);
> +}
> +
> +extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +
> +/* Functions used by elevator.c */
> +extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
> +extern void elv_exit_fq_data(struct elevator_queue *e);
> +extern void elv_exit_fq_data_post(struct elevator_queue *e);
> +
> +extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
> +extern void elv_ioq_request_removed(struct elevator_queue *e,
> +					struct request *rq);
> +extern void elv_fq_dispatched_request(struct elevator_queue *e,
> +					struct request *rq);
> +
> +extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
> +extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
> +
> +extern void elv_ioq_completed_request(struct request_queue *q,
> +				struct request *rq);
> +
> +extern void *elv_fq_select_ioq(struct request_queue *q, int force);
> +extern struct io_queue *rq_ioq(struct request *rq);
> +
> +/* Functions used by io schedulers */
> +extern void elv_put_ioq(struct io_queue *ioq);
> +extern void __elv_ioq_slice_expired(struct request_queue *q,
> +					struct io_queue *ioq);
> +extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> +		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
> +extern void elv_schedule_dispatch(struct request_queue *q);
> +extern int elv_hw_tag(struct elevator_queue *e);
> +extern void *elv_active_sched_queue(struct elevator_queue *e);
> +extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
> +					unsigned long expires);
> +extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
> +extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
> +extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> +					int ioprio);
> +extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> +					int ioprio, struct io_queue *ioq);
> +extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
> +extern int elv_nr_busy_ioq(struct elevator_queue *e);
> +extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
> +extern void elv_free_ioq(struct io_queue *ioq);
> +
> +#else /* CONFIG_ELV_FAIR_QUEUING */
> +
> +static inline int elv_init_fq_data(struct request_queue *q,
> +					struct elevator_queue *e)
> +{
> +	return 0;
> +}
> +
> +static inline void elv_exit_fq_data(struct elevator_queue *e) {}
> +static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
> +
> +static inline void elv_fq_activate_rq(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_fq_deactivate_rq(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_fq_dispatched_request(struct elevator_queue *e,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_request_removed(struct elevator_queue *e,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_request_add(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_completed_request(struct request_queue *q,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
> +static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
> +static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
> +{
> +	return NULL;
> +}
> +#endif /* CONFIG_ELV_FAIR_QUEUING */
> +#endif /* _BFQ_SCHED_H */
> diff --git a/block/elevator.c b/block/elevator.c
> index 7073a90..c2f07f5 100644
> --- a/block/elevator.c
> +++ b/block/elevator.c
> @@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
>  	for (i = 0; i < ELV_HASH_ENTRIES; i++)
>  		INIT_HLIST_HEAD(&eq->hash[i]);
> 
> +	if (elv_init_fq_data(q, eq))
> +		goto err;
> +
>  	return eq;
>  err:
>  	kfree(eq);
> @@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
>  void elevator_exit(struct elevator_queue *e)
>  {
>  	mutex_lock(&e->sysfs_lock);
> +	elv_exit_fq_data(e);
>  	if (e->ops->elevator_exit_fn)
>  		e->ops->elevator_exit_fn(e);
>  	e->ops = NULL;
> +	elv_exit_fq_data_post(e);
>  	mutex_unlock(&e->sysfs_lock);
> 
>  	kobject_put(&e->kobj);
> @@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
>  {
>  	struct elevator_queue *e = q->elevator;
> 
> +	elv_fq_activate_rq(q, rq);
> +
>  	if (e->ops->elevator_activate_req_fn)
>  		e->ops->elevator_activate_req_fn(q, rq);
>  }
> @@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
>  {
>  	struct elevator_queue *e = q->elevator;
> 
> +	elv_fq_deactivate_rq(q, rq);
> +
>  	if (e->ops->elevator_deactivate_req_fn)
>  		e->ops->elevator_deactivate_req_fn(q, rq);
>  }
> @@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
>  	elv_rqhash_del(q, rq);
> 
>  	q->nr_sorted--;
> +	elv_fq_dispatched_request(q->elevator, rq);
> 
>  	boundary = q->end_sector;
>  	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
> @@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
>  	elv_rqhash_del(q, rq);
> 
>  	q->nr_sorted--;
> +	elv_fq_dispatched_request(q->elevator, rq);
> 
>  	q->end_sector = rq_end_sector(rq);
>  	q->boundary_rq = rq;
> @@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
>  	elv_rqhash_del(q, next);
> 
>  	q->nr_sorted--;
> +	elv_ioq_request_removed(e, next);
>  	q->last_merge = rq;
>  }
> 
> @@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
>  				q->last_merge = rq;
>  		}
> 
> -		/*
> -		 * Some ioscheds (cfq) run q->request_fn directly, so
> -		 * rq cannot be accessed after calling
> -		 * elevator_add_req_fn.
> -		 */
>  		q->elevator->ops->elevator_add_req_fn(q, rq);
> +		elv_ioq_request_add(q, rq);
>  		break;
> 
>  	case ELEVATOR_INSERT_REQUEUE:
> @@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
> 
>  int elv_queue_empty(struct request_queue *q)
>  {
> -	struct elevator_queue *e = q->elevator;
> -
>  	if (!list_empty(&q->queue_head))
>  		return 0;
> 
> -	if (e->ops->elevator_queue_empty_fn)
> -		return e->ops->elevator_queue_empty_fn(q);
> +	/* Hopefully nr_sorted works and no need to call queue_empty_fn */
> +	if (q->nr_sorted)
> +		return 0;
> 
>  	return 1;
>  }
> @@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
>  	 */
>  	if (blk_account_rq(rq)) {
>  		q->in_flight--;
> -		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
> -			e->ops->elevator_completed_req_fn(q, rq);
> +		if (blk_sorted_rq(rq)) {
> +			if (e->ops->elevator_completed_req_fn)
> +				e->ops->elevator_completed_req_fn(q, rq);
> +			elv_ioq_completed_request(q, rq);
> +		}
>  	}
> 
>  	/*
> @@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
>  	return NULL;
>  }
>  EXPORT_SYMBOL(elv_rb_latter_request);
> +
> +/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
> +void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
> +{
> +	return ioq_sched_queue(rq_ioq(rq));
> +}
> +EXPORT_SYMBOL(elv_get_sched_queue);
> +
> +/* Select an ioscheduler queue to dispatch request from. */
> +void *elv_select_sched_queue(struct request_queue *q, int force)
> +{
> +	return ioq_sched_queue(elv_fq_select_ioq(q, force));
> +}
> +EXPORT_SYMBOL(elv_select_sched_queue);
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index b4f71f1..96a94c9 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -245,6 +245,11 @@ struct request {
> 
>  	/* for bidi */
>  	struct request *next_rq;
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	/* io queue request belongs to */
> +	struct io_queue *ioq;
> +#endif
>  };
> 
>  static inline unsigned short req_get_ioprio(struct request *req)
> diff --git a/include/linux/elevator.h b/include/linux/elevator.h
> index c59b769..679c149 100644
> --- a/include/linux/elevator.h
> +++ b/include/linux/elevator.h
> @@ -2,6 +2,7 @@
>  #define _LINUX_ELEVATOR_H
> 
>  #include <linux/percpu.h>
> +#include "../../block/elevator-fq.h"
> 
>  #ifdef CONFIG_BLOCK
> 
> @@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
> 
>  typedef void *(elevator_init_fn) (struct request_queue *);
>  typedef void (elevator_exit_fn) (struct elevator_queue *);
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
> +typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
> +typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
> +typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
> +typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
> +						struct request*);
> +typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
> +						struct request*);
> +typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
> +						void*, int probe);
> +#endif
> 
>  struct elevator_ops
>  {
> @@ -56,6 +69,17 @@ struct elevator_ops
>  	elevator_init_fn *elevator_init_fn;
>  	elevator_exit_fn *elevator_exit_fn;
>  	void (*trim)(struct io_context *);
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
> +	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
> +	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
> +
> +	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
> +	elevator_should_preempt_fn *elevator_should_preempt_fn;
> +	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
> +	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
> +#endif
>  };
> 
>  #define ELV_NAME_MAX	(16)
> @@ -76,6 +100,9 @@ struct elevator_type
>  	struct elv_fs_entry *elevator_attrs;
>  	char elevator_name[ELV_NAME_MAX];
>  	struct module *elevator_owner;
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	int elevator_features;
> +#endif
>  };
> 
>  /*
> @@ -89,6 +116,10 @@ struct elevator_queue
>  	struct elevator_type *elevator_type;
>  	struct mutex sysfs_lock;
>  	struct hlist_head *hash;
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	/* fair queuing data */
> +	struct elv_fq_data efqd;
> +#endif
>  };
> 
>  /*
> @@ -209,5 +240,25 @@ enum {
>  	__val;							\
>  })
> 
> +/* iosched can let elevator know their feature set/capability */
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +
> +/* iosched wants to use fq logic of elevator layer */
> +#define	ELV_IOSCHED_NEED_FQ	1
> +
> +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> +{
> +	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
> +}
> +
> +#else /* ELV_IOSCHED_FAIR_QUEUING */
> +
> +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> +{
> +	return 0;
> +}
> +#endif /* ELV_IOSCHED_FAIR_QUEUING */
> +extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
> +extern void *elv_select_sched_queue(struct request_queue *q, int force);
>  #endif /* CONFIG_BLOCK */
>  #endif
> -- 
> 1.6.0.6
> 

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
@ 2009-06-22  8:46     ` Balbir Singh
  0 siblings, 0 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-22  8:46 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval, snitzer, peterz, dm-devel, dpshah, jens.axboe, agk,
	paolo.valente, guijianfeng, fernando, mikew, jmoyer, nauman,
	m-ikeda, lizf, fchecconi, akpm, containers, linux-kernel,
	s-uchida, righi.andrea, jbaron

* Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:20]:

> This is common fair queuing code in elevator layer. This is controlled by
> config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> flat fair queuing support where there is only one group, "root group" and all
> the tasks belong to root group.
> 
> These elevator layer changes are backward compatible. That means any ioscheduler
> using old interfaces will continue to work.
> 
> This code is essentially the CFQ code for fair queuing. The primary difference
> is that the flat round-robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
>

The patch is quite long and, to be honest, takes a long time to review,
which I don't mind. I suspect my frequently diverted mind is likely to miss
a lot in a patch this big, though. Could you consider splitting it further
if possible? I think you'll find the number of reviews increases as well.
 
> Signed-off-by: Nauman Rafique <nauman@google.com>
> Signed-off-by: Fabio Checconi <fabio@gandalf.sssup.it>
> Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
> Signed-off-by: Aristeu Rozanski <aris@redhat.com>
> Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
> Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> ---
>  block/Kconfig.iosched    |   13 +
>  block/Makefile           |    1 +
>  block/elevator-fq.c      | 2015 ++++++++++++++++++++++++++++++++++++++++++++++
>  block/elevator-fq.h      |  473 +++++++++++
>  block/elevator.c         |   46 +-
>  include/linux/blkdev.h   |    5 +
>  include/linux/elevator.h |   51 ++
>  7 files changed, 2593 insertions(+), 11 deletions(-)
>  create mode 100644 block/elevator-fq.c
>  create mode 100644 block/elevator-fq.h
> 
> diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
> index 7e803fc..3398134 100644
> --- a/block/Kconfig.iosched
> +++ b/block/Kconfig.iosched
> @@ -2,6 +2,19 @@ if BLOCK
> 
>  menu "IO Schedulers"
> 
> +config ELV_FAIR_QUEUING
> +	bool "Elevator Fair Queuing Support"
> +	default n
> +	---help---
> +	  Traditionally only cfq had a notion of multiple queues and did
> +	  fair queuing on its own. With cgroups and the need to control
> +	  IO, now even the simple io schedulers like noop, deadline and as will
> +	  have one queue per cgroup and will need hierarchical fair queuing.
> +	  Instead of every io scheduler implementing its own fair queuing
> +	  logic, this option enables fair queuing in elevator layer so that
> +	  other ioschedulers can make use of it.
> +	  If unsure, say N.
> +
>  config IOSCHED_NOOP
>  	bool
>  	default y
> diff --git a/block/Makefile b/block/Makefile
> index e9fa4dd..94bfc6e 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -15,3 +15,4 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
> 
>  obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
>  obj-$(CONFIG_BLK_DEV_INTEGRITY)	+= blk-integrity.o
> +obj-$(CONFIG_ELV_FAIR_QUEUING)	+= elevator-fq.o
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> new file mode 100644
> index 0000000..9357fb0
> --- /dev/null
> +++ b/block/elevator-fq.c
> @@ -0,0 +1,2015 @@
> +/*
> + * BFQ: Hierarchical B-WF2Q+ scheduler.
> + *
> + * Based on ideas and code from CFQ:
> + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> + *
> + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> + *		      Paolo Valente <paolo.valente@unimore.it>
> + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> + * 	              Nauman Rafique <nauman@google.com>
> + */
> +
> +#include <linux/blkdev.h>
> +#include "elevator-fq.h"
> +#include <linux/blktrace_api.h>
> +
> +/* Values taken from cfq */
> +const int elv_slice_sync = HZ / 10;
> +int elv_slice_async = HZ / 25;
> +const int elv_slice_async_rq = 2;
> +int elv_slice_idle = HZ / 125;
> +static struct kmem_cache *elv_ioq_pool;
> +
> +#define ELV_SLICE_SCALE		(5)
> +#define ELV_HW_QUEUE_MIN	(5)
> +#define IO_SERVICE_TREE_INIT   ((struct io_service_tree)		\
> +				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
> +
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe);
> +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> +						 int extract);
> +
> +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> +					unsigned short prio)

Why is the return type int and not unsigned int or unsigned long? Can
the return value ever be negative?
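Just to make the suggestion concrete, the kind of prototype I would
have expected if the value can never go negative - only a sketch:

	static inline unsigned int elv_prio_slice(struct elv_fq_data *efqd,
						  int sync, unsigned short prio)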

> +{
> +	const int base_slice = efqd->elv_slice[sync];
> +
> +	WARN_ON(prio >= IOPRIO_BE_NR);
> +
> +	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
> +}
> +
> +static inline int
> +elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
> +{
> +	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
> +}
> +
> +/* Mainly the BFQ scheduling code Follows */
> +
> +/*
> + * Shift for timestamp calculations.  This actually limits the maximum
> + * service allowed in one timestamp delta (small shift values increase it),
> + * the maximum total weight that can be used for the queues in the system
> + * (big shift values increase it), and the period of virtual time wraparounds.
> + */
> +#define WFQ_SERVICE_SHIFT	22
> +
> +/**
> + * bfq_gt - compare two timestamps.
> + * @a: first ts.
> + * @b: second ts.
> + *
> + * Return @a > @b, dealing with wrapping correctly.
> + */
> +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> +{
> +	return (s64)(a - b) > 0;
> +}
> +

a and b are of type u64, but the difference is cast to s64 to deal
with wrapping - correct?
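For my own benefit, the way I read the idiom (the same trick
time_after() uses), assuming the two timestamps are always less than
2^63 apart:

	a = 1, b = 0xffffffffffffffff	/* a has wrapped past b      */
	a - b = 2			/* unsigned arithmetic wraps */
	(s64)(a - b) = 2 > 0		/* so bfq_gt(a, b) is true   */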

> +/**
> + * bfq_delta - map service into the virtual time domain.
> + * @service: amount of service.
> + * @weight: scale factor.
> + */
> +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> +					bfq_weight_t weight)
> +{
> +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> +

Why is the cast required? Does the compiler complain? service is
already of the correct type.

> +	do_div(d, weight);

On a 64-bit system both d and weight are 64 bits, but on a 32-bit
system weight is 32 bits. do_div() expects a 64-bit dividend and a
32-bit divisor - no?
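If bfq_weight_t really is unsigned long (so 64 bits wide on 64-bit
builds), something along these lines would avoid depending on
do_div()'s 32-bit divisor - only a sketch of the suggestion, and it
needs linux/math64.h:

	static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
						bfq_weight_t weight)
	{
		u64 d = (u64)service << WFQ_SERVICE_SHIFT;

		/* div64_u64() takes a 64-bit divisor on 32-bit too */
		return div64_u64(d, weight);
	}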

> +	return d;
> +}
> +
> +/**
> + * bfq_calc_finish - assign the finish time to an entity.
> + * @entity: the entity to act upon.
> + * @service: the service to be charged to the entity.
> + */
> +static inline void bfq_calc_finish(struct io_entity *entity,
> +				   bfq_service_t service)
> +{
> +	BUG_ON(entity->weight == 0);
> +
> +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> +}

Should we BUG_ON(entity->finish == entity->start)? Or is that
expected when the entity has no service time left?

> +
> +static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	BUG_ON(entity == NULL);
> +	if (entity->my_sched_data == NULL)
> +		ioq = container_of(entity, struct io_queue, entity);
> +	return ioq;
> +}
> +
> +/**
> + * bfq_entity_of - get an entity from a node.
> + * @node: the node field of the entity.
> + *
> + * Convert a node pointer to the relative entity.  This is used only
> + * to simplify the logic of some functions and not as the generic
> + * conversion mechanism because, e.g., in the tree walking functions,
> + * the check for a %NULL value would be redundant.
> + */
> +static inline struct io_entity *bfq_entity_of(struct rb_node *node)
> +{
> +	struct io_entity *entity = NULL;
> +
> +	if (node != NULL)
> +		entity = rb_entry(node, struct io_entity, rb_node);
> +
> +	return entity;
> +}
> +
> +/**
> + * bfq_extract - remove an entity from a tree.
> + * @root: the tree root.
> + * @entity: the entity to remove.
> + */
> +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> +{

Extract is not common terminology; why not use bfq_remove()?

> +	BUG_ON(entity->tree != root);
> +
> +	entity->tree = NULL;
> +	rb_erase(&entity->rb_node, root);

Don't you want to set entity->tree = NULL after rb_erase()?
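i.e. just to illustrate the reordering I have in mind:

	rb_erase(&entity->rb_node, root);
	entity->tree = NULL;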

> +}
> +
> +/**
> + * bfq_idle_extract - extract an entity from the idle tree.
> + * @st: the service tree of the owning @entity.
> + * @entity: the entity being removed.
> + */
> +static void bfq_idle_extract(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct rb_node *next;
> +
> +	BUG_ON(entity->tree != &st->idle);
> +
> +	if (entity == st->first_idle) {
> +		next = rb_next(&entity->rb_node);

What happens if next is NULL?

> +		st->first_idle = bfq_entity_of(next);
> +	}
> +
> +	if (entity == st->last_idle) {
> +		next = rb_prev(&entity->rb_node);

What happens if next is NULL?

> +		st->last_idle = bfq_entity_of(next);
> +	}
> +
> +	bfq_extract(&st->idle, entity);
> +}
> +
> +/**
> + * bfq_insert - generic tree insertion.
> + * @root: tree root.
> + * @entity: entity to insert.
> + *
> + * This is used for the idle and the active tree, since they are both
> + * ordered by finish time.
> + */
> +static void bfq_insert(struct rb_root *root, struct io_entity *entity)
> +{
> +	struct io_entity *entry;
> +	struct rb_node **node = &root->rb_node;
> +	struct rb_node *parent = NULL;
> +
> +	BUG_ON(entity->tree != NULL);
> +
> +	while (*node != NULL) {
> +		parent = *node;
> +		entry = rb_entry(parent, struct io_entity, rb_node);
> +
> +		if (bfq_gt(entry->finish, entity->finish))
> +			node = &parent->rb_left;
> +		else
> +			node = &parent->rb_right;
> +	}
> +
> +	rb_link_node(&entity->rb_node, parent, node);
> +	rb_insert_color(&entity->rb_node, root);
> +
> +	entity->tree = root;
> +}
> +
> +/**
> + * bfq_update_min - update the min_start field of a entity.
> + * @entity: the entity to update.
> + * @node: one of its children.
> + *
> + * This function is called when @entity may store an invalid value for
> + * min_start due to updates to the active tree.  The function  assumes
> + * that the subtree rooted at @node (which may be its left or its right
> + * child) has a valid min_start value.
> + */
> +static inline void bfq_update_min(struct io_entity *entity,
> +					struct rb_node *node)
> +{
> +	struct io_entity *child;
> +
> +	if (node != NULL) {
> +		child = rb_entry(node, struct io_entity, rb_node);
> +		if (bfq_gt(entity->min_start, child->min_start))
> +			entity->min_start = child->min_start;
> +	}
> +}

So.. we check whether the child's min_start is smaller than the node
entity's and set the node's min_start to the minimum of the two? Can
you use min_t() here?
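Something like the below is what I had in mind, although min_t()
would lose the bfq_gt() wraparound handling, so it may not be an
equivalent transformation:

	if (node != NULL) {
		child = rb_entry(node, struct io_entity, rb_node);
		entity->min_start = min_t(bfq_timestamp_t,
					  entity->min_start,
					  child->min_start);
	}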

> +
> +/**
> + * bfq_update_active_node - recalculate min_start.
> + * @node: the node to update.
> + *
> + * @node may have changed position or one of its children may have moved,
> + * this function updates its min_start value.  The left and right subtrees
> + * are assumed to hold a correct min_start value.
> + */
> +static inline void bfq_update_active_node(struct rb_node *node)
> +{
> +	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
> +
> +	entity->min_start = entity->start;
> +	bfq_update_min(entity, node->rb_right);
> +	bfq_update_min(entity, node->rb_left);
> +}

I don't like this very much; we may store min_start twice. This can
easily be optimized to look at both the left and the right child and
pick the minimum in one go.
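Roughly what I mean - a sketch only, keeping bfq_gt() for the
wraparound handling:

	static inline void bfq_update_active_node(struct rb_node *node)
	{
		struct io_entity *entity = rb_entry(node, struct io_entity,
						    rb_node);
		bfq_timestamp_t min_start = entity->start;
		struct io_entity *child;

		if (node->rb_left != NULL) {
			child = rb_entry(node->rb_left, struct io_entity,
					 rb_node);
			if (bfq_gt(min_start, child->min_start))
				min_start = child->min_start;
		}
		if (node->rb_right != NULL) {
			child = rb_entry(node->rb_right, struct io_entity,
					 rb_node);
			if (bfq_gt(min_start, child->min_start))
				min_start = child->min_start;
		}
		entity->min_start = min_start;	/* single store */
	}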

> +
> +/**
> + * bfq_update_active_tree - update min_start for the whole active tree.
> + * @node: the starting node.
> + *
> + * @node must be the deepest modified node after an update.  This function
> + * updates its min_start using the values held by its children, assuming
> + * that they did not change, and then updates all the nodes that may have
> + * changed in the path to the root.  The only nodes that may have changed
> + * are the ones in the path or their siblings.
> + */
> +static void bfq_update_active_tree(struct rb_node *node)
> +{
> +	struct rb_node *parent;
> +
> +up:
> +	bfq_update_active_node(node);
> +
> +	parent = rb_parent(node);
> +	if (parent == NULL)
> +		return;
> +
> +	if (node == parent->rb_left && parent->rb_right != NULL)
> +		bfq_update_active_node(parent->rb_right);
> +	else if (parent->rb_left != NULL)
> +		bfq_update_active_node(parent->rb_left);
> +
> +	node = parent;
> +	goto up;
> +}
> +

For these functions, take a look at the walk function in the group
(CFS) scheduler that does update_shares().

> +/**
> + * bfq_active_insert - insert an entity in the active tree of its group/device.
> + * @st: the service tree of the entity.
> + * @entity: the entity being inserted.
> + *
> + * The active tree is ordered by finish time, but an extra key is kept
> + * per each node, containing the minimum value for the start times of
> + * its children (and the node itself), so it's possible to search for
> + * the eligible node with the lowest finish time in logarithmic time.
> + */
> +static void bfq_active_insert(struct io_service_tree *st,
> +					struct io_entity *entity)
> +{
> +	struct rb_node *node = &entity->rb_node;
> +
> +	bfq_insert(&st->active, entity);
> +
> +	if (node->rb_left != NULL)
> +		node = node->rb_left;
> +	else if (node->rb_right != NULL)
> +		node = node->rb_right;
> +
> +	bfq_update_active_tree(node);
> +}
> +
> +/**
> + * bfq_ioprio_to_weight - calc a weight from an ioprio.
> + * @ioprio: the ioprio value to convert.
> + */
> +static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
> +{
> +	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
> +	return IOPRIO_BE_NR - ioprio;
> +}
> +
> +void bfq_get_entity(struct io_entity *entity)
> +{
> +	struct io_queue *ioq = io_entity_to_ioq(entity);
> +
> +	if (ioq)
> +		elv_get_ioq(ioq);
> +}
> +
> +void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
> +{
> +	entity->ioprio = entity->new_ioprio;
> +	entity->ioprio_class = entity->new_ioprio_class;
> +	entity->sched_data = &iog->sched_data;
> +}
> +
> +/**
> + * bfq_find_deepest - find the deepest node that an extraction can modify.
> + * @node: the node being removed.
> + *
> + * Do the first step of an extraction in an rb tree, looking for the
> + * node that will replace @node, and returning the deepest node that
> + * the following modifications to the tree can touch.  If @node is the
> + * last node in the tree return %NULL.
> + */
> +static struct rb_node *bfq_find_deepest(struct rb_node *node)
> +{
> +	struct rb_node *deepest;
> +
> +	if (node->rb_right == NULL && node->rb_left == NULL)
> +		deepest = rb_parent(node);

Why is the parent the deepest? Shouldn't node be the deepest?

> +	else if (node->rb_right == NULL)
> +		deepest = node->rb_left;
> +	else if (node->rb_left == NULL)
> +		deepest = node->rb_right;
> +	else {
> +		deepest = rb_next(node);
> +		if (deepest->rb_right != NULL)
> +			deepest = deepest->rb_right;
> +		else if (rb_parent(deepest) != node)
> +			deepest = rb_parent(deepest);
> +	}
> +
> +	return deepest;
> +}

The function is not clear; could you please define "deepest node"
better?

> +
> +/**
> + * bfq_active_extract - remove an entity from the active tree.
> + * @st: the service_tree containing the tree.
> + * @entity: the entity being removed.
> + */
> +static void bfq_active_extract(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct rb_node *node;
> +
> +	node = bfq_find_deepest(&entity->rb_node);
> +	bfq_extract(&st->active, entity);
> +
> +	if (node != NULL)
> +		bfq_update_active_tree(node);
> +}
> +

Just to check my understanding: every time an active node is
added/removed, we update min_start along the path (and its siblings)
from that node up to the root?

> +/**
> + * bfq_idle_insert - insert an entity into the idle tree.
> + * @st: the service tree containing the tree.
> + * @entity: the entity to insert.
> + */
> +static void bfq_idle_insert(struct io_service_tree *st,
> +					struct io_entity *entity)
> +{
> +	struct io_entity *first_idle = st->first_idle;
> +	struct io_entity *last_idle = st->last_idle;
> +
> +	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
> +		st->first_idle = entity;
> +	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
> +		st->last_idle = entity;
> +
> +	bfq_insert(&st->idle, entity);
> +}
> +
> +/**
> + * bfq_forget_entity - remove an entity from the wfq trees.
> + * @st: the service tree.
> + * @entity: the entity being removed.
> + *
> + * Update the device status and forget everything about @entity, putting
> + * the device reference to it, if it is a queue.  Entities belonging to
> + * groups are not refcounted.
> + */
> +static void bfq_forget_entity(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	BUG_ON(!entity->on_st);
> +	entity->on_st = 0;
> +	st->wsum -= entity->weight;
> +	ioq = io_entity_to_ioq(entity);
> +	if (!ioq)
> +		return;
> +	elv_put_ioq(ioq);
> +}
> +
> +/**
> + * bfq_put_idle_entity - release the idle tree ref of an entity.
> + * @st: service tree for the entity.
> + * @entity: the entity being released.
> + */
> +void bfq_put_idle_entity(struct io_service_tree *st,
> +				struct io_entity *entity)
> +{
> +	bfq_idle_extract(st, entity);
> +	bfq_forget_entity(st, entity);
> +}
> +
> +/**
> + * bfq_forget_idle - update the idle tree if necessary.
> + * @st: the service tree to act upon.
> + *
> + * To preserve the global O(log N) complexity we only remove one entry here;
> + * as the idle tree will not grow indefinitely this can be done safely.
> + */
> +void bfq_forget_idle(struct io_service_tree *st)
> +{
> +	struct io_entity *first_idle = st->first_idle;
> +	struct io_entity *last_idle = st->last_idle;
> +
> +	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
> +	    !bfq_gt(last_idle->finish, st->vtime)) {
> +		/*
> +		 * Active tree is empty. Pull back vtime to finish time of
> +		 * last idle entity on idle tree.
> +		 * Rational seems to be that it reduces the possibility of
> +		 * vtime wraparound (bfq_gt(V-F) < 0).
> +		 */
> +		st->vtime = last_idle->finish;
> +	}
> +
> +	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
> +		bfq_put_idle_entity(st, first_idle);
> +}
> +
> +
> +static struct io_service_tree *
> +__bfq_entity_update_prio(struct io_service_tree *old_st,
> +				struct io_entity *entity)
> +{
> +	struct io_service_tree *new_st = old_st;
> +	struct io_queue *ioq = io_entity_to_ioq(entity);
> +
> +	if (entity->ioprio_changed) {
> +		entity->ioprio = entity->new_ioprio;
> +		entity->ioprio_class = entity->new_ioprio_class;
> +		entity->ioprio_changed = 0;
> +
> +		/*
> +		 * Also update the scaled budget for ioq. Group will get the
> +		 * updated budget once ioq is selected to run next.
> +		 */
> +		if (ioq) {
> +			struct elv_fq_data *efqd = ioq->efqd;
> +			entity->budget = elv_prio_to_slice(efqd, ioq);
> +		}
> +
> +		old_st->wsum -= entity->weight;
> +		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
> +
> +		/*
> +		 * NOTE: here we may be changing the weight too early,
> +		 * this will cause unfairness.  The correct approach
> +		 * would have required additional complexity to defer
> +		 * weight changes to the proper time instants (i.e.,
> +		 * when entity->finish <= old_st->vtime).
> +		 */
> +		new_st = io_entity_service_tree(entity);
> +		new_st->wsum += entity->weight;
> +
> +		if (new_st != old_st)
> +			entity->start = new_st->vtime;
> +	}
> +
> +	return new_st;
> +}
> +
> +/**
> + * __bfq_activate_entity - activate an entity.
> + * @entity: the entity being activated.
> + *
> + * Called whenever an entity is activated, i.e., it is not active and one
> + * of its children receives a new request, or has to be reactivated due to
> + * budget exhaustion.  It uses the current budget of the entity (and the
> + * service received if @entity is active) of the queue to calculate its
> + * timestamps.
> + */
> +static void __bfq_activate_entity(struct io_entity *entity, int add_front)
> +{
> +	struct io_sched_data *sd = entity->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +
> +	if (entity == sd->active_entity) {
> +		BUG_ON(entity->tree != NULL);
> +		/*
> +		 * If we are requeueing the current entity we have
> +		 * to take care of not charging to it service it has
> +		 * not received.
> +		 */
> +		bfq_calc_finish(entity, entity->service);
> +		entity->start = entity->finish;
> +		sd->active_entity = NULL;
> +	} else if (entity->tree == &st->active) {
> +		/*
> +		 * Requeueing an entity due to a change of some
> +		 * next_active entity below it.  We reuse the old
> +		 * start time.
> +		 */
> +		bfq_active_extract(st, entity);
> +	} else if (entity->tree == &st->idle) {
> +		/*
> +		 * Must be on the idle tree, bfq_idle_extract() will
> +		 * check for that.
> +		 */
> +		bfq_idle_extract(st, entity);
> +		entity->start = bfq_gt(st->vtime, entity->finish) ?
> +				       st->vtime : entity->finish;
> +	} else {
> +		/*
> +		 * The finish time of the entity may be invalid, and
> +		 * it is in the past for sure, otherwise the queue
> +		 * would have been on the idle tree.
> +		 */
> +		entity->start = st->vtime;
> +		st->wsum += entity->weight;
> +		bfq_get_entity(entity);
> +
> +		BUG_ON(entity->on_st);
> +		entity->on_st = 1;
> +	}
> +
> +	st = __bfq_entity_update_prio(st, entity);
> +	/*
> +	 * This is to emulate cfq like functionality where preemption can
> +	 * happen with-in same class, like sync queue preempting async queue
> +	 * May be this is not a very good idea from fairness point of view
> +	 * as preempting queue gains share. Keeping it for now.
> +	 */
> +	if (add_front) {
> +		struct io_entity *next_entity;
> +
> +		/*
> +		 * Determine the entity which will be dispatched next
> +		 * Use sd->next_active once hierarchical patch is applied
> +		 */
> +		next_entity = bfq_lookup_next_entity(sd, 0);
> +
> +		if (next_entity && next_entity != entity) {
> +			struct io_service_tree *new_st;
> +			bfq_timestamp_t delta;
> +
> +			new_st = io_entity_service_tree(next_entity);
> +
> +			/*
> +			 * At this point, both entities should belong to
> +			 * same service tree as cross service tree preemption
> +			 * is automatically taken care by algorithm
> +			 */
> +			BUG_ON(new_st != st);
> +			entity->finish = next_entity->finish - 1;
> +			delta = bfq_delta(entity->budget, entity->weight);
> +			entity->start = entity->finish - delta;
> +			if (bfq_gt(entity->start, st->vtime))
> +				entity->start = st->vtime;
> +		}
> +	} else {
> +		bfq_calc_finish(entity, entity->budget);
> +	}
> +	bfq_active_insert(st, entity);
> +}
> +
> +/**
> + * bfq_activate_entity - activate an entity.
> + * @entity: the entity to activate.
> + */
> +void bfq_activate_entity(struct io_entity *entity, int add_front)
> +{
> +	__bfq_activate_entity(entity, add_front);
> +}
> +
> +/**
> + * __bfq_deactivate_entity - deactivate an entity from its service tree.
> + * @entity: the entity to deactivate.
> + * @requeue: if false, the entity will not be put into the idle tree.
> + *
> + * Deactivate an entity, independently from its previous state.  If the
> + * entity was not on a service tree just return, otherwise if it is on
> + * any scheduler tree, extract it from that tree, and if necessary
> + * and if the caller did not specify @requeue, put it on the idle tree.
> + *
> + */
> +int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
> +{
> +	struct io_sched_data *sd = entity->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +	int was_active = entity == sd->active_entity;
> +	int ret = 0;
> +
> +	if (!entity->on_st)
> +		return 0;
> +
> +	BUG_ON(was_active && entity->tree != NULL);
> +
> +	if (was_active) {
> +		bfq_calc_finish(entity, entity->service);
> +		sd->active_entity = NULL;
> +	} else if (entity->tree == &st->active)
> +		bfq_active_extract(st, entity);
> +	else if (entity->tree == &st->idle)
> +		bfq_idle_extract(st, entity);
> +	else if (entity->tree != NULL)
> +		BUG();
> +
> +	if (!requeue || !bfq_gt(entity->finish, st->vtime))
> +		bfq_forget_entity(st, entity);
> +	else
> +		bfq_idle_insert(st, entity);
> +
> +	BUG_ON(sd->active_entity == entity);
> +
> +	return ret;
> +}
> +
> +/**
> + * bfq_deactivate_entity - deactivate an entity.
> + * @entity: the entity to deactivate.
> + * @requeue: true if the entity can be put on the idle tree
> + */
> +void bfq_deactivate_entity(struct io_entity *entity, int requeue)
> +{
> +	__bfq_deactivate_entity(entity, requeue);
> +}
> +
> +/**
> + * bfq_update_vtime - update vtime if necessary.
> + * @st: the service tree to act upon.
> + *
> + * If necessary update the service tree vtime to have at least one
> + * eligible entity, skipping to its start time.  Assumes that the
> + * active tree of the device is not empty.
> + *
> + * NOTE: this hierarchical implementation updates vtimes quite often,
> + * we may end up with reactivated tasks getting timestamps after a
> + * vtime skip done because we needed a ->first_active entity on some
> + * intermediate node.
> + */
> +static void bfq_update_vtime(struct io_service_tree *st)
> +{
> +	struct io_entity *entry;
> +	struct rb_node *node = st->active.rb_node;
> +
> +	entry = rb_entry(node, struct io_entity, rb_node);
> +	if (bfq_gt(entry->min_start, st->vtime)) {
> +		st->vtime = entry->min_start;
> +		bfq_forget_idle(st);
> +	}
> +}
> +
> +/**
> + * bfq_first_active - find the eligible entity with the smallest finish time
> + * @st: the service tree to select from.
> + *
> + * This function searches the first schedulable entity, starting from the
> + * root of the tree and going on the left every time on this side there is
> + * a subtree with at least one eligible (start <= vtime) entity.  The path
> + * on the right is followed only if a) the left subtree contains no eligible
> + * entities and b) no eligible entity has been found yet.
> + */
> +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> +{
> +	struct io_entity *entry, *first = NULL;
> +	struct rb_node *node = st->active.rb_node;
> +
> +	while (node != NULL) {
> +		entry = rb_entry(node, struct io_entity, rb_node);
> +left:
> +		if (!bfq_gt(entry->start, st->vtime))
> +			first = entry;
> +
> +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> +
> +		if (node->rb_left != NULL) {
> +			entry = rb_entry(node->rb_left,
> +					 struct io_entity, rb_node);
> +			if (!bfq_gt(entry->min_start, st->vtime)) {
> +				node = node->rb_left;
> +				goto left;
> +			}
> +		}
> +		if (first != NULL)
> +			break;
> +		node = node->rb_right;

Please help me understand this: the tree is sorted by finish time,
but we search by vtime/start time. Couldn't the worst case easily be
O(N)?

> +	}
> +
> +	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
> +	return first;
> +}
> +
> +/**
> + * __bfq_lookup_next_entity - return the first eligible entity in @st.
> + * @st: the service tree.
> + *
> + * Update the virtual time in @st and return the first eligible entity
> + * it contains.
> + */
> +static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
> +{
> +	struct io_entity *entity;
> +
> +	if (RB_EMPTY_ROOT(&st->active))
> +		return NULL;
> +
> +	bfq_update_vtime(st);
> +	entity = bfq_first_active_entity(st);
> +	BUG_ON(bfq_gt(entity->start, st->vtime));
> +
> +	return entity;
> +}
> +
> +/**
> + * bfq_lookup_next_entity - return the first eligible entity in @sd.
> + * @sd: the sched_data.
> + * @extract: if true the returned entity will be also extracted from @sd.
> + *
> + * NOTE: since we cache the next_active entity at each level of the
> + * hierarchy, the complexity of the lookup can be decreased with
> + * absolutely no effort just returning the cached next_active value;
> + * we prefer to do full lookups to test the consistency of * the data
> + * structures.
> + */
> +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> +						 int extract)
> +{
> +	struct io_service_tree *st = sd->service_tree;
> +	struct io_entity *entity;
> +	int i;
> +
> +	/*
> +	 * We should not call lookup when an entity is active, as doing lookup
> +	 * can result in an erroneous vtime jump.
> +	 */
> +	BUG_ON(sd->active_entity != NULL);
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
> +		entity = __bfq_lookup_next_entity(st);
> +		if (entity != NULL) {
> +			if (extract) {
> +				bfq_active_extract(st, entity);
> +				sd->active_entity = entity;
> +			}
> +			break;
> +		}
> +	}
> +
> +	return entity;
> +}
> +
> +void entity_served(struct io_entity *entity, bfq_service_t served)
> +{
> +	struct io_service_tree *st;
> +
> +	st = io_entity_service_tree(entity);
> +	entity->service += served;
> +	BUG_ON(st->wsum == 0);
> +	st->vtime += bfq_delta(served, st->wsum);
> +	bfq_forget_idle(st);

bfq_forget_idle() checks whether st->vtime has caught up with
first_idle->finish and, if so, releases first_idle so that the next
idle entity becomes the first, right?

> +}
> +
> +/**
> + * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
> + * @st: the service tree being flushed.
> + */
> +void io_flush_idle_tree(struct io_service_tree *st)
> +{
> +	struct io_entity *entity = st->first_idle;
> +
> +	for (; entity != NULL; entity = st->first_idle)
> +		__bfq_deactivate_entity(entity, 0);
> +}
> +
> +/* Elevator fair queuing function */
> +struct io_queue *rq_ioq(struct request *rq)
> +{
> +	return rq->ioq;
> +}
> +
> +static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
> +{
> +	return e->efqd.active_queue;
> +}
> +
> +void *elv_active_sched_queue(struct elevator_queue *e)
> +{
> +	return ioq_sched_queue(elv_active_ioq(e));
> +}
> +EXPORT_SYMBOL(elv_active_sched_queue);
> +
> +int elv_nr_busy_ioq(struct elevator_queue *e)
> +{
> +	return e->efqd.busy_queues;
> +}
> +EXPORT_SYMBOL(elv_nr_busy_ioq);
> +
> +int elv_hw_tag(struct elevator_queue *e)
> +{
> +	return e->efqd.hw_tag;
> +}
> +EXPORT_SYMBOL(elv_hw_tag);
> +
> +/* Helper functions for operating on elevator idle slice timer */
> +int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +
> +	return mod_timer(&efqd->idle_slice_timer, expires);
> +}
> +EXPORT_SYMBOL(elv_mod_idle_slice_timer);
> +
> +int elv_del_idle_slice_timer(struct elevator_queue *eq)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +
> +	return del_timer(&efqd->idle_slice_timer);
> +}
> +EXPORT_SYMBOL(elv_del_idle_slice_timer);
> +
> +unsigned int elv_get_slice_idle(struct elevator_queue *eq)
> +{
> +	return eq->efqd.elv_slice_idle;
> +}
> +EXPORT_SYMBOL(elv_get_slice_idle);
> +
> +void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
> +{
> +	entity_served(&ioq->entity, served);
> +}
> +
> +/* Tells whether ioq is queued in root group or not */
> +static inline int is_root_group_ioq(struct request_queue *q,
> +					struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
> +}
> +
> +/*
> + * sysfs parts below -->
> + */
> +static ssize_t
> +elv_var_show(unsigned int var, char *page)
> +{
> +	return sprintf(page, "%d\n", var);
> +}
> +
> +static ssize_t
> +elv_var_store(unsigned int *var, const char *page, size_t count)
> +{
> +	char *p = (char *) page;
> +
> +	*var = simple_strtoul(p, &p, 10);
> +	return count;
> +}
> +
> +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
> +ssize_t __FUNC(struct elevator_queue *e, char *page)		\
> +{									\
> +	struct elv_fq_data *efqd = &e->efqd;				\
> +	unsigned int __data = __VAR;					\
> +	if (__CONV)							\
> +		__data = jiffies_to_msecs(__data);			\
> +	return elv_var_show(__data, (page));				\
> +}
> +SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
> +EXPORT_SYMBOL(elv_slice_idle_show);
> +SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
> +EXPORT_SYMBOL(elv_slice_sync_show);
> +SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
> +EXPORT_SYMBOL(elv_slice_async_show);
> +#undef SHOW_FUNCTION
> +
> +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
> +ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
> +{									\
> +	struct elv_fq_data *efqd = &e->efqd;				\
> +	unsigned int __data;						\
> +	int ret = elv_var_store(&__data, (page), count);		\
> +	if (__data < (MIN))						\
> +		__data = (MIN);						\
> +	else if (__data > (MAX))					\
> +		__data = (MAX);						\
> +	if (__CONV)							\
> +		*(__PTR) = msecs_to_jiffies(__data);			\
> +	else								\
> +		*(__PTR) = __data;					\
> +	return ret;							\
> +}
> +STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_idle_store);
> +STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_sync_store);
> +STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
> +EXPORT_SYMBOL(elv_slice_async_store);
> +#undef STORE_FUNCTION
> +
> +void elv_schedule_dispatch(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (elv_nr_busy_ioq(q->elevator)) {
> +		elv_log(efqd, "schedule dispatch");
> +		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
> +	}
> +}
> +EXPORT_SYMBOL(elv_schedule_dispatch);
> +
> +void elv_kick_queue(struct work_struct *work)
> +{
> +	struct elv_fq_data *efqd =
> +		container_of(work, struct elv_fq_data, unplug_work);
> +	struct request_queue *q = efqd->queue;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(q->queue_lock, flags);
> +	blk_start_queueing(q);
> +	spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
> +void elv_shutdown_timer_wq(struct elevator_queue *e)
> +{
> +	del_timer_sync(&e->efqd.idle_slice_timer);
> +	cancel_work_sync(&e->efqd.unplug_work);
> +}
> +EXPORT_SYMBOL(elv_shutdown_timer_wq);
> +
> +void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	ioq->slice_end = jiffies + ioq->entity.budget;
> +	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
> +}
> +
> +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = ioq->efqd;
> +	unsigned long elapsed = jiffies - ioq->last_end_request;
> +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> +
> +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> +}

Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
understand the algorithm.
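For what it's worth, my reading - please correct me if it is off - is
that this is the usual fixed-point exponentially weighted average,
with 256 as the scale factor:

	ttime_samples = (7*ttime_samples + 256) / 8       /* decays towards 256  */
	ttime_total   = (7*ttime_total + 256*ttime) / 8   /* ttime scaled by 256 */
	ttime_mean    = (ttime_total + 128) / ttime_samples  /* +128 rounds      */

i.e. a 7/8 weight on history, with the 2UL*elv_slice_idle above just
capping a single sample. A comment in the code spelling this out
would help.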

> +
> +/*
> + * Disable idle window if the process thinks too long.
> + * This idle flag can also be updated by io scheduler.
> + */
> +static void elv_ioq_update_idle_window(struct elevator_queue *eq,
> +				struct io_queue *ioq, struct request *rq)
> +{
> +	int old_idle, enable_idle;
> +	struct elv_fq_data *efqd = ioq->efqd;
> +
> +	/*
> +	 * Don't idle for async or idle io prio class
> +	 */
> +	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
> +		return;
> +
> +	enable_idle = old_idle = elv_ioq_idle_window(ioq);
> +
> +	if (!efqd->elv_slice_idle)
> +		enable_idle = 0;
> +	else if (ioq_sample_valid(ioq->ttime_samples)) {
> +		if (ioq->ttime_mean > efqd->elv_slice_idle)
> +			enable_idle = 0;
> +		else
> +			enable_idle = 1;
> +	}
> +
> +	/*
> +	 * From think time perspective idle should be enabled. Check with
> +	 * io scheduler if it wants to disable idling based on additional
> +	 * considrations like seek pattern.
> +	 */
> +	if (enable_idle) {
> +		if (eq->ops->elevator_update_idle_window_fn)
> +			enable_idle = eq->ops->elevator_update_idle_window_fn(
> +						eq, ioq->sched_queue, rq);
> +		if (!enable_idle)
> +			elv_log_ioq(efqd, ioq, "iosched disabled idle");
> +	}
> +
> +	if (old_idle != enable_idle) {
> +		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
> +		if (enable_idle)
> +			elv_mark_ioq_idle_window(ioq);
> +		else
> +			elv_clear_ioq_idle_window(ioq);
> +	}
> +}
> +
> +struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
> +	return ioq;
> +}
> +EXPORT_SYMBOL(elv_alloc_ioq);
> +
> +void elv_free_ioq(struct io_queue *ioq)
> +{
> +	kmem_cache_free(elv_ioq_pool, ioq);
> +}
> +EXPORT_SYMBOL(elv_free_ioq);
> +
> +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> +			void *sched_queue, int ioprio_class, int ioprio,
> +			int is_sync)
> +{
> +	struct elv_fq_data *efqd = &eq->efqd;
> +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> +
> +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> +	atomic_set(&ioq->ref, 0);
> +	ioq->efqd = efqd;
> +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> +	elv_ioq_set_ioprio(ioq, ioprio);
> +	ioq->pid = current->pid;

Is the pid used for cgroup association later? I don't see why we save
it otherwise. If yes, why not store the cgroup of current instead?

> +	ioq->sched_queue = sched_queue;
> +	if (is_sync && !elv_ioq_class_idle(ioq))
> +		elv_mark_ioq_idle_window(ioq);
> +	bfq_init_entity(&ioq->entity, iog);
> +	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
> +	if (is_sync)
> +		ioq->last_end_request = jiffies;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(elv_init_ioq);
> +
> +void elv_put_ioq(struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = ioq->efqd;
> +	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
> +						efqd);
> +
> +	BUG_ON(atomic_read(&ioq->ref) <= 0);
> +	if (!atomic_dec_and_test(&ioq->ref))
> +		return;
> +	BUG_ON(ioq->nr_queued);
> +	BUG_ON(ioq->entity.tree != NULL);
> +	BUG_ON(elv_ioq_busy(ioq));
> +	BUG_ON(efqd->active_queue == ioq);
> +
> +	/* Can be called by outgoing elevator. Don't use q */
> +	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
> +
> +	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
> +	elv_log_ioq(efqd, ioq, "put_queue");
> +	elv_free_ioq(ioq);
> +}
> +EXPORT_SYMBOL(elv_put_ioq);
> +
> +void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
> +{
> +	struct io_queue *ioq = *ioq_ptr;
> +
> +	if (ioq != NULL) {
> +		/* Drop the reference taken by the io group */
> +		elv_put_ioq(ioq);
> +		*ioq_ptr = NULL;
> +	}
> +}
> +
> +/*
> + * Normally next io queue to be served is selected from the service tree.
> + * This function allows one to choose a specific io queue to run next
> + * out of order. This is primarily to accommodate the close_cooperator
> + * feature of cfq.
> + *
> + * Currently it is done only for root level as to begin with supporting
> + * close cooperator feature only for root group to make sure default
> + * cfq behavior in flat hierarchy is not changed.
> + */
> +void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	struct io_sched_data *sd = &efqd->root_group->sched_data;
> +	struct io_service_tree *st = io_entity_service_tree(entity);
> +
> +	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
> +	BUG_ON(!efqd->busy_queues);
> +	BUG_ON(sd != entity->sched_data);
> +	BUG_ON(!st);
> +
> +	bfq_update_vtime(st);
> +	bfq_active_extract(st, entity);
> +	sd->active_entity = entity;
> +	entity->service = 0;
> +	elv_log_ioq(efqd, ioq, "set_next_ioq");
> +}
> +
> +/* Get next queue for service. */
> +struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = NULL;
> +	struct io_queue *ioq = NULL;
> +	struct io_sched_data *sd;
> +
> +	/*
> +	 * We should not call lookup when an entity is active, as doing
> +	 * lookup can result in an erroneous vtime jump.
> +	 */
> +	BUG_ON(efqd->active_queue != NULL);
> +
> +	if (!efqd->busy_queues)
> +		return NULL;
> +
> +	sd = &efqd->root_group->sched_data;
> +	entity = bfq_lookup_next_entity(sd, 1);
> +
> +	BUG_ON(!entity);
> +	if (extract)
> +		entity->service = 0;
> +	ioq = io_entity_to_ioq(entity);
> +
> +	return ioq;
> +}
> +
> +/*
> + * coop tells that io scheduler selected a queue for us and we did not

coop?

> + * select the next queue based on fairness.
> + */
> +static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> +					int coop)
> +{
> +	struct request_queue *q = efqd->queue;
> +
> +	if (ioq) {
> +		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
> +							efqd->busy_queues);
> +		ioq->slice_end = 0;
> +
> +		elv_clear_ioq_wait_request(ioq);
> +		elv_clear_ioq_must_dispatch(ioq);
> +		elv_mark_ioq_slice_new(ioq);
> +
> +		del_timer(&efqd->idle_slice_timer);
> +	}
> +
> +	efqd->active_queue = ioq;
> +
> +	/* Let iosched know if it wants to take some action */
> +	if (ioq) {
> +		if (q->elevator->ops->elevator_active_ioq_set_fn)
> +			q->elevator->ops->elevator_active_ioq_set_fn(q,
> +							ioq->sched_queue, coop);
> +	}
> +}
> +
> +/* Get and set a new active queue for service. */
> +struct io_queue *elv_set_active_ioq(struct request_queue *q,
> +						struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	int coop = 0;
> +
> +	if (!ioq)
> +		ioq = elv_get_next_ioq(q, 1);
> +	else {
> +		elv_set_next_ioq(q, ioq);
> +		/*
> +		 * io scheduler selected the next queue for us. Pass this
> +		 * this info back to io scheudler. cfq currently uses it
> +		 * to reset coop flag on the queue.
> +		 */
> +		coop = 1;
> +	}
> +	__elv_set_active_ioq(efqd, ioq, coop);
> +	return ioq;
> +}
> +
> +void elv_reset_active_ioq(struct elv_fq_data *efqd)
> +{
> +	struct request_queue *q = efqd->queue;
> +	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
> +
> +	if (q->elevator->ops->elevator_active_ioq_reset_fn)
> +		q->elevator->ops->elevator_active_ioq_reset_fn(q,
> +							ioq->sched_queue);
> +	efqd->active_queue = NULL;
> +	del_timer(&efqd->idle_slice_timer);
> +}
> +
> +void elv_activate_ioq(struct io_queue *ioq, int add_front)
> +{
> +	bfq_activate_entity(&ioq->entity, add_front);
> +}
> +
> +void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> +					int requeue)
> +{
> +	bfq_deactivate_entity(&ioq->entity, requeue);
> +}
> +
> +/* Called when an inactive queue receives a new request. */
> +void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
> +{
> +	BUG_ON(elv_ioq_busy(ioq));
> +	BUG_ON(ioq == efqd->active_queue);
> +	elv_log_ioq(efqd, ioq, "add to busy");
> +	elv_activate_ioq(ioq, 0);
> +	elv_mark_ioq_busy(ioq);
> +	efqd->busy_queues++;
> +	if (elv_ioq_class_rt(ioq)) {
> +		struct io_group *iog = ioq_to_io_group(ioq);
> +		iog->busy_rt_queues++;
> +	}
> +}
> +
> +void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
> +					int requeue)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	BUG_ON(!elv_ioq_busy(ioq));
> +	BUG_ON(ioq->nr_queued);
> +	elv_log_ioq(efqd, ioq, "del from busy");
> +	elv_clear_ioq_busy(ioq);
> +	BUG_ON(efqd->busy_queues == 0);
> +	efqd->busy_queues--;
> +	if (elv_ioq_class_rt(ioq)) {
> +		struct io_group *iog = ioq_to_io_group(ioq);
> +		iog->busy_rt_queues--;
> +	}
> +
> +	elv_deactivate_ioq(efqd, ioq, requeue);
> +}
> +
> +/*
> + * Do the accounting. Determine how much service (in terms of time slices)
> + * current queue used and adjust the start, finish time of queue and vtime
> + * of the tree accordingly.
> + *
> + * Determining the service used in terms of time is tricky in certain
> + * situations. Especially when underlying device supports command queuing
> + * and requests from multiple queues can be there at same time, then it
> + * is not clear which queue consumed how much of disk time.
> + *
> + * To mitigate this problem, cfq starts the time slice of the queue only
> + * after first request from the queue has completed. This does not work
> + * very well if we expire the queue before we wait for first and more
> + * request to finish from the queue. For seeky queues, we will expire the
> + * queue after dispatching few requests without waiting and start dispatching
> + * from next queue.
> + *
> + * Not sure how to determine the time consumed by queue in such scenarios.
> + * Currently as a crude approximation, we are charging 25% of time slice
> + * for such cases. A better mechanism is needed for accurate accounting.
> + */
> +void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
> +
> +	assert_spin_locked(q->queue_lock);
> +	elv_log_ioq(efqd, ioq, "slice expired");
> +
> +	if (elv_ioq_wait_request(ioq))
> +		del_timer(&efqd->idle_slice_timer);
> +
> +	elv_clear_ioq_wait_request(ioq);
> +
> +	/*
> +	 * if ioq->slice_end = 0, that means a queue was expired before first
> +	 * reuqest from the queue got completed. Of course we are not planning
> +	 * to idle on the queue otherwise we would not have expired it.
> +	 *
> +	 * Charge for the 25% slice in such cases. This is not the best thing
> +	 * to do but at the same time not very sure what's the next best
> +	 * thing to do.
> +	 *
> +	 * This arises from that fact that we don't have the notion of
> +	 * one queue being operational at one time. io scheduler can dispatch
> +	 * requests from multiple queues in one dispatch round. Ideally for
> +	 * more accurate accounting of exact disk time used by disk, one
> +	 * should dispatch requests from only one queue and wait for all
> +	 * the requests to finish. But this will reduce throughput.
> +	 */
> +	if (!ioq->slice_end)
> +		slice_used = entity->budget/4;
> +	else {
> +		if (time_after(ioq->slice_end, jiffies)) {
> +			slice_unused = ioq->slice_end - jiffies;
> +			if (slice_unused == entity->budget) {
> +				/*
> +				 * queue got expired immediately after
> +				 * completing first request. Charge 25% of
> +				 * slice.
> +				 */
> +				slice_used = entity->budget/4;
> +			} else
> +				slice_used = entity->budget - slice_unused;
> +		} else {
> +			slice_overshoot = jiffies - ioq->slice_end;
> +			slice_used = entity->budget + slice_overshoot;
> +		}
> +	}
> +
> +	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
> +			jiffies);
> +	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
> +				slice_used, entity->budget, slice_overshoot);
> +	elv_ioq_served(ioq, slice_used);
> +
> +	BUG_ON(ioq != efqd->active_queue);
> +	elv_reset_active_ioq(efqd);
> +
> +	if (!ioq->nr_queued)
> +		elv_del_ioq_busy(q->elevator, ioq, 1);
> +	else
> +		elv_activate_ioq(ioq, 0);
> +}
> +EXPORT_SYMBOL(__elv_ioq_slice_expired);
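Just to check that I follow the accounting arithmetic above, a
made-up example with entity->budget = 100 jiffies:

	expired 30 jiffies before slice_end         -> slice_used = 100 - 30 = 70
	expired 20 jiffies past slice_end           -> slice_used = 100 + 20 = 120
	slice_end never set (or slice fully unused) -> slice_used = 100/4 = 25

Is that the intended charging?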
> +
> +/*
> + *  Expire the ioq.
> + */
> +void elv_ioq_slice_expired(struct request_queue *q)
> +{
> +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> +
> +	if (ioq)
> +		__elv_ioq_slice_expired(q, ioq);
> +}
> +
> +/*
> + * Check if new_cfqq should preempt the currently active queue. Return 0 for
> + * no or if we aren't sure, a 1 will cause a preemption attempt.
> + */
> +int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
> +			struct request *rq)
> +{
> +	struct io_queue *ioq;
> +	struct elevator_queue *eq = q->elevator;
> +	struct io_entity *entity, *new_entity;
> +
> +	ioq = elv_active_ioq(eq);
> +
> +	if (!ioq)
> +		return 0;
> +
> +	entity = &ioq->entity;
> +	new_entity = &new_ioq->entity;
> +
> +	/*
> +	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
> +	 */
> +
> +	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
> +	    && entity->ioprio_class != IOPRIO_CLASS_RT)
> +		return 1;
> +	/*
> +	 * Allow an BE request to pre-empt an ongoing IDLE clas timeslice.
> +	 */
> +
> +	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
> +	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
> +		return 1;
> +
> +	/*
> +	 * Check with io scheduler if it has additional criterion based on
> +	 * which it wants to preempt existing queue.
> +	 */
> +	if (eq->ops->elevator_should_preempt_fn)
> +		return eq->ops->elevator_should_preempt_fn(q,
> +						ioq_sched_queue(new_ioq), rq);
> +
> +	return 0;
> +}
> +
> +static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
> +{
> +	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
> +	elv_ioq_slice_expired(q);
> +
> +	/*
> +	 * Put the new queue at the front of the of the current list,
> +	 * so we know that it will be selected next.
> +	 */
> +
> +	elv_activate_ioq(ioq, 1);
> +	elv_ioq_set_slice_end(ioq, 0);
> +	elv_mark_ioq_slice_new(ioq);
> +}
> +
> +void elv_ioq_request_add(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *ioq = rq->ioq;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	BUG_ON(!efqd);
> +	BUG_ON(!ioq);
> +	efqd->rq_queued++;
> +	ioq->nr_queued++;
> +
> +	if (!elv_ioq_busy(ioq))
> +		elv_add_ioq_busy(efqd, ioq);
> +
> +	elv_ioq_update_io_thinktime(ioq);
> +	elv_ioq_update_idle_window(q->elevator, ioq, rq);
> +
> +	if (ioq == elv_active_ioq(q->elevator)) {
> +		/*
> +		 * Remember that we saw a request from this process, but
> +		 * don't start queuing just yet. Otherwise we risk seeing lots
> +		 * of tiny requests, because we disrupt the normal plugging
> +		 * and merging. If the request is already larger than a single
> +		 * page, let it rip immediately. For that case we assume that
> +		 * merging is already done. Ditto for a busy system that
> +		 * has other work pending, don't risk delaying until the
> +		 * idle timer unplug to continue working.
> +		 */
> +		if (elv_ioq_wait_request(ioq)) {
> +			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
> +			    efqd->busy_queues > 1) {
> +				del_timer(&efqd->idle_slice_timer);
> +				blk_start_queueing(q);
> +			}
> +			elv_mark_ioq_must_dispatch(ioq);
> +		}
> +	} else if (elv_should_preempt(q, ioq, rq)) {
> +		/*
> +		 * not the active queue - expire current slice if it is
> +		 * idle and has expired it's mean thinktime or this new queue
> +		 * has some old slice time left and is of higher priority or
> +		 * this new queue is RT and the current one is BE
> +		 */
> +		elv_preempt_queue(q, ioq);
> +		blk_start_queueing(q);
> +	}
> +}
> +
> +void elv_idle_slice_timer(unsigned long data)
> +{
> +	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
> +	struct io_queue *ioq;
> +	unsigned long flags;
> +	struct request_queue *q = efqd->queue;
> +
> +	elv_log(efqd, "idle timer fired");
> +
> +	spin_lock_irqsave(q->queue_lock, flags);
> +
> +	ioq = efqd->active_queue;
> +
> +	if (ioq) {
> +
> +		/*
> +		 * We saw a request before the queue expired, let it through
> +		 */
> +		if (elv_ioq_must_dispatch(ioq))
> +			goto out_kick;
> +
> +		/*
> +		 * expired
> +		 */
> +		if (elv_ioq_slice_used(ioq))
> +			goto expire;
> +
> +		/*
> +		 * only expire and reinvoke request handler, if there are
> +		 * other queues with pending requests
> +		 */
> +		if (!elv_nr_busy_ioq(q->elevator))
> +			goto out_cont;
> +
> +		/*
> +		 * not expired and it has a request pending, let it dispatch
> +		 */
> +		if (ioq->nr_queued)
> +			goto out_kick;
> +	}
> +expire:
> +	elv_ioq_slice_expired(q);
> +out_kick:
> +	elv_schedule_dispatch(q);
> +out_cont:
> +	spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
> +void elv_ioq_arm_slice_timer(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> +	unsigned long sl;
> +
> +	BUG_ON(!ioq);
> +
> +	/*
> +	 * SSD device without seek penalty, disable idling. But only do so
> +	 * for devices that support queuing, otherwise we still have a problem
> +	 * with sync vs async workloads.
> +	 */
> +	if (blk_queue_nonrot(q) && efqd->hw_tag)
> +		return;
> +
> +	/*
> +	 * still requests with the driver, don't idle
> +	 */
> +	if (efqd->rq_in_driver)
> +		return;
> +
> +	/*
> +	 * idle is disabled, either manually or by past process history
> +	 */
> +	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
> +		return;
> +
> +	/*
> +	 * may be iosched got its own idling logic. In that case io
> +	 * schduler will take care of arming the timer, if need be.
> +	 */
> +	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
> +		q->elevator->ops->elevator_arm_slice_timer_fn(q,
> +						ioq->sched_queue);
> +	} else {
> +		elv_mark_ioq_wait_request(ioq);
> +		sl = efqd->elv_slice_idle;
> +		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
> +		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
> +	}
> +}
> +
> +/* Common layer function to select the next queue to dispatch from */
> +void *elv_fq_select_ioq(struct request_queue *q, int force)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
> +	struct io_group *iog;
> +
> +	if (!elv_nr_busy_ioq(q->elevator))
> +		return NULL;
> +
> +	if (ioq == NULL)
> +		goto new_queue;
> +
> +	/*
> +	 * Force dispatch. Continue to dispatch from current queue as long
> +	 * as it has requests.
> +	 */
> +	if (unlikely(force)) {
> +		if (ioq->nr_queued)
> +			goto keep_queue;
> +		else
> +			goto expire;
> +	}
> +
> +	/*
> +	 * The active queue has run out of time, expire it and select new.
> +	 */
> +	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
> +		goto expire;
> +
> +	/*
> +	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
> +	 * cfqq.
> +	 */
> +	iog = ioq_to_io_group(ioq);
> +
> +	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> +		/*
> +		 * We simulate this as cfqq timed out so that it gets to bank
> +		 * the remaining of its time slice.
> +		 */
> +		elv_log_ioq(efqd, ioq, "preempt");
> +		goto expire;
> +	}
> +
> +	/*
> +	 * The active queue has requests and isn't expired, allow it to
> +	 * dispatch.
> +	 */
> +
> +	if (ioq->nr_queued)
> +		goto keep_queue;
> +
> +	/*
> +	 * If another queue has a request waiting within our mean seek
> +	 * distance, let it run.  The expire code will check for close
> +	 * cooperators and put the close queue at the front of the service
> +	 * tree.
> +	 */
> +	new_ioq = elv_close_cooperator(q, ioq, 0);
> +	if (new_ioq)
> +		goto expire;
> +
> +	/*
> +	 * No requests pending. If the active queue still has requests in
> +	 * flight or is idling for a new request, allow either of these
> +	 * conditions to happen (or time out) before selecting a new queue.
> +	 */
> +
> +	if (timer_pending(&efqd->idle_slice_timer) ||
> +	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
> +		ioq = NULL;
> +		goto keep_queue;
> +	}
> +
> +expire:
> +	elv_ioq_slice_expired(q);
> +new_queue:
> +	ioq = elv_set_active_ioq(q, new_ioq);
> +keep_queue:
> +	return ioq;
> +}
> +
> +/* A request got removed from io_queue. Do the accounting */
> +void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
> +{
> +	struct io_queue *ioq;
> +	struct elv_fq_data *efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	ioq = rq->ioq;
> +	BUG_ON(!ioq);
> +	ioq->nr_queued--;
> +
> +	efqd = ioq->efqd;
> +	BUG_ON(!efqd);
> +	efqd->rq_queued--;
> +
> +	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
> +		elv_del_ioq_busy(e, ioq, 1);
> +}
> +
> +/* A request got dispatched. Do the accounting. */
> +void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
> +{
> +	struct io_queue *ioq = rq->ioq;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	BUG_ON(!ioq);
> +	elv_ioq_request_dispatched(ioq);
> +	elv_ioq_request_removed(e, rq);
> +	elv_clear_ioq_must_dispatch(ioq);
> +}
> +
> +void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	efqd->rq_in_driver++;
> +	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
> +						efqd->rq_in_driver);
> +}
> +
> +void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	WARN_ON(!efqd->rq_in_driver);
> +	efqd->rq_in_driver--;
> +	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
> +						efqd->rq_in_driver);
> +}
> +
> +/*
> + * Update hw_tag based on peak queue depth over 50 samples under
> + * sufficient load.
> + */
> +static void elv_update_hw_tag(struct elv_fq_data *efqd)
> +{
> +	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
> +		efqd->rq_in_driver_peak = efqd->rq_in_driver;
> +
> +	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
> +	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
> +		return;
> +
> +	if (efqd->hw_tag_samples++ < 50)
> +		return;
> +
> +	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
> +		efqd->hw_tag = 1;
> +	else
> +		efqd->hw_tag = 0;
> +
> +	efqd->hw_tag_samples = 0;
> +	efqd->rq_in_driver_peak = 0;
> +}
> +
> +/*
> + * If ioscheduler has functionality of keeping track of close cooperator, check
> + * with it if it has got a closely co-operating queue.
> + */
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe)
> +{
> +	struct elevator_queue *e = q->elevator;
> +	struct io_queue *new_ioq = NULL;
> +
> +	/*
> +	 * Currently this feature is supported only for flat hierarchy or
> +	 * root group queues so that default cfq behavior is not changed.
> +	 */
> +	if (!is_root_group_ioq(q, ioq))
> +		return NULL;
> +
> +	if (q->elevator->ops->elevator_close_cooperator_fn)
> +		new_ioq = e->ops->elevator_close_cooperator_fn(q,
> +						ioq->sched_queue, probe);
> +
> +	/* Only select co-operating queue if it belongs to root group */
> +	if (new_ioq && !is_root_group_ioq(q, new_ioq))
> +		return NULL;
> +
> +	return new_ioq;
> +}
> +
> +/* A request got completed from io_queue. Do the accounting. */
> +void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> +{
> +	const int sync = rq_is_sync(rq);
> +	struct io_queue *ioq;
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> +		return;
> +
> +	ioq = rq->ioq;
> +
> +	elv_log_ioq(efqd, ioq, "complete");
> +
> +	elv_update_hw_tag(efqd);
> +
> +	WARN_ON(!efqd->rq_in_driver);
> +	WARN_ON(!ioq->dispatched);
> +	efqd->rq_in_driver--;
> +	ioq->dispatched--;
> +
> +	if (sync)
> +		ioq->last_end_request = jiffies;
> +
> +	/*
> +	 * If this is the active queue, check if it needs to be expired,
> +	 * or if we want to idle in case it has no pending requests.
> +	 */
> +
> +	if (elv_active_ioq(q->elevator) == ioq) {
> +		if (elv_ioq_slice_new(ioq)) {
> +			elv_ioq_set_prio_slice(q, ioq);
> +			elv_clear_ioq_slice_new(ioq);
> +		}
> +		/*
> +		 * If there are no requests waiting in this queue, and
> +		 * there are other queues ready to issue requests, AND
> +		 * those other queues are issuing requests within our
> +		 * mean seek distance, give them a chance to run instead
> +		 * of idling.
> +		 */
> +		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
> +			elv_ioq_slice_expired(q);
> +		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
> +			 && sync && !rq_noidle(rq))
> +			elv_ioq_arm_slice_timer(q);
> +	}
> +
> +	if (!efqd->rq_in_driver)
> +		elv_schedule_dispatch(q);
> +}
> +
> +struct io_group *io_lookup_io_group_current(struct request_queue *q)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +
> +	return efqd->root_group;
> +}
> +EXPORT_SYMBOL(io_lookup_io_group_current);
> +
> +void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> +					int ioprio)
> +{
> +	struct io_queue *ioq = NULL;
> +
> +	switch (ioprio_class) {
> +	case IOPRIO_CLASS_RT:
> +		ioq = iog->async_queue[0][ioprio];
> +		break;
> +	case IOPRIO_CLASS_BE:
> +		ioq = iog->async_queue[1][ioprio];
> +		break;
> +	case IOPRIO_CLASS_IDLE:
> +		ioq = iog->async_idle_queue;
> +		break;
> +	default:
> +		BUG();
> +	}
> +
> +	if (ioq)
> +		return ioq->sched_queue;
> +	return NULL;
> +}
> +EXPORT_SYMBOL(io_group_async_queue_prio);
> +
> +void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> +					int ioprio, struct io_queue *ioq)
> +{
> +	switch (ioprio_class) {
> +	case IOPRIO_CLASS_RT:
> +		iog->async_queue[0][ioprio] = ioq;
> +		break;
> +	case IOPRIO_CLASS_BE:
> +		iog->async_queue[1][ioprio] = ioq;
> +		break;
> +	case IOPRIO_CLASS_IDLE:
> +		iog->async_idle_queue = ioq;
> +		break;
> +	default:
> +		BUG();
> +	}
> +
> +	/*
> +	 * Take the group reference and pin the queue. Group exit will
> +	 * clean it up
> +	 */
> +	elv_get_ioq(ioq);
> +}
> +EXPORT_SYMBOL(io_group_set_async_queue);
> +
> +/*
> + * Release all the io group references to its async queues.
> + */
> +void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
> +{
> +	int i, j;
> +
> +	for (i = 0; i < 2; i++)
> +		for (j = 0; j < IOPRIO_BE_NR; j++)
> +			elv_release_ioq(e, &iog->async_queue[i][j]);
> +
> +	/* Free up async idle queue */
> +	elv_release_ioq(e, &iog->async_idle_queue);
> +}
> +
> +struct io_group *io_alloc_root_group(struct request_queue *q,
> +					struct elevator_queue *e, void *key)
> +{
> +	struct io_group *iog;
> +	int i;
> +
> +	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
> +	if (iog == NULL)
> +		return NULL;
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
> +		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
> +
> +	return iog;
> +}
> +
> +void io_free_root_group(struct elevator_queue *e)
> +{
> +	struct io_group *iog = e->efqd.root_group;
> +	struct io_service_tree *st;
> +	int i;
> +
> +	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
> +		st = iog->sched_data.service_tree + i;
> +		io_flush_idle_tree(st);
> +	}
> +
> +	io_put_io_group_queues(e, iog);
> +	kfree(iog);
> +}
> +
> +static void elv_slab_kill(void)
> +{
> +	/*
> +	 * Caller already ensured that pending RCU callbacks are completed,
> +	 * so we should have no busy allocations at this point.
> +	 */
> +	if (elv_ioq_pool)
> +		kmem_cache_destroy(elv_ioq_pool);
> +}
> +
> +static int __init elv_slab_setup(void)
> +{
> +	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
> +	if (!elv_ioq_pool)
> +		goto fail;
> +
> +	return 0;
> +fail:
> +	elv_slab_kill();
> +	return -ENOMEM;
> +}
> +
> +/* Initialize fair queueing data associated with elevator */
> +int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
> +{
> +	struct io_group *iog;
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return 0;
> +
> +	iog = io_alloc_root_group(q, e, efqd);
> +	if (iog == NULL)
> +		return 1;
> +
> +	efqd->root_group = iog;
> +	efqd->queue = q;
> +
> +	init_timer(&efqd->idle_slice_timer);
> +	efqd->idle_slice_timer.function = elv_idle_slice_timer;
> +	efqd->idle_slice_timer.data = (unsigned long) efqd;
> +
> +	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
> +
> +	efqd->elv_slice[0] = elv_slice_async;
> +	efqd->elv_slice[1] = elv_slice_sync;
> +	efqd->elv_slice_idle = elv_slice_idle;
> +	efqd->hw_tag = 1;
> +
> +	return 0;
> +}
> +
> +/*
> + * elv_exit_fq_data is called before we call elevator_exit_fn. Before
> + * we ask elevator to cleanup its queues, we do the cleanup here so
> + * that all the group and idle tree references to ioq are dropped. Later
> + * during elevator cleanup, ioc reference will be dropped which will lead
> + * to removal of ioscheduler queue as well as associated ioq object.
> + */
> +void elv_exit_fq_data(struct elevator_queue *e)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	elv_shutdown_timer_wq(e);
> +
> +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> +	io_free_root_group(e);
> +}
> +
> +/*
> + * This is called after the io scheduler has cleaned up its data structres.
> + * I don't think that this function is required. Right now just keeping it
> + * because cfq cleans up timer and work queue again after freeing up
> + * io contexts. To me io scheduler has already been drained out, and all
> + * the active queue have already been expired so time and work queue should
> + * not been activated during cleanup process.
> + *
> + * Keeping it here for the time being. Will get rid of it later.
> + */
> +void elv_exit_fq_data_post(struct elevator_queue *e)
> +{
> +	struct elv_fq_data *efqd = &e->efqd;
> +
> +	if (!elv_iosched_fair_queuing_enabled(e))
> +		return;
> +
> +	elv_shutdown_timer_wq(e);
> +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> +}
> +
> +
> +static int __init elv_fq_init(void)
> +{
> +	if (elv_slab_setup())
> +		return -ENOMEM;
> +
> +	/* could be 0 on HZ < 1000 setups */
> +
> +	if (!elv_slice_async)
> +		elv_slice_async = 1;
> +
> +	if (!elv_slice_idle)
> +		elv_slice_idle = 1;
> +
> +	return 0;
> +}
> +
> +module_init(elv_fq_init);
> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> new file mode 100644
> index 0000000..5b6c1cc
> --- /dev/null
> +++ b/block/elevator-fq.h
> @@ -0,0 +1,473 @@
> +/*
> + * BFQ: data structures and common functions prototypes.
> + *
> + * Based on ideas and code from CFQ:
> + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> + *
> + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> + *		      Paolo Valente <paolo.valente@unimore.it>
> + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> + * 	              Nauman Rafique <nauman@google.com>
> + */
> +
> +#include <linux/blkdev.h>
> +
> +#ifndef _BFQ_SCHED_H
> +#define _BFQ_SCHED_H
> +
> +#define IO_IOPRIO_CLASSES	3
> +
> +typedef u64 bfq_timestamp_t;
> +typedef unsigned long bfq_weight_t;
> +typedef unsigned long bfq_service_t;

Does this abstraction really provide any benefit? Why not use the
standard C types directly and make the code easier to read?

> +struct io_entity;
> +struct io_queue;
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +
> +#define ELV_ATTR(name) \
> +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> +
> +/**
> + * struct bfq_service_tree - per ioprio_class service tree.

The comment is stale; it still uses the old bfq_service_tree name rather than io_service_tree.

> + * @active: tree for active entities (i.e., those backlogged).
> + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> + * @first_idle: idle entity with minimum F_i.
> + * @last_idle: idle entity with maximum F_i.
> + * @vtime: scheduler virtual time.
> + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> + *
> + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> + * ioprio_class has its own independent scheduler, and so its own
> + * bfq_service_tree.  All the fields are protected by the queue lock
> + * of the containing efqd.
> + */
> +struct io_service_tree {
> +	struct rb_root active;
> +	struct rb_root idle;
> +
> +	struct io_entity *first_idle;
> +	struct io_entity *last_idle;
> +
> +	bfq_timestamp_t vtime;
> +	bfq_weight_t wsum;
> +};
> +
> +/**
> + * struct bfq_sched_data - multi-class scheduler.

Again the naming convention is broken; you need to change several
bfq's to io's :)

> + * @active_entity: entity under service.
> + * @next_active: head-of-the-line entity in the scheduler.
> + * @service_tree: array of service trees, one per ioprio_class.
> + *
> + * bfq_sched_data is the basic scheduler queue.  It supports three
> + * ioprio_classes, and can be used either as a toplevel queue or as
> + * an intermediate queue on a hierarchical setup.
> + * @next_active points to the active entity of the sched_data service
> + * trees that will be scheduled next.
> + *
> + * The supported ioprio_classes are the same as in CFQ, in descending
> + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
> + * Requests from higher priority queues are served before all the
> + * requests from lower priority queues; among requests of the same
> + * queue requests are served according to B-WF2Q+.
> + * All the fields are protected by the queue lock of the containing bfqd.
> + */
> +struct io_sched_data {
> +	struct io_entity *active_entity;
> +	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
> +};
> +
> +/**
> + * struct bfq_entity - schedulable entity.
> + * @rb_node: service_tree member.
> + * @on_st: flag, true if the entity is on a tree (either the active or
> + *         the idle one of its service_tree).
> + * @finish: B-WF2Q+ finish timestamp (aka F_i).
> + * @start: B-WF2Q+ start timestamp (aka S_i).

Could you mention which key is used in the rb_tree? start and finish
sound like a range, so my suspicion is that start is used.

> + * @tree: tree the entity is enqueued into; %NULL if not on a tree.
> + * @min_start: minimum start time of the (active) subtree rooted at
> + *             this entity; used for O(log N) lookups into active trees.

"Used for O(log N) lookups" makes no sense to me; an rbtree already has
O(log N) worst-case lookup time, so what is the comment trying to say?

> + * @service: service received during the last round of service.
> + * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
> + * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
> + * @parent: parent entity, for hierarchical scheduling.
> + * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
> + *                 associated scheduler queue, %NULL on leaf nodes.
> + * @sched_data: the scheduler queue this entity belongs to.
> + * @ioprio: the ioprio in use.
> + * @new_ioprio: when an ioprio change is requested, the new ioprio value
> + * @ioprio_class: the ioprio_class in use.
> + * @new_ioprio_class: when an ioprio_class change is requested, the new
> + *                    ioprio_class value.
> + * @ioprio_changed: flag, true when the user requested an ioprio or
> + *                  ioprio_class change.
> + *
> + * A bfq_entity is used to represent either a bfq_queue (leaf node in the
> + * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
> + * entity belongs to the sched_data of the parent group in the cgroup
> + * hierarchy.  Non-leaf entities have also their own sched_data, stored
> + * in @my_sched_data.
> + *
> + * Each entity stores independently its priority values; this would allow
> + * different weights on different devices, but this functionality is not
> + * exported to userspace by now.  Priorities are updated lazily, first
> + * storing the new values into the new_* fields, then setting the
> + * @ioprio_changed flag.  As soon as there is a transition in the entity
> + * state that allows the priority update to take place the effective and
> + * the requested priority values are synchronized.
> + *
> + * The weight value is calculated from the ioprio to export the same
> + * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
> + * queues that do not spend too much time to consume their budget and
> + * have true sequential behavior, and when there are no external factors
> + * breaking anticipation) the relative weights at each level of the
> + * cgroups hierarchy should be guaranteed.
> + * All the fields are protected by the queue lock of the containing bfqd.
> + */
> +struct io_entity {
> +	struct rb_node rb_node;
> +
> +	int on_st;
> +
> +	bfq_timestamp_t finish;
> +	bfq_timestamp_t start;
> +
> +	struct rb_root *tree;
> +
> +	bfq_timestamp_t min_start;
> +
> +	bfq_service_t service, budget;
> +	bfq_weight_t weight;
> +
> +	struct io_entity *parent;
> +
> +	struct io_sched_data *my_sched_data;
> +	struct io_sched_data *sched_data;
> +
> +	unsigned short ioprio, new_ioprio;
> +	unsigned short ioprio_class, new_ioprio_class;
> +
> +	int ioprio_changed;
> +};
> +
> +/*
> + * A common structure embedded by every io scheduler into their respective
> + * queue structure.
> + */
> +struct io_queue {
> +	struct io_entity entity;

So the io_queue has an abstract entity called io_entity that contains
its QoS parameters? Correct?

> +	atomic_t ref;
> +	unsigned int flags;
> +
> +	/* Pointer to generic elevator data structure */
> +	struct elv_fq_data *efqd;
> +	pid_t pid;

Why do we store the pid?

> +
> +	/* Number of requests queued on this io queue */
> +	unsigned long nr_queued;
> +
> +	/* Requests dispatched from this queue */
> +	int dispatched;
> +
> +	/* Keep a track of think time of processes in this queue */
> +	unsigned long last_end_request;
> +	unsigned long ttime_total;
> +	unsigned long ttime_samples;
> +	unsigned long ttime_mean;
> +
> +	unsigned long slice_end;
> +
> +	/* Pointer to io scheduler's queue */
> +	void *sched_queue;
> +};
> +
> +struct io_group {
> +	struct io_sched_data sched_data;
> +
> +	/* async_queue and idle_queue are used only for cfq */
> +	struct io_queue *async_queue[2][IOPRIO_BE_NR];

Again, the hard-coded 2 is confusing.

> +	struct io_queue *async_idle_queue;
> +
> +	/*
> +	 * Used to track any pending rt requests so we can pre-empt current
> +	 * non-RT cfqq in service when this value is non-zero.
> +	 */
> +	unsigned int busy_rt_queues;
> +};
> +
> +struct elv_fq_data {

What does fq stand for?

> +	struct io_group *root_group;
> +
> +	struct request_queue *queue;
> +	unsigned int busy_queues;
> +
> +	/* Number of requests queued */
> +	int rq_queued;
> +
> +	/* Pointer to the ioscheduler queue being served */
> +	void *active_queue;
> +
> +	int rq_in_driver;
> +	int hw_tag;
> +	int hw_tag_samples;
> +	int rq_in_driver_peak;

Some comments on rq_in_driver and rq_in_driver_peak would be nice.

> +
> +	/*
> +	 * elevator fair queuing layer has the capability to provide idling
> +	 * for ensuring fairness for processes doing dependent reads.
> +	 * This might be needed to ensure fairness among two processes doing
> +	 * synchronous reads in two different cgroups. noop and deadline don't
> +	 * have any notion of anticipation/idling. As of now, these are the
> +	 * users of this functionality.
> +	 */
> +	unsigned int elv_slice_idle;
> +	struct timer_list idle_slice_timer;
> +	struct work_struct unplug_work;
> +
> +	unsigned int elv_slice[2];

Why [2]? It makes the code harder to read.

> +};
> +
> +extern int elv_slice_idle;
> +extern int elv_slice_async;
> +
> +/* Logging facilities. */
> +#define elv_log_ioq(efqd, ioq, fmt, args...) \
> +	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
> +				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
> +
> +#define elv_log(efqd, fmt, args...) \
> +	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
> +
> +#define ioq_sample_valid(samples)   ((samples) > 80)
> +
> +/* Some shared queue flag manipulation functions among elevators */
> +
> +enum elv_queue_state_flags {
> +	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
> +	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
> +	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
> +	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
> +	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
> +	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
> +	ELV_QUEUE_FLAG_NR,
> +};
> +
> +#define ELV_IO_QUEUE_FLAG_FNS(name)					\
> +static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
> +{                                                                       \
> +	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
> +}                                                                       \
> +static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
> +{                                                                       \
> +	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
> +}                                                                       \
> +static inline int elv_ioq_##name(struct io_queue *ioq)         		\
> +{                                                                       \
> +	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
> +}
> +
> +ELV_IO_QUEUE_FLAG_FNS(busy)
> +ELV_IO_QUEUE_FLAG_FNS(sync)
> +ELV_IO_QUEUE_FLAG_FNS(wait_request)
> +ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
> +ELV_IO_QUEUE_FLAG_FNS(idle_window)
> +ELV_IO_QUEUE_FLAG_FNS(slice_new)
> +
> +static inline struct io_service_tree *
> +io_entity_service_tree(struct io_entity *entity)
> +{
> +	struct io_sched_data *sched_data = entity->sched_data;
> +	unsigned int idx = entity->ioprio_class - 1;
> +
> +	BUG_ON(idx >= IO_IOPRIO_CLASSES);
> +	BUG_ON(sched_data == NULL);
> +
> +	return sched_data->service_tree + idx;
> +}
> +
> +/* A request got dispatched from the io_queue. Do the accounting. */
> +static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
> +{
> +	ioq->dispatched++;
> +}
> +
> +static inline int elv_ioq_slice_used(struct io_queue *ioq)
> +{
> +	if (elv_ioq_slice_new(ioq))
> +		return 0;
> +	if (time_before(jiffies, ioq->slice_end))
> +		return 0;
> +
> +	return 1;
> +}
> +
> +/* How many request are currently dispatched from the queue */
> +static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
> +{
> +	return ioq->dispatched;
> +}
> +
> +/* How many request are currently queued in the queue */
> +static inline int elv_ioq_nr_queued(struct io_queue *ioq)
> +{
> +	return ioq->nr_queued;
> +}
> +
> +static inline void elv_get_ioq(struct io_queue *ioq)
> +{
> +	atomic_inc(&ioq->ref);
> +}
> +
> +static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
> +						unsigned long slice_end)
> +{
> +	ioq->slice_end = slice_end;
> +}
> +
> +static inline int elv_ioq_class_idle(struct io_queue *ioq)
> +{
> +	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
> +}
> +
> +static inline int elv_ioq_class_rt(struct io_queue *ioq)
> +{
> +	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
> +}
> +
> +static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
> +{
> +	return ioq->entity.new_ioprio_class;
> +}
> +
> +static inline int elv_ioq_ioprio(struct io_queue *ioq)
> +{
> +	return ioq->entity.new_ioprio;
> +}
> +
> +static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
> +						int ioprio_class)
> +{
> +	ioq->entity.new_ioprio_class = ioprio_class;
> +	ioq->entity.ioprio_changed = 1;
> +}
> +
> +static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
> +{
> +	ioq->entity.new_ioprio = ioprio;
> +	ioq->entity.ioprio_changed = 1;
> +}
> +
> +static inline void *ioq_sched_queue(struct io_queue *ioq)
> +{
> +	if (ioq)
> +		return ioq->sched_queue;
> +	return NULL;
> +}
> +
> +static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
> +{
> +	return container_of(ioq->entity.sched_data, struct io_group,
> +						sched_data);
> +}
> +
> +extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
> +extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
> +						size_t count);
> +
> +/* Functions used by elevator.c */
> +extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
> +extern void elv_exit_fq_data(struct elevator_queue *e);
> +extern void elv_exit_fq_data_post(struct elevator_queue *e);
> +
> +extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
> +extern void elv_ioq_request_removed(struct elevator_queue *e,
> +					struct request *rq);
> +extern void elv_fq_dispatched_request(struct elevator_queue *e,
> +					struct request *rq);
> +
> +extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
> +extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
> +
> +extern void elv_ioq_completed_request(struct request_queue *q,
> +				struct request *rq);
> +
> +extern void *elv_fq_select_ioq(struct request_queue *q, int force);
> +extern struct io_queue *rq_ioq(struct request *rq);
> +
> +/* Functions used by io schedulers */
> +extern void elv_put_ioq(struct io_queue *ioq);
> +extern void __elv_ioq_slice_expired(struct request_queue *q,
> +					struct io_queue *ioq);
> +extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> +		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
> +extern void elv_schedule_dispatch(struct request_queue *q);
> +extern int elv_hw_tag(struct elevator_queue *e);
> +extern void *elv_active_sched_queue(struct elevator_queue *e);
> +extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
> +					unsigned long expires);
> +extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
> +extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
> +extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> +					int ioprio);
> +extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> +					int ioprio, struct io_queue *ioq);
> +extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
> +extern int elv_nr_busy_ioq(struct elevator_queue *e);
> +extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
> +extern void elv_free_ioq(struct io_queue *ioq);
> +
> +#else /* CONFIG_ELV_FAIR_QUEUING */
> +
> +static inline int elv_init_fq_data(struct request_queue *q,
> +					struct elevator_queue *e)
> +{
> +	return 0;
> +}
> +
> +static inline void elv_exit_fq_data(struct elevator_queue *e) {}
> +static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
> +
> +static inline void elv_fq_activate_rq(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_fq_deactivate_rq(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_fq_dispatched_request(struct elevator_queue *e,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_request_removed(struct elevator_queue *e,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_request_add(struct request_queue *q,
> +					struct request *rq)
> +{
> +}
> +
> +static inline void elv_ioq_completed_request(struct request_queue *q,
> +						struct request *rq)
> +{
> +}
> +
> +static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
> +static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
> +static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
> +{
> +	return NULL;
> +}
> +#endif /* CONFIG_ELV_FAIR_QUEUING */
> +#endif /* _BFQ_SCHED_H */
> diff --git a/block/elevator.c b/block/elevator.c
> index 7073a90..c2f07f5 100644
> --- a/block/elevator.c
> +++ b/block/elevator.c
> @@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
>  	for (i = 0; i < ELV_HASH_ENTRIES; i++)
>  		INIT_HLIST_HEAD(&eq->hash[i]);
> 
> +	if (elv_init_fq_data(q, eq))
> +		goto err;
> +
>  	return eq;
>  err:
>  	kfree(eq);
> @@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
>  void elevator_exit(struct elevator_queue *e)
>  {
>  	mutex_lock(&e->sysfs_lock);
> +	elv_exit_fq_data(e);
>  	if (e->ops->elevator_exit_fn)
>  		e->ops->elevator_exit_fn(e);
>  	e->ops = NULL;
> +	elv_exit_fq_data_post(e);
>  	mutex_unlock(&e->sysfs_lock);
> 
>  	kobject_put(&e->kobj);
> @@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
>  {
>  	struct elevator_queue *e = q->elevator;
> 
> +	elv_fq_activate_rq(q, rq);
> +
>  	if (e->ops->elevator_activate_req_fn)
>  		e->ops->elevator_activate_req_fn(q, rq);
>  }
> @@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
>  {
>  	struct elevator_queue *e = q->elevator;
> 
> +	elv_fq_deactivate_rq(q, rq);
> +
>  	if (e->ops->elevator_deactivate_req_fn)
>  		e->ops->elevator_deactivate_req_fn(q, rq);
>  }
> @@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
>  	elv_rqhash_del(q, rq);
> 
>  	q->nr_sorted--;
> +	elv_fq_dispatched_request(q->elevator, rq);
> 
>  	boundary = q->end_sector;
>  	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
> @@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
>  	elv_rqhash_del(q, rq);
> 
>  	q->nr_sorted--;
> +	elv_fq_dispatched_request(q->elevator, rq);
> 
>  	q->end_sector = rq_end_sector(rq);
>  	q->boundary_rq = rq;
> @@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
>  	elv_rqhash_del(q, next);
> 
>  	q->nr_sorted--;
> +	elv_ioq_request_removed(e, next);
>  	q->last_merge = rq;
>  }
> 
> @@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
>  				q->last_merge = rq;
>  		}
> 
> -		/*
> -		 * Some ioscheds (cfq) run q->request_fn directly, so
> -		 * rq cannot be accessed after calling
> -		 * elevator_add_req_fn.
> -		 */
>  		q->elevator->ops->elevator_add_req_fn(q, rq);
> +		elv_ioq_request_add(q, rq);
>  		break;
> 
>  	case ELEVATOR_INSERT_REQUEUE:
> @@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
> 
>  int elv_queue_empty(struct request_queue *q)
>  {
> -	struct elevator_queue *e = q->elevator;
> -
>  	if (!list_empty(&q->queue_head))
>  		return 0;
> 
> -	if (e->ops->elevator_queue_empty_fn)
> -		return e->ops->elevator_queue_empty_fn(q);
> +	/* Hopefully nr_sorted works and no need to call queue_empty_fn */
> +	if (q->nr_sorted)
> +		return 0;
> 
>  	return 1;
>  }
> @@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
>  	 */
>  	if (blk_account_rq(rq)) {
>  		q->in_flight--;
> -		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
> -			e->ops->elevator_completed_req_fn(q, rq);
> +		if (blk_sorted_rq(rq)) {
> +			if (e->ops->elevator_completed_req_fn)
> +				e->ops->elevator_completed_req_fn(q, rq);
> +			elv_ioq_completed_request(q, rq);
> +		}
>  	}
> 
>  	/*
> @@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
>  	return NULL;
>  }
>  EXPORT_SYMBOL(elv_rb_latter_request);
> +
> +/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
> +void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
> +{
> +	return ioq_sched_queue(rq_ioq(rq));
> +}
> +EXPORT_SYMBOL(elv_get_sched_queue);
> +
> +/* Select an ioscheduler queue to dispatch request from. */
> +void *elv_select_sched_queue(struct request_queue *q, int force)
> +{
> +	return ioq_sched_queue(elv_fq_select_ioq(q, force));
> +}
> +EXPORT_SYMBOL(elv_select_sched_queue);
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index b4f71f1..96a94c9 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -245,6 +245,11 @@ struct request {
> 
>  	/* for bidi */
>  	struct request *next_rq;
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	/* io queue request belongs to */
> +	struct io_queue *ioq;
> +#endif
>  };
> 
>  static inline unsigned short req_get_ioprio(struct request *req)
> diff --git a/include/linux/elevator.h b/include/linux/elevator.h
> index c59b769..679c149 100644
> --- a/include/linux/elevator.h
> +++ b/include/linux/elevator.h
> @@ -2,6 +2,7 @@
>  #define _LINUX_ELEVATOR_H
> 
>  #include <linux/percpu.h>
> +#include "../../block/elevator-fq.h"
> 
>  #ifdef CONFIG_BLOCK
> 
> @@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
> 
>  typedef void *(elevator_init_fn) (struct request_queue *);
>  typedef void (elevator_exit_fn) (struct elevator_queue *);
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
> +typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
> +typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
> +typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
> +typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
> +						struct request*);
> +typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
> +						struct request*);
> +typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
> +						void*, int probe);
> +#endif
> 
>  struct elevator_ops
>  {
> @@ -56,6 +69,17 @@ struct elevator_ops
>  	elevator_init_fn *elevator_init_fn;
>  	elevator_exit_fn *elevator_exit_fn;
>  	void (*trim)(struct io_context *);
> +
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
> +	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
> +	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
> +
> +	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
> +	elevator_should_preempt_fn *elevator_should_preempt_fn;
> +	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
> +	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
> +#endif
>  };
> 
>  #define ELV_NAME_MAX	(16)
> @@ -76,6 +100,9 @@ struct elevator_type
>  	struct elv_fs_entry *elevator_attrs;
>  	char elevator_name[ELV_NAME_MAX];
>  	struct module *elevator_owner;
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	int elevator_features;
> +#endif
>  };
> 
>  /*
> @@ -89,6 +116,10 @@ struct elevator_queue
>  	struct elevator_type *elevator_type;
>  	struct mutex sysfs_lock;
>  	struct hlist_head *hash;
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +	/* fair queuing data */
> +	struct elv_fq_data efqd;
> +#endif
>  };
> 
>  /*
> @@ -209,5 +240,25 @@ enum {
>  	__val;							\
>  })
> 
> +/* iosched can let elevator know their feature set/capability */
> +#ifdef CONFIG_ELV_FAIR_QUEUING
> +
> +/* iosched wants to use fq logic of elevator layer */
> +#define	ELV_IOSCHED_NEED_FQ	1
> +
> +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> +{
> +	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
> +}
> +
> +#else /* ELV_IOSCHED_FAIR_QUEUING */
> +
> +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> +{
> +	return 0;
> +}
> +#endif /* ELV_IOSCHED_FAIR_QUEUING */
> +extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
> +extern void *elv_select_sched_queue(struct request_queue *q, int force);
>  #endif /* CONFIG_BLOCK */
>  #endif
> -- 
> 1.6.0.6
> 

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]     ` <20090622084612.GD3728-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
@ 2009-06-22 12:43       ` Fabio Checconi
  2009-06-23  2:05       ` Vivek Goyal
  1 sibling, 0 replies; 176+ messages in thread
From: Fabio Checconi @ 2009-06-22 12:43 UTC (permalink / raw)
  To: Balbir Singh
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

> From: Balbir Singh <balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org>
> Date: Mon, Jun 22, 2009 02:16:12PM +0530
>
> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:20]:
> 
> > This is common fair queuing code in elevator layer. This is controlled by
> > config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> > flat fair queuing support where there is only one group, "root group" and all
> > the tasks belong to root group.
> > 
> > This elevator layer changes are backward compatible. That means any ioscheduler
> > using old interfaces will continue to work.
> > 
> > This code is essentially the CFQ code for fair queuing. The primary difference
> > is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
> >
> 
> The patch is quite long and to be honest requires a long time to
> review, which I don't mind. I suspect my frequently diverted mind is
> likely to miss a lot in a big patch like this. Could you consider
> splitting this further if possible? I think you'll notice the number
> of reviews will also increase.
>  

This core scheduler part has not changed too much from the bfq patches,
so I'll try to answer your questions; Vivek, please correct me where
my knowledge is outdated.  I preferred to leave out the questions about
code that was not in the original patches.

...
> > +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> > +					unsigned short prio)
> 
> Why is the return type int and not unsigned int or unsigned long? Can
> the return value ever be negative?
> 
> > +{
> > +	const int base_slice = efqd->elv_slice[sync];
> > +
> > +	WARN_ON(prio >= IOPRIO_BE_NR);
> > +
> > +	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
> > +}
> > +
> > +static inline int
> > +elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
> > +{
> > +	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
> > +}
> > +
> > +/* Mainly the BFQ scheduling code Follows */
> > +
> > +/*
> > + * Shift for timestamp calculations.  This actually limits the maximum
> > + * service allowed in one timestamp delta (small shift values increase it),
> > + * the maximum total weight that can be used for the queues in the system
> > + * (big shift values increase it), and the period of virtual time wraparounds.
> > + */
> > +#define WFQ_SERVICE_SHIFT	22
> > +
> > +/**
> > + * bfq_gt - compare two timestamps.
> > + * @a: first ts.
> > + * @b: second ts.
> > + *
> > + * Return @a > @b, dealing with wrapping correctly.
> > + */
> > +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> > +{
> > +	return (s64)(a - b) > 0;
> > +}
> > +
> 
> a and b are of type u64, but cast to s64 to deal with wrapping?
> Correct?
> 

yes


> > +/**
> > + * bfq_delta - map service into the virtual time domain.
> > + * @service: amount of service.
> > + * @weight: scale factor.
> > + */
> > +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> > +					bfq_weight_t weight)
> > +{
> > +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> > +
> 
> Why is the cast required? Does the compiler complain? service is
> already of the correct type.
> 

service is unsigned long, so it can be 32 bits in 32 bit machines,
while timestamps are always u64, so I think we need the cast.
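
to make it concrete, a quick userspace sketch (my own example values, not
code from the patch) of what a 32-bit unsigned long would keep of the
shifted value compared to what the widening cast keeps:

#include <stdio.h>
#include <stdint.h>

#define WFQ_SERVICE_SHIFT	22

int main(void)
{
	uint64_t service = 4096;	/* arbitrary example amount of service */

	/* what a 32-bit unsigned long would retain after the shift */
	uint64_t kept_32bit = ((uint64_t)service << WFQ_SERVICE_SHIFT) & 0xffffffffULL;
	/* what the bfq_timestamp_t cast in bfq_delta() retains */
	uint64_t kept_64bit = (uint64_t)service << WFQ_SERVICE_SHIFT;

	printf("32-bit shift keeps %llu, 64-bit shift keeps %llu\n",
	       (unsigned long long)kept_32bit,
	       (unsigned long long)kept_64bit);
	return 0;
}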


> > +	do_div(d, weight);
> 
> On a 64-bit system both d and weight are 64 bits, but on a 32-bit system
> weight is 32 bits. do_div expects a 64-bit dividend and a 32-bit divisor
> - no?
> 

yes.  here the situation is that we actually don't care about the type
of weight, as long as it can contain a 32 bit value, and weights should
never reach near the 2^32 boundary, otherwise we're prone to any kind
of numerical error.  there are no problems with weight being u32.


> > +	return d;
> > +}
> > +
> > +/**
> > + * bfq_calc_finish - assign the finish time to an entity.
> > + * @entity: the entity to act upon.
> > + * @service: the service to be charged to the entity.
> > + */
> > +static inline void bfq_calc_finish(struct io_entity *entity,
> > +				   bfq_service_t service)
> > +{
> > +	BUG_ON(entity->weight == 0);
> > +
> > +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> > +}
> 
> Should we BUG_ON (entity->finish == entity->start)? Or is that
> expected when the entity has no service time left?
> 

bfq_calc_finish() is used in two cases:

  1) we need to resync the finish time with the service received by an
    entity

  2) we need to assign a new finish time to an entity when it's enqueued

with preemptions 1) can happen with service = 0, and we need to reset the
finish time to the start time (depending on how preemptions are implemented),
so in this case we'd have a false positive (leading to a crashed system :) ).
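
spelling out the arithmetic (my own restatement, not patch code):

#include <stdint.h>

/* F = S + service/weight, i.e. bfq_calc_finish() without the shift */
static uint64_t finish_of(uint64_t start, uint64_t service, uint64_t weight)
{
	return start + service / weight;
}

/*
 * case 1, resync after real service:     F = S + served/w
 * case 2, preemption requeue, 0 service: F = S + 0/w == S
 *
 * so finish == start is a legal state, and a BUG_ON(finish == start)
 * would fire spuriously in case 2.
 */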


> > +
> > +static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	BUG_ON(entity == NULL);
> > +	if (entity->my_sched_data == NULL)
> > +		ioq = container_of(entity, struct io_queue, entity);
> > +	return ioq;
> > +}
> > +
> > +/**
> > + * bfq_entity_of - get an entity from a node.
> > + * @node: the node field of the entity.
> > + *
> > + * Convert a node pointer to the relative entity.  This is used only
> > + * to simplify the logic of some functions and not as the generic
> > + * conversion mechanism because, e.g., in the tree walking functions,
> > + * the check for a %NULL value would be redundant.
> > + */
> > +static inline struct io_entity *bfq_entity_of(struct rb_node *node)
> > +{
> > +	struct io_entity *entity = NULL;
> > +
> > +	if (node != NULL)
> > +		entity = rb_entry(node, struct io_entity, rb_node);
> > +
> > +	return entity;
> > +}
> > +
> > +/**
> > + * bfq_extract - remove an entity from a tree.
> > + * @root: the tree root.
> > + * @entity: the entity to remove.
> > + */
> > +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> > +{
> 
> Extract is not common terminology; why not use bfq_remove()?
> 
> > +	BUG_ON(entity->tree != root);
> > +
> > +	entity->tree = NULL;
> > +	rb_erase(&entity->rb_node, root);
> 
> Don't you want to make entity->tree = NULL after rb_erase?
> 

this code is assumed to run under a spinlock, so the order doesn't really
matter (tree is not touched by rb_erase(); it is a bfq-private field).


> > +}
> > +
> > +/**
> > + * bfq_idle_extract - extract an entity from the idle tree.
> > + * @st: the service tree of the owning @entity.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_idle_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *next;
> > +
> > +	BUG_ON(entity->tree != &st->idle);
> > +
> > +	if (entity == st->first_idle) {
> > +		next = rb_next(&entity->rb_node);
> 
> What happens if next is NULL?
> 

the bfq_entity_of() call below returns NULL


> > +		st->first_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	if (entity == st->last_idle) {
> > +		next = rb_prev(&entity->rb_node);
> 
> What happens if next is NULL?
> 

same as above


> > +		st->last_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	bfq_extract(&st->idle, entity);
> > +}
> > +
> > +/**
> > + * bfq_insert - generic tree insertion.
> > + * @root: tree root.
> > + * @entity: entity to insert.
> > + *
> > + * This is used for the idle and the active tree, since they are both
> > + * ordered by finish time.
> > + */
> > +static void bfq_insert(struct rb_root *root, struct io_entity *entity)
> > +{
> > +	struct io_entity *entry;
> > +	struct rb_node **node = &root->rb_node;
> > +	struct rb_node *parent = NULL;
> > +
> > +	BUG_ON(entity->tree != NULL);
> > +
> > +	while (*node != NULL) {
> > +		parent = *node;
> > +		entry = rb_entry(parent, struct io_entity, rb_node);
> > +
> > +		if (bfq_gt(entry->finish, entity->finish))
> > +			node = &parent->rb_left;
> > +		else
> > +			node = &parent->rb_right;
> > +	}
> > +
> > +	rb_link_node(&entity->rb_node, parent, node);
> > +	rb_insert_color(&entity->rb_node, root);
> > +
> > +	entity->tree = root;
> > +}
> > +
> > +/**
> > + * bfq_update_min - update the min_start field of a entity.
> > + * @entity: the entity to update.
> > + * @node: one of its children.
> > + *
> > + * This function is called when @entity may store an invalid value for
> > + * min_start due to updates to the active tree.  The function  assumes
> > + * that the subtree rooted at @node (which may be its left or its right
> > + * child) has a valid min_start value.
> > + */
> > +static inline void bfq_update_min(struct io_entity *entity,
> > +					struct rb_node *node)
> > +{
> > +	struct io_entity *child;
> > +
> > +	if (node != NULL) {
> > +		child = rb_entry(node, struct io_entity, rb_node);
> > +		if (bfq_gt(entity->min_start, child->min_start))
> > +			entity->min_start = child->min_start;
> > +	}
> > +}
> 
> So... we check whether the child's min_start is less than the node's
> and set the node's min_start to the minimum of the two?
> Can you use min_t here?
> 

no, it would not deal with wraparound correctly
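
a standalone example of what goes wrong, in case it helps (my names, not
patch code):

#include <stdio.h>
#include <stdint.h>

/* same idea as bfq_gt(): "a is later than b", modulo 2^64 wraparound */
static int ts_after(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) > 0;
}

int main(void)
{
	uint64_t before_wrap = UINT64_MAX - 5;	/* timestamp taken just before the wrap */
	uint64_t after_wrap = 10;		/* timestamp taken just after the wrap */

	/* a plain min_t()-style compare calls after_wrap the smaller, hence
	 * "earlier", value, even though it was generated later */
	printf("numeric: before_wrap > after_wrap is %d\n",
	       before_wrap > after_wrap);

	/* the wrap-aware test still orders the two correctly */
	printf("ts_after(after_wrap, before_wrap) = %d\n",
	       ts_after(after_wrap, before_wrap));
	printf("ts_after(before_wrap, after_wrap) = %d\n",
	       ts_after(before_wrap, after_wrap));
	return 0;
}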


> > +
> > +/**
> > + * bfq_update_active_node - recalculate min_start.
> > + * @node: the node to update.
> > + *
> > + * @node may have changed position or one of its children may have moved,
> > + * this function updates its min_start value.  The left and right subtrees
> > + * are assumed to hold a correct min_start value.
> > + */
> > +static inline void bfq_update_active_node(struct rb_node *node)
> > +{
> > +	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
> > +
> > +	entity->min_start = entity->start;
> > +	bfq_update_min(entity, node->rb_right);
> > +	bfq_update_min(entity, node->rb_left);
> > +}
> 
> I don't like this very much; we set min_start twice. This can easily be
> optimized to look at both the left and right children and pick the
> minimum.
> 

it's a minimum over three values (the node's own ->start and the min_start
of the two children), so you cannot be sure it will be set twice


> > +
> > +/**
> > + * bfq_update_active_tree - update min_start for the whole active tree.
> > + * @node: the starting node.
> > + *
> > + * @node must be the deepest modified node after an update.  This function
> > + * updates its min_start using the values held by its children, assuming
> > + * that they did not change, and then updates all the nodes that may have
> > + * changed in the path to the root.  The only nodes that may have changed
> > + * are the ones in the path or their siblings.
> > + */
> > +static void bfq_update_active_tree(struct rb_node *node)
> > +{
> > +	struct rb_node *parent;
> > +
> > +up:
> > +	bfq_update_active_node(node);
> > +
> > +	parent = rb_parent(node);
> > +	if (parent == NULL)
> > +		return;
> > +
> > +	if (node == parent->rb_left && parent->rb_right != NULL)
> > +		bfq_update_active_node(parent->rb_right);
> > +	else if (parent->rb_left != NULL)
> > +		bfq_update_active_node(parent->rb_left);
> > +
> > +	node = parent;
> > +	goto up;
> > +}
> > +
> 
> For these functions, take a look at the walk function in the group
> scheduler that does update_shares
> 

are you sure?  AFAICT walk_tg_tree() walks the whole tree, while this just
walks a single path from a node up to the root; I don't see what the two
have in common.

in the original patches we cited (among the others):

  http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf

which contains a description of the algorithm.


> > +/**
> > + * bfq_active_insert - insert an entity in the active tree of its group/device.
> > + * @st: the service tree of the entity.
> > + * @entity: the entity being inserted.
> > + *
> > + * The active tree is ordered by finish time, but an extra key is kept
> > + * per each node, containing the minimum value for the start times of
> > + * its children (and the node itself), so it's possible to search for
> > + * the eligible node with the lowest finish time in logarithmic time.
> > + */
> > +static void bfq_active_insert(struct io_service_tree *st,
> > +					struct io_entity *entity)
> > +{
> > +	struct rb_node *node = &entity->rb_node;
> > +
> > +	bfq_insert(&st->active, entity);
> > +
> > +	if (node->rb_left != NULL)
> > +		node = node->rb_left;
> > +	else if (node->rb_right != NULL)
> > +		node = node->rb_right;
> > +
> > +	bfq_update_active_tree(node);
> > +}
> > +
> > +/**
> > + * bfq_ioprio_to_weight - calc a weight from an ioprio.
> > + * @ioprio: the ioprio value to convert.
> > + */
> > +static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
> > +{
> > +	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
> > +	return IOPRIO_BE_NR - ioprio;
> > +}
> > +
> > +void bfq_get_entity(struct io_entity *entity)
> > +{
> > +	struct io_queue *ioq = io_entity_to_ioq(entity);
> > +
> > +	if (ioq)
> > +		elv_get_ioq(ioq);
> > +}
> > +
> > +void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
> > +{
> > +	entity->ioprio = entity->new_ioprio;
> > +	entity->ioprio_class = entity->new_ioprio_class;
> > +	entity->sched_data = &iog->sched_data;
> > +}
> > +
> > +/**
> > + * bfq_find_deepest - find the deepest node that an extraction can modify.
> > + * @node: the node being removed.
> > + *
> > + * Do the first step of an extraction in an rb tree, looking for the
> > + * node that will replace @node, and returning the deepest node that
> > + * the following modifications to the tree can touch.  If @node is the
> > + * last node in the tree return %NULL.
> > + */
> > +static struct rb_node *bfq_find_deepest(struct rb_node *node)
> > +{
> > +	struct rb_node *deepest;
> > +
> > +	if (node->rb_right == NULL && node->rb_left == NULL)
> > +		deepest = rb_parent(node);
> 
> Why is the parent the deepest? Shouldn't node be the deepest?
> 

this is related to how the RB tree is updated (see below)


> > +	else if (node->rb_right == NULL)
> > +		deepest = node->rb_left;
> > +	else if (node->rb_left == NULL)
> > +		deepest = node->rb_right;
> > +	else {
> > +		deepest = rb_next(node);
> > +		if (deepest->rb_right != NULL)
> > +			deepest = deepest->rb_right;
> > +		else if (rb_parent(deepest) != node)
> > +			deepest = rb_parent(deepest);
> > +	}
> > +
> > +	return deepest;
> > +}
> 
> The function is not clear; could you please define "deepest node"
> better?
> 

according to the paper cited above, we need to update the min_start
value on the path from the deepest node modified by the extraction
up to the root.  this function tries to consider all the cases of RB
extraction, looking for the deepest node that (after all the rotations
etc.) will need an update to min_start.  one interesting property
of RB trees is that this can be done in O(log N) because there is a
single path that needs to be updated.


> > +
> > +/**
> > + * bfq_active_extract - remove an entity from the active tree.
> > + * @st: the service_tree containing the tree.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_active_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *node;
> > +
> > +	node = bfq_find_deepest(&entity->rb_node);
> > +	bfq_extract(&st->active, entity);
> > +
> > +	if (node != NULL)
> > +		bfq_update_active_tree(node);
> > +}
> > +
> 
> Just to check my understanding: every time an active node is
> added/removed, we update the min_start of the entire tree?
> 

yes, but only O(log N) nodes need to be updated


> > +/**
> > + * bfq_idle_insert - insert an entity into the idle tree.
> > + * @st: the service tree containing the tree.
> > + * @entity: the entity to insert.
> > + */
> > +static void bfq_idle_insert(struct io_service_tree *st,
> > +					struct io_entity *entity)
> > +{
> > +	struct io_entity *first_idle = st->first_idle;
> > +	struct io_entity *last_idle = st->last_idle;
> > +
> > +	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
> > +		st->first_idle = entity;
> > +	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
> > +		st->last_idle = entity;
> > +
> > +	bfq_insert(&st->idle, entity);
> > +}
> > +
> > +/**
> > + * bfq_forget_entity - remove an entity from the wfq trees.
> > + * @st: the service tree.
> > + * @entity: the entity being removed.
> > + *
> > + * Update the device status and forget everything about @entity, putting
> > + * the device reference to it, if it is a queue.  Entities belonging to
> > + * groups are not refcounted.
> > + */
> > +static void bfq_forget_entity(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	BUG_ON(!entity->on_st);
> > +	entity->on_st = 0;
> > +	st->wsum -= entity->weight;
> > +	ioq = io_entity_to_ioq(entity);
> > +	if (!ioq)
> > +		return;
> > +	elv_put_ioq(ioq);
> > +}
> > +
> > +/**
> > + * bfq_put_idle_entity - release the idle tree ref of an entity.
> > + * @st: service tree for the entity.
> > + * @entity: the entity being released.
> > + */
> > +void bfq_put_idle_entity(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	bfq_idle_extract(st, entity);
> > +	bfq_forget_entity(st, entity);
> > +}
> > +
> > +/**
> > + * bfq_forget_idle - update the idle tree if necessary.
> > + * @st: the service tree to act upon.
> > + *
> > + * To preserve the global O(log N) complexity we only remove one entry here;
> > + * as the idle tree will not grow indefinitely this can be done safely.
> > + */
> > +void bfq_forget_idle(struct io_service_tree *st)
> > +{
> > +	struct io_entity *first_idle = st->first_idle;
> > +	struct io_entity *last_idle = st->last_idle;
> > +
> > +	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
> > +	    !bfq_gt(last_idle->finish, st->vtime)) {
> > +		/*
> > +		 * Active tree is empty. Pull back vtime to finish time of
> > +		 * last idle entity on idle tree.
> > +		 * Rational seems to be that it reduces the possibility of
> > +		 * vtime wraparound (bfq_gt(V-F) < 0).
> > +		 */
> > +		st->vtime = last_idle->finish;
> > +	}
> > +
> > +	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
> > +		bfq_put_idle_entity(st, first_idle);
> > +}
> > +
> > +
> > +static struct io_service_tree *
> > +__bfq_entity_update_prio(struct io_service_tree *old_st,
> > +				struct io_entity *entity)
> > +{
> > +	struct io_service_tree *new_st = old_st;
> > +	struct io_queue *ioq = io_entity_to_ioq(entity);
> > +
> > +	if (entity->ioprio_changed) {
> > +		entity->ioprio = entity->new_ioprio;
> > +		entity->ioprio_class = entity->new_ioprio_class;
> > +		entity->ioprio_changed = 0;
> > +
> > +		/*
> > +		 * Also update the scaled budget for ioq. Group will get the
> > +		 * updated budget once ioq is selected to run next.
> > +		 */
> > +		if (ioq) {
> > +			struct elv_fq_data *efqd = ioq->efqd;
> > +			entity->budget = elv_prio_to_slice(efqd, ioq);
> > +		}
> > +
> > +		old_st->wsum -= entity->weight;
> > +		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
> > +
> > +		/*
> > +		 * NOTE: here we may be changing the weight too early,
> > +		 * this will cause unfairness.  The correct approach
> > +		 * would have required additional complexity to defer
> > +		 * weight changes to the proper time instants (i.e.,
> > +		 * when entity->finish <= old_st->vtime).
> > +		 */
> > +		new_st = io_entity_service_tree(entity);
> > +		new_st->wsum += entity->weight;
> > +
> > +		if (new_st != old_st)
> > +			entity->start = new_st->vtime;
> > +	}
> > +
> > +	return new_st;
> > +}
> > +
> > +/**
> > + * __bfq_activate_entity - activate an entity.
> > + * @entity: the entity being activated.
> > + *
> > + * Called whenever an entity is activated, i.e., it is not active and one
> > + * of its children receives a new request, or has to be reactivated due to
> > + * budget exhaustion.  It uses the current budget of the entity (and the
> > + * service received if @entity is active) of the queue to calculate its
> > + * timestamps.
> > + */
> > +static void __bfq_activate_entity(struct io_entity *entity, int add_front)
> > +{
> > +	struct io_sched_data *sd = entity->sched_data;
> > +	struct io_service_tree *st = io_entity_service_tree(entity);
> > +
> > +	if (entity == sd->active_entity) {
> > +		BUG_ON(entity->tree != NULL);
> > +		/*
> > +		 * If we are requeueing the current entity we have
> > +		 * to take care of not charging to it service it has
> > +		 * not received.
> > +		 */
> > +		bfq_calc_finish(entity, entity->service);
> > +		entity->start = entity->finish;
> > +		sd->active_entity = NULL;
> > +	} else if (entity->tree == &st->active) {
> > +		/*
> > +		 * Requeueing an entity due to a change of some
> > +		 * next_active entity below it.  We reuse the old
> > +		 * start time.
> > +		 */
> > +		bfq_active_extract(st, entity);
> > +	} else if (entity->tree == &st->idle) {
> > +		/*
> > +		 * Must be on the idle tree, bfq_idle_extract() will
> > +		 * check for that.
> > +		 */
> > +		bfq_idle_extract(st, entity);
> > +		entity->start = bfq_gt(st->vtime, entity->finish) ?
> > +				       st->vtime : entity->finish;
> > +	} else {
> > +		/*
> > +		 * The finish time of the entity may be invalid, and
> > +		 * it is in the past for sure, otherwise the queue
> > +		 * would have been on the idle tree.
> > +		 */
> > +		entity->start = st->vtime;
> > +		st->wsum += entity->weight;
> > +		bfq_get_entity(entity);
> > +
> > +		BUG_ON(entity->on_st);
> > +		entity->on_st = 1;
> > +	}
> > +
> > +	st = __bfq_entity_update_prio(st, entity);
> > +	/*
> > +	 * This is to emulate cfq like functionality where preemption can
> > +	 * happen with-in same class, like sync queue preempting async queue
> > +	 * May be this is not a very good idea from fairness point of view
> > +	 * as preempting queue gains share. Keeping it for now.
> > +	 */
> > +	if (add_front) {
> > +		struct io_entity *next_entity;
> > +
> > +		/*
> > +		 * Determine the entity which will be dispatched next
> > +		 * Use sd->next_active once hierarchical patch is applied
> > +		 */
> > +		next_entity = bfq_lookup_next_entity(sd, 0);
> > +
> > +		if (next_entity && next_entity != entity) {
> > +			struct io_service_tree *new_st;
> > +			bfq_timestamp_t delta;
> > +
> > +			new_st = io_entity_service_tree(next_entity);
> > +
> > +			/*
> > +			 * At this point, both entities should belong to
> > +			 * same service tree as cross service tree preemption
> > +			 * is automatically taken care by algorithm
> > +			 */
> > +			BUG_ON(new_st != st);
> > +			entity->finish = next_entity->finish - 1;
> > +			delta = bfq_delta(entity->budget, entity->weight);
> > +			entity->start = entity->finish - delta;
> > +			if (bfq_gt(entity->start, st->vtime))
> > +				entity->start = st->vtime;
> > +		}
> > +	} else {
> > +		bfq_calc_finish(entity, entity->budget);
> > +	}
> > +	bfq_active_insert(st, entity);
> > +}
> > +
> > +/**
> > + * bfq_activate_entity - activate an entity.
> > + * @entity: the entity to activate.
> > + */
> > +void bfq_activate_entity(struct io_entity *entity, int add_front)
> > +{
> > +	__bfq_activate_entity(entity, add_front);
> > +}
> > +
> > +/**
> > + * __bfq_deactivate_entity - deactivate an entity from its service tree.
> > + * @entity: the entity to deactivate.
> > + * @requeue: if false, the entity will not be put into the idle tree.
> > + *
> > + * Deactivate an entity, independently from its previous state.  If the
> > + * entity was not on a service tree just return, otherwise if it is on
> > + * any scheduler tree, extract it from that tree, and if necessary
> > + * and if the caller did not specify @requeue, put it on the idle tree.
> > + *
> > + */
> > +int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
> > +{
> > +	struct io_sched_data *sd = entity->sched_data;
> > +	struct io_service_tree *st = io_entity_service_tree(entity);
> > +	int was_active = entity == sd->active_entity;
> > +	int ret = 0;
> > +
> > +	if (!entity->on_st)
> > +		return 0;
> > +
> > +	BUG_ON(was_active && entity->tree != NULL);
> > +
> > +	if (was_active) {
> > +		bfq_calc_finish(entity, entity->service);
> > +		sd->active_entity = NULL;
> > +	} else if (entity->tree == &st->active)
> > +		bfq_active_extract(st, entity);
> > +	else if (entity->tree == &st->idle)
> > +		bfq_idle_extract(st, entity);
> > +	else if (entity->tree != NULL)
> > +		BUG();
> > +
> > +	if (!requeue || !bfq_gt(entity->finish, st->vtime))
> > +		bfq_forget_entity(st, entity);
> > +	else
> > +		bfq_idle_insert(st, entity);
> > +
> > +	BUG_ON(sd->active_entity == entity);
> > +
> > +	return ret;
> > +}
> > +
> > +/**
> > + * bfq_deactivate_entity - deactivate an entity.
> > + * @entity: the entity to deactivate.
> > + * @requeue: true if the entity can be put on the idle tree
> > + */
> > +void bfq_deactivate_entity(struct io_entity *entity, int requeue)
> > +{
> > +	__bfq_deactivate_entity(entity, requeue);
> > +}
> > +
> > +/**
> > + * bfq_update_vtime - update vtime if necessary.
> > + * @st: the service tree to act upon.
> > + *
> > + * If necessary update the service tree vtime to have at least one
> > + * eligible entity, skipping to its start time.  Assumes that the
> > + * active tree of the device is not empty.
> > + *
> > + * NOTE: this hierarchical implementation updates vtimes quite often,
> > + * we may end up with reactivated tasks getting timestamps after a
> > + * vtime skip done because we needed a ->first_active entity on some
> > + * intermediate node.
> > + */
> > +static void bfq_update_vtime(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entry;
> > +	struct rb_node *node = st->active.rb_node;
> > +
> > +	entry = rb_entry(node, struct io_entity, rb_node);
> > +	if (bfq_gt(entry->min_start, st->vtime)) {
> > +		st->vtime = entry->min_start;
> > +		bfq_forget_idle(st);
> > +	}
> > +}
> > +
> > +/**
> > + * bfq_first_active - find the eligible entity with the smallest finish time
> > + * @st: the service tree to select from.
> > + *
> > + * This function searches the first schedulable entity, starting from the
> > + * root of the tree and going on the left every time on this side there is
> > + * a subtree with at least one eligible (start <= vtime) entity.  The path
> > + * on the right is followed only if a) the left subtree contains no eligible
> > + * entities and b) no eligible entity has been found yet.
> > + */
> > +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entry, *first = NULL;
> > +	struct rb_node *node = st->active.rb_node;
> > +
> > +	while (node != NULL) {
> > +		entry = rb_entry(node, struct io_entity, rb_node);
> > +left:
> > +		if (!bfq_gt(entry->start, st->vtime))
> > +			first = entry;
> > +
> > +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> > +
> > +		if (node->rb_left != NULL) {
> > +			entry = rb_entry(node->rb_left,
> > +					 struct io_entity, rb_node);
> > +			if (!bfq_gt(entry->min_start, st->vtime)) {
> > +				node = node->rb_left;
> > +				goto left;
> > +			}
> > +		}
> > +		if (first != NULL)
> > +			break;
> > +		node = node->rb_right;
> 
> Please help me understand this: we sort the tree by finish time, but
> search by vtime/start_time. The worst case could easily be O(N),
> right?
> 

no, (again, the full answer is in the paper); the nice property of
min_start is that it partitions the tree in two regions, one with
eligible entities and one without any of them.  once we know that
there is one eligible entity (checking the min_start at the root)
we can find the node i with min(F_i) subject to S_i < V walking down
a single path from the root to the leftmost eligible entity.  (we
need to go to the right only if the subtree on the left contains 
no eligible entities at all.)  since the RB tree is balanced this
can be done in O(log N).
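
restating that descent in a simplified form may help (purely
illustrative: a made-up node struct and plain <= comparisons instead of
the wraparound-safe bfq_gt() used in the patch):

	struct node {
		u64 start, finish, min_start;	/* min_start = min start in subtree */
		struct node *left, *right;
	};

	/* eligible (start <= vtime) node with the smallest finish, or NULL */
	static struct node *first_eligible(struct node *n, u64 vtime)
	{
		struct node *first = NULL;

		while (n) {
			if (n->start <= vtime)
				first = n;	/* eligible; the tree is keyed on finish */
			if (n->left && n->left->min_start <= vtime)
				n = n->left;	/* a smaller finish may still be eligible */
			else if (!first)
				n = n->right;	/* nothing eligible yet, only hope is right */
			else
				break;		/* first already has the minimum finish */
		}
		return first;
	}

each iteration moves one level down the tree, so the whole lookup is
bounded by the tree height.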


> > +	}
> > +
> > +	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
> > +	return first;
> > +}
> > +
> > +/**
> > + * __bfq_lookup_next_entity - return the first eligible entity in @st.
> > + * @st: the service tree.
> > + *
> > + * Update the virtual time in @st and return the first eligible entity
> > + * it contains.
> > + */
> > +static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entity;
> > +
> > +	if (RB_EMPTY_ROOT(&st->active))
> > +		return NULL;
> > +
> > +	bfq_update_vtime(st);
> > +	entity = bfq_first_active_entity(st);
> > +	BUG_ON(bfq_gt(entity->start, st->vtime));
> > +
> > +	return entity;
> > +}
> > +
> > +/**
> > + * bfq_lookup_next_entity - return the first eligible entity in @sd.
> > + * @sd: the sched_data.
> > + * @extract: if true the returned entity will be also extracted from @sd.
> > + *
> > + * NOTE: since we cache the next_active entity at each level of the
> > + * hierarchy, the complexity of the lookup can be decreased with
> > + * absolutely no effort just returning the cached next_active value;
> > + * we prefer to do full lookups to test the consistency of the data
> > + * structures.
> > + */
> > +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> > +						 int extract)
> > +{
> > +	struct io_service_tree *st = sd->service_tree;
> > +	struct io_entity *entity;
> > +	int i;
> > +
> > +	/*
> > +	 * We should not call lookup when an entity is active, as doing lookup
> > +	 * can result in an erroneous vtime jump.
> > +	 */
> > +	BUG_ON(sd->active_entity != NULL);
> > +
> > +	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
> > +		entity = __bfq_lookup_next_entity(st);
> > +		if (entity != NULL) {
> > +			if (extract) {
> > +				bfq_active_extract(st, entity);
> > +				sd->active_entity = entity;
> > +			}
> > +			break;
> > +		}
> > +	}
> > +
> > +	return entity;
> > +}
> > +
> > +void entity_served(struct io_entity *entity, bfq_service_t served)
> > +{
> > +	struct io_service_tree *st;
> > +
> > +	st = io_entity_service_tree(entity);
> > +	entity->service += served;
> > +	BUG_ON(st->wsum == 0);
> > +	st->vtime += bfq_delta(served, st->wsum);
> > +	bfq_forget_idle(st);
> 
> Forget idle checks to see if st->vtime > first_idle->finish; if so,
> it pushes the first_idle to later, right?
> 

yes, updating the weight sum accordingly


> > +}
> > +
> > +/**
> > + * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
> > + * @st: the service tree being flushed.
> > + */
> > +void io_flush_idle_tree(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entity = st->first_idle;
> > +
> > +	for (; entity != NULL; entity = st->first_idle)
> > +		__bfq_deactivate_entity(entity, 0);
> > +}
> > +
> > +/* Elevator fair queuing function */
> > +struct io_queue *rq_ioq(struct request *rq)
> > +{
> > +	return rq->ioq;
> > +}
> > +
> > +static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
> > +{
> > +	return e->efqd.active_queue;
> > +}
> > +
> > +void *elv_active_sched_queue(struct elevator_queue *e)
> > +{
> > +	return ioq_sched_queue(elv_active_ioq(e));
> > +}
> > +EXPORT_SYMBOL(elv_active_sched_queue);
> > +
> > +int elv_nr_busy_ioq(struct elevator_queue *e)
> > +{
> > +	return e->efqd.busy_queues;
> > +}
> > +EXPORT_SYMBOL(elv_nr_busy_ioq);
> > +
> > +int elv_hw_tag(struct elevator_queue *e)
> > +{
> > +	return e->efqd.hw_tag;
> > +}
> > +EXPORT_SYMBOL(elv_hw_tag);
> > +
> > +/* Helper functions for operating on elevator idle slice timer */
> > +int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +
> > +	return mod_timer(&efqd->idle_slice_timer, expires);
> > +}
> > +EXPORT_SYMBOL(elv_mod_idle_slice_timer);
> > +
> > +int elv_del_idle_slice_timer(struct elevator_queue *eq)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +
> > +	return del_timer(&efqd->idle_slice_timer);
> > +}
> > +EXPORT_SYMBOL(elv_del_idle_slice_timer);
> > +
> > +unsigned int elv_get_slice_idle(struct elevator_queue *eq)
> > +{
> > +	return eq->efqd.elv_slice_idle;
> > +}
> > +EXPORT_SYMBOL(elv_get_slice_idle);
> > +
> > +void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
> > +{
> > +	entity_served(&ioq->entity, served);
> > +}
> > +
> > +/* Tells whether ioq is queued in root group or not */
> > +static inline int is_root_group_ioq(struct request_queue *q,
> > +					struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
> > +}
> > +
> > +/*
> > + * sysfs parts below -->
> > + */
> > +static ssize_t
> > +elv_var_show(unsigned int var, char *page)
> > +{
> > +	return sprintf(page, "%d\n", var);
> > +}
> > +
> > +static ssize_t
> > +elv_var_store(unsigned int *var, const char *page, size_t count)
> > +{
> > +	char *p = (char *) page;
> > +
> > +	*var = simple_strtoul(p, &p, 10);
> > +	return count;
> > +}
> > +
> > +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
> > +ssize_t __FUNC(struct elevator_queue *e, char *page)		\
> > +{									\
> > +	struct elv_fq_data *efqd = &e->efqd;				\
> > +	unsigned int __data = __VAR;					\
> > +	if (__CONV)							\
> > +		__data = jiffies_to_msecs(__data);			\
> > +	return elv_var_show(__data, (page));				\
> > +}
> > +SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
> > +EXPORT_SYMBOL(elv_slice_idle_show);
> > +SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
> > +EXPORT_SYMBOL(elv_slice_sync_show);
> > +SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
> > +EXPORT_SYMBOL(elv_slice_async_show);
> > +#undef SHOW_FUNCTION
> > +
> > +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
> > +ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
> > +{									\
> > +	struct elv_fq_data *efqd = &e->efqd;				\
> > +	unsigned int __data;						\
> > +	int ret = elv_var_store(&__data, (page), count);		\
> > +	if (__data < (MIN))						\
> > +		__data = (MIN);						\
> > +	else if (__data > (MAX))					\
> > +		__data = (MAX);						\
> > +	if (__CONV)							\
> > +		*(__PTR) = msecs_to_jiffies(__data);			\
> > +	else								\
> > +		*(__PTR) = __data;					\
> > +	return ret;							\
> > +}
> > +STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
> > +EXPORT_SYMBOL(elv_slice_idle_store);
> > +STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
> > +EXPORT_SYMBOL(elv_slice_sync_store);
> > +STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
> > +EXPORT_SYMBOL(elv_slice_async_store);
> > +#undef STORE_FUNCTION
> > +
> > +void elv_schedule_dispatch(struct request_queue *q)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (elv_nr_busy_ioq(q->elevator)) {
> > +		elv_log(efqd, "schedule dispatch");
> > +		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
> > +	}
> > +}
> > +EXPORT_SYMBOL(elv_schedule_dispatch);
> > +
> > +void elv_kick_queue(struct work_struct *work)
> > +{
> > +	struct elv_fq_data *efqd =
> > +		container_of(work, struct elv_fq_data, unplug_work);
> > +	struct request_queue *q = efqd->queue;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(q->queue_lock, flags);
> > +	blk_start_queueing(q);
> > +	spin_unlock_irqrestore(q->queue_lock, flags);
> > +}
> > +
> > +void elv_shutdown_timer_wq(struct elevator_queue *e)
> > +{
> > +	del_timer_sync(&e->efqd.idle_slice_timer);
> > +	cancel_work_sync(&e->efqd.unplug_work);
> > +}
> > +EXPORT_SYMBOL(elv_shutdown_timer_wq);
> > +
> > +void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	ioq->slice_end = jiffies + ioq->entity.budget;
> > +	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
> > +}
> > +
> > +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	unsigned long elapsed = jiffies - ioq->last_end_request;
> > +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> > +
> > +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> > +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> > +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> > +}
> 
> Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
> understand the algorithm.
> 

this came from cfq, it's a variation of an exponential moving average,
with ttime_samples used to scale the average value.
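
spelling out the fixed-point arithmetic (the 256 is just a scale
factor, nothing magic):

	samples = (7 * samples + 256) / 8;		/* decayed sample count, scaled by 256 */
	total   = (7 * total   + 256 * ttime) / 8;	/* decayed think-time sum, same scale */
	mean    = (total + 128) / samples;		/* scale cancels out; +128 is a rounding bias */

samples converges towards 256, so mean approaches an exponentially
weighted average of the recent think times with a decay factor of 7/8,
and the ioq_sample_valid() check (> 80) just waits until enough samples
have accumulated for the mean to be meaningful.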


> > +
> > +/*
> > + * Disable idle window if the process thinks too long.
> > + * This idle flag can also be updated by io scheduler.
> > + */
> > +static void elv_ioq_update_idle_window(struct elevator_queue *eq,
> > +				struct io_queue *ioq, struct request *rq)
> > +{
> > +	int old_idle, enable_idle;
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +
> > +	/*
> > +	 * Don't idle for async or idle io prio class
> > +	 */
> > +	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
> > +		return;
> > +
> > +	enable_idle = old_idle = elv_ioq_idle_window(ioq);
> > +
> > +	if (!efqd->elv_slice_idle)
> > +		enable_idle = 0;
> > +	else if (ioq_sample_valid(ioq->ttime_samples)) {
> > +		if (ioq->ttime_mean > efqd->elv_slice_idle)
> > +			enable_idle = 0;
> > +		else
> > +			enable_idle = 1;
> > +	}
> > +
> > +	/*
> > +	 * From the think time perspective, idling should be enabled. Check with
> > +	 * the io scheduler if it wants to disable idling based on additional
> > +	 * considerations like the seek pattern.
> > +	 */
> > +	if (enable_idle) {
> > +		if (eq->ops->elevator_update_idle_window_fn)
> > +			enable_idle = eq->ops->elevator_update_idle_window_fn(
> > +						eq, ioq->sched_queue, rq);
> > +		if (!enable_idle)
> > +			elv_log_ioq(efqd, ioq, "iosched disabled idle");
> > +	}
> > +
> > +	if (old_idle != enable_idle) {
> > +		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
> > +		if (enable_idle)
> > +			elv_mark_ioq_idle_window(ioq);
> > +		else
> > +			elv_clear_ioq_idle_window(ioq);
> > +	}
> > +}
> > +
> > +struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
> > +	return ioq;
> > +}
> > +EXPORT_SYMBOL(elv_alloc_ioq);
> > +
> > +void elv_free_ioq(struct io_queue *ioq)
> > +{
> > +	kmem_cache_free(elv_ioq_pool, ioq);
> > +}
> > +EXPORT_SYMBOL(elv_free_ioq);
> > +
> > +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +			void *sched_queue, int ioprio_class, int ioprio,
> > +			int is_sync)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> > +
> > +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> > +	atomic_set(&ioq->ref, 0);
> > +	ioq->efqd = efqd;
> > +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> > +	elv_ioq_set_ioprio(ioq, ioprio);
> > +	ioq->pid = current->pid;
> 
> Is pid used for cgroup association later? I don't see why we save the
> pid otherwise? If yes, why not store the cgroup of the current->pid?
> 
> > +	ioq->sched_queue = sched_queue;
> > +	if (is_sync && !elv_ioq_class_idle(ioq))
> > +		elv_mark_ioq_idle_window(ioq);
> > +	bfq_init_entity(&ioq->entity, iog);
> > +	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
> > +	if (is_sync)
> > +		ioq->last_end_request = jiffies;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(elv_init_ioq);
> > +
> > +void elv_put_ioq(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
> > +						efqd);
> > +
> > +	BUG_ON(atomic_read(&ioq->ref) <= 0);
> > +	if (!atomic_dec_and_test(&ioq->ref))
> > +		return;
> > +	BUG_ON(ioq->nr_queued);
> > +	BUG_ON(ioq->entity.tree != NULL);
> > +	BUG_ON(elv_ioq_busy(ioq));
> > +	BUG_ON(efqd->active_queue == ioq);
> > +
> > +	/* Can be called by outgoing elevator. Don't use q */
> > +	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
> > +
> > +	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
> > +	elv_log_ioq(efqd, ioq, "put_queue");
> > +	elv_free_ioq(ioq);
> > +}
> > +EXPORT_SYMBOL(elv_put_ioq);
> > +
> > +void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
> > +{
> > +	struct io_queue *ioq = *ioq_ptr;
> > +
> > +	if (ioq != NULL) {
> > +		/* Drop the reference taken by the io group */
> > +		elv_put_ioq(ioq);
> > +		*ioq_ptr = NULL;
> > +	}
> > +}
> > +
> > +/*
> > + * Normally the next io queue to be served is selected from the service tree.
> > + * This function allows one to choose a specific io queue to run next,
> > + * out of order. This is primarily to accommodate the close_cooperator
> > + * feature of cfq.
> > + *
> > + * Currently this is done only at the root level: to begin with, the close
> > + * cooperator feature is supported only for the root group, so that the
> > + * default cfq behavior in a flat hierarchy is not changed.
> > + */
> > +void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_entity *entity = &ioq->entity;
> > +	struct io_sched_data *sd = &efqd->root_group->sched_data;
> > +	struct io_service_tree *st = io_entity_service_tree(entity);
> > +
> > +	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
> > +	BUG_ON(!efqd->busy_queues);
> > +	BUG_ON(sd != entity->sched_data);
> > +	BUG_ON(!st);
> > +
> > +	bfq_update_vtime(st);
> > +	bfq_active_extract(st, entity);
> > +	sd->active_entity = entity;
> > +	entity->service = 0;
> > +	elv_log_ioq(efqd, ioq, "set_next_ioq");
> > +}
> > +
> > +/* Get next queue for service. */
> > +struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_entity *entity = NULL;
> > +	struct io_queue *ioq = NULL;
> > +	struct io_sched_data *sd;
> > +
> > +	/*
> > +	 * We should not call lookup when an entity is active, as doing
> > +	 * lookup can result in an erroneous vtime jump.
> > +	 */
> > +	BUG_ON(efqd->active_queue != NULL);
> > +
> > +	if (!efqd->busy_queues)
> > +		return NULL;
> > +
> > +	sd = &efqd->root_group->sched_data;
> > +	entity = bfq_lookup_next_entity(sd, 1);
> > +
> > +	BUG_ON(!entity);
> > +	if (extract)
> > +		entity->service = 0;
> > +	ioq = io_entity_to_ioq(entity);
> > +
> > +	return ioq;
> > +}
> > +
> > +/*
> > + * coop indicates that the io scheduler selected a queue for us and we did not
> 
> coop?
> 
> > + * select the next queue based on fairness.
> > + */
> > +static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> > +					int coop)
> > +{
> > +	struct request_queue *q = efqd->queue;
> > +
> > +	if (ioq) {
> > +		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
> > +							efqd->busy_queues);
> > +		ioq->slice_end = 0;
> > +
> > +		elv_clear_ioq_wait_request(ioq);
> > +		elv_clear_ioq_must_dispatch(ioq);
> > +		elv_mark_ioq_slice_new(ioq);
> > +
> > +		del_timer(&efqd->idle_slice_timer);
> > +	}
> > +
> > +	efqd->active_queue = ioq;
> > +
> > +	/* Let iosched know if it wants to take some action */
> > +	if (ioq) {
> > +		if (q->elevator->ops->elevator_active_ioq_set_fn)
> > +			q->elevator->ops->elevator_active_ioq_set_fn(q,
> > +							ioq->sched_queue, coop);
> > +	}
> > +}
> > +
> > +/* Get and set a new active queue for service. */
> > +struct io_queue *elv_set_active_ioq(struct request_queue *q,
> > +						struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	int coop = 0;
> > +
> > +	if (!ioq)
> > +		ioq = elv_get_next_ioq(q, 1);
> > +	else {
> > +		elv_set_next_ioq(q, ioq);
> > +		/*
> > +		 * io scheduler selected the next queue for us. Pass this
> > +		 * info back to the io scheduler. cfq currently uses it
> > +		 * to reset the coop flag on the queue.
> > +		 */
> > +		coop = 1;
> > +	}
> > +	__elv_set_active_ioq(efqd, ioq, coop);
> > +	return ioq;
> > +}
> > +
> > +void elv_reset_active_ioq(struct elv_fq_data *efqd)
> > +{
> > +	struct request_queue *q = efqd->queue;
> > +	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
> > +
> > +	if (q->elevator->ops->elevator_active_ioq_reset_fn)
> > +		q->elevator->ops->elevator_active_ioq_reset_fn(q,
> > +							ioq->sched_queue);
> > +	efqd->active_queue = NULL;
> > +	del_timer(&efqd->idle_slice_timer);
> > +}
> > +
> > +void elv_activate_ioq(struct io_queue *ioq, int add_front)
> > +{
> > +	bfq_activate_entity(&ioq->entity, add_front);
> > +}
> > +
> > +void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> > +					int requeue)
> > +{
> > +	bfq_deactivate_entity(&ioq->entity, requeue);
> > +}
> > +
> > +/* Called when an inactive queue receives a new request. */
> > +void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
> > +{
> > +	BUG_ON(elv_ioq_busy(ioq));
> > +	BUG_ON(ioq == efqd->active_queue);
> > +	elv_log_ioq(efqd, ioq, "add to busy");
> > +	elv_activate_ioq(ioq, 0);
> > +	elv_mark_ioq_busy(ioq);
> > +	efqd->busy_queues++;
> > +	if (elv_ioq_class_rt(ioq)) {
> > +		struct io_group *iog = ioq_to_io_group(ioq);
> > +		iog->busy_rt_queues++;
> > +	}
> > +}
> > +
> > +void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
> > +					int requeue)
> > +{
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	BUG_ON(!elv_ioq_busy(ioq));
> > +	BUG_ON(ioq->nr_queued);
> > +	elv_log_ioq(efqd, ioq, "del from busy");
> > +	elv_clear_ioq_busy(ioq);
> > +	BUG_ON(efqd->busy_queues == 0);
> > +	efqd->busy_queues--;
> > +	if (elv_ioq_class_rt(ioq)) {
> > +		struct io_group *iog = ioq_to_io_group(ioq);
> > +		iog->busy_rt_queues--;
> > +	}
> > +
> > +	elv_deactivate_ioq(efqd, ioq, requeue);
> > +}
> > +
> > +/*
> > + * Do the accounting. Determine how much service (in terms of time slices)
> > + * the current queue used, and adjust the start and finish times of the queue
> > + * and the vtime of the tree accordingly.
> > + *
> > + * Determining the service used in terms of time is tricky in certain
> > + * situations. Especially when the underlying device supports command queuing
> > + * and requests from multiple queues can be outstanding at the same time, it
> > + * is not clear which queue consumed how much disk time.
> > + *
> > + * To mitigate this problem, cfq starts the time slice of the queue only
> > + * after the first request from the queue has completed. This does not work
> > + * very well if we expire the queue before waiting for the first (and
> > + * subsequent) requests from the queue to finish. For seeky queues, we will
> > + * expire the queue after dispatching a few requests without waiting, and
> > + * start dispatching from the next queue.
> > + *
> > + * It is not clear how to determine the time consumed by the queue in such
> > + * scenarios. Currently, as a crude approximation, we charge 25% of the time
> > + * slice for such cases. A better mechanism is needed for accurate accounting.
> > + */
> > +void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_entity *entity = &ioq->entity;
> > +	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
> > +
> > +	assert_spin_locked(q->queue_lock);
> > +	elv_log_ioq(efqd, ioq, "slice expired");
> > +
> > +	if (elv_ioq_wait_request(ioq))
> > +		del_timer(&efqd->idle_slice_timer);
> > +
> > +	elv_clear_ioq_wait_request(ioq);
> > +
> > +	/*
> > +	 * if ioq->slice_end == 0, it means the queue was expired before the
> > +	 * first request from the queue got completed. Of course we are not
> > +	 * planning to idle on the queue, otherwise we would not have expired it.
> > +	 *
> > +	 * Charge 25% of the slice in such cases. This is not the best thing
> > +	 * to do, but it is not clear what the next best thing would be.
> > +	 *
> > +	 * This arises from the fact that we don't have the notion of only
> > +	 * one queue being operational at a time. The io scheduler can dispatch
> > +	 * requests from multiple queues in one dispatch round. Ideally, for
> > +	 * more accurate accounting of the exact disk time used, one
> > +	 * should dispatch requests from only one queue and wait for all
> > +	 * the requests to finish. But this will reduce throughput.
> > +	 */
> > +	if (!ioq->slice_end)
> > +		slice_used = entity->budget/4;
> > +	else {
> > +		if (time_after(ioq->slice_end, jiffies)) {
> > +			slice_unused = ioq->slice_end - jiffies;
> > +			if (slice_unused == entity->budget) {
> > +				/*
> > +				 * queue got expired immediately after
> > +				 * completing first request. Charge 25% of
> > +				 * slice.
> > +				 */
> > +				slice_used = entity->budget/4;
> > +			} else
> > +				slice_used = entity->budget - slice_unused;
> > +		} else {
> > +			slice_overshoot = jiffies - ioq->slice_end;
> > +			slice_used = entity->budget + slice_overshoot;
> > +		}
> > +	}
> > +
> > +	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
> > +			jiffies);
> > +	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
> > +				slice_used, entity->budget, slice_overshoot);
> > +	elv_ioq_served(ioq, slice_used);
> > +
> > +	BUG_ON(ioq != efqd->active_queue);
> > +	elv_reset_active_ioq(efqd);
> > +
> > +	if (!ioq->nr_queued)
> > +		elv_del_ioq_busy(q->elevator, ioq, 1);
> > +	else
> > +		elv_activate_ioq(ioq, 0);
> > +}
> > +EXPORT_SYMBOL(__elv_ioq_slice_expired);
> > +
> > +/*
> > + *  Expire the ioq.
> > + */
> > +void elv_ioq_slice_expired(struct request_queue *q)
> > +{
> > +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> > +
> > +	if (ioq)
> > +		__elv_ioq_slice_expired(q, ioq);
> > +}
> > +
> > +/*
> > + * Check if new_ioq should preempt the currently active queue. Return 0 for
> > + * no (or if we aren't sure); a 1 will cause a preemption attempt.
> > + */
> > +int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
> > +			struct request *rq)
> > +{
> > +	struct io_queue *ioq;
> > +	struct elevator_queue *eq = q->elevator;
> > +	struct io_entity *entity, *new_entity;
> > +
> > +	ioq = elv_active_ioq(eq);
> > +
> > +	if (!ioq)
> > +		return 0;
> > +
> > +	entity = &ioq->entity;
> > +	new_entity = &new_ioq->entity;
> > +
> > +	/*
> > +	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
> > +	 */
> > +
> > +	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
> > +	    && entity->ioprio_class != IOPRIO_CLASS_RT)
> > +		return 1;
> > +	/*
> > +	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
> > +	 */
> > +
> > +	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
> > +	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
> > +		return 1;
> > +
> > +	/*
> > +	 * Check with io scheduler if it has additional criterion based on
> > +	 * which it wants to preempt existing queue.
> > +	 */
> > +	if (eq->ops->elevator_should_preempt_fn)
> > +		return eq->ops->elevator_should_preempt_fn(q,
> > +						ioq_sched_queue(new_ioq), rq);
> > +
> > +	return 0;
> > +}
> > +
> > +static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
> > +	elv_ioq_slice_expired(q);
> > +
> > +	/*
> > +	 * Put the new queue at the front of the current list,
> > +	 * so we know that it will be selected next.
> > +	 */
> > +
> > +	elv_activate_ioq(ioq, 1);
> > +	elv_ioq_set_slice_end(ioq, 0);
> > +	elv_mark_ioq_slice_new(ioq);
> > +}
> > +
> > +void elv_ioq_request_add(struct request_queue *q, struct request *rq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_queue *ioq = rq->ioq;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	BUG_ON(!efqd);
> > +	BUG_ON(!ioq);
> > +	efqd->rq_queued++;
> > +	ioq->nr_queued++;
> > +
> > +	if (!elv_ioq_busy(ioq))
> > +		elv_add_ioq_busy(efqd, ioq);
> > +
> > +	elv_ioq_update_io_thinktime(ioq);
> > +	elv_ioq_update_idle_window(q->elevator, ioq, rq);
> > +
> > +	if (ioq == elv_active_ioq(q->elevator)) {
> > +		/*
> > +		 * Remember that we saw a request from this process, but
> > +		 * don't start queuing just yet. Otherwise we risk seeing lots
> > +		 * of tiny requests, because we disrupt the normal plugging
> > +		 * and merging. If the request is already larger than a single
> > +		 * page, let it rip immediately. For that case we assume that
> > +		 * merging is already done. Ditto for a busy system that
> > +		 * has other work pending, don't risk delaying until the
> > +		 * idle timer unplug to continue working.
> > +		 */
> > +		if (elv_ioq_wait_request(ioq)) {
> > +			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
> > +			    efqd->busy_queues > 1) {
> > +				del_timer(&efqd->idle_slice_timer);
> > +				blk_start_queueing(q);
> > +			}
> > +			elv_mark_ioq_must_dispatch(ioq);
> > +		}
> > +	} else if (elv_should_preempt(q, ioq, rq)) {
> > +		/*
> > +		 * not the active queue - expire current slice if it is
> > +		 * idle and has expired its mean thinktime or this new queue
> > +		 * has some old slice time left and is of higher priority or
> > +		 * this new queue is RT and the current one is BE
> > +		 */
> > +		elv_preempt_queue(q, ioq);
> > +		blk_start_queueing(q);
> > +	}
> > +}
> > +
> > +void elv_idle_slice_timer(unsigned long data)
> > +{
> > +	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
> > +	struct io_queue *ioq;
> > +	unsigned long flags;
> > +	struct request_queue *q = efqd->queue;
> > +
> > +	elv_log(efqd, "idle timer fired");
> > +
> > +	spin_lock_irqsave(q->queue_lock, flags);
> > +
> > +	ioq = efqd->active_queue;
> > +
> > +	if (ioq) {
> > +
> > +		/*
> > +		 * We saw a request before the queue expired, let it through
> > +		 */
> > +		if (elv_ioq_must_dispatch(ioq))
> > +			goto out_kick;
> > +
> > +		/*
> > +		 * expired
> > +		 */
> > +		if (elv_ioq_slice_used(ioq))
> > +			goto expire;
> > +
> > +		/*
> > +		 * only expire and reinvoke request handler, if there are
> > +		 * other queues with pending requests
> > +		 */
> > +		if (!elv_nr_busy_ioq(q->elevator))
> > +			goto out_cont;
> > +
> > +		/*
> > +		 * not expired and it has a request pending, let it dispatch
> > +		 */
> > +		if (ioq->nr_queued)
> > +			goto out_kick;
> > +	}
> > +expire:
> > +	elv_ioq_slice_expired(q);
> > +out_kick:
> > +	elv_schedule_dispatch(q);
> > +out_cont:
> > +	spin_unlock_irqrestore(q->queue_lock, flags);
> > +}
> > +
> > +void elv_ioq_arm_slice_timer(struct request_queue *q)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> > +	unsigned long sl;
> > +
> > +	BUG_ON(!ioq);
> > +
> > +	/*
> > +	 * SSD device without seek penalty, disable idling. But only do so
> > +	 * for devices that support queuing, otherwise we still have a problem
> > +	 * with sync vs async workloads.
> > +	 */
> > +	if (blk_queue_nonrot(q) && efqd->hw_tag)
> > +		return;
> > +
> > +	/*
> > +	 * still requests with the driver, don't idle
> > +	 */
> > +	if (efqd->rq_in_driver)
> > +		return;
> > +
> > +	/*
> > +	 * idle is disabled, either manually or by past process history
> > +	 */
> > +	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
> > +		return;
> > +
> > +	/*
> > +	 * Maybe the iosched has its own idling logic. In that case, the io
> > +	 * scheduler will take care of arming the timer, if need be.
> > +	 */
> > +	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
> > +		q->elevator->ops->elevator_arm_slice_timer_fn(q,
> > +						ioq->sched_queue);
> > +	} else {
> > +		elv_mark_ioq_wait_request(ioq);
> > +		sl = efqd->elv_slice_idle;
> > +		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
> > +		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
> > +	}
> > +}
> > +
> > +/* Common layer function to select the next queue to dispatch from */
> > +void *elv_fq_select_ioq(struct request_queue *q, int force)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
> > +	struct io_group *iog;
> > +
> > +	if (!elv_nr_busy_ioq(q->elevator))
> > +		return NULL;
> > +
> > +	if (ioq == NULL)
> > +		goto new_queue;
> > +
> > +	/*
> > +	 * Force dispatch. Continue to dispatch from current queue as long
> > +	 * as it has requests.
> > +	 */
> > +	if (unlikely(force)) {
> > +		if (ioq->nr_queued)
> > +			goto keep_queue;
> > +		else
> > +			goto expire;
> > +	}
> > +
> > +	/*
> > +	 * The active queue has run out of time, expire it and select new.
> > +	 */
> > +	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
> > +		goto expire;
> > +
> > +	/*
> > +	 * If we have an RT cfqq waiting, then we pre-empt the current non-rt
> > +	 * cfqq.
> > +	 */
> > +	iog = ioq_to_io_group(ioq);
> > +
> > +	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> > +		/*
> > +		 * We simulate this as cfqq timed out so that it gets to bank
> > +		 * the remainder of its time slice.
> > +		 */
> > +		elv_log_ioq(efqd, ioq, "preempt");
> > +		goto expire;
> > +	}
> > +
> > +	/*
> > +	 * The active queue has requests and isn't expired, allow it to
> > +	 * dispatch.
> > +	 */
> > +
> > +	if (ioq->nr_queued)
> > +		goto keep_queue;
> > +
> > +	/*
> > +	 * If another queue has a request waiting within our mean seek
> > +	 * distance, let it run.  The expire code will check for close
> > +	 * cooperators and put the close queue at the front of the service
> > +	 * tree.
> > +	 */
> > +	new_ioq = elv_close_cooperator(q, ioq, 0);
> > +	if (new_ioq)
> > +		goto expire;
> > +
> > +	/*
> > +	 * No requests pending. If the active queue still has requests in
> > +	 * flight or is idling for a new request, allow either of these
> > +	 * conditions to happen (or time out) before selecting a new queue.
> > +	 */
> > +
> > +	if (timer_pending(&efqd->idle_slice_timer) ||
> > +	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
> > +		ioq = NULL;
> > +		goto keep_queue;
> > +	}
> > +
> > +expire:
> > +	elv_ioq_slice_expired(q);
> > +new_queue:
> > +	ioq = elv_set_active_ioq(q, new_ioq);
> > +keep_queue:
> > +	return ioq;
> > +}
> > +
> > +/* A request got removed from io_queue. Do the accounting */
> > +void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
> > +{
> > +	struct io_queue *ioq;
> > +	struct elv_fq_data *efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	ioq = rq->ioq;
> > +	BUG_ON(!ioq);
> > +	ioq->nr_queued--;
> > +
> > +	efqd = ioq->efqd;
> > +	BUG_ON(!efqd);
> > +	efqd->rq_queued--;
> > +
> > +	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
> > +		elv_del_ioq_busy(e, ioq, 1);
> > +}
> > +
> > +/* A request got dispatched. Do the accounting. */
> > +void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
> > +{
> > +	struct io_queue *ioq = rq->ioq;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	BUG_ON(!ioq);
> > +	elv_ioq_request_dispatched(ioq);
> > +	elv_ioq_request_removed(e, rq);
> > +	elv_clear_ioq_must_dispatch(ioq);
> > +}
> > +
> > +void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	efqd->rq_in_driver++;
> > +	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
> > +						efqd->rq_in_driver);
> > +}
> > +
> > +void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	WARN_ON(!efqd->rq_in_driver);
> > +	efqd->rq_in_driver--;
> > +	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
> > +						efqd->rq_in_driver);
> > +}
> > +
> > +/*
> > + * Update hw_tag based on peak queue depth over 50 samples under
> > + * sufficient load.
> > + */
> > +static void elv_update_hw_tag(struct elv_fq_data *efqd)
> > +{
> > +	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
> > +		efqd->rq_in_driver_peak = efqd->rq_in_driver;
> > +
> > +	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
> > +	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
> > +		return;
> > +
> > +	if (efqd->hw_tag_samples++ < 50)
> > +		return;
> > +
> > +	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
> > +		efqd->hw_tag = 1;
> > +	else
> > +		efqd->hw_tag = 0;
> > +
> > +	efqd->hw_tag_samples = 0;
> > +	efqd->rq_in_driver_peak = 0;
> > +}
> > +
> > +/*
> > + * If the ioscheduler has the functionality of keeping track of close
> > + * cooperators, check with it whether it has got a closely co-operating queue.
> > + */
> > +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> > +					struct io_queue *ioq, int probe)
> > +{
> > +	struct elevator_queue *e = q->elevator;
> > +	struct io_queue *new_ioq = NULL;
> > +
> > +	/*
> > +	 * Currently this feature is supported only for flat hierarchy or
> > +	 * root group queues so that default cfq behavior is not changed.
> > +	 */
> > +	if (!is_root_group_ioq(q, ioq))
> > +		return NULL;
> > +
> > +	if (q->elevator->ops->elevator_close_cooperator_fn)
> > +		new_ioq = e->ops->elevator_close_cooperator_fn(q,
> > +						ioq->sched_queue, probe);
> > +
> > +	/* Only select co-operating queue if it belongs to root group */
> > +	if (new_ioq && !is_root_group_ioq(q, new_ioq))
> > +		return NULL;
> > +
> > +	return new_ioq;
> > +}
> > +
> > +/* A request got completed from io_queue. Do the accounting. */
> > +void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> > +{
> > +	const int sync = rq_is_sync(rq);
> > +	struct io_queue *ioq;
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	ioq = rq->ioq;
> > +
> > +	elv_log_ioq(efqd, ioq, "complete");
> > +
> > +	elv_update_hw_tag(efqd);
> > +
> > +	WARN_ON(!efqd->rq_in_driver);
> > +	WARN_ON(!ioq->dispatched);
> > +	efqd->rq_in_driver--;
> > +	ioq->dispatched--;
> > +
> > +	if (sync)
> > +		ioq->last_end_request = jiffies;
> > +
> > +	/*
> > +	 * If this is the active queue, check if it needs to be expired,
> > +	 * or if we want to idle in case it has no pending requests.
> > +	 */
> > +
> > +	if (elv_active_ioq(q->elevator) == ioq) {
> > +		if (elv_ioq_slice_new(ioq)) {
> > +			elv_ioq_set_prio_slice(q, ioq);
> > +			elv_clear_ioq_slice_new(ioq);
> > +		}
> > +		/*
> > +		 * If there are no requests waiting in this queue, and
> > +		 * there are other queues ready to issue requests, AND
> > +		 * those other queues are issuing requests within our
> > +		 * mean seek distance, give them a chance to run instead
> > +		 * of idling.
> > +		 */
> > +		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
> > +			elv_ioq_slice_expired(q);
> > +		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
> > +			 && sync && !rq_noidle(rq))
> > +			elv_ioq_arm_slice_timer(q);
> > +	}
> > +
> > +	if (!efqd->rq_in_driver)
> > +		elv_schedule_dispatch(q);
> > +}
> > +
> > +struct io_group *io_lookup_io_group_current(struct request_queue *q)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	return efqd->root_group;
> > +}
> > +EXPORT_SYMBOL(io_lookup_io_group_current);
> > +
> > +void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> > +					int ioprio)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	switch (ioprio_class) {
> > +	case IOPRIO_CLASS_RT:
> > +		ioq = iog->async_queue[0][ioprio];
> > +		break;
> > +	case IOPRIO_CLASS_BE:
> > +		ioq = iog->async_queue[1][ioprio];
> > +		break;
> > +	case IOPRIO_CLASS_IDLE:
> > +		ioq = iog->async_idle_queue;
> > +		break;
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	if (ioq)
> > +		return ioq->sched_queue;
> > +	return NULL;
> > +}
> > +EXPORT_SYMBOL(io_group_async_queue_prio);
> > +
> > +void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> > +					int ioprio, struct io_queue *ioq)
> > +{
> > +	switch (ioprio_class) {
> > +	case IOPRIO_CLASS_RT:
> > +		iog->async_queue[0][ioprio] = ioq;
> > +		break;
> > +	case IOPRIO_CLASS_BE:
> > +		iog->async_queue[1][ioprio] = ioq;
> > +		break;
> > +	case IOPRIO_CLASS_IDLE:
> > +		iog->async_idle_queue = ioq;
> > +		break;
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	/*
> > +	 * Take the group reference and pin the queue. Group exit will
> > +	 * clean it up
> > +	 */
> > +	elv_get_ioq(ioq);
> > +}
> > +EXPORT_SYMBOL(io_group_set_async_queue);
> > +
> > +/*
> > + * Release all the io group references to its async queues.
> > + */
> > +void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
> > +{
> > +	int i, j;
> > +
> > +	for (i = 0; i < 2; i++)
> > +		for (j = 0; j < IOPRIO_BE_NR; j++)
> > +			elv_release_ioq(e, &iog->async_queue[i][j]);
> > +
> > +	/* Free up async idle queue */
> > +	elv_release_ioq(e, &iog->async_idle_queue);
> > +}
> > +
> > +struct io_group *io_alloc_root_group(struct request_queue *q,
> > +					struct elevator_queue *e, void *key)
> > +{
> > +	struct io_group *iog;
> > +	int i;
> > +
> > +	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
> > +	if (iog == NULL)
> > +		return NULL;
> > +
> > +	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
> > +		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
> > +
> > +	return iog;
> > +}
> > +
> > +void io_free_root_group(struct elevator_queue *e)
> > +{
> > +	struct io_group *iog = e->efqd.root_group;
> > +	struct io_service_tree *st;
> > +	int i;
> > +
> > +	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
> > +		st = iog->sched_data.service_tree + i;
> > +		io_flush_idle_tree(st);
> > +	}
> > +
> > +	io_put_io_group_queues(e, iog);
> > +	kfree(iog);
> > +}
> > +
> > +static void elv_slab_kill(void)
> > +{
> > +	/*
> > +	 * Caller already ensured that pending RCU callbacks are completed,
> > +	 * so we should have no busy allocations at this point.
> > +	 */
> > +	if (elv_ioq_pool)
> > +		kmem_cache_destroy(elv_ioq_pool);
> > +}
> > +
> > +static int __init elv_slab_setup(void)
> > +{
> > +	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
> > +	if (!elv_ioq_pool)
> > +		goto fail;
> > +
> > +	return 0;
> > +fail:
> > +	elv_slab_kill();
> > +	return -ENOMEM;
> > +}
> > +
> > +/* Initialize fair queueing data associated with elevator */
> > +int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
> > +{
> > +	struct io_group *iog;
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return 0;
> > +
> > +	iog = io_alloc_root_group(q, e, efqd);
> > +	if (iog == NULL)
> > +		return 1;
> > +
> > +	efqd->root_group = iog;
> > +	efqd->queue = q;
> > +
> > +	init_timer(&efqd->idle_slice_timer);
> > +	efqd->idle_slice_timer.function = elv_idle_slice_timer;
> > +	efqd->idle_slice_timer.data = (unsigned long) efqd;
> > +
> > +	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
> > +
> > +	efqd->elv_slice[0] = elv_slice_async;
> > +	efqd->elv_slice[1] = elv_slice_sync;
> > +	efqd->elv_slice_idle = elv_slice_idle;
> > +	efqd->hw_tag = 1;
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * elv_exit_fq_data is called before we call elevator_exit_fn. Before
> > + * we ask the elevator to clean up its queues, we do the cleanup here so
> > + * that all the group and idle tree references to ioq are dropped. Later,
> > + * during elevator cleanup, the ioc reference will be dropped, which will lead
> > + * to removal of the ioscheduler queue as well as the associated ioq object.
> > + */
> > +void elv_exit_fq_data(struct elevator_queue *e)
> > +{
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	elv_shutdown_timer_wq(e);
> > +
> > +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> > +	io_free_root_group(e);
> > +}
> > +
> > +/*
> > + * This is called after the io scheduler has cleaned up its data structures.
> > + * I don't think that this function is required. Right now I am just keeping it
> > + * because cfq cleans up the timer and work queue again after freeing up
> > + * io contexts. To me the io scheduler has already been drained out, and all
> > + * the active queues have already been expired, so the timer and work queue
> > + * should not have been activated during the cleanup process.
> > + *
> > + * Keeping it here for the time being. Will get rid of it later.
> > + */
> > +void elv_exit_fq_data_post(struct elevator_queue *e)
> > +{
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	elv_shutdown_timer_wq(e);
> > +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> > +}
> > +
> > +
> > +static int __init elv_fq_init(void)
> > +{
> > +	if (elv_slab_setup())
> > +		return -ENOMEM;
> > +
> > +	/* could be 0 on HZ < 1000 setups */
> > +
> > +	if (!elv_slice_async)
> > +		elv_slice_async = 1;
> > +
> > +	if (!elv_slice_idle)
> > +		elv_slice_idle = 1;
> > +
> > +	return 0;
> > +}
> > +
> > +module_init(elv_fq_init);
> > diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> > new file mode 100644
> > index 0000000..5b6c1cc
> > --- /dev/null
> > +++ b/block/elevator-fq.h
> > @@ -0,0 +1,473 @@
> > +/*
> > + * BFQ: data structures and common functions prototypes.
> > + *
> > + * Based on ideas and code from CFQ:
> > + * Copyright (C) 2003 Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>
> > + *
> > + * Copyright (C) 2008 Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
> > + *		      Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
> > + * Copyright (C) 2009 Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > + * 	              Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> > + */
> > +
> > +#include <linux/blkdev.h>
> > +
> > +#ifndef _BFQ_SCHED_H
> > +#define _BFQ_SCHED_H
> > +
> > +#define IO_IOPRIO_CLASSES	3
> > +
> > +typedef u64 bfq_timestamp_t;
> > +typedef unsigned long bfq_weight_t;
> > +typedef unsigned long bfq_service_t;
> 
> Does this abstraction really provide any benefit? Why not directly use
> the standard C types and make the code easier to read?
> 

I have no strong opinions on that; during debugging it helped a lot
to identify the role of variables in the code, but common practice in
the kernel is to avoid typedefs, so they can go now.


> > +struct io_entity;
> > +struct io_queue;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +#define ELV_ATTR(name) \
> > +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> > +
> > +/**
> > + * struct bfq_service_tree - per ioprio_class service tree.
> 
> Comment is old, does not reflect the newer name
> 
> > + * @active: tree for active entities (i.e., those backlogged).
> > + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> > + * @first_idle: idle entity with minimum F_i.
> > + * @last_idle: idle entity with maximum F_i.
> > + * @vtime: scheduler virtual time.
> > + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> > + *
> > + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> > + * ioprio_class has its own independent scheduler, and so its own
> > + * bfq_service_tree.  All the fields are protected by the queue lock
> > + * of the containing efqd.
> > + */
> > +struct io_service_tree {
> > +	struct rb_root active;
> > +	struct rb_root idle;
> > +
> > +	struct io_entity *first_idle;
> > +	struct io_entity *last_idle;
> > +
> > +	bfq_timestamp_t vtime;
> > +	bfq_weight_t wsum;
> > +};
> > +
> > +/**
> > + * struct bfq_sched_data - multi-class scheduler.
> 
> Again the naming convention is broken, you need to change several
> bfq's to io's :)
> 
> > + * @active_entity: entity under service.
> > + * @next_active: head-of-the-line entity in the scheduler.
> > + * @service_tree: array of service trees, one per ioprio_class.
> > + *
> > + * bfq_sched_data is the basic scheduler queue.  It supports three
> > + * ioprio_classes, and can be used either as a toplevel queue or as
> > + * an intermediate queue on a hierarchical setup.
> > + * @next_active points to the active entity of the sched_data service
> > + * trees that will be scheduled next.
> > + *
> > + * The supported ioprio_classes are the same as in CFQ, in descending
> > + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
> > + * Requests from higher priority queues are served before all the
> > + * requests from lower priority queues; among requests of the same
> > + * queue requests are served according to B-WF2Q+.
> > + * All the fields are protected by the queue lock of the containing bfqd.
> > + */
> > +struct io_sched_data {
> > +	struct io_entity *active_entity;
> > +	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
> > +};
> > +
> > +/**
> > + * struct bfq_entity - schedulable entity.
> > + * @rb_node: service_tree member.
> > + * @on_st: flag, true if the entity is on a tree (either the active or
> > + *         the idle one of its service_tree).
> > + * @finish: B-WF2Q+ finish timestamp (aka F_i).
> > + * @start: B-WF2Q+ start timestamp (aka S_i).
> 
> Could you mention what key is used in the rb_tree? start, finish
> sounds like a range, so my suspicion is that start is used.
> 

finish is used as the key, and min_start keeps the minimum ->start for
the subtree rooted at the given entity (as said in the comment below).


> > + * @tree: tree the entity is enqueued into; %NULL if not on a tree.
> > + * @min_start: minimum start time of the (active) subtree rooted at
> > + *             this entity; used for O(log N) lookups into active trees.
> 
> Used for O(log N) makes no sense to me; an RB tree has a worst-case
> lookup time of O(log N) anyway, so what is the comment saying?
> 

it's badly written (my fault), but it was intended to say that this field is
used to allow the lookups to be done in O(log N).  without augmenting
the RB tree with min_start, lookups could not be done in O(log N),
because we want a constrained minimum search.
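
to make it concrete, the augmentation just keeps, at every node of the
active tree, the minimum start time of its subtree, maintained whenever
the tree changes.  a rough sketch of the invariant (simplified node,
ordinary comparisons, not the patch code):

	struct node {
		u64 start, min_start;
		struct node *left, *right;
	};

	static void update_min_start(struct node *n)
	{
		n->min_start = n->start;
		if (n->left && n->left->min_start < n->min_start)
			n->min_start = n->left->min_start;
		if (n->right && n->right->min_start < n->min_start)
			n->min_start = n->right->min_start;
	}

with that invariant a single comparison against a subtree's min_start
tells whether any eligible entity lives below it, which is what lets
bfq_first_active_entity() do the constrained minimum lookup along one
root-to-leaf path.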


> > + * @service: service received during the last round of service.
> > + * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
> > + * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
> > + * @parent: parent entity, for hierarchical scheduling.
> > + * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
> > + *                 associated scheduler queue, %NULL on leaf nodes.
> > + * @sched_data: the scheduler queue this entity belongs to.
> > + * @ioprio: the ioprio in use.
> > + * @new_ioprio: when an ioprio change is requested, the new ioprio value
> > + * @ioprio_class: the ioprio_class in use.
> > + * @new_ioprio_class: when an ioprio_class change is requested, the new
> > + *                    ioprio_class value.
> > + * @ioprio_changed: flag, true when the user requested an ioprio or
> > + *                  ioprio_class change.
> > + *
> > + * A bfq_entity is used to represent either a bfq_queue (leaf node in the
> > + * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
> > + * entity belongs to the sched_data of the parent group in the cgroup
> > + * hierarchy.  Non-leaf entities have also their own sched_data, stored
> > + * in @my_sched_data.
> > + *
> > + * Each entity stores independently its priority values; this would allow
> > + * different weights on different devices, but this functionality is not
> > + * exported to userspace by now.  Priorities are updated lazily, first
> > + * storing the new values into the new_* fields, then setting the
> > + * @ioprio_changed flag.  As soon as there is a transition in the entity
> > + * state that allows the priority update to take place the effective and
> > + * the requested priority values are synchronized.
> > + *
> > + * The weight value is calculated from the ioprio to export the same
> > + * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
> > + * queues that do not spend too much time to consume their budget and
> > + * have true sequential behavior, and when there are no external factors
> > + * breaking anticipation) the relative weights at each level of the
> > + * cgroups hierarchy should be guaranteed.
> > + * All the fields are protected by the queue lock of the containing bfqd.
> > + */
> > +struct io_entity {
> > +	struct rb_node rb_node;
> > +
> > +	int on_st;
> > +
> > +	bfq_timestamp_t finish;
> > +	bfq_timestamp_t start;
> > +
> > +	struct rb_root *tree;
> > +
> > +	bfq_timestamp_t min_start;
> > +
> > +	bfq_service_t service, budget;
> > +	bfq_weight_t weight;
> > +
> > +	struct io_entity *parent;
> > +
> > +	struct io_sched_data *my_sched_data;
> > +	struct io_sched_data *sched_data;
> > +
> > +	unsigned short ioprio, new_ioprio;
> > +	unsigned short ioprio_class, new_ioprio_class;
> > +
> > +	int ioprio_changed;
> > +};
> > +
> > +/*
> > + * A common structure embedded by every io scheduler into its respective
> > + * queue structure.
> > + */
> > +struct io_queue {
> > +	struct io_entity entity;
> 
> So the io_queue has an abstract entity called io_entity that contains
> its QoS parameters? Correct?
> 

yes


> > +	atomic_t ref;
> > +	unsigned int flags;
> > +
> > +	/* Pointer to generic elevator data structure */
> > +	struct elv_fq_data *efqd;
> > +	pid_t pid;
> 
> Why do we store the pid?
> 

originally it was for logging purposes


> > +
> > +	/* Number of requests queued on this io queue */
> > +	unsigned long nr_queued;
> > +
> > +	/* Requests dispatched from this queue */
> > +	int dispatched;
> > +
> > +	/* Keep track of the think time of processes in this queue */
> > +	unsigned long last_end_request;
> > +	unsigned long ttime_total;
> > +	unsigned long ttime_samples;
> > +	unsigned long ttime_mean;
> > +
> > +	unsigned long slice_end;
> > +
> > +	/* Pointer to io scheduler's queue */
> > +	void *sched_queue;
> > +};
> > +
> > +struct io_group {
> > +	struct io_sched_data sched_data;
> > +
> > +	/* async_queue and idle_queue are used only for cfq */
> > +	struct io_queue *async_queue[2][IOPRIO_BE_NR];
> 
> Again the 2 is confusing
> 
> > +	struct io_queue *async_idle_queue;
> > +
> > +	/*
> > +	 * Used to track any pending rt requests so we can pre-empt current
> > +	 * non-RT cfqq in service when this value is non-zero.
> > +	 */
> > +	unsigned int busy_rt_queues;
> > +};
> > +
> > +struct elv_fq_data {
> 
> What does fq stand for?
> 
> > +	struct io_group *root_group;
> > +
> > +	struct request_queue *queue;
> > +	unsigned int busy_queues;
> > +
> > +	/* Number of requests queued */
> > +	int rq_queued;
> > +
> > +	/* Pointer to the ioscheduler queue being served */
> > +	void *active_queue;
> > +
> > +	int rq_in_driver;
> > +	int hw_tag;
> > +	int hw_tag_samples;
> > +	int rq_in_driver_peak;
> 
> Some comments on _in_driver and _in_driver_peak would be nice.
> 
> > +
> > +	/*
> > +	 * The elevator fair queuing layer has the capability to provide idling
> > +	 * for ensuring fairness for processes doing dependent reads.
> > +	 * This might be needed to ensure fairness between two processes doing
> > +	 * synchronous reads in two different cgroups. noop and deadline don't
> > +	 * have any notion of anticipation/idling, so as of now they are the
> > +	 * users of this functionality.
> > +	 */
> > +	unsigned int elv_slice_idle;
> > +	struct timer_list idle_slice_timer;
> > +	struct work_struct unplug_work;
> > +
> > +	unsigned int elv_slice[2];
> 
> Why [2]? It makes the code harder to read
> 
> > +};
> > +
> > +extern int elv_slice_idle;
> > +extern int elv_slice_async;
> > +
> > +/* Logging facilities. */
> > +#define elv_log_ioq(efqd, ioq, fmt, args...) \
> > +	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
> > +				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
> > +
> > +#define elv_log(efqd, fmt, args...) \
> > +	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
> > +
> > +#define ioq_sample_valid(samples)   ((samples) > 80)
> > +
> > +/* Some shared queue flag manipulation functions among elevators */
> > +
> > +enum elv_queue_state_flags {
> > +	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
> > +	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
> > +	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
> > +	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
> > +	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
> > +	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
> > +	ELV_QUEUE_FLAG_NR,
> > +};
> > +
> > +#define ELV_IO_QUEUE_FLAG_FNS(name)					\
> > +static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
> > +{                                                                       \
> > +	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
> > +}                                                                       \
> > +static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
> > +{                                                                       \
> > +	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
> > +}                                                                       \
> > +static inline int elv_ioq_##name(struct io_queue *ioq)         		\
> > +{                                                                       \
> > +	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
> > +}
> > +
> > +ELV_IO_QUEUE_FLAG_FNS(busy)
> > +ELV_IO_QUEUE_FLAG_FNS(sync)
> > +ELV_IO_QUEUE_FLAG_FNS(wait_request)
> > +ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
> > +ELV_IO_QUEUE_FLAG_FNS(idle_window)
> > +ELV_IO_QUEUE_FLAG_FNS(slice_new)
> > +
> > +static inline struct io_service_tree *
> > +io_entity_service_tree(struct io_entity *entity)
> > +{
> > +	struct io_sched_data *sched_data = entity->sched_data;
> > +	unsigned int idx = entity->ioprio_class - 1;
> > +
> > +	BUG_ON(idx >= IO_IOPRIO_CLASSES);
> > +	BUG_ON(sched_data == NULL);
> > +
> > +	return sched_data->service_tree + idx;
> > +}
> > +
> > +/* A request got dispatched from the io_queue. Do the accounting. */
> > +static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
> > +{
> > +	ioq->dispatched++;
> > +}
> > +
> > +static inline int elv_ioq_slice_used(struct io_queue *ioq)
> > +{
> > +	if (elv_ioq_slice_new(ioq))
> > +		return 0;
> > +	if (time_before(jiffies, ioq->slice_end))
> > +		return 0;
> > +
> > +	return 1;
> > +}
> > +
> > +/* How many requests are currently dispatched from the queue */
> > +static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
> > +{
> > +	return ioq->dispatched;
> > +}
> > +
> > +/* How many requests are currently queued in the queue */
> > +static inline int elv_ioq_nr_queued(struct io_queue *ioq)
> > +{
> > +	return ioq->nr_queued;
> > +}
> > +
> > +static inline void elv_get_ioq(struct io_queue *ioq)
> > +{
> > +	atomic_inc(&ioq->ref);
> > +}
> > +
> > +static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
> > +						unsigned long slice_end)
> > +{
> > +	ioq->slice_end = slice_end;
> > +}
> > +
> > +static inline int elv_ioq_class_idle(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
> > +}
> > +
> > +static inline int elv_ioq_class_rt(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
> > +}
> > +
> > +static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.new_ioprio_class;
> > +}
> > +
> > +static inline int elv_ioq_ioprio(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.new_ioprio;
> > +}
> > +
> > +static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
> > +						int ioprio_class)
> > +{
> > +	ioq->entity.new_ioprio_class = ioprio_class;
> > +	ioq->entity.ioprio_changed = 1;
> > +}
> > +
> > +static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
> > +{
> > +	ioq->entity.new_ioprio = ioprio;
> > +	ioq->entity.ioprio_changed = 1;
> > +}
> > +
> > +static inline void *ioq_sched_queue(struct io_queue *ioq)
> > +{
> > +	if (ioq)
> > +		return ioq->sched_queue;
> > +	return NULL;
> > +}
> > +
> > +static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
> > +{
> > +	return container_of(ioq->entity.sched_data, struct io_group,
> > +						sched_data);
> > +}
> > +
> > +extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
> > +extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
> > +						size_t count);
> > +extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
> > +extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
> > +						size_t count);
> > +extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
> > +extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
> > +						size_t count);
> > +
> > +/* Functions used by elevator.c */
> > +extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
> > +extern void elv_exit_fq_data(struct elevator_queue *e);
> > +extern void elv_exit_fq_data_post(struct elevator_queue *e);
> > +
> > +extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
> > +extern void elv_ioq_request_removed(struct elevator_queue *e,
> > +					struct request *rq);
> > +extern void elv_fq_dispatched_request(struct elevator_queue *e,
> > +					struct request *rq);
> > +
> > +extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
> > +extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
> > +
> > +extern void elv_ioq_completed_request(struct request_queue *q,
> > +				struct request *rq);
> > +
> > +extern void *elv_fq_select_ioq(struct request_queue *q, int force);
> > +extern struct io_queue *rq_ioq(struct request *rq);
> > +
> > +/* Functions used by io schedulers */
> > +extern void elv_put_ioq(struct io_queue *ioq);
> > +extern void __elv_ioq_slice_expired(struct request_queue *q,
> > +					struct io_queue *ioq);
> > +extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
> > +extern void elv_schedule_dispatch(struct request_queue *q);
> > +extern int elv_hw_tag(struct elevator_queue *e);
> > +extern void *elv_active_sched_queue(struct elevator_queue *e);
> > +extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
> > +					unsigned long expires);
> > +extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
> > +extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
> > +extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> > +					int ioprio);
> > +extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> > +					int ioprio, struct io_queue *ioq);
> > +extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
> > +extern int elv_nr_busy_ioq(struct elevator_queue *e);
> > +extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
> > +extern void elv_free_ioq(struct io_queue *ioq);
> > +
> > +#else /* CONFIG_ELV_FAIR_QUEUING */
> > +
> > +static inline int elv_init_fq_data(struct request_queue *q,
> > +					struct elevator_queue *e)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void elv_exit_fq_data(struct elevator_queue *e) {}
> > +static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
> > +
> > +static inline void elv_fq_activate_rq(struct request_queue *q,
> > +					struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_fq_deactivate_rq(struct request_queue *q,
> > +					struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_fq_dispatched_request(struct elevator_queue *e,
> > +						struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_ioq_request_removed(struct elevator_queue *e,
> > +						struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_ioq_request_add(struct request_queue *q,
> > +					struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_ioq_completed_request(struct request_queue *q,
> > +						struct request *rq)
> > +{
> > +}
> > +
> > +static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
> > +static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
> > +static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
> > +{
> > +	return NULL;
> > +}
> > +#endif /* CONFIG_ELV_FAIR_QUEUING */
> > +#endif /* _BFQ_SCHED_H */
> > diff --git a/block/elevator.c b/block/elevator.c
> > index 7073a90..c2f07f5 100644
> > --- a/block/elevator.c
> > +++ b/block/elevator.c
> > @@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
> >  	for (i = 0; i < ELV_HASH_ENTRIES; i++)
> >  		INIT_HLIST_HEAD(&eq->hash[i]);
> > 
> > +	if (elv_init_fq_data(q, eq))
> > +		goto err;
> > +
> >  	return eq;
> >  err:
> >  	kfree(eq);
> > @@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
> >  void elevator_exit(struct elevator_queue *e)
> >  {
> >  	mutex_lock(&e->sysfs_lock);
> > +	elv_exit_fq_data(e);
> >  	if (e->ops->elevator_exit_fn)
> >  		e->ops->elevator_exit_fn(e);
> >  	e->ops = NULL;
> > +	elv_exit_fq_data_post(e);
> >  	mutex_unlock(&e->sysfs_lock);
> > 
> >  	kobject_put(&e->kobj);
> > @@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
> >  {
> >  	struct elevator_queue *e = q->elevator;
> > 
> > +	elv_fq_activate_rq(q, rq);
> > +
> >  	if (e->ops->elevator_activate_req_fn)
> >  		e->ops->elevator_activate_req_fn(q, rq);
> >  }
> > @@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
> >  {
> >  	struct elevator_queue *e = q->elevator;
> > 
> > +	elv_fq_deactivate_rq(q, rq);
> > +
> >  	if (e->ops->elevator_deactivate_req_fn)
> >  		e->ops->elevator_deactivate_req_fn(q, rq);
> >  }
> > @@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
> >  	elv_rqhash_del(q, rq);
> > 
> >  	q->nr_sorted--;
> > +	elv_fq_dispatched_request(q->elevator, rq);
> > 
> >  	boundary = q->end_sector;
> >  	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
> > @@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
> >  	elv_rqhash_del(q, rq);
> > 
> >  	q->nr_sorted--;
> > +	elv_fq_dispatched_request(q->elevator, rq);
> > 
> >  	q->end_sector = rq_end_sector(rq);
> >  	q->boundary_rq = rq;
> > @@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
> >  	elv_rqhash_del(q, next);
> > 
> >  	q->nr_sorted--;
> > +	elv_ioq_request_removed(e, next);
> >  	q->last_merge = rq;
> >  }
> > 
> > @@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
> >  				q->last_merge = rq;
> >  		}
> > 
> > -		/*
> > -		 * Some ioscheds (cfq) run q->request_fn directly, so
> > -		 * rq cannot be accessed after calling
> > -		 * elevator_add_req_fn.
> > -		 */
> >  		q->elevator->ops->elevator_add_req_fn(q, rq);
> > +		elv_ioq_request_add(q, rq);
> >  		break;
> > 
> >  	case ELEVATOR_INSERT_REQUEUE:
> > @@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
> > 
> >  int elv_queue_empty(struct request_queue *q)
> >  {
> > -	struct elevator_queue *e = q->elevator;
> > -
> >  	if (!list_empty(&q->queue_head))
> >  		return 0;
> > 
> > -	if (e->ops->elevator_queue_empty_fn)
> > -		return e->ops->elevator_queue_empty_fn(q);
> > +	/* Hopefully nr_sorted works and no need to call queue_empty_fn */
> > +	if (q->nr_sorted)
> > +		return 0;
> > 
> >  	return 1;
> >  }
> > @@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
> >  	 */
> >  	if (blk_account_rq(rq)) {
> >  		q->in_flight--;
> > -		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
> > -			e->ops->elevator_completed_req_fn(q, rq);
> > +		if (blk_sorted_rq(rq)) {
> > +			if (e->ops->elevator_completed_req_fn)
> > +				e->ops->elevator_completed_req_fn(q, rq);
> > +			elv_ioq_completed_request(q, rq);
> > +		}
> >  	}
> > 
> >  	/*
> > @@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
> >  	return NULL;
> >  }
> >  EXPORT_SYMBOL(elv_rb_latter_request);
> > +
> > +/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
> > +void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
> > +{
> > +	return ioq_sched_queue(rq_ioq(rq));
> > +}
> > +EXPORT_SYMBOL(elv_get_sched_queue);
> > +
> > +/* Select an ioscheduler queue to dispatch request from. */
> > +void *elv_select_sched_queue(struct request_queue *q, int force)
> > +{
> > +	return ioq_sched_queue(elv_fq_select_ioq(q, force));
> > +}
> > +EXPORT_SYMBOL(elv_select_sched_queue);
> > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > index b4f71f1..96a94c9 100644
> > --- a/include/linux/blkdev.h
> > +++ b/include/linux/blkdev.h
> > @@ -245,6 +245,11 @@ struct request {
> > 
> >  	/* for bidi */
> >  	struct request *next_rq;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	/* io queue request belongs to */
> > +	struct io_queue *ioq;
> > +#endif
> >  };
> > 
> >  static inline unsigned short req_get_ioprio(struct request *req)
> > diff --git a/include/linux/elevator.h b/include/linux/elevator.h
> > index c59b769..679c149 100644
> > --- a/include/linux/elevator.h
> > +++ b/include/linux/elevator.h
> > @@ -2,6 +2,7 @@
> >  #define _LINUX_ELEVATOR_H
> > 
> >  #include <linux/percpu.h>
> > +#include "../../block/elevator-fq.h"
> > 
> >  #ifdef CONFIG_BLOCK
> > 
> > @@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
> > 
> >  typedef void *(elevator_init_fn) (struct request_queue *);
> >  typedef void (elevator_exit_fn) (struct elevator_queue *);
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
> > +typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
> > +typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
> > +typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
> > +typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
> > +						struct request*);
> > +typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
> > +						struct request*);
> > +typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
> > +						void*, int probe);
> > +#endif
> > 
> >  struct elevator_ops
> >  {
> > @@ -56,6 +69,17 @@ struct elevator_ops
> >  	elevator_init_fn *elevator_init_fn;
> >  	elevator_exit_fn *elevator_exit_fn;
> >  	void (*trim)(struct io_context *);
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
> > +	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
> > +	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
> > +
> > +	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
> > +	elevator_should_preempt_fn *elevator_should_preempt_fn;
> > +	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
> > +	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
> > +#endif
> >  };
> > 
> >  #define ELV_NAME_MAX	(16)
> > @@ -76,6 +100,9 @@ struct elevator_type
> >  	struct elv_fs_entry *elevator_attrs;
> >  	char elevator_name[ELV_NAME_MAX];
> >  	struct module *elevator_owner;
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	int elevator_features;
> > +#endif
> >  };
> > 
> >  /*
> > @@ -89,6 +116,10 @@ struct elevator_queue
> >  	struct elevator_type *elevator_type;
> >  	struct mutex sysfs_lock;
> >  	struct hlist_head *hash;
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	/* fair queuing data */
> > +	struct elv_fq_data efqd;
> > +#endif
> >  };
> > 
> >  /*
> > @@ -209,5 +240,25 @@ enum {
> >  	__val;							\
> >  })
> > 
> > +/* iosched can let elevator know their feature set/capability */
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +/* iosched wants to use fq logic of elevator layer */
> > +#define	ELV_IOSCHED_NEED_FQ	1
> > +
> > +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> > +{
> > +	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
> > +}
> > +
> > +#else /* ELV_IOSCHED_FAIR_QUEUING */
> > +
> > +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> > +{
> > +	return 0;
> > +}
> > +#endif /* ELV_IOSCHED_FAIR_QUEUING */
> > +extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
> > +extern void *elv_select_sched_queue(struct request_queue *q, int force);
> >  #endif /* CONFIG_BLOCK */
> >  #endif
> > -- 
> > 1.6.0.6
> > 
> 
> -- 
> 	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevator layer
  2009-06-22  8:46     ` Balbir Singh
  (?)
@ 2009-06-22 12:43     ` Fabio Checconi
       [not found]       ` <20090622124313.GF28770-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
  2009-06-23  2:43         ` Vivek Goyal
  -1 siblings, 2 replies; 176+ messages in thread
From: Fabio Checconi @ 2009-06-22 12:43 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Vivek Goyal, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, paolo.valente, ryov, fernando,
	s-uchida, taka, guijianfeng, jmoyer, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

> From: Balbir Singh <balbir@linux.vnet.ibm.com>
> Date: Mon, Jun 22, 2009 02:16:12PM +0530
>
> * Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:20]:
> 
> > This is common fair queuing code in elevator layer. This is controlled by
> > config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> > flat fair queuing support where there is only one group, "root group" and all
> > the tasks belong to root group.
> > 
> > This elevator layer changes are backward compatible. That means any ioscheduler
> > using old interfaces will continue to work.
> > 
> > This code is essentially the CFQ code for fair queuing. The primary difference
> > is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
> >
> 
> The patch is quite long and to be honest requires a long time to
> review, which I don't mind. I suspect my frequently diverted mind is
> likely to miss a lot in a big patch like this. Could you consider
> splitting this further if possible. I think you'll notice the number
> of reviews will also increase.
>  

This core scheduler part has not changed too much from the bfq patches,
so I'll try to answer your questions; Vivek, please correct me where
my knowledge is outdated.  I preferred to leave out the questions about
code that was not in the original patches.

...
> > +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> > +					unsigned short prio)
> 
> Why is the return type int and not unsigned int or unsigned long? Can
> the return value ever be negative?
> 
> > +{
> > +	const int base_slice = efqd->elv_slice[sync];
> > +
> > +	WARN_ON(prio >= IOPRIO_BE_NR);
> > +
> > +	return base_slice + (base_slice/ELV_SLICE_SCALE * (4 - prio));
> > +}
> > +
> > +static inline int
> > +elv_prio_to_slice(struct elv_fq_data *efqd, struct io_queue *ioq)
> > +{
> > +	return elv_prio_slice(efqd, elv_ioq_sync(ioq), ioq->entity.ioprio);
> > +}
> > +
> > +/* Mainly the BFQ scheduling code Follows */
> > +
> > +/*
> > + * Shift for timestamp calculations.  This actually limits the maximum
> > + * service allowed in one timestamp delta (small shift values increase it),
> > + * the maximum total weight that can be used for the queues in the system
> > + * (big shift values increase it), and the period of virtual time wraparounds.
> > + */
> > +#define WFQ_SERVICE_SHIFT	22
> > +
> > +/**
> > + * bfq_gt - compare two timestamps.
> > + * @a: first ts.
> > + * @b: second ts.
> > + *
> > + * Return @a > @b, dealing with wrapping correctly.
> > + */
> > +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> > +{
> > +	return (s64)(a - b) > 0;
> > +}
> > +
> 
> a and b are of type u64, but cast to s64 to deal with wrapping?
> Correct?
> 

yes: the unsigned subtraction wraps modulo 2^64, and the cast to s64
recovers the right ordering as long as the two timestamps are less than
2^63 apart (the same trick time_after()/time_before() use for jiffies).
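
to make it concrete, here is a minimal userspace sketch of the same
trick (not from the patch, just an illustration):

  #include <stdio.h>
  #include <stdint.h>

  /* same idea as bfq_gt(): compare timestamps modulo 2^64 */
  static int ts_gt(uint64_t a, uint64_t b)
  {
          return (int64_t)(a - b) > 0;
  }

  int main(void)
  {
          uint64_t b = UINT64_MAX - 10;   /* just before a wraparound */
          uint64_t a = 5;                 /* just after it */

          /* a - b wraps to 16, so the signed result is positive */
          printf("%d\n", ts_gt(a, b));    /* prints 1: a is "after" b */
          printf("%d\n", ts_gt(b, a));    /* prints 0 */
          return 0;
  }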


> > +/**
> > + * bfq_delta - map service into the virtual time domain.
> > + * @service: amount of service.
> > + * @weight: scale factor.
> > + */
> > +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> > +					bfq_weight_t weight)
> > +{
> > +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> > +
> 
> Why is the case required? Does the compiler complain? service is
> already of the correct type.
> 

service is an unsigned long, so it can be 32 bits on 32 bit machines,
while timestamps are always u64, so I think we do need the cast.


> > +	do_div(d, weight);
> 
> On a 64 system both d and weight are 64 bit, but on a 32 bit system
> weight is 32 bits. do_div expects a 64 bit dividend and 32 bit divisor
> - no?
> 

yes.  here the situation is that we don't actually care about the type
of weight, as long as it can hold a 32 bit value; weights should never
get anywhere near the 2^32 boundary, otherwise we're prone to all kinds
of numerical errors.  there is no problem with weight being u32.
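
for reference, the service-to-vtime mapping is just a fixed-point
division; a small userspace sketch (plain 64 bit division standing in
for do_div(), all names made up):

  #include <stdio.h>
  #include <stdint.h>

  #define WFQ_SERVICE_SHIFT       22

  /* vtime delta = (service << 22) / weight, as in bfq_delta() */
  static uint64_t vtime_delta(unsigned long service, unsigned long weight)
  {
          uint64_t d = (uint64_t)service << WFQ_SERVICE_SHIFT;

          return d / weight;      /* do_div(d, weight) in the kernel */
  }

  int main(void)
  {
          /* the same service advances the timestamps of a weight-4
           * entity four times more slowly than those of a weight-1
           * entity, which is where the proportional share comes from */
          printf("%llu\n", (unsigned long long)vtime_delta(100, 1));
          printf("%llu\n", (unsigned long long)vtime_delta(100, 4));
          return 0;
  }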


> > +	return d;
> > +}
> > +
> > +/**
> > + * bfq_calc_finish - assign the finish time to an entity.
> > + * @entity: the entity to act upon.
> > + * @service: the service to be charged to the entity.
> > + */
> > +static inline void bfq_calc_finish(struct io_entity *entity,
> > +				   bfq_service_t service)
> > +{
> > +	BUG_ON(entity->weight == 0);
> > +
> > +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> > +}
> 
> Should we BUG_ON (entity->finish == entity->start)? Or is that
> expected when the entity has no service time left.
> 

bfq_calc_finish() is used in two cases:

  1) we need to resync the finish time with the service received by an
    entity

  2) we need to assign a new finish time to an entity when it's enqueued

with preemptions, case 1) can happen with service = 0, where we need to
reset the finish time to the start time (depending on how preemptions are
implemented), so such a BUG_ON() would be a false positive (leading to a
crashed system :) ).


> > +
> > +static inline struct io_queue *io_entity_to_ioq(struct io_entity *entity)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	BUG_ON(entity == NULL);
> > +	if (entity->my_sched_data == NULL)
> > +		ioq = container_of(entity, struct io_queue, entity);
> > +	return ioq;
> > +}
> > +
> > +/**
> > + * bfq_entity_of - get an entity from a node.
> > + * @node: the node field of the entity.
> > + *
> > + * Convert a node pointer to the relative entity.  This is used only
> > + * to simplify the logic of some functions and not as the generic
> > + * conversion mechanism because, e.g., in the tree walking functions,
> > + * the check for a %NULL value would be redundant.
> > + */
> > +static inline struct io_entity *bfq_entity_of(struct rb_node *node)
> > +{
> > +	struct io_entity *entity = NULL;
> > +
> > +	if (node != NULL)
> > +		entity = rb_entry(node, struct io_entity, rb_node);
> > +
> > +	return entity;
> > +}
> > +
> > +/**
> > + * bfq_extract - remove an entity from a tree.
> > + * @root: the tree root.
> > + * @entity: the entity to remove.
> > + */
> > +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> > +{
> 
> Extract is not common terminology, why not use bfq_remove()?
> 
> > +	BUG_ON(entity->tree != root);
> > +
> > +	entity->tree = NULL;
> > +	rb_erase(&entity->rb_node, root);
> 
> Don't you want to make entity->tree = NULL after rb_erase?
> 

this code is assumed to run under the queue spinlock, so the order
doesn't really matter (the tree field is not touched by rb_erase(); it
is a bfq private field).


> > +}
> > +
> > +/**
> > + * bfq_idle_extract - extract an entity from the idle tree.
> > + * @st: the service tree of the owning @entity.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_idle_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *next;
> > +
> > +	BUG_ON(entity->tree != &st->idle);
> > +
> > +	if (entity == st->first_idle) {
> > +		next = rb_next(&entity->rb_node);
> 
> What happens if next is NULL?
> 

if next is NULL, the bfq_entity_of() call below returns NULL, so
st->first_idle is simply cleared


> > +		st->first_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	if (entity == st->last_idle) {
> > +		next = rb_prev(&entity->rb_node);
> 
> What happens if next is NULL?
> 

same as above


> > +		st->last_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	bfq_extract(&st->idle, entity);
> > +}
> > +
> > +/**
> > + * bfq_insert - generic tree insertion.
> > + * @root: tree root.
> > + * @entity: entity to insert.
> > + *
> > + * This is used for the idle and the active tree, since they are both
> > + * ordered by finish time.
> > + */
> > +static void bfq_insert(struct rb_root *root, struct io_entity *entity)
> > +{
> > +	struct io_entity *entry;
> > +	struct rb_node **node = &root->rb_node;
> > +	struct rb_node *parent = NULL;
> > +
> > +	BUG_ON(entity->tree != NULL);
> > +
> > +	while (*node != NULL) {
> > +		parent = *node;
> > +		entry = rb_entry(parent, struct io_entity, rb_node);
> > +
> > +		if (bfq_gt(entry->finish, entity->finish))
> > +			node = &parent->rb_left;
> > +		else
> > +			node = &parent->rb_right;
> > +	}
> > +
> > +	rb_link_node(&entity->rb_node, parent, node);
> > +	rb_insert_color(&entity->rb_node, root);
> > +
> > +	entity->tree = root;
> > +}
> > +
> > +/**
> > + * bfq_update_min - update the min_start field of a entity.
> > + * @entity: the entity to update.
> > + * @node: one of its children.
> > + *
> > + * This function is called when @entity may store an invalid value for
> > + * min_start due to updates to the active tree.  The function  assumes
> > + * that the subtree rooted at @node (which may be its left or its right
> > + * child) has a valid min_start value.
> > + */
> > +static inline void bfq_update_min(struct io_entity *entity,
> > +					struct rb_node *node)
> > +{
> > +	struct io_entity *child;
> > +
> > +	if (node != NULL) {
> > +		child = rb_entry(node, struct io_entity, rb_node);
> > +		if (bfq_gt(entity->min_start, child->min_start))
> > +			entity->min_start = child->min_start;
> > +	}
> > +}
> 
> So.. we check to see if child's min_time is lesser than the root
> entities or node entities and set it to the minimum of the two?
> Can you use min_t here?
> 

no, min_t() would not deal with timestamp wraparound correctly; the
comparison has to go through bfq_gt()


> > +
> > +/**
> > + * bfq_update_active_node - recalculate min_start.
> > + * @node: the node to update.
> > + *
> > + * @node may have changed position or one of its children may have moved,
> > + * this function updates its min_start value.  The left and right subtrees
> > + * are assumed to hold a correct min_start value.
> > + */
> > +static inline void bfq_update_active_node(struct rb_node *node)
> > +{
> > +	struct io_entity *entity = rb_entry(node, struct io_entity, rb_node);
> > +
> > +	entity->min_start = entity->start;
> > +	bfq_update_min(entity, node->rb_right);
> > +	bfq_update_min(entity, node->rb_left);
> > +}
> 
> I don't like this every much, we set the min_time twice, this can be
> easily optimized to look at both left and right child and pick the
> minimum.
> 

it's a minimum over three values (the node's own ->start and the
->min_start of its two children), so you cannot be sure it will be set
exactly twice


> > +
> > +/**
> > + * bfq_update_active_tree - update min_start for the whole active tree.
> > + * @node: the starting node.
> > + *
> > + * @node must be the deepest modified node after an update.  This function
> > + * updates its min_start using the values held by its children, assuming
> > + * that they did not change, and then updates all the nodes that may have
> > + * changed in the path to the root.  The only nodes that may have changed
> > + * are the ones in the path or their siblings.
> > + */
> > +static void bfq_update_active_tree(struct rb_node *node)
> > +{
> > +	struct rb_node *parent;
> > +
> > +up:
> > +	bfq_update_active_node(node);
> > +
> > +	parent = rb_parent(node);
> > +	if (parent == NULL)
> > +		return;
> > +
> > +	if (node == parent->rb_left && parent->rb_right != NULL)
> > +		bfq_update_active_node(parent->rb_right);
> > +	else if (parent->rb_left != NULL)
> > +		bfq_update_active_node(parent->rb_left);
> > +
> > +	node = parent;
> > +	goto up;
> > +}
> > +
> 
> For these functions, take a look at the walk function in the group
> scheduler that does update_shares
> 

are you sure?  AFAICT walk_tg_tree() walks the whole tree, while this
just walks a single path from a node up to the root, so I don't see
what the two have in common.

in the original patches we cited (among others):

  http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf

which contains a description of the algorithm.


> > +/**
> > + * bfq_active_insert - insert an entity in the active tree of its group/device.
> > + * @st: the service tree of the entity.
> > + * @entity: the entity being inserted.
> > + *
> > + * The active tree is ordered by finish time, but an extra key is kept
> > + * per each node, containing the minimum value for the start times of
> > + * its children (and the node itself), so it's possible to search for
> > + * the eligible node with the lowest finish time in logarithmic time.
> > + */
> > +static void bfq_active_insert(struct io_service_tree *st,
> > +					struct io_entity *entity)
> > +{
> > +	struct rb_node *node = &entity->rb_node;
> > +
> > +	bfq_insert(&st->active, entity);
> > +
> > +	if (node->rb_left != NULL)
> > +		node = node->rb_left;
> > +	else if (node->rb_right != NULL)
> > +		node = node->rb_right;
> > +
> > +	bfq_update_active_tree(node);
> > +}
> > +
> > +/**
> > + * bfq_ioprio_to_weight - calc a weight from an ioprio.
> > + * @ioprio: the ioprio value to convert.
> > + */
> > +static bfq_weight_t bfq_ioprio_to_weight(int ioprio)
> > +{
> > +	WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
> > +	return IOPRIO_BE_NR - ioprio;
> > +}
> > +
> > +void bfq_get_entity(struct io_entity *entity)
> > +{
> > +	struct io_queue *ioq = io_entity_to_ioq(entity);
> > +
> > +	if (ioq)
> > +		elv_get_ioq(ioq);
> > +}
> > +
> > +void bfq_init_entity(struct io_entity *entity, struct io_group *iog)
> > +{
> > +	entity->ioprio = entity->new_ioprio;
> > +	entity->ioprio_class = entity->new_ioprio_class;
> > +	entity->sched_data = &iog->sched_data;
> > +}
> > +
> > +/**
> > + * bfq_find_deepest - find the deepest node that an extraction can modify.
> > + * @node: the node being removed.
> > + *
> > + * Do the first step of an extraction in an rb tree, looking for the
> > + * node that will replace @node, and returning the deepest node that
> > + * the following modifications to the tree can touch.  If @node is the
> > + * last node in the tree return %NULL.
> > + */
> > +static struct rb_node *bfq_find_deepest(struct rb_node *node)
> > +{
> > +	struct rb_node *deepest;
> > +
> > +	if (node->rb_right == NULL && node->rb_left == NULL)
> > +		deepest = rb_parent(node);
> 
> Why is the parent the deepest? Shouldn't node be the deepest?
> 

this is related to how the RB tree is updated (see below)


> > +	else if (node->rb_right == NULL)
> > +		deepest = node->rb_left;
> > +	else if (node->rb_left == NULL)
> > +		deepest = node->rb_right;
> > +	else {
> > +		deepest = rb_next(node);
> > +		if (deepest->rb_right != NULL)
> > +			deepest = deepest->rb_right;
> > +		else if (rb_parent(deepest) != node)
> > +			deepest = rb_parent(deepest);
> > +	}
> > +
> > +	return deepest;
> > +}
> 
> The function is not clear, could you please define deepest node
> better?
> 

according to the paper cited above, we need to update the min_start
value on the path from the deepest node modified by the extraction
up to the root.  this function tries to consider all the cases of RB
extraction, looking for the deepest node that (after all the rotations
etc.) will need an update to min_start.  one interesting property
of RB trees is that this can be done in O(log N) because there is a
single path that needs to be updated.
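
if it helps to see the invariant in isolation, here is a toy userspace
sketch (struct and function names are made up, and plain comparisons
stand in for bfq_gt(), so no wraparound handling):

  #include <stdio.h>
  #include <stdint.h>

  /* toy node, mirroring the min_start bookkeeping on the active tree */
  struct node {
          uint64_t start;
          uint64_t min_start;
          struct node *left, *right;
  };

  /* same computation as bfq_update_active_node():
   * min_start(n) = min(n->start, min_start(left), min_start(right)) */
  static void update_min_start(struct node *n)
  {
          n->min_start = n->start;
          if (n->left && n->left->min_start < n->min_start)
                  n->min_start = n->left->min_start;
          if (n->right && n->right->min_start < n->min_start)
                  n->min_start = n->right->min_start;
  }

  int main(void)
  {
          struct node l = { .start = 7, .min_start = 7 };
          struct node r = { .start = 12, .min_start = 12 };
          struct node root = { .start = 10, .left = &l, .right = &r };

          update_min_start(&root);
          printf("root min_start = %llu\n",
                 (unsigned long long)root.min_start);   /* prints 7 */
          return 0;
  }

after an extraction only the nodes on the path from the deepest modified
node up to the root can violate this invariant, which is why one
bottom-up walk over that path is enough.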


> > +
> > +/**
> > + * bfq_active_extract - remove an entity from the active tree.
> > + * @st: the service_tree containing the tree.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_active_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *node;
> > +
> > +	node = bfq_find_deepest(&entity->rb_node);
> > +	bfq_extract(&st->active, entity);
> > +
> > +	if (node != NULL)
> > +		bfq_update_active_tree(node);
> > +}
> > +
> 
> Just to check my understanding, every time an active node is
> added/removed, we update the min_time of the entire tree.
> 

yes, but only O(log N) nodes need to be updated


> > +/**
> > + * bfq_idle_insert - insert an entity into the idle tree.
> > + * @st: the service tree containing the tree.
> > + * @entity: the entity to insert.
> > + */
> > +static void bfq_idle_insert(struct io_service_tree *st,
> > +					struct io_entity *entity)
> > +{
> > +	struct io_entity *first_idle = st->first_idle;
> > +	struct io_entity *last_idle = st->last_idle;
> > +
> > +	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
> > +		st->first_idle = entity;
> > +	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
> > +		st->last_idle = entity;
> > +
> > +	bfq_insert(&st->idle, entity);
> > +}
> > +
> > +/**
> > + * bfq_forget_entity - remove an entity from the wfq trees.
> > + * @st: the service tree.
> > + * @entity: the entity being removed.
> > + *
> > + * Update the device status and forget everything about @entity, putting
> > + * the device reference to it, if it is a queue.  Entities belonging to
> > + * groups are not refcounted.
> > + */
> > +static void bfq_forget_entity(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	BUG_ON(!entity->on_st);
> > +	entity->on_st = 0;
> > +	st->wsum -= entity->weight;
> > +	ioq = io_entity_to_ioq(entity);
> > +	if (!ioq)
> > +		return;
> > +	elv_put_ioq(ioq);
> > +}
> > +
> > +/**
> > + * bfq_put_idle_entity - release the idle tree ref of an entity.
> > + * @st: service tree for the entity.
> > + * @entity: the entity being released.
> > + */
> > +void bfq_put_idle_entity(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	bfq_idle_extract(st, entity);
> > +	bfq_forget_entity(st, entity);
> > +}
> > +
> > +/**
> > + * bfq_forget_idle - update the idle tree if necessary.
> > + * @st: the service tree to act upon.
> > + *
> > + * To preserve the global O(log N) complexity we only remove one entry here;
> > + * as the idle tree will not grow indefinitely this can be done safely.
> > + */
> > +void bfq_forget_idle(struct io_service_tree *st)
> > +{
> > +	struct io_entity *first_idle = st->first_idle;
> > +	struct io_entity *last_idle = st->last_idle;
> > +
> > +	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
> > +	    !bfq_gt(last_idle->finish, st->vtime)) {
> > +		/*
> > +		 * Active tree is empty. Pull back vtime to finish time of
> > +		 * last idle entity on idle tree.
> > +		 * Rational seems to be that it reduces the possibility of
> > +		 * vtime wraparound (bfq_gt(V-F) < 0).
> > +		 */
> > +		st->vtime = last_idle->finish;
> > +	}
> > +
> > +	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
> > +		bfq_put_idle_entity(st, first_idle);
> > +}
> > +
> > +
> > +static struct io_service_tree *
> > +__bfq_entity_update_prio(struct io_service_tree *old_st,
> > +				struct io_entity *entity)
> > +{
> > +	struct io_service_tree *new_st = old_st;
> > +	struct io_queue *ioq = io_entity_to_ioq(entity);
> > +
> > +	if (entity->ioprio_changed) {
> > +		entity->ioprio = entity->new_ioprio;
> > +		entity->ioprio_class = entity->new_ioprio_class;
> > +		entity->ioprio_changed = 0;
> > +
> > +		/*
> > +		 * Also update the scaled budget for ioq. Group will get the
> > +		 * updated budget once ioq is selected to run next.
> > +		 */
> > +		if (ioq) {
> > +			struct elv_fq_data *efqd = ioq->efqd;
> > +			entity->budget = elv_prio_to_slice(efqd, ioq);
> > +		}
> > +
> > +		old_st->wsum -= entity->weight;
> > +		entity->weight = bfq_ioprio_to_weight(entity->ioprio);
> > +
> > +		/*
> > +		 * NOTE: here we may be changing the weight too early,
> > +		 * this will cause unfairness.  The correct approach
> > +		 * would have required additional complexity to defer
> > +		 * weight changes to the proper time instants (i.e.,
> > +		 * when entity->finish <= old_st->vtime).
> > +		 */
> > +		new_st = io_entity_service_tree(entity);
> > +		new_st->wsum += entity->weight;
> > +
> > +		if (new_st != old_st)
> > +			entity->start = new_st->vtime;
> > +	}
> > +
> > +	return new_st;
> > +}
> > +
> > +/**
> > + * __bfq_activate_entity - activate an entity.
> > + * @entity: the entity being activated.
> > + *
> > + * Called whenever an entity is activated, i.e., it is not active and one
> > + * of its children receives a new request, or has to be reactivated due to
> > + * budget exhaustion.  It uses the current budget of the entity (and the
> > + * service received if @entity is active) of the queue to calculate its
> > + * timestamps.
> > + */
> > +static void __bfq_activate_entity(struct io_entity *entity, int add_front)
> > +{
> > +	struct io_sched_data *sd = entity->sched_data;
> > +	struct io_service_tree *st = io_entity_service_tree(entity);
> > +
> > +	if (entity == sd->active_entity) {
> > +		BUG_ON(entity->tree != NULL);
> > +		/*
> > +		 * If we are requeueing the current entity we have
> > +		 * to take care of not charging to it service it has
> > +		 * not received.
> > +		 */
> > +		bfq_calc_finish(entity, entity->service);
> > +		entity->start = entity->finish;
> > +		sd->active_entity = NULL;
> > +	} else if (entity->tree == &st->active) {
> > +		/*
> > +		 * Requeueing an entity due to a change of some
> > +		 * next_active entity below it.  We reuse the old
> > +		 * start time.
> > +		 */
> > +		bfq_active_extract(st, entity);
> > +	} else if (entity->tree == &st->idle) {
> > +		/*
> > +		 * Must be on the idle tree, bfq_idle_extract() will
> > +		 * check for that.
> > +		 */
> > +		bfq_idle_extract(st, entity);
> > +		entity->start = bfq_gt(st->vtime, entity->finish) ?
> > +				       st->vtime : entity->finish;
> > +	} else {
> > +		/*
> > +		 * The finish time of the entity may be invalid, and
> > +		 * it is in the past for sure, otherwise the queue
> > +		 * would have been on the idle tree.
> > +		 */
> > +		entity->start = st->vtime;
> > +		st->wsum += entity->weight;
> > +		bfq_get_entity(entity);
> > +
> > +		BUG_ON(entity->on_st);
> > +		entity->on_st = 1;
> > +	}
> > +
> > +	st = __bfq_entity_update_prio(st, entity);
> > +	/*
> > +	 * This is to emulate cfq like functionality where preemption can
> > +	 * happen with-in same class, like sync queue preempting async queue
> > +	 * May be this is not a very good idea from fairness point of view
> > +	 * as preempting queue gains share. Keeping it for now.
> > +	 */
> > +	if (add_front) {
> > +		struct io_entity *next_entity;
> > +
> > +		/*
> > +		 * Determine the entity which will be dispatched next
> > +		 * Use sd->next_active once hierarchical patch is applied
> > +		 */
> > +		next_entity = bfq_lookup_next_entity(sd, 0);
> > +
> > +		if (next_entity && next_entity != entity) {
> > +			struct io_service_tree *new_st;
> > +			bfq_timestamp_t delta;
> > +
> > +			new_st = io_entity_service_tree(next_entity);
> > +
> > +			/*
> > +			 * At this point, both entities should belong to
> > +			 * same service tree as cross service tree preemption
> > +			 * is automatically taken care by algorithm
> > +			 */
> > +			BUG_ON(new_st != st);
> > +			entity->finish = next_entity->finish - 1;
> > +			delta = bfq_delta(entity->budget, entity->weight);
> > +			entity->start = entity->finish - delta;
> > +			if (bfq_gt(entity->start, st->vtime))
> > +				entity->start = st->vtime;
> > +		}
> > +	} else {
> > +		bfq_calc_finish(entity, entity->budget);
> > +	}
> > +	bfq_active_insert(st, entity);
> > +}
> > +
> > +/**
> > + * bfq_activate_entity - activate an entity.
> > + * @entity: the entity to activate.
> > + */
> > +void bfq_activate_entity(struct io_entity *entity, int add_front)
> > +{
> > +	__bfq_activate_entity(entity, add_front);
> > +}
> > +
> > +/**
> > + * __bfq_deactivate_entity - deactivate an entity from its service tree.
> > + * @entity: the entity to deactivate.
> > + * @requeue: if false, the entity will not be put into the idle tree.
> > + *
> > + * Deactivate an entity, independently from its previous state.  If the
> > + * entity was not on a service tree just return, otherwise if it is on
> > + * any scheduler tree, extract it from that tree, and if necessary
> > + * and if the caller did not specify @requeue, put it on the idle tree.
> > + *
> > + */
> > +int __bfq_deactivate_entity(struct io_entity *entity, int requeue)
> > +{
> > +	struct io_sched_data *sd = entity->sched_data;
> > +	struct io_service_tree *st = io_entity_service_tree(entity);
> > +	int was_active = entity == sd->active_entity;
> > +	int ret = 0;
> > +
> > +	if (!entity->on_st)
> > +		return 0;
> > +
> > +	BUG_ON(was_active && entity->tree != NULL);
> > +
> > +	if (was_active) {
> > +		bfq_calc_finish(entity, entity->service);
> > +		sd->active_entity = NULL;
> > +	} else if (entity->tree == &st->active)
> > +		bfq_active_extract(st, entity);
> > +	else if (entity->tree == &st->idle)
> > +		bfq_idle_extract(st, entity);
> > +	else if (entity->tree != NULL)
> > +		BUG();
> > +
> > +	if (!requeue || !bfq_gt(entity->finish, st->vtime))
> > +		bfq_forget_entity(st, entity);
> > +	else
> > +		bfq_idle_insert(st, entity);
> > +
> > +	BUG_ON(sd->active_entity == entity);
> > +
> > +	return ret;
> > +}
> > +
> > +/**
> > + * bfq_deactivate_entity - deactivate an entity.
> > + * @entity: the entity to deactivate.
> > + * @requeue: true if the entity can be put on the idle tree
> > + */
> > +void bfq_deactivate_entity(struct io_entity *entity, int requeue)
> > +{
> > +	__bfq_deactivate_entity(entity, requeue);
> > +}
> > +
> > +/**
> > + * bfq_update_vtime - update vtime if necessary.
> > + * @st: the service tree to act upon.
> > + *
> > + * If necessary update the service tree vtime to have at least one
> > + * eligible entity, skipping to its start time.  Assumes that the
> > + * active tree of the device is not empty.
> > + *
> > + * NOTE: this hierarchical implementation updates vtimes quite often,
> > + * we may end up with reactivated tasks getting timestamps after a
> > + * vtime skip done because we needed a ->first_active entity on some
> > + * intermediate node.
> > + */
> > +static void bfq_update_vtime(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entry;
> > +	struct rb_node *node = st->active.rb_node;
> > +
> > +	entry = rb_entry(node, struct io_entity, rb_node);
> > +	if (bfq_gt(entry->min_start, st->vtime)) {
> > +		st->vtime = entry->min_start;
> > +		bfq_forget_idle(st);
> > +	}
> > +}
> > +
> > +/**
> > + * bfq_first_active - find the eligible entity with the smallest finish time
> > + * @st: the service tree to select from.
> > + *
> > + * This function searches the first schedulable entity, starting from the
> > + * root of the tree and going on the left every time on this side there is
> > + * a subtree with at least one eligible (start <= vtime) entity.  The path
> > + * on the right is followed only if a) the left subtree contains no eligible
> > + * entities and b) no eligible entity has been found yet.
> > + */
> > +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entry, *first = NULL;
> > +	struct rb_node *node = st->active.rb_node;
> > +
> > +	while (node != NULL) {
> > +		entry = rb_entry(node, struct io_entity, rb_node);
> > +left:
> > +		if (!bfq_gt(entry->start, st->vtime))
> > +			first = entry;
> > +
> > +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> > +
> > +		if (node->rb_left != NULL) {
> > +			entry = rb_entry(node->rb_left,
> > +					 struct io_entity, rb_node);
> > +			if (!bfq_gt(entry->min_start, st->vtime)) {
> > +				node = node->rb_left;
> > +				goto left;
> > +			}
> > +		}
> > +		if (first != NULL)
> > +			break;
> > +		node = node->rb_right;
> 
> Please help me understand this, we sort the tree by finish time, but
> search by vtime, start_time. The worst case could easily be O(N),
> right?
> 

no (again, the full answer is in the paper); the nice property of
min_start is that it partitions the tree into two regions, one with
eligible entities and one without any of them.  once we know that
there is at least one eligible entity (by checking the min_start at
the root) we can find the node i with minimum F_i subject to S_i <= V
by walking down a single path from the root towards the eligible
entity with the smallest finish time.  (we need to go to the right
only if the subtree on the left contains no eligible entities at all.)
since the RB tree is balanced this can be done in O(log N).
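
a small worked example, with made-up numbers: say V = 10, the root has
start = 11 (so it is not eligible itself), its left subtree has
min_start = 12 and its right subtree has min_start = 8.  nothing in the
left subtree can be eligible (every start there is >= 12 > V), so the
whole left half is skipped and the descent continues to the right,
where min_start = 8 <= V guarantees that an eligible entity exists.  at
every level at most one child is entered, hence the O(log N) bound.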


> > +	}
> > +
> > +	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
> > +	return first;
> > +}
> > +
> > +/**
> > + * __bfq_lookup_next_entity - return the first eligible entity in @st.
> > + * @st: the service tree.
> > + *
> > + * Update the virtual time in @st and return the first eligible entity
> > + * it contains.
> > + */
> > +static struct io_entity *__bfq_lookup_next_entity(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entity;
> > +
> > +	if (RB_EMPTY_ROOT(&st->active))
> > +		return NULL;
> > +
> > +	bfq_update_vtime(st);
> > +	entity = bfq_first_active_entity(st);
> > +	BUG_ON(bfq_gt(entity->start, st->vtime));
> > +
> > +	return entity;
> > +}
> > +
> > +/**
> > + * bfq_lookup_next_entity - return the first eligible entity in @sd.
> > + * @sd: the sched_data.
> > + * @extract: if true the returned entity will be also extracted from @sd.
> > + *
> > + * NOTE: since we cache the next_active entity at each level of the
> > + * hierarchy, the complexity of the lookup can be decreased with
> > + * absolutely no effort just returning the cached next_active value;
> > + * we prefer to do full lookups to test the consistency of * the data
> > + * structures.
> > + */
> > +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> > +						 int extract)
> > +{
> > +	struct io_service_tree *st = sd->service_tree;
> > +	struct io_entity *entity;
> > +	int i;
> > +
> > +	/*
> > +	 * We should not call lookup when an entity is active, as doing lookup
> > +	 * can result in an erroneous vtime jump.
> > +	 */
> > +	BUG_ON(sd->active_entity != NULL);
> > +
> > +	for (i = 0; i < IO_IOPRIO_CLASSES; i++, st++) {
> > +		entity = __bfq_lookup_next_entity(st);
> > +		if (entity != NULL) {
> > +			if (extract) {
> > +				bfq_active_extract(st, entity);
> > +				sd->active_entity = entity;
> > +			}
> > +			break;
> > +		}
> > +	}
> > +
> > +	return entity;
> > +}
> > +
> > +void entity_served(struct io_entity *entity, bfq_service_t served)
> > +{
> > +	struct io_service_tree *st;
> > +
> > +	st = io_entity_service_tree(entity);
> > +	entity->service += served;
> > +	BUG_ON(st->wsum == 0);
> > +	st->vtime += bfq_delta(served, st->wsum);
> > +	bfq_forget_idle(st);
> 
> Forget idle checks to see if the st->vtime > first_idle->finish, if so
> it pushes the first_idle to later, right?
> 

yes, updating the weight sum accordingly


> > +}
> > +
> > +/**
> > + * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
> > + * @st: the service tree being flushed.
> > + */
> > +void io_flush_idle_tree(struct io_service_tree *st)
> > +{
> > +	struct io_entity *entity = st->first_idle;
> > +
> > +	for (; entity != NULL; entity = st->first_idle)
> > +		__bfq_deactivate_entity(entity, 0);
> > +}
> > +
> > +/* Elevator fair queuing function */
> > +struct io_queue *rq_ioq(struct request *rq)
> > +{
> > +	return rq->ioq;
> > +}
> > +
> > +static inline struct io_queue *elv_active_ioq(struct elevator_queue *e)
> > +{
> > +	return e->efqd.active_queue;
> > +}
> > +
> > +void *elv_active_sched_queue(struct elevator_queue *e)
> > +{
> > +	return ioq_sched_queue(elv_active_ioq(e));
> > +}
> > +EXPORT_SYMBOL(elv_active_sched_queue);
> > +
> > +int elv_nr_busy_ioq(struct elevator_queue *e)
> > +{
> > +	return e->efqd.busy_queues;
> > +}
> > +EXPORT_SYMBOL(elv_nr_busy_ioq);
> > +
> > +int elv_hw_tag(struct elevator_queue *e)
> > +{
> > +	return e->efqd.hw_tag;
> > +}
> > +EXPORT_SYMBOL(elv_hw_tag);
> > +
> > +/* Helper functions for operating on elevator idle slice timer */
> > +int elv_mod_idle_slice_timer(struct elevator_queue *eq, unsigned long expires)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +
> > +	return mod_timer(&efqd->idle_slice_timer, expires);
> > +}
> > +EXPORT_SYMBOL(elv_mod_idle_slice_timer);
> > +
> > +int elv_del_idle_slice_timer(struct elevator_queue *eq)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +
> > +	return del_timer(&efqd->idle_slice_timer);
> > +}
> > +EXPORT_SYMBOL(elv_del_idle_slice_timer);
> > +
> > +unsigned int elv_get_slice_idle(struct elevator_queue *eq)
> > +{
> > +	return eq->efqd.elv_slice_idle;
> > +}
> > +EXPORT_SYMBOL(elv_get_slice_idle);
> > +
> > +void elv_ioq_served(struct io_queue *ioq, bfq_service_t served)
> > +{
> > +	entity_served(&ioq->entity, served);
> > +}
> > +
> > +/* Tells whether ioq is queued in root group or not */
> > +static inline int is_root_group_ioq(struct request_queue *q,
> > +					struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
> > +}
> > +
> > +/*
> > + * sysfs parts below -->
> > + */
> > +static ssize_t
> > +elv_var_show(unsigned int var, char *page)
> > +{
> > +	return sprintf(page, "%d\n", var);
> > +}
> > +
> > +static ssize_t
> > +elv_var_store(unsigned int *var, const char *page, size_t count)
> > +{
> > +	char *p = (char *) page;
> > +
> > +	*var = simple_strtoul(p, &p, 10);
> > +	return count;
> > +}
> > +
> > +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
> > +ssize_t __FUNC(struct elevator_queue *e, char *page)		\
> > +{									\
> > +	struct elv_fq_data *efqd = &e->efqd;				\
> > +	unsigned int __data = __VAR;					\
> > +	if (__CONV)							\
> > +		__data = jiffies_to_msecs(__data);			\
> > +	return elv_var_show(__data, (page));				\
> > +}
> > +SHOW_FUNCTION(elv_slice_idle_show, efqd->elv_slice_idle, 1);
> > +EXPORT_SYMBOL(elv_slice_idle_show);
> > +SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
> > +EXPORT_SYMBOL(elv_slice_sync_show);
> > +SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
> > +EXPORT_SYMBOL(elv_slice_async_show);
> > +#undef SHOW_FUNCTION
> > +
> > +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
> > +ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
> > +{									\
> > +	struct elv_fq_data *efqd = &e->efqd;				\
> > +	unsigned int __data;						\
> > +	int ret = elv_var_store(&__data, (page), count);		\
> > +	if (__data < (MIN))						\
> > +		__data = (MIN);						\
> > +	else if (__data > (MAX))					\
> > +		__data = (MAX);						\
> > +	if (__CONV)							\
> > +		*(__PTR) = msecs_to_jiffies(__data);			\
> > +	else								\
> > +		*(__PTR) = __data;					\
> > +	return ret;							\
> > +}
> > +STORE_FUNCTION(elv_slice_idle_store, &efqd->elv_slice_idle, 0, UINT_MAX, 1);
> > +EXPORT_SYMBOL(elv_slice_idle_store);
> > +STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
> > +EXPORT_SYMBOL(elv_slice_sync_store);
> > +STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
> > +EXPORT_SYMBOL(elv_slice_async_store);
> > +#undef STORE_FUNCTION
> > +
> > +void elv_schedule_dispatch(struct request_queue *q)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (elv_nr_busy_ioq(q->elevator)) {
> > +		elv_log(efqd, "schedule dispatch");
> > +		kblockd_schedule_work(efqd->queue, &efqd->unplug_work);
> > +	}
> > +}
> > +EXPORT_SYMBOL(elv_schedule_dispatch);
> > +
> > +void elv_kick_queue(struct work_struct *work)
> > +{
> > +	struct elv_fq_data *efqd =
> > +		container_of(work, struct elv_fq_data, unplug_work);
> > +	struct request_queue *q = efqd->queue;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(q->queue_lock, flags);
> > +	blk_start_queueing(q);
> > +	spin_unlock_irqrestore(q->queue_lock, flags);
> > +}
> > +
> > +void elv_shutdown_timer_wq(struct elevator_queue *e)
> > +{
> > +	del_timer_sync(&e->efqd.idle_slice_timer);
> > +	cancel_work_sync(&e->efqd.unplug_work);
> > +}
> > +EXPORT_SYMBOL(elv_shutdown_timer_wq);
> > +
> > +void elv_ioq_set_prio_slice(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	ioq->slice_end = jiffies + ioq->entity.budget;
> > +	elv_log_ioq(efqd, ioq, "set_slice=%lu", ioq->entity.budget);
> > +}
> > +
> > +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	unsigned long elapsed = jiffies - ioq->last_end_request;
> > +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> > +
> > +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> > +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> > +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> > +}
> 
> Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
> understand the algorithm.
> 

this came from cfq; it's a fixed-point variation of an exponential
moving average, with ttime_samples used to normalize the accumulated
total into a mean.
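
if it helps, a quick userspace sketch of the same fixed-point average
(256 is the fixed-point scale, 7/8 the decay factor, the +128 just
rounds; the numbers are made up):

  #include <stdio.h>

  int main(void)
  {
          unsigned long samples = 0, total = 0, mean, i;
          unsigned long ttime = 4;  /* pretend every request "thinks" 4 jiffies */

          for (i = 0; i < 32; i++) {
                  /* same update as elv_ioq_update_io_thinktime() */
                  samples = (7 * samples + 256) / 8;
                  total = (7 * total + 256 * ttime) / 8;
                  mean = (total + 128) / samples;
                  printf("iter %2lu: samples=%lu mean=%lu\n",
                         i, samples, mean);
          }
          /* samples saturates near the 256 scale, mean settles at ttime */
          return 0;
  }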


> > +
> > +/*
> > + * Disable idle window if the process thinks too long.
> > + * This idle flag can also be updated by io scheduler.
> > + */
> > +static void elv_ioq_update_idle_window(struct elevator_queue *eq,
> > +				struct io_queue *ioq, struct request *rq)
> > +{
> > +	int old_idle, enable_idle;
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +
> > +	/*
> > +	 * Don't idle for async or idle io prio class
> > +	 */
> > +	if (!elv_ioq_sync(ioq) || elv_ioq_class_idle(ioq))
> > +		return;
> > +
> > +	enable_idle = old_idle = elv_ioq_idle_window(ioq);
> > +
> > +	if (!efqd->elv_slice_idle)
> > +		enable_idle = 0;
> > +	else if (ioq_sample_valid(ioq->ttime_samples)) {
> > +		if (ioq->ttime_mean > efqd->elv_slice_idle)
> > +			enable_idle = 0;
> > +		else
> > +			enable_idle = 1;
> > +	}
> > +
> > +	/*
> > +	 * From think time perspective idle should be enabled. Check with
> > +	 * io scheduler if it wants to disable idling based on additional
> > +	 * considrations like seek pattern.
> > +	 */
> > +	if (enable_idle) {
> > +		if (eq->ops->elevator_update_idle_window_fn)
> > +			enable_idle = eq->ops->elevator_update_idle_window_fn(
> > +						eq, ioq->sched_queue, rq);
> > +		if (!enable_idle)
> > +			elv_log_ioq(efqd, ioq, "iosched disabled idle");
> > +	}
> > +
> > +	if (old_idle != enable_idle) {
> > +		elv_log_ioq(efqd, ioq, "idle=%d", enable_idle);
> > +		if (enable_idle)
> > +			elv_mark_ioq_idle_window(ioq);
> > +		else
> > +			elv_clear_ioq_idle_window(ioq);
> > +	}
> > +}
> > +
> > +struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	ioq = kmem_cache_alloc_node(elv_ioq_pool, gfp_mask, q->node);
> > +	return ioq;
> > +}
> > +EXPORT_SYMBOL(elv_alloc_ioq);
> > +
> > +void elv_free_ioq(struct io_queue *ioq)
> > +{
> > +	kmem_cache_free(elv_ioq_pool, ioq);
> > +}
> > +EXPORT_SYMBOL(elv_free_ioq);
> > +
> > +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +			void *sched_queue, int ioprio_class, int ioprio,
> > +			int is_sync)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> > +
> > +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> > +	atomic_set(&ioq->ref, 0);
> > +	ioq->efqd = efqd;
> > +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> > +	elv_ioq_set_ioprio(ioq, ioprio);
> > +	ioq->pid = current->pid;
> 
> Is pid used for cgroup association later? I don't see why we save the
> pid otherwise? If yes, why not store the cgroup of the current->pid?
> 
> > +	ioq->sched_queue = sched_queue;
> > +	if (is_sync && !elv_ioq_class_idle(ioq))
> > +		elv_mark_ioq_idle_window(ioq);
> > +	bfq_init_entity(&ioq->entity, iog);
> > +	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
> > +	if (is_sync)
> > +		ioq->last_end_request = jiffies;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(elv_init_ioq);
> > +
> > +void elv_put_ioq(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	struct elevator_queue *e = container_of(efqd, struct elevator_queue,
> > +						efqd);
> > +
> > +	BUG_ON(atomic_read(&ioq->ref) <= 0);
> > +	if (!atomic_dec_and_test(&ioq->ref))
> > +		return;
> > +	BUG_ON(ioq->nr_queued);
> > +	BUG_ON(ioq->entity.tree != NULL);
> > +	BUG_ON(elv_ioq_busy(ioq));
> > +	BUG_ON(efqd->active_queue == ioq);
> > +
> > +	/* Can be called by outgoing elevator. Don't use q */
> > +	BUG_ON(!e->ops->elevator_free_sched_queue_fn);
> > +
> > +	e->ops->elevator_free_sched_queue_fn(e, ioq->sched_queue);
> > +	elv_log_ioq(efqd, ioq, "put_queue");
> > +	elv_free_ioq(ioq);
> > +}
> > +EXPORT_SYMBOL(elv_put_ioq);
> > +
> > +void elv_release_ioq(struct elevator_queue *e, struct io_queue **ioq_ptr)
> > +{
> > +	struct io_queue *ioq = *ioq_ptr;
> > +
> > +	if (ioq != NULL) {
> > +		/* Drop the reference taken by the io group */
> > +		elv_put_ioq(ioq);
> > +		*ioq_ptr = NULL;
> > +	}
> > +}
> > +
> > +/*
> > + * Normally next io queue to be served is selected from the service tree.
> > + * This function allows one to choose a specific io queue to run next
> > + * out of order. This is primarily to accomodate the close_cooperator
> > + * feature of cfq.
> > + *
> > + * Currently it is done only for root level as to begin with supporting
> > + * close cooperator feature only for root group to make sure default
> > + * cfq behavior in flat hierarchy is not changed.
> > + */
> > +void elv_set_next_ioq(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_entity *entity = &ioq->entity;
> > +	struct io_sched_data *sd = &efqd->root_group->sched_data;
> > +	struct io_service_tree *st = io_entity_service_tree(entity);
> > +
> > +	BUG_ON(efqd->active_queue != NULL || sd->active_entity != NULL);
> > +	BUG_ON(!efqd->busy_queues);
> > +	BUG_ON(sd != entity->sched_data);
> > +	BUG_ON(!st);
> > +
> > +	bfq_update_vtime(st);
> > +	bfq_active_extract(st, entity);
> > +	sd->active_entity = entity;
> > +	entity->service = 0;
> > +	elv_log_ioq(efqd, ioq, "set_next_ioq");
> > +}
> > +
> > +/* Get next queue for service. */
> > +struct io_queue *elv_get_next_ioq(struct request_queue *q, int extract)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_entity *entity = NULL;
> > +	struct io_queue *ioq = NULL;
> > +	struct io_sched_data *sd;
> > +
> > +	/*
> > +	 * We should not call lookup when an entity is active, as doing
> > +	 * lookup can result in an erroneous vtime jump.
> > +	 */
> > +	BUG_ON(efqd->active_queue != NULL);
> > +
> > +	if (!efqd->busy_queues)
> > +		return NULL;
> > +
> > +	sd = &efqd->root_group->sched_data;
> > +	entity = bfq_lookup_next_entity(sd, 1);
> > +
> > +	BUG_ON(!entity);
> > +	if (extract)
> > +		entity->service = 0;
> > +	ioq = io_entity_to_ioq(entity);
> > +
> > +	return ioq;
> > +}
> > +
> > +/*
> > + * coop tells that io scheduler selected a queue for us and we did not
> 
> coop?
> 
> > + * select the next queue based on fairness.
> > + */
> > +static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> > +					int coop)
> > +{
> > +	struct request_queue *q = efqd->queue;
> > +
> > +	if (ioq) {
> > +		elv_log_ioq(efqd, ioq, "set_active, busy=%d",
> > +							efqd->busy_queues);
> > +		ioq->slice_end = 0;
> > +
> > +		elv_clear_ioq_wait_request(ioq);
> > +		elv_clear_ioq_must_dispatch(ioq);
> > +		elv_mark_ioq_slice_new(ioq);
> > +
> > +		del_timer(&efqd->idle_slice_timer);
> > +	}
> > +
> > +	efqd->active_queue = ioq;
> > +
> > +	/* Let iosched know if it wants to take some action */
> > +	if (ioq) {
> > +		if (q->elevator->ops->elevator_active_ioq_set_fn)
> > +			q->elevator->ops->elevator_active_ioq_set_fn(q,
> > +							ioq->sched_queue, coop);
> > +	}
> > +}
> > +
> > +/* Get and set a new active queue for service. */
> > +struct io_queue *elv_set_active_ioq(struct request_queue *q,
> > +						struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	int coop = 0;
> > +
> > +	if (!ioq)
> > +		ioq = elv_get_next_ioq(q, 1);
> > +	else {
> > +		elv_set_next_ioq(q, ioq);
> > +		/*
> > +		 * io scheduler selected the next queue for us. Pass this
> > +		 * info back to the io scheduler. cfq currently uses it
> > +		 * to reset the coop flag on the queue.
> > +		 */
> > +		coop = 1;
> > +	}
> > +	__elv_set_active_ioq(efqd, ioq, coop);
> > +	return ioq;
> > +}
> > +
> > +void elv_reset_active_ioq(struct elv_fq_data *efqd)
> > +{
> > +	struct request_queue *q = efqd->queue;
> > +	struct io_queue *ioq = elv_active_ioq(efqd->queue->elevator);
> > +
> > +	if (q->elevator->ops->elevator_active_ioq_reset_fn)
> > +		q->elevator->ops->elevator_active_ioq_reset_fn(q,
> > +							ioq->sched_queue);
> > +	efqd->active_queue = NULL;
> > +	del_timer(&efqd->idle_slice_timer);
> > +}
> > +
> > +void elv_activate_ioq(struct io_queue *ioq, int add_front)
> > +{
> > +	bfq_activate_entity(&ioq->entity, add_front);
> > +}
> > +
> > +void elv_deactivate_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
> > +					int requeue)
> > +{
> > +	bfq_deactivate_entity(&ioq->entity, requeue);
> > +}
> > +
> > +/* Called when an inactive queue receives a new request. */
> > +void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
> > +{
> > +	BUG_ON(elv_ioq_busy(ioq));
> > +	BUG_ON(ioq == efqd->active_queue);
> > +	elv_log_ioq(efqd, ioq, "add to busy");
> > +	elv_activate_ioq(ioq, 0);
> > +	elv_mark_ioq_busy(ioq);
> > +	efqd->busy_queues++;
> > +	if (elv_ioq_class_rt(ioq)) {
> > +		struct io_group *iog = ioq_to_io_group(ioq);
> > +		iog->busy_rt_queues++;
> > +	}
> > +}
> > +
> > +void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
> > +					int requeue)
> > +{
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	BUG_ON(!elv_ioq_busy(ioq));
> > +	BUG_ON(ioq->nr_queued);
> > +	elv_log_ioq(efqd, ioq, "del from busy");
> > +	elv_clear_ioq_busy(ioq);
> > +	BUG_ON(efqd->busy_queues == 0);
> > +	efqd->busy_queues--;
> > +	if (elv_ioq_class_rt(ioq)) {
> > +		struct io_group *iog = ioq_to_io_group(ioq);
> > +		iog->busy_rt_queues--;
> > +	}
> > +
> > +	elv_deactivate_ioq(efqd, ioq, requeue);
> > +}
> > +
> > +/*
> > + * Do the accounting. Determine how much service (in terms of time slices)
> > + * current queue used and adjust the start, finish time of queue and vtime
> > + * of the tree accordingly.
> > + *
> > + * Determining the service used in terms of time is tricky in certain
> > + * situations. Especially when underlying device supports command queuing
> > + * and requests from multiple queues can be there at same time, then it
> > + * is not clear which queue consumed how much of disk time.
> > + *
> > + * To mitigate this problem, cfq starts the time slice of the queue only
> > + * after the first request from the queue has completed. This does not work
> > + * very well if we expire the queue before waiting for the first (or any
> > + * further) request from the queue to finish. For seeky queues, we will
> > + * expire the queue after dispatching a few requests, without waiting, and
> > + * start dispatching from the next queue.
> > + *
> > + * Not sure how to determine the time consumed by queue in such scenarios.
> > + * Currently as a crude approximation, we are charging 25% of time slice
> > + * for such cases. A better mechanism is needed for accurate accounting.
> > + */
> > +void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_entity *entity = &ioq->entity;
> > +	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
> > +
> > +	assert_spin_locked(q->queue_lock);
> > +	elv_log_ioq(efqd, ioq, "slice expired");
> > +
> > +	if (elv_ioq_wait_request(ioq))
> > +		del_timer(&efqd->idle_slice_timer);
> > +
> > +	elv_clear_ioq_wait_request(ioq);
> > +
> > +	/*
> > +	 * if ioq->slice_end == 0, it means the queue was expired before the
> > +	 * first request from the queue got completed. Of course we are not planning
> > +	 * to idle on the queue otherwise we would not have expired it.
> > +	 *
> > +	 * Charge 25% of the slice in such cases. This is not the best thing
> > +	 * to do, but at the same time it is not very clear what the next best
> > +	 * thing to do is.
> > +	 *
> > +	 * This arises from the fact that we don't have the notion of only
> > +	 * one queue being operational at a time. The io scheduler can dispatch
> > +	 * requests from multiple queues in one dispatch round. Ideally, for
> > +	 * more accurate accounting of the exact disk time used, one
> > +	 * should dispatch requests from only one queue and wait for all
> > +	 * the requests to finish. But this would reduce throughput.
> > +	 */
> > +	if (!ioq->slice_end)
> > +		slice_used = entity->budget/4;
> > +	else {
> > +		if (time_after(ioq->slice_end, jiffies)) {
> > +			slice_unused = ioq->slice_end - jiffies;
> > +			if (slice_unused == entity->budget) {
> > +				/*
> > +				 * queue got expired immediately after
> > +				 * completing first request. Charge 25% of
> > +				 * slice.
> > +				 */
> > +				slice_used = entity->budget/4;
> > +			} else
> > +				slice_used = entity->budget - slice_unused;
> > +		} else {
> > +			slice_overshoot = jiffies - ioq->slice_end;
> > +			slice_used = entity->budget + slice_overshoot;
> > +		}
> > +	}
> > +
> > +	elv_log_ioq(efqd, ioq, "sl_end=%lx, jiffies=%lx", ioq->slice_end,
> > +			jiffies);
> > +	elv_log_ioq(efqd, ioq, "sl_used=%ld, budget=%ld overshoot=%ld",
> > +				slice_used, entity->budget, slice_overshoot);
> > +	elv_ioq_served(ioq, slice_used);
> > +
> > +	BUG_ON(ioq != efqd->active_queue);
> > +	elv_reset_active_ioq(efqd);
> > +
> > +	if (!ioq->nr_queued)
> > +		elv_del_ioq_busy(q->elevator, ioq, 1);
> > +	else
> > +		elv_activate_ioq(ioq, 0);
> > +}
> > +EXPORT_SYMBOL(__elv_ioq_slice_expired);
> > +
> > +/*
> > + *  Expire the ioq.
> > + */
> > +void elv_ioq_slice_expired(struct request_queue *q)
> > +{
> > +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> > +
> > +	if (ioq)
> > +		__elv_ioq_slice_expired(q, ioq);
> > +}
> > +
> > +/*
> > + * Check if new_ioq should preempt the currently active queue. Return 0 for
> > + * no, or if we aren't sure; a 1 will cause a preemption attempt.
> > + */
> > +int elv_should_preempt(struct request_queue *q, struct io_queue *new_ioq,
> > +			struct request *rq)
> > +{
> > +	struct io_queue *ioq;
> > +	struct elevator_queue *eq = q->elevator;
> > +	struct io_entity *entity, *new_entity;
> > +
> > +	ioq = elv_active_ioq(eq);
> > +
> > +	if (!ioq)
> > +		return 0;
> > +
> > +	entity = &ioq->entity;
> > +	new_entity = &new_ioq->entity;
> > +
> > +	/*
> > +	 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.
> > +	 */
> > +
> > +	if (new_entity->ioprio_class == IOPRIO_CLASS_RT
> > +	    && entity->ioprio_class != IOPRIO_CLASS_RT)
> > +		return 1;
> > +	/*
> > +	 * Allow a BE request to pre-empt an ongoing IDLE class timeslice.
> > +	 */
> > +
> > +	if (new_entity->ioprio_class == IOPRIO_CLASS_BE
> > +	    && entity->ioprio_class == IOPRIO_CLASS_IDLE)
> > +		return 1;
> > +
> > +	/*
> > +	 * Check with io scheduler if it has additional criterion based on
> > +	 * which it wants to preempt existing queue.
> > +	 */
> > +	if (eq->ops->elevator_should_preempt_fn)
> > +		return eq->ops->elevator_should_preempt_fn(q,
> > +						ioq_sched_queue(new_ioq), rq);
> > +
> > +	return 0;
> > +}
> > +
> > +static void elv_preempt_queue(struct request_queue *q, struct io_queue *ioq)
> > +{
> > +	elv_log_ioq(&q->elevator->efqd, ioq, "preempt");
> > +	elv_ioq_slice_expired(q);
> > +
> > +	/*
> > +	 * Put the new queue at the front of the current list,
> > +	 * so we know that it will be selected next.
> > +	 */
> > +
> > +	elv_activate_ioq(ioq, 1);
> > +	elv_ioq_set_slice_end(ioq, 0);
> > +	elv_mark_ioq_slice_new(ioq);
> > +}
> > +
> > +void elv_ioq_request_add(struct request_queue *q, struct request *rq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_queue *ioq = rq->ioq;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	BUG_ON(!efqd);
> > +	BUG_ON(!ioq);
> > +	efqd->rq_queued++;
> > +	ioq->nr_queued++;
> > +
> > +	if (!elv_ioq_busy(ioq))
> > +		elv_add_ioq_busy(efqd, ioq);
> > +
> > +	elv_ioq_update_io_thinktime(ioq);
> > +	elv_ioq_update_idle_window(q->elevator, ioq, rq);
> > +
> > +	if (ioq == elv_active_ioq(q->elevator)) {
> > +		/*
> > +		 * Remember that we saw a request from this process, but
> > +		 * don't start queuing just yet. Otherwise we risk seeing lots
> > +		 * of tiny requests, because we disrupt the normal plugging
> > +		 * and merging. If the request is already larger than a single
> > +		 * page, let it rip immediately. For that case we assume that
> > +		 * merging is already done. Ditto for a busy system that
> > +		 * has other work pending, don't risk delaying until the
> > +		 * idle timer unplug to continue working.
> > +		 */
> > +		if (elv_ioq_wait_request(ioq)) {
> > +			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
> > +			    efqd->busy_queues > 1) {
> > +				del_timer(&efqd->idle_slice_timer);
> > +				blk_start_queueing(q);
> > +			}
> > +			elv_mark_ioq_must_dispatch(ioq);
> > +		}
> > +	} else if (elv_should_preempt(q, ioq, rq)) {
> > +		/*
> > +		 * not the active queue - expire current slice if it is
> > +		 * idle and has expired its mean thinktime, or this new queue
> > +		 * has some old slice time left and is of higher priority or
> > +		 * this new queue is RT and the current one is BE
> > +		 */
> > +		elv_preempt_queue(q, ioq);
> > +		blk_start_queueing(q);
> > +	}
> > +}
> > +
> > +void elv_idle_slice_timer(unsigned long data)
> > +{
> > +	struct elv_fq_data *efqd = (struct elv_fq_data *)data;
> > +	struct io_queue *ioq;
> > +	unsigned long flags;
> > +	struct request_queue *q = efqd->queue;
> > +
> > +	elv_log(efqd, "idle timer fired");
> > +
> > +	spin_lock_irqsave(q->queue_lock, flags);
> > +
> > +	ioq = efqd->active_queue;
> > +
> > +	if (ioq) {
> > +
> > +		/*
> > +		 * We saw a request before the queue expired, let it through
> > +		 */
> > +		if (elv_ioq_must_dispatch(ioq))
> > +			goto out_kick;
> > +
> > +		/*
> > +		 * expired
> > +		 */
> > +		if (elv_ioq_slice_used(ioq))
> > +			goto expire;
> > +
> > +		/*
> > +		 * only expire and reinvoke request handler, if there are
> > +		 * other queues with pending requests
> > +		 */
> > +		if (!elv_nr_busy_ioq(q->elevator))
> > +			goto out_cont;
> > +
> > +		/*
> > +		 * not expired and it has a request pending, let it dispatch
> > +		 */
> > +		if (ioq->nr_queued)
> > +			goto out_kick;
> > +	}
> > +expire:
> > +	elv_ioq_slice_expired(q);
> > +out_kick:
> > +	elv_schedule_dispatch(q);
> > +out_cont:
> > +	spin_unlock_irqrestore(q->queue_lock, flags);
> > +}
> > +
> > +void elv_ioq_arm_slice_timer(struct request_queue *q)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_queue *ioq = elv_active_ioq(q->elevator);
> > +	unsigned long sl;
> > +
> > +	BUG_ON(!ioq);
> > +
> > +	/*
> > +	 * SSD device without seek penalty, disable idling. But only do so
> > +	 * for devices that support queuing, otherwise we still have a problem
> > +	 * with sync vs async workloads.
> > +	 */
> > +	if (blk_queue_nonrot(q) && efqd->hw_tag)
> > +		return;
> > +
> > +	/*
> > +	 * still requests with the driver, don't idle
> > +	 */
> > +	if (efqd->rq_in_driver)
> > +		return;
> > +
> > +	/*
> > +	 * idle is disabled, either manually or by past process history
> > +	 */
> > +	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
> > +		return;
> > +
> > +	/*
> > +	 * The iosched may have its own idling logic. In that case the io
> > +	 * scheduler will take care of arming the timer, if need be.
> > +	 */
> > +	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
> > +		q->elevator->ops->elevator_arm_slice_timer_fn(q,
> > +						ioq->sched_queue);
> > +	} else {
> > +		elv_mark_ioq_wait_request(ioq);
> > +		sl = efqd->elv_slice_idle;
> > +		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
> > +		elv_log_ioq(efqd, ioq, "arm idle: %lu", sl);
> > +	}
> > +}
> > +
> > +/* Common layer function to select the next queue to dispatch from */
> > +void *elv_fq_select_ioq(struct request_queue *q, int force)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
> > +	struct io_group *iog;
> > +
> > +	if (!elv_nr_busy_ioq(q->elevator))
> > +		return NULL;
> > +
> > +	if (ioq == NULL)
> > +		goto new_queue;
> > +
> > +	/*
> > +	 * Force dispatch. Continue to dispatch from current queue as long
> > +	 * as it has requests.
> > +	 */
> > +	if (unlikely(force)) {
> > +		if (ioq->nr_queued)
> > +			goto keep_queue;
> > +		else
> > +			goto expire;
> > +	}
> > +
> > +	/*
> > +	 * The active queue has run out of time, expire it and select new.
> > +	 */
> > +	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
> > +		goto expire;
> > +
> > +	/*
> > +	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
> > +	 * cfqq.
> > +	 */
> > +	iog = ioq_to_io_group(ioq);
> > +
> > +	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> > +		/*
> > +		 * We simulate this as if the queue timed out so that it gets to
> > +		 * bank the remainder of its time slice.
> > +		 */
> > +		elv_log_ioq(efqd, ioq, "preempt");
> > +		goto expire;
> > +	}
> > +
> > +	/*
> > +	 * The active queue has requests and isn't expired, allow it to
> > +	 * dispatch.
> > +	 */
> > +
> > +	if (ioq->nr_queued)
> > +		goto keep_queue;
> > +
> > +	/*
> > +	 * If another queue has a request waiting within our mean seek
> > +	 * distance, let it run.  The expire code will check for close
> > +	 * cooperators and put the close queue at the front of the service
> > +	 * tree.
> > +	 */
> > +	new_ioq = elv_close_cooperator(q, ioq, 0);
> > +	if (new_ioq)
> > +		goto expire;
> > +
> > +	/*
> > +	 * No requests pending. If the active queue still has requests in
> > +	 * flight or is idling for a new request, allow either of these
> > +	 * conditions to happen (or time out) before selecting a new queue.
> > +	 */
> > +
> > +	if (timer_pending(&efqd->idle_slice_timer) ||
> > +	    (elv_ioq_nr_dispatched(ioq) && elv_ioq_idle_window(ioq))) {
> > +		ioq = NULL;
> > +		goto keep_queue;
> > +	}
> > +
> > +expire:
> > +	elv_ioq_slice_expired(q);
> > +new_queue:
> > +	ioq = elv_set_active_ioq(q, new_ioq);
> > +keep_queue:
> > +	return ioq;
> > +}
> > +
> > +/* A request got removed from io_queue. Do the accounting */
> > +void elv_ioq_request_removed(struct elevator_queue *e, struct request *rq)
> > +{
> > +	struct io_queue *ioq;
> > +	struct elv_fq_data *efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	ioq = rq->ioq;
> > +	BUG_ON(!ioq);
> > +	ioq->nr_queued--;
> > +
> > +	efqd = ioq->efqd;
> > +	BUG_ON(!efqd);
> > +	efqd->rq_queued--;
> > +
> > +	if (elv_ioq_busy(ioq) && (elv_active_ioq(e) != ioq) && !ioq->nr_queued)
> > +		elv_del_ioq_busy(e, ioq, 1);
> > +}
> > +
> > +/* A request got dispatched. Do the accounting. */
> > +void elv_fq_dispatched_request(struct elevator_queue *e, struct request *rq)
> > +{
> > +	struct io_queue *ioq = rq->ioq;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	BUG_ON(!ioq);
> > +	elv_ioq_request_dispatched(ioq);
> > +	elv_ioq_request_removed(e, rq);
> > +	elv_clear_ioq_must_dispatch(ioq);
> > +}
> > +
> > +void elv_fq_activate_rq(struct request_queue *q, struct request *rq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	efqd->rq_in_driver++;
> > +	elv_log_ioq(efqd, rq_ioq(rq), "activate rq, drv=%d",
> > +						efqd->rq_in_driver);
> > +}
> > +
> > +void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	WARN_ON(!efqd->rq_in_driver);
> > +	efqd->rq_in_driver--;
> > +	elv_log_ioq(efqd, rq_ioq(rq), "deactivate rq, drv=%d",
> > +						efqd->rq_in_driver);
> > +}
> > +
> > +/*
> > + * Update hw_tag based on peak queue depth over 50 samples under
> > + * sufficient load.
> > + */
> > +static void elv_update_hw_tag(struct elv_fq_data *efqd)
> > +{
> > +	if (efqd->rq_in_driver > efqd->rq_in_driver_peak)
> > +		efqd->rq_in_driver_peak = efqd->rq_in_driver;
> > +
> > +	if (efqd->rq_queued <= ELV_HW_QUEUE_MIN &&
> > +	    efqd->rq_in_driver <= ELV_HW_QUEUE_MIN)
> > +		return;
> > +
> > +	if (efqd->hw_tag_samples++ < 50)
> > +		return;
> > +
> > +	if (efqd->rq_in_driver_peak >= ELV_HW_QUEUE_MIN)
> > +		efqd->hw_tag = 1;
> > +	else
> > +		efqd->hw_tag = 0;
> > +
> > +	efqd->hw_tag_samples = 0;
> > +	efqd->rq_in_driver_peak = 0;
> > +}
> > +
> > +/*
> > + * If the io scheduler has the capability of tracking close cooperators, check
> > + * with it whether it has a closely co-operating queue.
> > + */
> > +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> > +					struct io_queue *ioq, int probe)
> > +{
> > +	struct elevator_queue *e = q->elevator;
> > +	struct io_queue *new_ioq = NULL;
> > +
> > +	/*
> > +	 * Currently this feature is supported only for flat hierarchy or
> > +	 * root group queues so that default cfq behavior is not changed.
> > +	 */
> > +	if (!is_root_group_ioq(q, ioq))
> > +		return NULL;
> > +
> > +	if (q->elevator->ops->elevator_close_cooperator_fn)
> > +		new_ioq = e->ops->elevator_close_cooperator_fn(q,
> > +						ioq->sched_queue, probe);
> > +
> > +	/* Only select co-operating queue if it belongs to root group */
> > +	if (new_ioq && !is_root_group_ioq(q, new_ioq))
> > +		return NULL;
> > +
> > +	return new_ioq;
> > +}
> > +
> > +/* A request got completed from io_queue. Do the accounting. */
> > +void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> > +{
> > +	const int sync = rq_is_sync(rq);
> > +	struct io_queue *ioq;
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(q->elevator))
> > +		return;
> > +
> > +	ioq = rq->ioq;
> > +
> > +	elv_log_ioq(efqd, ioq, "complete");
> > +
> > +	elv_update_hw_tag(efqd);
> > +
> > +	WARN_ON(!efqd->rq_in_driver);
> > +	WARN_ON(!ioq->dispatched);
> > +	efqd->rq_in_driver--;
> > +	ioq->dispatched--;
> > +
> > +	if (sync)
> > +		ioq->last_end_request = jiffies;
> > +
> > +	/*
> > +	 * If this is the active queue, check if it needs to be expired,
> > +	 * or if we want to idle in case it has no pending requests.
> > +	 */
> > +
> > +	if (elv_active_ioq(q->elevator) == ioq) {
> > +		if (elv_ioq_slice_new(ioq)) {
> > +			elv_ioq_set_prio_slice(q, ioq);
> > +			elv_clear_ioq_slice_new(ioq);
> > +		}
> > +		/*
> > +		 * If there are no requests waiting in this queue, and
> > +		 * there are other queues ready to issue requests, AND
> > +		 * those other queues are issuing requests within our
> > +		 * mean seek distance, give them a chance to run instead
> > +		 * of idling.
> > +		 */
> > +		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
> > +			elv_ioq_slice_expired(q);
> > +		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
> > +			 && sync && !rq_noidle(rq))
> > +			elv_ioq_arm_slice_timer(q);
> > +	}
> > +
> > +	if (!efqd->rq_in_driver)
> > +		elv_schedule_dispatch(q);
> > +}
> > +
> > +struct io_group *io_lookup_io_group_current(struct request_queue *q)
> > +{
> > +	struct elv_fq_data *efqd = &q->elevator->efqd;
> > +
> > +	return efqd->root_group;
> > +}
> > +EXPORT_SYMBOL(io_lookup_io_group_current);
> > +
> > +void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> > +					int ioprio)
> > +{
> > +	struct io_queue *ioq = NULL;
> > +
> > +	switch (ioprio_class) {
> > +	case IOPRIO_CLASS_RT:
> > +		ioq = iog->async_queue[0][ioprio];
> > +		break;
> > +	case IOPRIO_CLASS_BE:
> > +		ioq = iog->async_queue[1][ioprio];
> > +		break;
> > +	case IOPRIO_CLASS_IDLE:
> > +		ioq = iog->async_idle_queue;
> > +		break;
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	if (ioq)
> > +		return ioq->sched_queue;
> > +	return NULL;
> > +}
> > +EXPORT_SYMBOL(io_group_async_queue_prio);
> > +
> > +void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> > +					int ioprio, struct io_queue *ioq)
> > +{
> > +	switch (ioprio_class) {
> > +	case IOPRIO_CLASS_RT:
> > +		iog->async_queue[0][ioprio] = ioq;
> > +		break;
> > +	case IOPRIO_CLASS_BE:
> > +		iog->async_queue[1][ioprio] = ioq;
> > +		break;
> > +	case IOPRIO_CLASS_IDLE:
> > +		iog->async_idle_queue = ioq;
> > +		break;
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	/*
> > +	 * Take the group reference and pin the queue. Group exit will
> > +	 * clean it up
> > +	 */
> > +	elv_get_ioq(ioq);
> > +}
> > +EXPORT_SYMBOL(io_group_set_async_queue);
> > +
> > +/*
> > + * Release all the io group references to its async queues.
> > + */
> > +void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
> > +{
> > +	int i, j;
> > +
> > +	for (i = 0; i < 2; i++)
> > +		for (j = 0; j < IOPRIO_BE_NR; j++)
> > +			elv_release_ioq(e, &iog->async_queue[i][j]);
> > +
> > +	/* Free up async idle queue */
> > +	elv_release_ioq(e, &iog->async_idle_queue);
> > +}
> > +
> > +struct io_group *io_alloc_root_group(struct request_queue *q,
> > +					struct elevator_queue *e, void *key)
> > +{
> > +	struct io_group *iog;
> > +	int i;
> > +
> > +	iog = kmalloc_node(sizeof(*iog), GFP_KERNEL | __GFP_ZERO, q->node);
> > +	if (iog == NULL)
> > +		return NULL;
> > +
> > +	for (i = 0; i < IO_IOPRIO_CLASSES; i++)
> > +		iog->sched_data.service_tree[i] = IO_SERVICE_TREE_INIT;
> > +
> > +	return iog;
> > +}
> > +
> > +void io_free_root_group(struct elevator_queue *e)
> > +{
> > +	struct io_group *iog = e->efqd.root_group;
> > +	struct io_service_tree *st;
> > +	int i;
> > +
> > +	for (i = 0; i < IO_IOPRIO_CLASSES; i++) {
> > +		st = iog->sched_data.service_tree + i;
> > +		io_flush_idle_tree(st);
> > +	}
> > +
> > +	io_put_io_group_queues(e, iog);
> > +	kfree(iog);
> > +}
> > +
> > +static void elv_slab_kill(void)
> > +{
> > +	/*
> > +	 * Caller already ensured that pending RCU callbacks are completed,
> > +	 * so we should have no busy allocations at this point.
> > +	 */
> > +	if (elv_ioq_pool)
> > +		kmem_cache_destroy(elv_ioq_pool);
> > +}
> > +
> > +static int __init elv_slab_setup(void)
> > +{
> > +	elv_ioq_pool = KMEM_CACHE(io_queue, 0);
> > +	if (!elv_ioq_pool)
> > +		goto fail;
> > +
> > +	return 0;
> > +fail:
> > +	elv_slab_kill();
> > +	return -ENOMEM;
> > +}
> > +
> > +/* Initialize fair queueing data associated with elevator */
> > +int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
> > +{
> > +	struct io_group *iog;
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return 0;
> > +
> > +	iog = io_alloc_root_group(q, e, efqd);
> > +	if (iog == NULL)
> > +		return 1;
> > +
> > +	efqd->root_group = iog;
> > +	efqd->queue = q;
> > +
> > +	init_timer(&efqd->idle_slice_timer);
> > +	efqd->idle_slice_timer.function = elv_idle_slice_timer;
> > +	efqd->idle_slice_timer.data = (unsigned long) efqd;
> > +
> > +	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
> > +
> > +	efqd->elv_slice[0] = elv_slice_async;
> > +	efqd->elv_slice[1] = elv_slice_sync;
> > +	efqd->elv_slice_idle = elv_slice_idle;
> > +	efqd->hw_tag = 1;
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * elv_exit_fq_data is called before we call elevator_exit_fn. Before
> > + * we ask elevator to cleanup its queues, we do the cleanup here so
> > + * that all the group and idle tree references to ioq are dropped. Later
> > + * during elevator cleanup, ioc reference will be dropped which will lead
> > + * to removal of ioscheduler queue as well as associated ioq object.
> > + */
> > +void elv_exit_fq_data(struct elevator_queue *e)
> > +{
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	elv_shutdown_timer_wq(e);
> > +
> > +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> > +	io_free_root_group(e);
> > +}
> > +
> > +/*
> > + * This is called after the io scheduler has cleaned up its data structures.
> > + * I don't think that this function is required. Right now just keeping it
> > + * because cfq cleans up timer and work queue again after freeing up
> > + * io contexts. To me the io scheduler has already been drained out, and all
> > + * the active queues have already been expired, so the timer and work queue
> > + * should not have been activated during the cleanup process.
> > + *
> > + * Keeping it here for the time being. Will get rid of it later.
> > + */
> > +void elv_exit_fq_data_post(struct elevator_queue *e)
> > +{
> > +	struct elv_fq_data *efqd = &e->efqd;
> > +
> > +	if (!elv_iosched_fair_queuing_enabled(e))
> > +		return;
> > +
> > +	elv_shutdown_timer_wq(e);
> > +	BUG_ON(timer_pending(&efqd->idle_slice_timer));
> > +}
> > +
> > +
> > +static int __init elv_fq_init(void)
> > +{
> > +	if (elv_slab_setup())
> > +		return -ENOMEM;
> > +
> > +	/* could be 0 on HZ < 1000 setups */
> > +
> > +	if (!elv_slice_async)
> > +		elv_slice_async = 1;
> > +
> > +	if (!elv_slice_idle)
> > +		elv_slice_idle = 1;
> > +
> > +	return 0;
> > +}
> > +
> > +module_init(elv_fq_init);
> > diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> > new file mode 100644
> > index 0000000..5b6c1cc
> > --- /dev/null
> > +++ b/block/elevator-fq.h
> > @@ -0,0 +1,473 @@
> > +/*
> > + * BFQ: data structures and common functions prototypes.
> > + *
> > + * Based on ideas and code from CFQ:
> > + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> > + *
> > + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> > + *		      Paolo Valente <paolo.valente@unimore.it>
> > + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> > + * 	              Nauman Rafique <nauman@google.com>
> > + */
> > +
> > +#include <linux/blkdev.h>
> > +
> > +#ifndef _BFQ_SCHED_H
> > +#define _BFQ_SCHED_H
> > +
> > +#define IO_IOPRIO_CLASSES	3
> > +
> > +typedef u64 bfq_timestamp_t;
> > +typedef unsigned long bfq_weight_t;
> > +typedef unsigned long bfq_service_t;
> 
> Does this abstraction really provide any benefit? Why not directly use
> the standard C types and make the code easier to read?
> 

I have no strong opinions on that; during debugging they helped a lot
to identify the role of variables in the code, but common practice in
the kernel is to avoid typedefs, so they can go now.
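
For example, the io_entity fields would then simply use the underlying
types the typedefs already map to (just a sketch of the change, not a
tested patch):

	u64 finish;			/* was bfq_timestamp_t */
	u64 start;			/* was bfq_timestamp_t */
	u64 min_start;			/* was bfq_timestamp_t */
	unsigned long service, budget;	/* were bfq_service_t */
	unsigned long weight;		/* was bfq_weight_t */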


> > +struct io_entity;
> > +struct io_queue;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +#define ELV_ATTR(name) \
> > +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> > +
> > +/**
> > + * struct bfq_service_tree - per ioprio_class service tree.
> 
> Comment is old, does not reflect the newer name
> 
> > + * @active: tree for active entities (i.e., those backlogged).
> > + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> > + * @first_idle: idle entity with minimum F_i.
> > + * @last_idle: idle entity with maximum F_i.
> > + * @vtime: scheduler virtual time.
> > + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> > + *
> > + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> > + * ioprio_class has its own independent scheduler, and so its own
> > + * bfq_service_tree.  All the fields are protected by the queue lock
> > + * of the containing efqd.
> > + */
> > +struct io_service_tree {
> > +	struct rb_root active;
> > +	struct rb_root idle;
> > +
> > +	struct io_entity *first_idle;
> > +	struct io_entity *last_idle;
> > +
> > +	bfq_timestamp_t vtime;
> > +	bfq_weight_t wsum;
> > +};
> > +
> > +/**
> > + * struct bfq_sched_data - multi-class scheduler.
> 
> Again the naming convention is broken, you need to change several
> bfq's to io's :)
> 
> > + * @active_entity: entity under service.
> > + * @next_active: head-of-the-line entity in the scheduler.
> > + * @service_tree: array of service trees, one per ioprio_class.
> > + *
> > + * bfq_sched_data is the basic scheduler queue.  It supports three
> > + * ioprio_classes, and can be used either as a toplevel queue or as
> > + * an intermediate queue on a hierarchical setup.
> > + * @next_active points to the active entity of the sched_data service
> > + * trees that will be scheduled next.
> > + *
> > + * The supported ioprio_classes are the same as in CFQ, in descending
> > + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
> > + * Requests from higher priority queues are served before all the
> > + * requests from lower priority queues; among requests of the same
> > + * queue requests are served according to B-WF2Q+.
> > + * All the fields are protected by the queue lock of the containing bfqd.
> > + */
> > +struct io_sched_data {
> > +	struct io_entity *active_entity;
> > +	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
> > +};
> > +
> > +/**
> > + * struct bfq_entity - schedulable entity.
> > + * @rb_node: service_tree member.
> > + * @on_st: flag, true if the entity is on a tree (either the active or
> > + *         the idle one of its service_tree).
> > + * @finish: B-WF2Q+ finish timestamp (aka F_i).
> > + * @start: B-WF2Q+ start timestamp (aka S_i).
> 
> Could you mention what key is used in the rb_tree? start, finish
> sounds like a range, so my suspicion is that start is used.
> 

finish is used as the key, and min_start keeps the minimum ->start for
the subtree rooted at the given entity (as said in the comment below).


> > + * @tree: tree the entity is enqueued into; %NULL if not on a tree.
> > + * @min_start: minimum start time of the (active) subtree rooted at
> > + *             this entity; used for O(log N) lookups into active trees.
> 
> "Used for O(log N)" makes no sense to me; an rbtree already has a worst-case
> lookup time of O(log N), so what is the comment saying?
> 

It's badly written (my fault), but it was intended to say that this field
is what allows the lookups to be done in O(log N).  Without augmenting
the RB tree with min_start, lookups could not be done in O(log N),
because we want a constrained minimum search.
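
To make that concrete, below is a minimal, purely illustrative sketch (the
struct and function names are hypothetical, not the actual elevator-fq/BFQ
helpers) of how a finish-keyed active tree augmented with a cached min_start
keeps the constrained minimum search (smallest finish among entities whose
start <= vtime) at O(log N):

#include <linux/rbtree.h>
#include <linux/types.h>

/* Hypothetical, simplified entity: only the fields the lookup needs. */
struct sketch_entity {
	struct rb_node rb_node;
	u64 start;	/* B-WF2Q+ start timestamp (S_i) */
	u64 finish;	/* B-WF2Q+ finish timestamp (F_i), the tree key */
	u64 min_start;	/* minimum ->start over the subtree rooted here */
};

/*
 * Assumes min_start has been kept up to date on every insertion and
 * removal (by recomputing it along the path up to the root).  Whole
 * subtrees whose min_start is beyond the vtime can then be skipped.
 */
static struct sketch_entity *sketch_first_eligible(struct rb_root *root,
						   u64 vtime)
{
	struct rb_node *node = root->rb_node;
	struct sketch_entity *curr, *left;

	while (node) {
		curr = rb_entry(node, struct sketch_entity, rb_node);
		left = node->rb_left ? rb_entry(node->rb_left,
				struct sketch_entity, rb_node) : NULL;

		if (left && left->min_start <= vtime) {
			/* The left subtree holds an eligible entity with a
			 * smaller finish time, so the answer is in there. */
			node = node->rb_left;
		} else if (curr->start <= vtime) {
			/* Eligible, and no eligible entity has a smaller
			 * finish time: this is the one to pick. */
			return curr;
		} else {
			/* Neither this node nor its left subtree is
			 * eligible; only larger finish times remain. */
			node = node->rb_right;
		}
	}
	return NULL;
}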


> > + * @service: service received during the last round of service.
> > + * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
> > + * @weight: weight of the queue, calculated as IOPRIO_BE_NR - @ioprio.
> > + * @parent: parent entity, for hierarchical scheduling.
> > + * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
> > + *                 associated scheduler queue, %NULL on leaf nodes.
> > + * @sched_data: the scheduler queue this entity belongs to.
> > + * @ioprio: the ioprio in use.
> > + * @new_ioprio: when an ioprio change is requested, the new ioprio value
> > + * @ioprio_class: the ioprio_class in use.
> > + * @new_ioprio_class: when an ioprio_class change is requested, the new
> > + *                    ioprio_class value.
> > + * @ioprio_changed: flag, true when the user requested an ioprio or
> > + *                  ioprio_class change.
> > + *
> > + * A bfq_entity is used to represent either a bfq_queue (leaf node in the
> > + * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
> > + * entity belongs to the sched_data of the parent group in the cgroup
> > + * hierarchy.  Non-leaf entities have also their own sched_data, stored
> > + * in @my_sched_data.
> > + *
> > + * Each entity stores independently its priority values; this would allow
> > + * different weights on different devices, but this functionality is not
> > + * exported to userspace by now.  Priorities are updated lazily, first
> > + * storing the new values into the new_* fields, then setting the
> > + * @ioprio_changed flag.  As soon as there is a transition in the entity
> > + * state that allows the priority update to take place the effective and
> > + * the requested priority values are synchronized.
> > + *
> > + * The weight value is calculated from the ioprio to export the same
> > + * interface as CFQ.  When dealing with ``well-behaved'' queues (i.e.,
> > + * queues that do not spend too much time to consume their budget and
> > + * have true sequential behavior, and when there are no external factors
> > + * breaking anticipation) the relative weights at each level of the
> > + * cgroups hierarchy should be guaranteed.
> > + * All the fields are protected by the queue lock of the containing bfqd.
> > + */
> > +struct io_entity {
> > +	struct rb_node rb_node;
> > +
> > +	int on_st;
> > +
> > +	bfq_timestamp_t finish;
> > +	bfq_timestamp_t start;
> > +
> > +	struct rb_root *tree;
> > +
> > +	bfq_timestamp_t min_start;
> > +
> > +	bfq_service_t service, budget;
> > +	bfq_weight_t weight;
> > +
> > +	struct io_entity *parent;
> > +
> > +	struct io_sched_data *my_sched_data;
> > +	struct io_sched_data *sched_data;
> > +
> > +	unsigned short ioprio, new_ioprio;
> > +	unsigned short ioprio_class, new_ioprio_class;
> > +
> > +	int ioprio_changed;
> > +};
> > +
> > +/*
> > + * A common structure embedded by every io scheduler into their respective
> > + * queue structure.
> > + */
> > +struct io_queue {
> > +	struct io_entity entity;
> 
> So the io_queue has an abstract entity called io_entity that contains
> its QoS parameters? Correct?
> 

yes


> > +	atomic_t ref;
> > +	unsigned int flags;
> > +
> > +	/* Pointer to generic elevator data structure */
> > +	struct elv_fq_data *efqd;
> > +	pid_t pid;
> 
> Why do we store the pid?
> 

originally it was for logging purposes; the elv_log_ioq() macro below tags
every blktrace message with the queue's pid


> > +
> > +	/* Number of requests queued on this io queue */
> > +	unsigned long nr_queued;
> > +
> > +	/* Requests dispatched from this queue */
> > +	int dispatched;
> > +
> > +	/* Keep a track of think time of processes in this queue */
> > +	unsigned long last_end_request;
> > +	unsigned long ttime_total;
> > +	unsigned long ttime_samples;
> > +	unsigned long ttime_mean;
> > +
> > +	unsigned long slice_end;
> > +
> > +	/* Pointer to io scheduler's queue */
> > +	void *sched_queue;
> > +};
> > +
> > +struct io_group {
> > +	struct io_sched_data sched_data;
> > +
> > +	/* async_queue and idle_queue are used only for cfq */
> > +	struct io_queue *async_queue[2][IOPRIO_BE_NR];
> 
> Again the 2 is confusing
> 
> > +	struct io_queue *async_idle_queue;
> > +
> > +	/*
> > +	 * Used to track any pending rt requests so we can pre-empt current
> > +	 * non-RT cfqq in service when this value is non-zero.
> > +	 */
> > +	unsigned int busy_rt_queues;
> > +};
> > +
> > +struct elv_fq_data {
> 
> What does fq stand for?
> 
> > +	struct io_group *root_group;
> > +
> > +	struct request_queue *queue;
> > +	unsigned int busy_queues;
> > +
> > +	/* Number of requests queued */
> > +	int rq_queued;
> > +
> > +	/* Pointer to the ioscheduler queue being served */
> > +	void *active_queue;
> > +
> > +	int rq_in_driver;
> > +	int hw_tag;
> > +	int hw_tag_samples;
> > +	int rq_in_driver_peak;
> 
> Some comments on _in_driver and _in_driver_peak would be nice.
> 
> > +
> > +	/*
> > +	 * elevator fair queuing layer has the capability to provide idling
> > +	 * for ensuring fairness for processes doing dependent reads.
> > +	 * This might be needed to ensure fairness between two processes doing
> > +	 * synchronous reads in two different cgroups. noop and deadline don't
> > +	 * have any notion of anticipation/idling. As of now, these are the
> > +	 * users of this functionality.
> > +	 */
> > +	unsigned int elv_slice_idle;
> > +	struct timer_list idle_slice_timer;
> > +	struct work_struct unplug_work;
> > +
> > +	unsigned int elv_slice[2];
> 
> Why [2]? It makes the code harder to read.
> 
> > +};
> > +
> > +extern int elv_slice_idle;
> > +extern int elv_slice_async;
> > +
> > +/* Logging facilities. */
> > +#define elv_log_ioq(efqd, ioq, fmt, args...) \
> > +	blk_add_trace_msg((efqd)->queue, "elv%d%c " fmt, (ioq)->pid,	\
> > +				elv_ioq_sync(ioq) ? 'S' : 'A', ##args)
> > +
> > +#define elv_log(efqd, fmt, args...) \
> > +	blk_add_trace_msg((efqd)->queue, "elv " fmt, ##args)
> > +
> > +#define ioq_sample_valid(samples)   ((samples) > 80)
> > +
> > +/* Some shared queue flag manipulation functions among elevators */
> > +
> > +enum elv_queue_state_flags {
> > +	ELV_QUEUE_FLAG_busy = 0,          /* has requests or is under service */
> > +	ELV_QUEUE_FLAG_sync,              /* synchronous queue */
> > +	ELV_QUEUE_FLAG_idle_window,	  /* elevator slice idling enabled */
> > +	ELV_QUEUE_FLAG_wait_request,	  /* waiting for a request */
> > +	ELV_QUEUE_FLAG_must_dispatch,	  /* must be allowed a dispatch */
> > +	ELV_QUEUE_FLAG_slice_new,	  /* no requests dispatched in slice */
> > +	ELV_QUEUE_FLAG_NR,
> > +};
> > +
> > +#define ELV_IO_QUEUE_FLAG_FNS(name)					\
> > +static inline void elv_mark_ioq_##name(struct io_queue *ioq)		\
> > +{                                                                       \
> > +	(ioq)->flags |= (1 << ELV_QUEUE_FLAG_##name);			\
> > +}                                                                       \
> > +static inline void elv_clear_ioq_##name(struct io_queue *ioq)		\
> > +{                                                                       \
> > +	(ioq)->flags &= ~(1 << ELV_QUEUE_FLAG_##name);			\
> > +}                                                                       \
> > +static inline int elv_ioq_##name(struct io_queue *ioq)         		\
> > +{                                                                       \
> > +	return ((ioq)->flags & (1 << ELV_QUEUE_FLAG_##name)) != 0;	\
> > +}
> > +
> > +ELV_IO_QUEUE_FLAG_FNS(busy)
> > +ELV_IO_QUEUE_FLAG_FNS(sync)
> > +ELV_IO_QUEUE_FLAG_FNS(wait_request)
> > +ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
> > +ELV_IO_QUEUE_FLAG_FNS(idle_window)
> > +ELV_IO_QUEUE_FLAG_FNS(slice_new)
> > +
> > +static inline struct io_service_tree *
> > +io_entity_service_tree(struct io_entity *entity)
> > +{
> > +	struct io_sched_data *sched_data = entity->sched_data;
> > +	unsigned int idx = entity->ioprio_class - 1;
> > +
> > +	BUG_ON(idx >= IO_IOPRIO_CLASSES);
> > +	BUG_ON(sched_data == NULL);
> > +
> > +	return sched_data->service_tree + idx;
> > +}
> > +
> > +/* A request got dispatched from the io_queue. Do the accounting. */
> > +static inline void elv_ioq_request_dispatched(struct io_queue *ioq)
> > +{
> > +	ioq->dispatched++;
> > +}
> > +
> > +static inline int elv_ioq_slice_used(struct io_queue *ioq)
> > +{
> > +	if (elv_ioq_slice_new(ioq))
> > +		return 0;
> > +	if (time_before(jiffies, ioq->slice_end))
> > +		return 0;
> > +
> > +	return 1;
> > +}
> > +
> > +/* How many request are currently dispatched from the queue */
> > +static inline int elv_ioq_nr_dispatched(struct io_queue *ioq)
> > +{
> > +	return ioq->dispatched;
> > +}
> > +
> > +/* How many request are currently queued in the queue */
> > +static inline int elv_ioq_nr_queued(struct io_queue *ioq)
> > +{
> > +	return ioq->nr_queued;
> > +}
> > +
> > +static inline void elv_get_ioq(struct io_queue *ioq)
> > +{
> > +	atomic_inc(&ioq->ref);
> > +}
> > +
> > +static inline void elv_ioq_set_slice_end(struct io_queue *ioq,
> > +						unsigned long slice_end)
> > +{
> > +	ioq->slice_end = slice_end;
> > +}
> > +
> > +static inline int elv_ioq_class_idle(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.ioprio_class == IOPRIO_CLASS_IDLE;
> > +}
> > +
> > +static inline int elv_ioq_class_rt(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.ioprio_class == IOPRIO_CLASS_RT;
> > +}
> > +
> > +static inline int elv_ioq_ioprio_class(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.new_ioprio_class;
> > +}
> > +
> > +static inline int elv_ioq_ioprio(struct io_queue *ioq)
> > +{
> > +	return ioq->entity.new_ioprio;
> > +}
> > +
> > +static inline void elv_ioq_set_ioprio_class(struct io_queue *ioq,
> > +						int ioprio_class)
> > +{
> > +	ioq->entity.new_ioprio_class = ioprio_class;
> > +	ioq->entity.ioprio_changed = 1;
> > +}
> > +
> > +static inline void elv_ioq_set_ioprio(struct io_queue *ioq, int ioprio)
> > +{
> > +	ioq->entity.new_ioprio = ioprio;
> > +	ioq->entity.ioprio_changed = 1;
> > +}
> > +
> > +static inline void *ioq_sched_queue(struct io_queue *ioq)
> > +{
> > +	if (ioq)
> > +		return ioq->sched_queue;
> > +	return NULL;
> > +}
> > +
> > +static inline struct io_group *ioq_to_io_group(struct io_queue *ioq)
> > +{
> > +	return container_of(ioq->entity.sched_data, struct io_group,
> > +						sched_data);
> > +}
> > +
> > +extern ssize_t elv_slice_idle_show(struct elevator_queue *q, char *name);
> > +extern ssize_t elv_slice_idle_store(struct elevator_queue *q, const char *name,
> > +						size_t count);
> > +extern ssize_t elv_slice_sync_show(struct elevator_queue *q, char *name);
> > +extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
> > +						size_t count);
> > +extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
> > +extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
> > +						size_t count);
> > +
> > +/* Functions used by elevator.c */
> > +extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
> > +extern void elv_exit_fq_data(struct elevator_queue *e);
> > +extern void elv_exit_fq_data_post(struct elevator_queue *e);
> > +
> > +extern void elv_ioq_request_add(struct request_queue *q, struct request *rq);
> > +extern void elv_ioq_request_removed(struct elevator_queue *e,
> > +					struct request *rq);
> > +extern void elv_fq_dispatched_request(struct elevator_queue *e,
> > +					struct request *rq);
> > +
> > +extern void elv_fq_activate_rq(struct request_queue *q, struct request *rq);
> > +extern void elv_fq_deactivate_rq(struct request_queue *q, struct request *rq);
> > +
> > +extern void elv_ioq_completed_request(struct request_queue *q,
> > +				struct request *rq);
> > +
> > +extern void *elv_fq_select_ioq(struct request_queue *q, int force);
> > +extern struct io_queue *rq_ioq(struct request *rq);
> > +
> > +/* Functions used by io schedulers */
> > +extern void elv_put_ioq(struct io_queue *ioq);
> > +extern void __elv_ioq_slice_expired(struct request_queue *q,
> > +					struct io_queue *ioq);
> > +extern int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +		void *sched_queue, int ioprio_class, int ioprio, int is_sync);
> > +extern void elv_schedule_dispatch(struct request_queue *q);
> > +extern int elv_hw_tag(struct elevator_queue *e);
> > +extern void *elv_active_sched_queue(struct elevator_queue *e);
> > +extern int elv_mod_idle_slice_timer(struct elevator_queue *eq,
> > +					unsigned long expires);
> > +extern int elv_del_idle_slice_timer(struct elevator_queue *eq);
> > +extern unsigned int elv_get_slice_idle(struct elevator_queue *eq);
> > +extern void *io_group_async_queue_prio(struct io_group *iog, int ioprio_class,
> > +					int ioprio);
> > +extern void io_group_set_async_queue(struct io_group *iog, int ioprio_class,
> > +					int ioprio, struct io_queue *ioq);
> > +extern struct io_group *io_lookup_io_group_current(struct request_queue *q);
> > +extern int elv_nr_busy_ioq(struct elevator_queue *e);
> > +extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask);
> > +extern void elv_free_ioq(struct io_queue *ioq);
> > +
> > +#else /* CONFIG_ELV_FAIR_QUEUING */
> > +
> > +static inline int elv_init_fq_data(struct request_queue *q,
> > +					struct elevator_queue *e)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void elv_exit_fq_data(struct elevator_queue *e) {}
> > +static inline void elv_exit_fq_data_post(struct elevator_queue *e) {}
> > +
> > +static inline void elv_fq_activate_rq(struct request_queue *q,
> > +					struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_fq_deactivate_rq(struct request_queue *q,
> > +					struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_fq_dispatched_request(struct elevator_queue *e,
> > +						struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_ioq_request_removed(struct elevator_queue *e,
> > +						struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_ioq_request_add(struct request_queue *q,
> > +					struct request *rq)
> > +{
> > +}
> > +
> > +static inline void elv_ioq_completed_request(struct request_queue *q,
> > +						struct request *rq)
> > +{
> > +}
> > +
> > +static inline void *ioq_sched_queue(struct io_queue *ioq) { return NULL; }
> > +static inline struct io_queue *rq_ioq(struct request *rq) { return NULL; }
> > +static inline void *elv_fq_select_ioq(struct request_queue *q, int force)
> > +{
> > +	return NULL;
> > +}
> > +#endif /* CONFIG_ELV_FAIR_QUEUING */
> > +#endif /* _BFQ_SCHED_H */
> > diff --git a/block/elevator.c b/block/elevator.c
> > index 7073a90..c2f07f5 100644
> > --- a/block/elevator.c
> > +++ b/block/elevator.c
> > @@ -231,6 +231,9 @@ static struct elevator_queue *elevator_alloc(struct request_queue *q,
> >  	for (i = 0; i < ELV_HASH_ENTRIES; i++)
> >  		INIT_HLIST_HEAD(&eq->hash[i]);
> > 
> > +	if (elv_init_fq_data(q, eq))
> > +		goto err;
> > +
> >  	return eq;
> >  err:
> >  	kfree(eq);
> > @@ -301,9 +304,11 @@ EXPORT_SYMBOL(elevator_init);
> >  void elevator_exit(struct elevator_queue *e)
> >  {
> >  	mutex_lock(&e->sysfs_lock);
> > +	elv_exit_fq_data(e);
> >  	if (e->ops->elevator_exit_fn)
> >  		e->ops->elevator_exit_fn(e);
> >  	e->ops = NULL;
> > +	elv_exit_fq_data_post(e);
> >  	mutex_unlock(&e->sysfs_lock);
> > 
> >  	kobject_put(&e->kobj);
> > @@ -314,6 +319,8 @@ static void elv_activate_rq(struct request_queue *q, struct request *rq)
> >  {
> >  	struct elevator_queue *e = q->elevator;
> > 
> > +	elv_fq_activate_rq(q, rq);
> > +
> >  	if (e->ops->elevator_activate_req_fn)
> >  		e->ops->elevator_activate_req_fn(q, rq);
> >  }
> > @@ -322,6 +329,8 @@ static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
> >  {
> >  	struct elevator_queue *e = q->elevator;
> > 
> > +	elv_fq_deactivate_rq(q, rq);
> > +
> >  	if (e->ops->elevator_deactivate_req_fn)
> >  		e->ops->elevator_deactivate_req_fn(q, rq);
> >  }
> > @@ -446,6 +455,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
> >  	elv_rqhash_del(q, rq);
> > 
> >  	q->nr_sorted--;
> > +	elv_fq_dispatched_request(q->elevator, rq);
> > 
> >  	boundary = q->end_sector;
> >  	stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
> > @@ -486,6 +496,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
> >  	elv_rqhash_del(q, rq);
> > 
> >  	q->nr_sorted--;
> > +	elv_fq_dispatched_request(q->elevator, rq);
> > 
> >  	q->end_sector = rq_end_sector(rq);
> >  	q->boundary_rq = rq;
> > @@ -553,6 +564,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
> >  	elv_rqhash_del(q, next);
> > 
> >  	q->nr_sorted--;
> > +	elv_ioq_request_removed(e, next);
> >  	q->last_merge = rq;
> >  }
> > 
> > @@ -657,12 +669,8 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
> >  				q->last_merge = rq;
> >  		}
> > 
> > -		/*
> > -		 * Some ioscheds (cfq) run q->request_fn directly, so
> > -		 * rq cannot be accessed after calling
> > -		 * elevator_add_req_fn.
> > -		 */
> >  		q->elevator->ops->elevator_add_req_fn(q, rq);
> > +		elv_ioq_request_add(q, rq);
> >  		break;
> > 
> >  	case ELEVATOR_INSERT_REQUEUE:
> > @@ -872,13 +880,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
> > 
> >  int elv_queue_empty(struct request_queue *q)
> >  {
> > -	struct elevator_queue *e = q->elevator;
> > -
> >  	if (!list_empty(&q->queue_head))
> >  		return 0;
> > 
> > -	if (e->ops->elevator_queue_empty_fn)
> > -		return e->ops->elevator_queue_empty_fn(q);
> > +	/* Hopefully nr_sorted works and no need to call queue_empty_fn */
> > +	if (q->nr_sorted)
> > +		return 0;
> > 
> >  	return 1;
> >  }
> > @@ -953,8 +960,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
> >  	 */
> >  	if (blk_account_rq(rq)) {
> >  		q->in_flight--;
> > -		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
> > -			e->ops->elevator_completed_req_fn(q, rq);
> > +		if (blk_sorted_rq(rq)) {
> > +			if (e->ops->elevator_completed_req_fn)
> > +				e->ops->elevator_completed_req_fn(q, rq);
> > +			elv_ioq_completed_request(q, rq);
> > +		}
> >  	}
> > 
> >  	/*
> > @@ -1242,3 +1252,17 @@ struct request *elv_rb_latter_request(struct request_queue *q,
> >  	return NULL;
> >  }
> >  EXPORT_SYMBOL(elv_rb_latter_request);
> > +
> > +/* Get the io scheduler queue pointer. For cfq, it is stored in rq->ioq*/
> > +void *elv_get_sched_queue(struct request_queue *q, struct request *rq)
> > +{
> > +	return ioq_sched_queue(rq_ioq(rq));
> > +}
> > +EXPORT_SYMBOL(elv_get_sched_queue);
> > +
> > +/* Select an ioscheduler queue to dispatch request from. */
> > +void *elv_select_sched_queue(struct request_queue *q, int force)
> > +{
> > +	return ioq_sched_queue(elv_fq_select_ioq(q, force));
> > +}
> > +EXPORT_SYMBOL(elv_select_sched_queue);
> > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > index b4f71f1..96a94c9 100644
> > --- a/include/linux/blkdev.h
> > +++ b/include/linux/blkdev.h
> > @@ -245,6 +245,11 @@ struct request {
> > 
> >  	/* for bidi */
> >  	struct request *next_rq;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	/* io queue request belongs to */
> > +	struct io_queue *ioq;
> > +#endif
> >  };
> > 
> >  static inline unsigned short req_get_ioprio(struct request *req)
> > diff --git a/include/linux/elevator.h b/include/linux/elevator.h
> > index c59b769..679c149 100644
> > --- a/include/linux/elevator.h
> > +++ b/include/linux/elevator.h
> > @@ -2,6 +2,7 @@
> >  #define _LINUX_ELEVATOR_H
> > 
> >  #include <linux/percpu.h>
> > +#include "../../block/elevator-fq.h"
> > 
> >  #ifdef CONFIG_BLOCK
> > 
> > @@ -29,6 +30,18 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
> > 
> >  typedef void *(elevator_init_fn) (struct request_queue *);
> >  typedef void (elevator_exit_fn) (struct elevator_queue *);
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +typedef void (elevator_free_sched_queue_fn) (struct elevator_queue*, void *);
> > +typedef void (elevator_active_ioq_set_fn) (struct request_queue*, void *, int);
> > +typedef void (elevator_active_ioq_reset_fn) (struct request_queue *, void*);
> > +typedef void (elevator_arm_slice_timer_fn) (struct request_queue*, void*);
> > +typedef int (elevator_should_preempt_fn) (struct request_queue*, void*,
> > +						struct request*);
> > +typedef int (elevator_update_idle_window_fn) (struct elevator_queue*, void*,
> > +						struct request*);
> > +typedef struct io_queue* (elevator_close_cooperator_fn) (struct request_queue*,
> > +						void*, int probe);
> > +#endif
> > 
> >  struct elevator_ops
> >  {
> > @@ -56,6 +69,17 @@ struct elevator_ops
> >  	elevator_init_fn *elevator_init_fn;
> >  	elevator_exit_fn *elevator_exit_fn;
> >  	void (*trim)(struct io_context *);
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	elevator_free_sched_queue_fn *elevator_free_sched_queue_fn;
> > +	elevator_active_ioq_set_fn *elevator_active_ioq_set_fn;
> > +	elevator_active_ioq_reset_fn *elevator_active_ioq_reset_fn;
> > +
> > +	elevator_arm_slice_timer_fn *elevator_arm_slice_timer_fn;
> > +	elevator_should_preempt_fn *elevator_should_preempt_fn;
> > +	elevator_update_idle_window_fn *elevator_update_idle_window_fn;
> > +	elevator_close_cooperator_fn *elevator_close_cooperator_fn;
> > +#endif
> >  };
> > 
> >  #define ELV_NAME_MAX	(16)
> > @@ -76,6 +100,9 @@ struct elevator_type
> >  	struct elv_fs_entry *elevator_attrs;
> >  	char elevator_name[ELV_NAME_MAX];
> >  	struct module *elevator_owner;
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	int elevator_features;
> > +#endif
> >  };
> > 
> >  /*
> > @@ -89,6 +116,10 @@ struct elevator_queue
> >  	struct elevator_type *elevator_type;
> >  	struct mutex sysfs_lock;
> >  	struct hlist_head *hash;
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +	/* fair queuing data */
> > +	struct elv_fq_data efqd;
> > +#endif
> >  };
> > 
> >  /*
> > @@ -209,5 +240,25 @@ enum {
> >  	__val;							\
> >  })
> > 
> > +/* iosched can let elevator know their feature set/capability */
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +/* iosched wants to use fq logic of elevator layer */
> > +#define	ELV_IOSCHED_NEED_FQ	1
> > +
> > +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> > +{
> > +	return (e->elevator_type->elevator_features) & ELV_IOSCHED_NEED_FQ;
> > +}
> > +
> > +#else /* ELV_IOSCHED_FAIR_QUEUING */
> > +
> > +static inline int elv_iosched_fair_queuing_enabled(struct elevator_queue *e)
> > +{
> > +	return 0;
> > +}
> > +#endif /* ELV_IOSCHED_FAIR_QUEUING */
> > +extern void *elv_get_sched_queue(struct request_queue *q, struct request *rq);
> > +extern void *elv_select_sched_queue(struct request_queue *q, int force);
> >  #endif /* CONFIG_BLOCK */
> >  #endif
> > -- 
> > 1.6.0.6
> > 
> 
> -- 
> 	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found]   ` <20090621152116.GC3728-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
@ 2009-06-22 15:30     ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-22 15:30 UTC (permalink / raw)
  To: Balbir Singh
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:18]:
> 
> > 
> > Hi All,
> > 
> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> [snip]
> 
> > Testing
> > =======
> >
> 
> [snip]
> 
> I've not been reading through the discussions in complete detail, but
> I see no reference to async reads or aio. In the case of aio, aio
> presumes the context of the user space process. Could you elaborate on
> any testing you've done with these cases? 
> 

Hi Balbir,

So far I had not done any testing with AIO. I have done some just now.
Here are the results.

Test1 (AIO reads)
================
Set up two fio AIO read jobs in two cgroups with weights 1000 and 500
respectively. I am using the cfq scheduler. Following are some lines from my
test script.

===================================================================
fio_args="--ioengine=libaio --rw=read --size=512M"

echo 1 > /sys/block/$BLOCKDEV/queue/iosched/fairness

fio $fio_args --name=test1 --directory=/mnt/$BLOCKDEV/fio1/ --output=/mnt/$BLOCKDEV/fio1/test1.log --exec_postrun="../read-and-display-group-stats.sh $maj_dev $minor_dev" &

fio $fio_args --name=test2 --directory=/mnt/$BLOCKDEV/fio2/ --output=/mnt/$BLOCKDEV/fio2/test2.log &
===================================================================

test1 and test2 are two groups with weights 1000 and 500 respectively.
"read-and-display-group-stats.sh" is a small script which reads the
test1 and test2 cgroup files to determine how much disk time each group
got until the first fio job finished.

Following are the results.

test1 statistics: time=8 16 5598   sectors=8 16 1049648 
test2 statistics: time=8 16 2908   sectors=8 16 508560

The above shows that by the time the first fio job (higher weight) finished,
group test1 had received 5598 ms of disk time and group test2 had received
2908 ms of disk time. Similarly, the statistics for the number of sectors
transferred are also shown.

Note that the disk time given to group test1 is almost double that of group
test2 (5598/2908 is roughly 1.9, in line with the 1000:500 weight ratio).


Test2 (AIO Writes (direct))
===========================
Set up two fio AIO direct write jobs in two cgroups with weights 1000 and 500
respectively. I am using the cfq scheduler. Following are some lines from my
test script.

===================================================================
fio_args="--ioengine=libaio --rw=write --size=512M --direct=1"

echo 1 > /sys/block/$BLOCKDEV/queue/iosched/fairness

fio $fio_args --name=test1 --directory=/mnt/$BLOCKDEV/fio1/ --output=/mnt/$BLOCKDEV/fio1/test1.log --exec_postrun="../read-and-display-group-stats.sh $maj_dev $minor_dev" &

fio $fio_args --name=test2 --directory=/mnt/$BLOCKDEV/fio2/ --output=/mnt/$BLOCKDEV/fio2/test2.log &
===================================================================

test1 and test2 are two groups with weights 1000 and 500 respectively.
"read-and-display-group-stats.sh" is a small script which reads the
test1 and test2 cgroup files to determine how much disk time each group
got until the first fio job finished.

Following are the results.

test1 statistics: time=8 16 28029   sectors=8 16 1049656
test2 statistics: time=8 16 14093   sectors=8 16 512600

The above shows that by the time the first fio job (the higher weight one)
finished, group test1 had got 28029 ms of disk time and group test2 had got
14093 ms of disk time. Similarly, the statistics for the number of sectors
transferred are also shown.

Note that the disk time given to group test1 is almost double that of group
test2.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 15/20] io-controller: map async requests to appropriate cgroup
       [not found]     ` <4A3EE245.7030409-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-06-22 15:39       ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-22 15:39 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Mon, Jun 22, 2009 at 09:45:41AM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> > o So far we were assuming that a bio/rq belongs to the task who is submitting
> >   it. It did not hold good in case of async writes. This patch makes use of
> >   blkio_cgroup pataches to attribute the aysnc writes to right group instead
> >   of task submitting the bio.
> > 
> > o For sync requests, we continue to assume that io belongs to the task
> >   submitting it. Only in case of async requests, we make use of io tracking
> >   patches to track the owner cgroup.
> > 
> > o So far cfq always caches the async queue pointer. With async requests now
> >   not necessarily being tied to submitting task io context, caching the
> >   pointer will not help for async queues. This patch introduces a new config
> >   option CONFIG_TRACK_ASYNC_CONTEXT. If this option is not set, cfq retains
> >   old behavior where async queue pointer is cached in task context. If it
> >   is not set, async queue pointer is not cached and we take help of bio
> 
> Here "If it is not set" should be "If it is set".

Yes, in the last line it should be "If it is set". Thanks, Gui. Will fix the
comment.

Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found]     ` <20090622153030.GA15600-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-22 15:40       ` Jeff Moyer
  0 siblings, 0 replies; 176+ messages in thread
From: Jeff Moyer @ 2009-06-22 15:40 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:

> On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
>> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:18]:
>> 
>> > 
>> > Hi All,
>> > 
>> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
>> [snip]
>> 
>> > Testing
>> > =======
>> >
>> 
>> [snip]
>> 
>> I've not been reading through the discussions in complete detail, but
>> I see no reference to async reads or aio. In the case of aio, aio
>> presumes the context of the user space process. Could you elaborate on
>> any testing you've done with these cases? 
>> 
>
> Hi Balbir,
>
> So far I had not done any testing with AIO. I have done some just now.
> Here are the results.
>
> Test1 (AIO reads)
> ================
> Set up two fio, AIO read jobs in two cgroup with weight 1000 and 500
> respectively. I am using cfq scheduler. Following are some lines from my test
> script.
>
> ===================================================================
> fio_args="--ioengine=libaio --rw=read --size=512M"

AIO doesn't make sense without O_DIRECT.

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found]       ` <x497hz4l1j9.fsf-RRHT56Q3PSP4kTEheFKJxxDDeQx5vsVwAInAS/Ez/D0@public.gmane.org>
@ 2009-06-22 16:02         ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-22 16:02 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
> 
> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> >> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:18]:
> >> 
> >> > 
> >> > Hi All,
> >> > 
> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> >> [snip]
> >> 
> >> > Testing
> >> > =======
> >> >
> >> 
> >> [snip]
> >> 
> >> I've not been reading through the discussions in complete detail, but
> >> I see no reference to async reads or aio. In the case of aio, aio
> >> presumes the context of the user space process. Could you elaborate on
> >> any testing you've done with these cases? 
> >> 
> >
> > Hi Balbir,
> >
> > So far I had not done any testing with AIO. I have done some just now.
> > Here are the results.
> >
> > Test1 (AIO reads)
> > ================
> > Set up two fio, AIO read jobs in two cgroup with weight 1000 and 500
> > respectively. I am using cfq scheduler. Following are some lines from my test
> > script.
> >
> > ===================================================================
> > fio_args="--ioengine=libaio --rw=read --size=512M"
> 
> AIO doesn't make sense without O_DIRECT.
> 

OK, here are the read results with --direct=1. In the previous posting, the
writes were already direct.

test1 statistics: time=8 16 20796   sectors=8 16 1049648
test2 statistics: time=8 16 10551   sectors=8 16 581160


I am not sure why the reads are so slow with --direct=1. In the previous test
(no direct IO), I had cleared the caches using
"echo 3 > /proc/sys/vm/drop_caches", so the reads could not have come from the
page cache.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
  2009-06-22 16:02         ` Vivek Goyal
@ 2009-06-22 16:06             ` Jeff Moyer
  -1 siblings, 0 replies; 176+ messages in thread
From: Jeff Moyer @ 2009-06-22 16:06 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:

> On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
>> Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
>> 
>> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
>> >> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:18]:
>> >> 
>> >> > 
>> >> > Hi All,
>> >> > 
>> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
>> >> [snip]
>> >> 
>> >> > Testing
>> >> > =======
>> >> >
>> >> 
>> >> [snip]
>> >> 
>> >> I've not been reading through the discussions in complete detail, but
>> >> I see no reference to async reads or aio. In the case of aio, aio
>> >> presumes the context of the user space process. Could you elaborate on
>> >> any testing you've done with these cases? 
>> >> 
>> >
>> > Hi Balbir,
>> >
>> > So far I had not done any testing with AIO. I have done some just now.
>> > Here are the results.
>> >
>> > Test1 (AIO reads)
>> > ================
>> > Set up two fio, AIO read jobs in two cgroup with weight 1000 and 500
>> > respectively. I am using cfq scheduler. Following are some lines from my test
>> > script.
>> >
>> > ===================================================================
>> > fio_args="--ioengine=libaio --rw=read --size=512M"
>> 
>> AIO doesn't make sense without O_DIRECT.
>> 
>
> Ok, here are the read results with --direct=1 for reads. In previous posting,
> writes were already direct.
>
> test1 statistics: time=8 16 20796   sectors=8 16 1049648
> test2 statistics: time=8 16 10551   sectors=8 16 581160
>
>
> Not sure why reads are so slow with --direct=1? In the previous test
> (no direct IO), I had cleared the caches using
> (echo 3 > /proc/sys/vm/drop_caches) so reads could not have come from page
> cache?

O_DIRECT bypasses the page cache, and hence the readahead code.  Try
driving deeper queue depths and/or using larger I/O sizes.
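
For reference, here is a minimal userspace sketch of what "driving a deeper
queue depth" with libaio and O_DIRECT looks like. This is illustrative only and
not part of the fio setup above; the queue depth of 20, the 256 KB block size
and the minimal error handling are arbitrary choices.

===================================================================
/* aio_direct_read.c: keep several O_DIRECT reads in flight with libaio.
 * Build with: gcc -O2 aio_direct_read.c -o aio_direct_read -laio
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QUEUE_DEPTH 20			/* how many reads stay in flight  */
#define BLOCK_SIZE  (256 * 1024)	/* larger, sector-aligned I/O size */

int main(int argc, char **argv)
{
	io_context_t ctx;
	struct iocb iocbs[QUEUE_DEPTH], *piocbs[QUEUE_DEPTH];
	struct io_event events[QUEUE_DEPTH];
	void *bufs[QUEUE_DEPTH];
	int fd, i, ret;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&ctx, 0, sizeof(ctx));
	ret = io_setup(QUEUE_DEPTH, &ctx);
	if (ret < 0) {
		fprintf(stderr, "io_setup: %s\n", strerror(-ret));
		return 1;
	}

	/* O_DIRECT needs aligned buffers; prepare one read per slot. */
	for (i = 0; i < QUEUE_DEPTH; i++) {
		if (posix_memalign(&bufs[i], 4096, BLOCK_SIZE))
			return 1;
		io_prep_pread(&iocbs[i], fd, bufs[i], BLOCK_SIZE,
			      (long long)i * BLOCK_SIZE);
		piocbs[i] = &iocbs[i];
	}

	/* All QUEUE_DEPTH reads are submitted at once, so the IO scheduler
	 * sees a deep queue even though the page cache is bypassed. */
	ret = io_submit(ctx, QUEUE_DEPTH, piocbs);
	if (ret < 0) {
		fprintf(stderr, "io_submit: %s\n", strerror(-ret));
		return 1;
	}

	ret = io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);
	printf("completed %d reads\n", ret);

	io_destroy(ctx);
	close(fd);
	return 0;
}
===================================================================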

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found]             ` <x493a9sl0bx.fsf-RRHT56Q3PSP4kTEheFKJxxDDeQx5vsVwAInAS/Ez/D0@public.gmane.org>
@ 2009-06-22 17:08               ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-22 17:08 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Mon, Jun 22, 2009 at 12:06:42PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
> 
> > On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
> >> Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
> >> 
> >> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> >> >> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:18]:
> >> >> 
> >> >> > 
> >> >> > Hi All,
> >> >> > 
> >> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> >> >> [snip]
> >> >> 
> >> >> > Testing
> >> >> > =======
> >> >> >
> >> >> 
> >> >> [snip]
> >> >> 
> >> >> I've not been reading through the discussions in complete detail, but
> >> >> I see no reference to async reads or aio. In the case of aio, aio
> >> >> presumes the context of the user space process. Could you elaborate on
> >> >> any testing you've done with these cases? 
> >> >> 
> >> >
> >> > Hi Balbir,
> >> >
> >> > So far I had not done any testing with AIO. I have done some just now.
> >> > Here are the results.
> >> >
> >> > Test1 (AIO reads)
> >> > ================
> >> > Set up two fio, AIO read jobs in two cgroup with weight 1000 and 500
> >> > respectively. I am using cfq scheduler. Following are some lines from my test
> >> > script.
> >> >
> >> > ===================================================================
> >> > fio_args="--ioengine=libaio --rw=read --size=512M"
> >> 
> >> AIO doesn't make sense without O_DIRECT.
> >> 
> >
> > Ok, here are the read results with --direct=1 for reads. In previous posting,
> > writes were already direct.
> >
> > test1 statistics: time=8 16 20796   sectors=8 16 1049648
> > test2 statistics: time=8 16 10551   sectors=8 16 581160
> >
> >
> > Not sure why reads are so slow with --direct=1? In the previous test
> > (no direct IO), I had cleared the caches using
> > (echo 3 > /proc/sys/vm/drop_caches) so reads could not have come from page
> > cache?
> 
> O_DIRECT bypasses the page cache, and hence the readahead code.  Try
> driving deeper queue depths and/or using larger I/O sizes.

Ok. Thanks. I tried increasing iodepth to 20 and it helped a lot.

test1 statistics: time=8 16 6672   sectors=8 16 1049656
test2 statistics: time=8 16 3508   sectors=8 16 583432

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups
       [not found]     ` <4A3F3648.7080007-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-06-22 17:21       ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-22 17:21 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Mon, Jun 22, 2009 at 03:44:08PM +0800, Gui Jianfeng wrote:
> Preempt the ongoing non-rt ioq if there are rt ioqs waiting for dispatching
> in ancestor or sibling groups. It will give other group's rt ioq an chance 
> to dispatch ASAP.
> 

Hi Gui,

Will the new preemption logic of traversing up the hierarchy, so that both the
new queue and the old queue are at the same level when taking a preemption
decision, not take care of the above scenario?

Please have a look at bfq_find_matching_entity().

At the same time, we probably don't want to preempt the non-RT queue with an
RT queue in a sibling group unless the sibling group itself is an RT group.

		root
		/  \
	   BEgrpA  BEgrpB
	      |     |	
	  BEioq1   RTioq2

Above we have two BE groups, A and B. Assume the ioq in group A is being
served and then an RT request arrives in group B. Because group B is a
BE class group, we should not preempt the queue in group A.
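
To make that rule concrete, below is a small userspace model of the class-aware
check. This is purely illustrative: the struct and helper names are made up,
and only the "walk both entities up to a common level, then compare" idea
mirrors what bfq_find_matching_entity() does.

===================================================================
/* preempt_model.c: model of class-aware hierarchical preemption. */
#include <stdio.h>

enum ioprio_class { CLASS_BE, CLASS_RT };

struct entity {
	const char *name;
	enum ioprio_class klass;
	struct entity *parent;		/* NULL for children of the root */
	int depth;
};

/* Walk both entities up until they are children of the same parent,
 * i.e. until they compete at the same scheduling level. */
static void find_matching(struct entity **a, struct entity **b)
{
	while ((*a)->depth > (*b)->depth)
		*a = (*a)->parent;
	while ((*b)->depth > (*a)->depth)
		*b = (*b)->parent;
	while ((*a)->parent != (*b)->parent) {
		*a = (*a)->parent;
		*b = (*b)->parent;
	}
}

/* Preempt only if, at the common level, the newcomer's ancestor is RT
 * while the currently served entity's ancestor is not. */
static int should_preempt(struct entity *active, struct entity *incoming)
{
	find_matching(&active, &incoming);
	return incoming->klass == CLASS_RT && active->klass != CLASS_RT;
}

int main(void)
{
	/* The topology from the diagram above. */
	struct entity grpA = { "BEgrpA", CLASS_BE, NULL, 0 };
	struct entity grpB = { "BEgrpB", CLASS_BE, NULL, 0 };
	struct entity ioq1 = { "BEioq1", CLASS_BE, &grpA, 1 };
	struct entity ioq2 = { "RTioq2", CLASS_RT, &grpB, 1 };

	/* At the common level we compare BEgrpA vs BEgrpB, both BE class,
	 * so ioq1 is not preempted, as argued above. */
	printf("preempt BEioq1 for RTioq2? %s\n",
	       should_preempt(&ioq1, &ioq2) ? "yes" : "no");
	return 0;
}
===================================================================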

Thanks
Vivek


> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> ---
>  block/elevator-fq.c |   44 +++++++++++++++++++++++++++++++++++++++-----
>  block/elevator-fq.h |    1 +
>  2 files changed, 40 insertions(+), 5 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index 2ad40eb..80526fd 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -3245,8 +3245,16 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
>  	elv_mark_ioq_busy(ioq);
>  	efqd->busy_queues++;
>  	if (elv_ioq_class_rt(ioq)) {
> +		struct io_entity *entity;
>  		struct io_group *iog = ioq_to_io_group(ioq);
> +
>  		iog->busy_rt_queues++;
> +		entity = iog->entity.parent;
> +
> +		for_each_entity(entity) {
> +			iog = io_entity_to_iog(entity);
> +			iog->sub_busy_rt_queues++;
> +		}
>  	}
>  
>  #ifdef CONFIG_DEBUG_GROUP_IOSCHED
> @@ -3290,9 +3298,18 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
>  	elv_clear_ioq_busy(ioq);
>  	BUG_ON(efqd->busy_queues == 0);
>  	efqd->busy_queues--;
> +
>  	if (elv_ioq_class_rt(ioq)) {
> +		struct io_entity *entity;
>  		struct io_group *iog = ioq_to_io_group(ioq);
> +
>  		iog->busy_rt_queues--;
> +		entity = iog->entity.parent;
> +
> +		for_each_entity(entity) {
> +			iog = io_entity_to_iog(entity);
> +			iog->sub_busy_rt_queues--;
> +		}
>  	}
>  
>  	elv_deactivate_ioq(efqd, ioq, requeue);
> @@ -3735,12 +3752,32 @@ int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
>  	return ret;
>  }
>  
> +static int check_rt_queue(struct io_queue *ioq)
> +{
> +	struct io_group *iog;
> +	struct io_entity *entity;
> +
> +	iog = ioq_to_io_group(ioq);
> +
> +	if (iog->busy_rt_queues)
> +		return 1;
> +
> +	entity = iog->entity.parent;
> +
> +	for_each_entity(entity) {
> +		iog = io_entity_to_iog(entity);
> +		if (iog->sub_busy_rt_queues)
> +			return 1;
> +	}
> +
> +	return 0;
> +}
> +
>  /* Common layer function to select the next queue to dispatch from */
>  void *elv_fq_select_ioq(struct request_queue *q, int force)
>  {
>  	struct elv_fq_data *efqd = &q->elevator->efqd;
>  	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
> -	struct io_group *iog;
>  	int slice_expired = 1;
>  
>  	if (!elv_nr_busy_ioq(q->elevator))
> @@ -3811,12 +3848,9 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
>  	/*
>  	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
>  	 * cfqq.
> -	 *
> -	 * TODO: This does not seem right across the io groups. Fix it.
>  	 */
> -	iog = ioq_to_io_group(ioq);
>  
> -	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> +	if (!elv_ioq_class_rt(ioq) && check_rt_queue(ioq)) {
>  		/*
>  		 * We simulate this as cfqq timed out so that it gets to bank
>  		 * the remaining of its time slice.
> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> index b3193f8..be6c1af 100644
> --- a/block/elevator-fq.h
> +++ b/block/elevator-fq.h
> @@ -248,6 +248,7 @@ struct io_group {
>  	 * non-RT cfqq in service when this value is non-zero.
>  	 */
>  	unsigned int busy_rt_queues;
> +	unsigned int sub_busy_rt_queues;
>  
>  	int deleting;
>  	unsigned short iocg_id;
> -- 
> 1.5.4.rc3
> 

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]     ` <20090622084612.GD3728-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
  2009-06-22 12:43       ` Fabio Checconi
@ 2009-06-23  2:05       ` Vivek Goyal
  1 sibling, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23  2:05 UTC (permalink / raw)
  To: Balbir Singh
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Mon, Jun 22, 2009 at 02:16:12PM +0530, Balbir Singh wrote:
> * Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> [2009-06-19 16:37:20]:
> 
> > This is common fair queuing code in elevator layer. This is controlled by
> > config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> > flat fair queuing support where there is only one group, "root group" and all
> > the tasks belong to root group.
> > 
> > This elevator layer changes are backward compatible. That means any ioscheduler
> > using old interfaces will continue to work.
> > 
> > This code is essentially the CFQ code for fair queuing. The primary difference
> > is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
> >
> 
> The patch is quite long and to be honest requires a long time to
> review, which I don't mind. I suspect my frequently diverted mind is
> likely to miss a lot in a big patch like this. Could you consider
> splitting this further if possible. I think you'll notice the number
> of reviews will also increase.
>  

Hi Balbir,

Thanks for the review. Yes, this is a big patch. I will try to break it
down further.

Fabio has already responded to most of the questions. I will try to cover the
rest.

[..]
> > +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> > +					struct io_queue *ioq, int probe);
> > +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> > +						 int extract);
> > +
> > +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> > +					unsigned short prio)
> 
> Why is the return type int and not unsigned int or unsigned long? Can
> the return value ever be negative?

Actually, this function was a replacement for cfq_prio_slice(), hence the
return type int. But as the slice value can never be negative, I can make it
unsigned int.

[..]
> > + * bfq_gt - compare two timestamps.
> > + * @a: first ts.
> > + * @b: second ts.
> > + *
> > + * Return @a > @b, dealing with wrapping correctly.
> > + */
> > +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> > +{
> > +	return (s64)(a - b) > 0;
> > +}
> > +
> 
> a and b are of type u64, but cast to s64 to deal with wrapping?
> Correct?

Yes.
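
A quick standalone illustration of why the signed cast gives the right answer
across a wrap of the u64 timestamp (userspace types, illustrative only):

===================================================================
/* wrap_demo.c: (s64)(a - b) > 0 stays correct even when the timestamp
 * counter has wrapped around between b and a. */
#include <stdint.h>
#include <stdio.h>

static int bfq_gt(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) > 0;
}

int main(void)
{
	uint64_t b = UINT64_MAX - 5;	/* timestamp taken just before the wrap */
	uint64_t a = 10;		/* timestamp taken just after the wrap  */

	printf("a > b (plain)  : %d\n", a > b);		/* 0: wrong answer */
	printf("bfq_gt(a, b)   : %d\n", bfq_gt(a, b));	/* 1: a is later   */
	return 0;
}
===================================================================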

> 
> > +/**
> > + * bfq_delta - map service into the virtual time domain.
> > + * @service: amount of service.
> > + * @weight: scale factor.
> > + */
> > +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> > +					bfq_weight_t weight)
> > +{
> > +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> > +
> 
> Why is the case required? Does the compiler complain? service is
> already of the correct type.
> 
> > +	do_div(d, weight);
> 
> On a 64 system both d and weight are 64 bit, but on a 32 bit system
> weight is 32 bits. do_div expects a 64 bit dividend and 32 bit divisor
> - no?
> 

d is of type "bfq_timestamp_t", which is u64 irrespective of whether the
system is 64 bit or 32 bit. I think it makes sense to change the type of
"weight" from unsigned long to unsigned int so that it is 32 bit on both
64- and 32-bit systems. Will do...
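
For reference, the do_div() usage in a minimal sketch (assuming weight is
narrowed to 32 bits as discussed above):

	u64 d = (u64)service << WFQ_SERVICE_SHIFT;

	/*
	 * do_div() divides the 64-bit dividend in place by a 32-bit
	 * divisor and returns the remainder, so after this d holds
	 * (service << WFQ_SERVICE_SHIFT) / weight, i.e. an entity with
	 * a larger weight accumulates virtual time more slowly for the
	 * same amount of service.
	 */
	do_div(d, weight);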


> > +	return d;
> > +}
> > +
> > +/**
> > + * bfq_calc_finish - assign the finish time to an entity.
> > + * @entity: the entity to act upon.
> > + * @service: the service to be charged to the entity.
> > + */
> > +static inline void bfq_calc_finish(struct io_entity *entity,
> > +				   bfq_service_t service)
> > +{
> > +	BUG_ON(entity->weight == 0);
> > +
> > +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> > +}
> 
> Should we BUG_ON (entity->finish == entity->start)? Or is that
> expected when the entity has no service time left.
> 

As Fabio said, with the preemption logic it is theoretically possible that
an io queue is preempted without having received any service and is
requeued. Hence it might not be a very good idea to
BUG_ON(entity->finish == entity->start);

[..]
> > +/**
> > + * bfq_extract - remove an entity from a tree.
> > + * @root: the tree root.
> > + * @entity: the entity to remove.
> > + */
> > +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> > +{
> 
> Extract is not common terminology, why not use bfq_remove()?
> 

*_remove() also sounds good. Will replace *_extract() with *_remove().

> > +	BUG_ON(entity->tree != root);
> > +
> > +	entity->tree = NULL;
> > +	rb_erase(&entity->rb_node, root);
> 
> Don't you want to make entity->tree = NULL after rb_erase?

As Fabio said, this happens with the queue spinlock held. But from a
readability point of view it probably looks better to first remove the
entity from the rb tree and then reset its fields. Will change the order...

> 
> > +}
> > +
> > +/**
> > + * bfq_idle_extract - extract an entity from the idle tree.
> > + * @st: the service tree of the owning @entity.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_idle_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *next;
> > +
> > +	BUG_ON(entity->tree != &st->idle);
> > +
> > +	if (entity == st->first_idle) {
> > +		next = rb_next(&entity->rb_node);
> 
> What happens if next is NULL?
> 
> > +		st->first_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	if (entity == st->last_idle) {
> > +		next = rb_prev(&entity->rb_node);
> 
> What happens if next is NULL?
> 
> > +		st->last_idle = bfq_entity_of(next);

bfq_entity_of() is capable of handling next == NULL.

I can change it to the following if you think it is more readable.

	if (entity == st->first_idle) {
		next = rb_next(&entity->rb_node);
		if (next)
			st->first_idle = bfq_entity_of(next);
		else
			st->first_idle = NULL;
	}

[..]

> > +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	unsigned long elapsed = jiffies - ioq->last_end_request;
> > +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> > +
> > +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> > +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> > +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> > +}
> 
> Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
> understand the algorithm.

Taken from CFQ. 
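
To be a bit more specific (my reading of the CFQ code; the comment below is
my interpretation, the statements are verbatim from the patch):

	/*
	 * Fixed-point exponential moving average of the think time:
	 * each update keeps 7/8 of the old value and mixes in 1/8 of
	 * the new sample.  The factor 256 scales the integers up to
	 * preserve precision, ttime_samples converges towards 256, and
	 * the +128 rounds the final division.  The 2 * elv_slice_idle
	 * clamp above keeps one very long gap from skewing the average.
	 */
	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;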

> > +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +			void *sched_queue, int ioprio_class, int ioprio,
> > +			int is_sync)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> > +
> > +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> > +	atomic_set(&ioq->ref, 0);
> > +	ioq->efqd = efqd;
> > +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> > +	elv_ioq_set_ioprio(ioq, ioprio);
> > +	ioq->pid = current->pid;
> 
> Is pid used for cgroup association later? I don't see why we save the
> pid otherwise? If yes, why not store the cgroup of the current->pid?
> 

This is just for logging purposes (blktrace), useful for CFQ where every task
context sets up one queue and this number becomes the identifier for the queue.
Look at elv_log_ioq(), which uses ioq->pid.
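
Roughly (a sketch in the spirit of CFQ's cfq_log_cfqq(); the actual macro
in the patch may differ slightly):

	#define elv_log_ioq(efqd, ioq, fmt, args...) \
		blk_add_trace_msg((efqd)->queue, "elv%d " fmt, (ioq)->pid, ##args)

so the pid is what identifies the queue in the blktrace output.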

[..]
> > + * coop tells that io scheduler selected a queue for us and we did not
> 
> coop?

coop refers to "cooperating". I guess "coop" is not descriptive. I will
change the name to "cooperating" and also put more description for
clarity.

[..]
> > diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> > new file mode 100644
> > index 0000000..5b6c1cc
> > --- /dev/null
> > +++ b/block/elevator-fq.h
> > @@ -0,0 +1,473 @@
> > +/*
> > + * BFQ: data structures and common functions prototypes.
> > + *
> > + * Based on ideas and code from CFQ:
> > + * Copyright (C) 2003 Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>
> > + *
> > + * Copyright (C) 2008 Fabio Checconi <fabio-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
> > + *		      Paolo Valente <paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org>
> > + * Copyright (C) 2009 Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > + * 	              Nauman Rafique <nauman-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> > + */
> > +
> > +#include <linux/blkdev.h>
> > +
> > +#ifndef _BFQ_SCHED_H
> > +#define _BFQ_SCHED_H
> > +
> > +#define IO_IOPRIO_CLASSES	3
> > +
> > +typedef u64 bfq_timestamp_t;
> > +typedef unsigned long bfq_weight_t;
> > +typedef unsigned long bfq_service_t;
> 
> Does this abstraction really provide any benefit? Why not directly use
> the standard C types, make the code easier to read.

I think using the standard C types is better now. Will get rid of these
abstractions. Fabio also seems to be ok with this change.

> 
> > +struct io_entity;
> > +struct io_queue;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +#define ELV_ATTR(name) \
> > +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> > +
> > +/**
> > + * struct bfq_service_tree - per ioprio_class service tree.
> 
> Comment is old, does not reflect the newer name

Yes, this is all over the code. I have not taken care of updating the
comments from the original BFQ code. Will do it.

> 
> > + * @active: tree for active entities (i.e., those backlogged).
> > + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> > + * @first_idle: idle entity with minimum F_i.
> > + * @last_idle: idle entity with maximum F_i.
> > + * @vtime: scheduler virtual time.
> > + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> > + *
> > + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> > + * ioprio_class has its own independent scheduler, and so its own
> > + * bfq_service_tree.  All the fields are protected by the queue lock
> > + * of the containing efqd.
> > + */
> > +struct io_service_tree {
> > +	struct rb_root active;
> > +	struct rb_root idle;
> > +
> > +	struct io_entity *first_idle;
> > +	struct io_entity *last_idle;
> > +
> > +	bfq_timestamp_t vtime;
> > +	bfq_weight_t wsum;
> > +};
> > +
> > +/**
> > + * struct bfq_sched_data - multi-class scheduler.
> 
> Again the naming convention is broken, you need to change several
> bfq's to io's :)

Yes. Will do. :-)

> > +/*
> > + * A common structure embedded by every io scheduler into their respective
> > + * queue structure.
> > + */
> > +struct io_queue {
> > +	struct io_entity entity;
> 
> So the io_queue has an abstract entity called io_entity that contains
> it QoS parameters? Correct?
> 
> > +	atomic_t ref;
> > +	unsigned int flags;
> > +
> > +	/* Pointer to generic elevator data structure */
> > +	struct elv_fq_data *efqd;
> > +	pid_t pid;
> 
> Why do we store the pid?

pid of the process which caused io queue creation.

> 
> > +
> > +	/* Number of requests queued on this io queue */
> > +	unsigned long nr_queued;
> > +
> > +	/* Requests dispatched from this queue */
> > +	int dispatched;
> > +
> > +	/* Keep a track of think time of processes in this queue */
> > +	unsigned long last_end_request;
> > +	unsigned long ttime_total;
> > +	unsigned long ttime_samples;
> > +	unsigned long ttime_mean;
> > +
> > +	unsigned long slice_end;
> > +
> > +	/* Pointer to io scheduler's queue */
> > +	void *sched_queue;
> > +};
> > +
> > +struct io_group {
> > +	struct io_sched_data sched_data;
> > +
> > +	/* async_queue and idle_queue are used only for cfq */
> > +	struct io_queue *async_queue[2][IOPRIO_BE_NR];
> 
> Again the 2 is confusing
> 

Taken from CFQ. CFQ supports 8 prio levels each for the RT and BE classes,
and we maintain one async queue pointer per prio level for both classes.
The 2 above indexes the class (RT or BE).
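
A sketch of the lookup, modelled on CFQ's cfq_async_queue_prio() (the
corresponding helper in the patch may be named differently):

	switch (ioprio_class) {
	case IOPRIO_CLASS_RT:
		return &iog->async_queue[0][ioprio];
	case IOPRIO_CLASS_BE:
		return &iog->async_queue[1][ioprio];
	case IOPRIO_CLASS_IDLE:
		return &iog->async_idle_queue;
	default:
		BUG();
	}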

> > +	struct io_queue *async_idle_queue;
> > +
> > +	/*
> > +	 * Used to track any pending rt requests so we can pre-empt current
> > +	 * non-RT cfqq in service when this value is non-zero.
> > +	 */
> > +	unsigned int busy_rt_queues;
> > +};
> > +
> > +struct elv_fq_data {
> 
> What does fq stand for?

Fair queuing. Any suggestions to make it better?

> 
> > +	struct io_group *root_group;
> > +
> > +	struct request_queue *queue;
> > +	unsigned int busy_queues;
> > +
> > +	/* Number of requests queued */
> > +	int rq_queued;
> > +
> > +	/* Pointer to the ioscheduler queue being served */
> > +	void *active_queue;
> > +
> > +	int rq_in_driver;
> > +	int hw_tag;
> > +	int hw_tag_samples;
> > +	int rq_in_driver_peak;
> 
> Some comments of _in_driver and _in_driver_peak would be nice.

Taken from CFQ, so somebody familiar with the CFQ code can quickly relate.
But anyway, I will put a couple of lines of comments.

> 
> > +
> > +	/*
> > +	 * elevator fair queuing layer has the capability to provide idling
> > +	 * for ensuring fairness for processes doing dependent reads.
> > +	 * This might be needed to ensure fairness among two processes doing
> > +	 * synchronous reads in two different cgroups. noop and deadline don't
> > +	 * have any notion of anticipation/idling. As of now, these are the
> > +	 * users of this functionality.
> > +	 */
> > +	unsigned int elv_slice_idle;
> > +	struct timer_list idle_slice_timer;
> > +	struct work_struct unplug_work;
> > +
> > +	unsigned int elv_slice[2];
> 
> Why [2] makes the code hearder to read

Taken from CFQ. It represents the base slice length for sync and async
queues. Will put a line of comment.
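
Something along these lines (assuming the same convention as CFQ, where the
array is indexed by the "sync" flag):

	/*
	 * Base time slices, indexed by the sync flag:
	 * elv_slice[0] is the async base slice, elv_slice[1] the sync one.
	 */
	unsigned int elv_slice[2];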

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-22  8:46     ` Balbir Singh
@ 2009-06-23  2:05       ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23  2:05 UTC (permalink / raw)
  To: Balbir Singh
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, righi.andrea, m-ikeda, jbaron,
	agk, snitzer, akpm, peterz

On Mon, Jun 22, 2009 at 02:16:12PM +0530, Balbir Singh wrote:
> * Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:20]:
> 
> > This is common fair queuing code in elevator layer. This is controlled by
> > config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> > flat fair queuing support where there is only one group, "root group" and all
> > the tasks belong to root group.
> > 
> > This elevator layer changes are backward compatible. That means any ioscheduler
> > using old interfaces will continue to work.
> > 
> > This code is essentially the CFQ code for fair queuing. The primary difference
> > is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
> >
> 
> The patch is quite long and to be honest requires a long time to
> review, which I don't mind. I suspect my frequently diverted mind is
> likely to miss a lot in a big patch like this. Could you consider
> splitting this further if possible. I think you'll notice the number
> of reviews will also increase.
>  

Hi Balbir,

Thanks for the review. Yes, this is a big patch. I will try to break it
down further.

Fabio has already responded to most of the questions. I will try to cover
the rest.

[..]
> > +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> > +					struct io_queue *ioq, int probe);
> > +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> > +						 int extract);
> > +
> > +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> > +					unsigned short prio)
> 
> Why is the return type int and not unsigned int or unsigned long? Can
> the return value ever be negative?

Actually this function was a replacement for cfq_prio_slice(), hence the
int return type. But as the slice value can never be negative, I can make
it unsigned int.

[..]
> > + * bfq_gt - compare two timestamps.
> > + * @a: first ts.
> > + * @b: second ts.
> > + *
> > + * Return @a > @b, dealing with wrapping correctly.
> > + */
> > +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> > +{
> > +	return (s64)(a - b) > 0;
> > +}
> > +
> 
> a and b are of type u64, but cast to s64 to deal with wrapping?
> Correct?

Yes.

> 
> > +/**
> > + * bfq_delta - map service into the virtual time domain.
> > + * @service: amount of service.
> > + * @weight: scale factor.
> > + */
> > +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> > +					bfq_weight_t weight)
> > +{
> > +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> > +
> 
> Why is the case required? Does the compiler complain? service is
> already of the correct type.
> 
> > +	do_div(d, weight);
> 
> On a 64 system both d and weight are 64 bit, but on a 32 bit system
> weight is 32 bits. do_div expects a 64 bit dividend and 32 bit divisor
> - no?
> 

d is of type "bfq_timestamp_t", which is u64 irrespective of whether the
system is 64 bit or 32 bit. I think it makes sense to change the type of
"weight" from unsigned long to unsigned int so that it is 32 bit on both
64- and 32-bit systems. Will do...


> > +	return d;
> > +}
> > +
> > +/**
> > + * bfq_calc_finish - assign the finish time to an entity.
> > + * @entity: the entity to act upon.
> > + * @service: the service to be charged to the entity.
> > + */
> > +static inline void bfq_calc_finish(struct io_entity *entity,
> > +				   bfq_service_t service)
> > +{
> > +	BUG_ON(entity->weight == 0);
> > +
> > +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> > +}
> 
> Should we BUG_ON (entity->finish == entity->start)? Or is that
> expected when the entity has no service time left.
> 

As Fabio said, with the preemption logic it is theoretically possible that
an io queue is preempted without having received any service and is
requeued. Hence it might not be a very good idea to
BUG_ON(entity->finish == entity->start);

[..]
> > +/**
> > + * bfq_extract - remove an entity from a tree.
> > + * @root: the tree root.
> > + * @entity: the entity to remove.
> > + */
> > +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> > +{
> 
> Extract is not common terminology, why not use bfq_remove()?
> 

*_remove() also sounds good. Will replace *_extract() with *_remove().

> > +	BUG_ON(entity->tree != root);
> > +
> > +	entity->tree = NULL;
> > +	rb_erase(&entity->rb_node, root);
> 
> Don't you want to make entity->tree = NULL after rb_erase?

As Fabio said, this happens with the queue spinlock held. But from a
readability point of view it probably looks better to first remove the
entity from the rb tree and then reset its fields. Will change the order...

> 
> > +}
> > +
> > +/**
> > + * bfq_idle_extract - extract an entity from the idle tree.
> > + * @st: the service tree of the owning @entity.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_idle_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *next;
> > +
> > +	BUG_ON(entity->tree != &st->idle);
> > +
> > +	if (entity == st->first_idle) {
> > +		next = rb_next(&entity->rb_node);
> 
> What happens if next is NULL?
> 
> > +		st->first_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	if (entity == st->last_idle) {
> > +		next = rb_prev(&entity->rb_node);
> 
> What happens if next is NULL?
> 
> > +		st->last_idle = bfq_entity_of(next);

bfq_entity_of() is capable of handling next == NULL.

I can change it to the following if you think it is more readable.

	if (entity == st->first_idle) {
		next = rb_next(&entity->rb_node);
		if (next)
			st->first_idle = bfq_entity_of(next);
		else
			st->first_idle = NULL;
	}

[..]

> > +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	unsigned long elapsed = jiffies - ioq->last_end_request;
> > +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> > +
> > +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> > +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> > +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> > +}
> 
> Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
> understand the algorithm.

Taken from CFQ. 

> > +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +			void *sched_queue, int ioprio_class, int ioprio,
> > +			int is_sync)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> > +
> > +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> > +	atomic_set(&ioq->ref, 0);
> > +	ioq->efqd = efqd;
> > +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> > +	elv_ioq_set_ioprio(ioq, ioprio);
> > +	ioq->pid = current->pid;
> 
> Is pid used for cgroup association later? I don't see why we save the
> pid otherwise? If yes, why not store the cgroup of the current->pid?
> 

This is just for logging purposes (blktrace), useful for CFQ where every task
context sets up one queue and this number becomes the identifier for the queue.
Look at elv_log_ioq(), which uses ioq->pid.

[..]
> > + * coop tells that io scheduler selected a queue for us and we did not
> 
> coop?

coop refers to "cooperating". I guess "coop" is not descriptive. I will
change the name to "cooperating" and also put more description for
clarity.

[..]
> > diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> > new file mode 100644
> > index 0000000..5b6c1cc
> > --- /dev/null
> > +++ b/block/elevator-fq.h
> > @@ -0,0 +1,473 @@
> > +/*
> > + * BFQ: data structures and common functions prototypes.
> > + *
> > + * Based on ideas and code from CFQ:
> > + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> > + *
> > + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> > + *		      Paolo Valente <paolo.valente@unimore.it>
> > + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> > + * 	              Nauman Rafique <nauman@google.com>
> > + */
> > +
> > +#include <linux/blkdev.h>
> > +
> > +#ifndef _BFQ_SCHED_H
> > +#define _BFQ_SCHED_H
> > +
> > +#define IO_IOPRIO_CLASSES	3
> > +
> > +typedef u64 bfq_timestamp_t;
> > +typedef unsigned long bfq_weight_t;
> > +typedef unsigned long bfq_service_t;
> 
> Does this abstraction really provide any benefit? Why not directly use
> the standard C types, make the code easier to read.

I think using the standard C types is better now. Will get rid of these
abstractions. Fabio also seems to be ok with this change.

> 
> > +struct io_entity;
> > +struct io_queue;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +#define ELV_ATTR(name) \
> > +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> > +
> > +/**
> > + * struct bfq_service_tree - per ioprio_class service tree.
> 
> Comment is old, does not reflect the newer name

Yes, this is all over the code. I have not taken care of updating the
comments from the original BFQ code. Will do it.

> 
> > + * @active: tree for active entities (i.e., those backlogged).
> > + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> > + * @first_idle: idle entity with minimum F_i.
> > + * @last_idle: idle entity with maximum F_i.
> > + * @vtime: scheduler virtual time.
> > + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> > + *
> > + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> > + * ioprio_class has its own independent scheduler, and so its own
> > + * bfq_service_tree.  All the fields are protected by the queue lock
> > + * of the containing efqd.
> > + */
> > +struct io_service_tree {
> > +	struct rb_root active;
> > +	struct rb_root idle;
> > +
> > +	struct io_entity *first_idle;
> > +	struct io_entity *last_idle;
> > +
> > +	bfq_timestamp_t vtime;
> > +	bfq_weight_t wsum;
> > +};
> > +
> > +/**
> > + * struct bfq_sched_data - multi-class scheduler.
> 
> Again the naming convention is broken, you need to change several
> bfq's to io's :)

Yes. Will do. :-)

> > +/*
> > + * A common structure embedded by every io scheduler into their respective
> > + * queue structure.
> > + */
> > +struct io_queue {
> > +	struct io_entity entity;
> 
> So the io_queue has an abstract entity called io_entity that contains
> it QoS parameters? Correct?
> 
> > +	atomic_t ref;
> > +	unsigned int flags;
> > +
> > +	/* Pointer to generic elevator data structure */
> > +	struct elv_fq_data *efqd;
> > +	pid_t pid;
> 
> Why do we store the pid?

pid of the process which caused io queue creation.

> 
> > +
> > +	/* Number of requests queued on this io queue */
> > +	unsigned long nr_queued;
> > +
> > +	/* Requests dispatched from this queue */
> > +	int dispatched;
> > +
> > +	/* Keep a track of think time of processes in this queue */
> > +	unsigned long last_end_request;
> > +	unsigned long ttime_total;
> > +	unsigned long ttime_samples;
> > +	unsigned long ttime_mean;
> > +
> > +	unsigned long slice_end;
> > +
> > +	/* Pointer to io scheduler's queue */
> > +	void *sched_queue;
> > +};
> > +
> > +struct io_group {
> > +	struct io_sched_data sched_data;
> > +
> > +	/* async_queue and idle_queue are used only for cfq */
> > +	struct io_queue *async_queue[2][IOPRIO_BE_NR];
> 
> Again the 2 is confusing
> 

Taken from CFQ. CFQ supports 8 prio levels each for the RT and BE classes,
and we maintain one async queue pointer per prio level for both classes.
The 2 above indexes the class (RT or BE).

> > +	struct io_queue *async_idle_queue;
> > +
> > +	/*
> > +	 * Used to track any pending rt requests so we can pre-empt current
> > +	 * non-RT cfqq in service when this value is non-zero.
> > +	 */
> > +	unsigned int busy_rt_queues;
> > +};
> > +
> > +struct elv_fq_data {
> 
> What does fq stand for?

Fair queuing. Any suggestions to make it better?

> 
> > +	struct io_group *root_group;
> > +
> > +	struct request_queue *queue;
> > +	unsigned int busy_queues;
> > +
> > +	/* Number of requests queued */
> > +	int rq_queued;
> > +
> > +	/* Pointer to the ioscheduler queue being served */
> > +	void *active_queue;
> > +
> > +	int rq_in_driver;
> > +	int hw_tag;
> > +	int hw_tag_samples;
> > +	int rq_in_driver_peak;
> 
> Some comments of _in_driver and _in_driver_peak would be nice.

Taken from CFQ, so somebody familiar with the CFQ code can quickly relate.
But anyway, I will put a couple of lines of comments.

> 
> > +
> > +	/*
> > +	 * elevator fair queuing layer has the capability to provide idling
> > +	 * for ensuring fairness for processes doing dependent reads.
> > +	 * This might be needed to ensure fairness among two processes doing
> > +	 * synchronous reads in two different cgroups. noop and deadline don't
> > +	 * have any notion of anticipation/idling. As of now, these are the
> > +	 * users of this functionality.
> > +	 */
> > +	unsigned int elv_slice_idle;
> > +	struct timer_list idle_slice_timer;
> > +	struct work_struct unplug_work;
> > +
> > +	unsigned int elv_slice[2];
> 
> Why [2] makes the code hearder to read

Taken from CFQ. It represents the base slice length for sync and async
queues. Will put a line of comment.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
@ 2009-06-23  2:05       ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23  2:05 UTC (permalink / raw)
  To: Balbir Singh
  Cc: dhaval, snitzer, peterz, dm-devel, dpshah, jens.axboe, agk,
	paolo.valente, guijianfeng, fernando, mikew, jmoyer, nauman,
	m-ikeda, lizf, fchecconi, akpm, containers, linux-kernel,
	s-uchida, righi.andrea, jbaron

On Mon, Jun 22, 2009 at 02:16:12PM +0530, Balbir Singh wrote:
> * Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:20]:
> 
> > This is common fair queuing code in elevator layer. This is controlled by
> > config option CONFIG_ELV_FAIR_QUEUING. This patch initially only introduces
> > flat fair queuing support where there is only one group, "root group" and all
> > the tasks belong to root group.
> > 
> > This elevator layer changes are backward compatible. That means any ioscheduler
> > using old interfaces will continue to work.
> > 
> > This code is essentially the CFQ code for fair queuing. The primary difference
> > is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+).
> >
> 
> The patch is quite long and to be honest requires a long time to
> review, which I don't mind. I suspect my frequently diverted mind is
> likely to miss a lot in a big patch like this. Could you consider
> splitting this further if possible. I think you'll notice the number
> of reviews will also increase.
>  

Hi Balbir,

Thanks for the review. Yes, this is a big patch. I will try to break it
down further.

Fabio has already responded to most of the questions. I will try to cover
the rest.

[..]
> > +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> > +					struct io_queue *ioq, int probe);
> > +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd,
> > +						 int extract);
> > +
> > +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync,
> > +					unsigned short prio)
> 
> Why is the return type int and not unsigned int or unsigned long? Can
> the return value ever be negative?

Actually this function was a replacement for cfq_prio_slice(), hence the
int return type. But as the slice value can never be negative, I can make
it unsigned int.

[..]
> > + * bfq_gt - compare two timestamps.
> > + * @a: first ts.
> > + * @b: second ts.
> > + *
> > + * Return @a > @b, dealing with wrapping correctly.
> > + */
> > +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b)
> > +{
> > +	return (s64)(a - b) > 0;
> > +}
> > +
> 
> a and b are of type u64, but cast to s64 to deal with wrapping?
> Correct?

Yes.

> 
> > +/**
> > + * bfq_delta - map service into the virtual time domain.
> > + * @service: amount of service.
> > + * @weight: scale factor.
> > + */
> > +static inline bfq_timestamp_t bfq_delta(bfq_service_t service,
> > +					bfq_weight_t weight)
> > +{
> > +	bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT;
> > +
> 
> Why is the case required? Does the compiler complain? service is
> already of the correct type.
> 
> > +	do_div(d, weight);
> 
> On a 64 system both d and weight are 64 bit, but on a 32 bit system
> weight is 32 bits. do_div expects a 64 bit dividend and 32 bit divisor
> - no?
> 

d is of type "bfq_timestamp_t", which is u64 irrespective of whether the
system is 64 bit or 32 bit. I think it makes sense to change the type of
"weight" from unsigned long to unsigned int so that it is 32 bit on both
64- and 32-bit systems. Will do...


> > +	return d;
> > +}
> > +
> > +/**
> > + * bfq_calc_finish - assign the finish time to an entity.
> > + * @entity: the entity to act upon.
> > + * @service: the service to be charged to the entity.
> > + */
> > +static inline void bfq_calc_finish(struct io_entity *entity,
> > +				   bfq_service_t service)
> > +{
> > +	BUG_ON(entity->weight == 0);
> > +
> > +	entity->finish = entity->start + bfq_delta(service, entity->weight);
> > +}
> 
> Should we BUG_ON (entity->finish == entity->start)? Or is that
> expected when the entity has no service time left.
> 

As Fabio said, with the preemption logic it is theoretically possible that
an io queue is preempted without having received any service and is
requeued. Hence it might not be a very good idea to
BUG_ON(entity->finish == entity->start);

[..]
> > +/**
> > + * bfq_extract - remove an entity from a tree.
> > + * @root: the tree root.
> > + * @entity: the entity to remove.
> > + */
> > +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity)
> > +{
> 
> Extract is not common terminology, why not use bfq_remove()?
> 

*_remove() also sounds good. Will replace *_extract() with *_remove().

> > +	BUG_ON(entity->tree != root);
> > +
> > +	entity->tree = NULL;
> > +	rb_erase(&entity->rb_node, root);
> 
> Don't you want to make entity->tree = NULL after rb_erase?

As Fabio said, this happens with the queue spinlock held. But from a
readability point of view it probably looks better to first remove the
entity from the rb tree and then reset its fields. Will change the order...

> 
> > +}
> > +
> > +/**
> > + * bfq_idle_extract - extract an entity from the idle tree.
> > + * @st: the service tree of the owning @entity.
> > + * @entity: the entity being removed.
> > + */
> > +static void bfq_idle_extract(struct io_service_tree *st,
> > +				struct io_entity *entity)
> > +{
> > +	struct rb_node *next;
> > +
> > +	BUG_ON(entity->tree != &st->idle);
> > +
> > +	if (entity == st->first_idle) {
> > +		next = rb_next(&entity->rb_node);
> 
> What happens if next is NULL?
> 
> > +		st->first_idle = bfq_entity_of(next);
> > +	}
> > +
> > +	if (entity == st->last_idle) {
> > +		next = rb_prev(&entity->rb_node);
> 
> What happens if next is NULL?
> 
> > +		st->last_idle = bfq_entity_of(next);

bfq_entity_of() is capable of handling next == NULL.

I can change it to the following if you think it is more readable.

	if (entity == st->first_idle) {
		next = rb_next(&entity->rb_node);
		if (next)
			st->first_idle = bfq_entity_of(next);
		else
			st->first_idle = NULL;
	}

[..]

> > +static void elv_ioq_update_io_thinktime(struct io_queue *ioq)
> > +{
> > +	struct elv_fq_data *efqd = ioq->efqd;
> > +	unsigned long elapsed = jiffies - ioq->last_end_request;
> > +	unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle);
> > +
> > +	ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8;
> > +	ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8;
> > +	ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples;
> > +}
> 
> Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me
> understand the algorithm.

Taken from CFQ. 

> > +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
> > +			void *sched_queue, int ioprio_class, int ioprio,
> > +			int is_sync)
> > +{
> > +	struct elv_fq_data *efqd = &eq->efqd;
> > +	struct io_group *iog = io_lookup_io_group_current(efqd->queue);
> > +
> > +	RB_CLEAR_NODE(&ioq->entity.rb_node);
> > +	atomic_set(&ioq->ref, 0);
> > +	ioq->efqd = efqd;
> > +	elv_ioq_set_ioprio_class(ioq, ioprio_class);
> > +	elv_ioq_set_ioprio(ioq, ioprio);
> > +	ioq->pid = current->pid;
> 
> Is pid used for cgroup association later? I don't see why we save the
> pid otherwise? If yes, why not store the cgroup of the current->pid?
> 

This is just for logging purposes (blktrace), useful for CFQ where every task
context sets up one queue and this number becomes the identifier for the queue.
Look at elv_log_ioq(), which uses ioq->pid.

[..]
> > + * coop tells that io scheduler selected a queue for us and we did not
> 
> coop?

coop refers to "cooperating". I guess "coop" is not descriptive. I will
change the name to "cooperating" and also put more description for
clarity.

[..]
> > diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> > new file mode 100644
> > index 0000000..5b6c1cc
> > --- /dev/null
> > +++ b/block/elevator-fq.h
> > @@ -0,0 +1,473 @@
> > +/*
> > + * BFQ: data structures and common functions prototypes.
> > + *
> > + * Based on ideas and code from CFQ:
> > + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
> > + *
> > + * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
> > + *		      Paolo Valente <paolo.valente@unimore.it>
> > + * Copyright (C) 2009 Vivek Goyal <vgoyal@redhat.com>
> > + * 	              Nauman Rafique <nauman@google.com>
> > + */
> > +
> > +#include <linux/blkdev.h>
> > +
> > +#ifndef _BFQ_SCHED_H
> > +#define _BFQ_SCHED_H
> > +
> > +#define IO_IOPRIO_CLASSES	3
> > +
> > +typedef u64 bfq_timestamp_t;
> > +typedef unsigned long bfq_weight_t;
> > +typedef unsigned long bfq_service_t;
> 
> Does this abstraction really provide any benefit? Why not directly use
> the standard C types, make the code easier to read.

I think using the standard C types is better now. Will get rid of these
abstractions. Fabio also seems to be ok with this change.

> 
> > +struct io_entity;
> > +struct io_queue;
> > +
> > +#ifdef CONFIG_ELV_FAIR_QUEUING
> > +
> > +#define ELV_ATTR(name) \
> > +	__ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store)
> > +
> > +/**
> > + * struct bfq_service_tree - per ioprio_class service tree.
> 
> Comment is old, does not reflect the newer name

Yes, this is all over the code. I have not taken care of updating the
comments from the original BFQ code. Will do it.

> 
> > + * @active: tree for active entities (i.e., those backlogged).
> > + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
> > + * @first_idle: idle entity with minimum F_i.
> > + * @last_idle: idle entity with maximum F_i.
> > + * @vtime: scheduler virtual time.
> > + * @wsum: scheduler weight sum; active and idle entities contribute to it.
> > + *
> > + * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
> > + * ioprio_class has its own independent scheduler, and so its own
> > + * bfq_service_tree.  All the fields are protected by the queue lock
> > + * of the containing efqd.
> > + */
> > +struct io_service_tree {
> > +	struct rb_root active;
> > +	struct rb_root idle;
> > +
> > +	struct io_entity *first_idle;
> > +	struct io_entity *last_idle;
> > +
> > +	bfq_timestamp_t vtime;
> > +	bfq_weight_t wsum;
> > +};
> > +
> > +/**
> > + * struct bfq_sched_data - multi-class scheduler.
> 
> Again the naming convention is broken, you need to change several
> bfq's to io's :)

Yes. Will do. :-)

> > +/*
> > + * A common structure embedded by every io scheduler into their respective
> > + * queue structure.
> > + */
> > +struct io_queue {
> > +	struct io_entity entity;
> 
> So the io_queue has an abstract entity called io_entity that contains
> it QoS parameters? Correct?
> 
> > +	atomic_t ref;
> > +	unsigned int flags;
> > +
> > +	/* Pointer to generic elevator data structure */
> > +	struct elv_fq_data *efqd;
> > +	pid_t pid;
> 
> Why do we store the pid?

pid of the process which caused io queue creation.

> 
> > +
> > +	/* Number of requests queued on this io queue */
> > +	unsigned long nr_queued;
> > +
> > +	/* Requests dispatched from this queue */
> > +	int dispatched;
> > +
> > +	/* Keep a track of think time of processes in this queue */
> > +	unsigned long last_end_request;
> > +	unsigned long ttime_total;
> > +	unsigned long ttime_samples;
> > +	unsigned long ttime_mean;
> > +
> > +	unsigned long slice_end;
> > +
> > +	/* Pointer to io scheduler's queue */
> > +	void *sched_queue;
> > +};
> > +
> > +struct io_group {
> > +	struct io_sched_data sched_data;
> > +
> > +	/* async_queue and idle_queue are used only for cfq */
> > +	struct io_queue *async_queue[2][IOPRIO_BE_NR];
> 
> Again the 2 is confusing
> 

Taken from CFQ. CFQ supports 8 prio levels each for the RT and BE classes,
and we maintain one async queue pointer per prio level for both classes.
The 2 above indexes the class (RT or BE).

> > +	struct io_queue *async_idle_queue;
> > +
> > +	/*
> > +	 * Used to track any pending rt requests so we can pre-empt current
> > +	 * non-RT cfqq in service when this value is non-zero.
> > +	 */
> > +	unsigned int busy_rt_queues;
> > +};
> > +
> > +struct elv_fq_data {
> 
> What does fq stand for?

Fair queuing. Any suggestions to make it better?

> 
> > +	struct io_group *root_group;
> > +
> > +	struct request_queue *queue;
> > +	unsigned int busy_queues;
> > +
> > +	/* Number of requests queued */
> > +	int rq_queued;
> > +
> > +	/* Pointer to the ioscheduler queue being served */
> > +	void *active_queue;
> > +
> > +	int rq_in_driver;
> > +	int hw_tag;
> > +	int hw_tag_samples;
> > +	int rq_in_driver_peak;
> 
> Some comments of _in_driver and _in_driver_peak would be nice.

Taken from CFQ, so somebody familiar with the CFQ code can quickly relate.
But anyway, I will put a couple of lines of comments.

> 
> > +
> > +	/*
> > +	 * elevator fair queuing layer has the capability to provide idling
> > +	 * for ensuring fairness for processes doing dependent reads.
> > +	 * This might be needed to ensure fairness among two processes doing
> > +	 * synchronous reads in two different cgroups. noop and deadline don't
> > +	 * have any notion of anticipation/idling. As of now, these are the
> > +	 * users of this functionality.
> > +	 */
> > +	unsigned int elv_slice_idle;
> > +	struct timer_list idle_slice_timer;
> > +	struct work_struct unplug_work;
> > +
> > +	unsigned int elv_slice[2];
> 
> Why [2] makes the code hearder to read

Taken from CFQ. It represents the base slice length for sync and async
queues. Will put a line of comment.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-23  2:05       ` Vivek Goyal
@ 2009-06-23  2:20           ` Jeff Moyer
  -1 siblings, 0 replies; 176+ messages in thread
From: Jeff Moyer @ 2009-06-23  2:20 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:

> On Mon, Jun 22, 2009 at 02:16:12PM +0530, Balbir Singh wrote:
>> > +	ioq->pid = current->pid;
>> 
>> Is pid used for cgroup association later? I don't see why we save the
>> pid otherwise? If yes, why not store the cgroup of the current->pid?
>> 
>
> This is just for logging purposes (blktrace), useful for CFQ where every task
> context sets up one queue and this number becomes the identifier for the queue.
> Look at elv_log_ioq(), which uses ioq->pid.

Well, that's not 100% accurate as tasks can share I/O contexts.
However, the 1:1 mapping does hold true most of the time.
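
For example (just as an illustration), tasks created with the CLONE_IO
clone flag share a single io_context, so more than one task can end up
logged under the same pid-named queue:

	/* hypothetical worker spawn; parent and child share one io_context */
	clone(worker_fn, child_stack_top, CLONE_IO | SIGCHLD, arg);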

> [..]
>> > + * coop tells that io scheduler selected a queue for us and we did not
>> 
>> coop?
>
> coop refers to "cooperating". I guess "coop" is not descriptive. I will
> change the name to "cooperating" and also put more description for
> clarity.

I think just more description is fine.  I'm not sure you need to spell
out cooperating (that will make for some long lines!).

>> > +	struct io_queue *async_idle_queue;
>> > +
>> > +	/*
>> > +	 * Used to track any pending rt requests so we can pre-empt current
>> > +	 * non-RT cfqq in service when this value is non-zero.
>> > +	 */
>> > +	unsigned int busy_rt_queues;
>> > +};
>> > +
>> > +struct elv_fq_data {
>> 
>> What does fq stand for?
>
> Fair queuing. Any suggestions to make it better?

I think you could just put it in the comment.

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
@ 2009-06-23  2:20           ` Jeff Moyer
  0 siblings, 0 replies; 176+ messages in thread
From: Jeff Moyer @ 2009-06-23  2:20 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Balbir Singh, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, fchecconi, paolo.valente, ryov,
	fernando, s-uchida, taka, guijianfeng, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

Vivek Goyal <vgoyal@redhat.com> writes:

> On Mon, Jun 22, 2009 at 02:16:12PM +0530, Balbir Singh wrote:
>> > +	ioq->pid = current->pid;
>> 
>> Is pid used for cgroup association later? I don't see why we save the
>> pid otherwise? If yes, why not store the cgroup of the current->pid?
>> 
>
> This is just for logging purposes (blktrace), useful for CFQ where every task
> context sets up one queue and this number becomes the identifier for the queue.
> Look at elv_log_ioq(), which uses ioq->pid.

Well, that's not 100% accurate as tasks can share I/O contexts.
However, the 1:1 mapping does hold true most of the time.

> [..]
>> > + * coop tells that io scheduler selected a queue for us and we did not
>> 
>> coop?
>
> coop refers to "cooperating". I guess "coop" is not descriptive. I will
> change the name to "cooperating" and also put more description for
> clarity.

I think just more description is fine.  I'm not sure you need to spell
out cooperating (that will make for some long lines!).

>> > +	struct io_queue *async_idle_queue;
>> > +
>> > +	/*
>> > +	 * Used to track any pending rt requests so we can pre-empt current
>> > +	 * non-RT cfqq in service when this value is non-zero.
>> > +	 */
>> > +	unsigned int busy_rt_queues;
>> > +};
>> > +
>> > +struct elv_fq_data {
>> 
>> What does fq stand for?
>
> Fair queuing. Any suggestions to make it better?

I think you could just put it in the comment.

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]       ` <20090622124313.GF28770-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
@ 2009-06-23  2:43         ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23  2:43 UTC (permalink / raw)
  To: Fabio Checconi
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:

[..]
> > > +/**
> > > + * bfq_first_active - find the eligible entity with the smallest finish time
> > > + * @st: the service tree to select from.
> > > + *
> > > + * This function searches the first schedulable entity, starting from the
> > > + * root of the tree and going on the left every time on this side there is
> > > + * a subtree with at least one eligible (start <= vtime) entity.  The path
> > > + * on the right is followed only if a) the left subtree contains no eligible
> > > + * entities and b) no eligible entity has been found yet.
> > > + */
> > > +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> > > +{
> > > +	struct io_entity *entry, *first = NULL;
> > > +	struct rb_node *node = st->active.rb_node;
> > > +
> > > +	while (node != NULL) {
> > > +		entry = rb_entry(node, struct io_entity, rb_node);
> > > +left:
> > > +		if (!bfq_gt(entry->start, st->vtime))
> > > +			first = entry;
> > > +
> > > +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> > > +
> > > +		if (node->rb_left != NULL) {
> > > +			entry = rb_entry(node->rb_left,
> > > +					 struct io_entity, rb_node);
> > > +			if (!bfq_gt(entry->min_start, st->vtime)) {
> > > +				node = node->rb_left;
> > > +				goto left;
> > > +			}
> > > +		}
> > > +		if (first != NULL)
> > > +			break;
> > > +		node = node->rb_right;
> > 
> > Please help me understand this, we sort the tree by finish time, but
> > search by vtime, start_time. The worst case could easily be O(N),
> > right?
> > 
> 
> no, (again, the full answer is in the paper); the nice property of
> min_start is that it partitions the tree in two regions, one with
> eligible entities and one without any of them.  once we know that
> there is one eligible entity (checking the min_start at the root)
> we can find the node i with min(F_i) subject to S_i < V walking down
> a single path from the root to the leftmost eligible entity.  (we
> need to go to the right only if the subtree on the left contains 
> no eligible entities at all.)  since the RB tree is balanced this
> can be done in O(log N).
> 

Hi Fabio,

When I go through the paper you mentioned above, they seem to have sorted
the tree based on eligible time (which looks like the equivalent of the
start time) and then keep track of the minimum deadline on each node (the
equivalent of the finish time).

We seem to be doing the reverse in BFQ, where we sort the tree on finish
time and keep track of the minimum start time on each node. Is there any
specific reason behind that?

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-22 12:43     ` Fabio Checconi
@ 2009-06-23  2:43         ` Vivek Goyal
  2009-06-23  2:43         ` Vivek Goyal
  1 sibling, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23  2:43 UTC (permalink / raw)
  To: Fabio Checconi
  Cc: Balbir Singh, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, paolo.valente, ryov, fernando,
	s-uchida, taka, guijianfeng, jmoyer, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:

[..]
> > > +/**
> > > + * bfq_first_active - find the eligible entity with the smallest finish time
> > > + * @st: the service tree to select from.
> > > + *
> > > + * This function searches the first schedulable entity, starting from the
> > > + * root of the tree and going on the left every time on this side there is
> > > + * a subtree with at least one eligible (start <= vtime) entity.  The path
> > > + * on the right is followed only if a) the left subtree contains no eligible
> > > + * entities and b) no eligible entity has been found yet.
> > > + */
> > > +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> > > +{
> > > +	struct io_entity *entry, *first = NULL;
> > > +	struct rb_node *node = st->active.rb_node;
> > > +
> > > +	while (node != NULL) {
> > > +		entry = rb_entry(node, struct io_entity, rb_node);
> > > +left:
> > > +		if (!bfq_gt(entry->start, st->vtime))
> > > +			first = entry;
> > > +
> > > +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> > > +
> > > +		if (node->rb_left != NULL) {
> > > +			entry = rb_entry(node->rb_left,
> > > +					 struct io_entity, rb_node);
> > > +			if (!bfq_gt(entry->min_start, st->vtime)) {
> > > +				node = node->rb_left;
> > > +				goto left;
> > > +			}
> > > +		}
> > > +		if (first != NULL)
> > > +			break;
> > > +		node = node->rb_right;
> > 
> > Please help me understand this, we sort the tree by finish time, but
> > search by vtime, start_time. The worst case could easily be O(N),
> > right?
> > 
> 
> no, (again, the full answer is in the paper); the nice property of
> min_start is that it partitions the tree in two regions, one with
> eligible entities and one without any of them.  once we know that
> there is one eligible entity (checking the min_start at the root)
> we can find the node i with min(F_i) subject to S_i < V walking down
> a single path from the root to the leftmost eligible entity.  (we
> need to go to the right only if the subtree on the left contains 
> no eligible entities at all.)  since the RB tree is balanced this
> can be done in O(log N).
> 

Hi Fabio,

When I go through the paper you mentioned above, they seem to have sorted
the tree based on eligible time (which looks like the equivalent of the
start time) and then keep track of the minimum deadline on each node (the
equivalent of the finish time).

We seem to be doing the reverse in BFQ, where we sort the tree on finish
time and keep track of the minimum start time on each node. Is there any
specific reason behind that?

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
@ 2009-06-23  2:43         ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23  2:43 UTC (permalink / raw)
  To: Fabio Checconi
  Cc: dhaval, snitzer, peterz, dm-devel, dpshah, jens.axboe, agk,
	Balbir Singh, paolo.valente, guijianfeng, fernando, mikew,
	jmoyer, nauman, m-ikeda, lizf, akpm, containers, linux-kernel,
	s-uchida, righi.andrea, jbaron

On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:

[..]
> > > +/**
> > > + * bfq_first_active - find the eligible entity with the smallest finish time
> > > + * @st: the service tree to select from.
> > > + *
> > > + * This function searches the first schedulable entity, starting from the
> > > + * root of the tree and going on the left every time on this side there is
> > > + * a subtree with at least one eligible (start <= vtime) entity.  The path
> > > + * on the right is followed only if a) the left subtree contains no eligible
> > > + * entities and b) no eligible entity has been found yet.
> > > + */
> > > +static struct io_entity *bfq_first_active_entity(struct io_service_tree *st)
> > > +{
> > > +	struct io_entity *entry, *first = NULL;
> > > +	struct rb_node *node = st->active.rb_node;
> > > +
> > > +	while (node != NULL) {
> > > +		entry = rb_entry(node, struct io_entity, rb_node);
> > > +left:
> > > +		if (!bfq_gt(entry->start, st->vtime))
> > > +			first = entry;
> > > +
> > > +		BUG_ON(bfq_gt(entry->min_start, st->vtime));
> > > +
> > > +		if (node->rb_left != NULL) {
> > > +			entry = rb_entry(node->rb_left,
> > > +					 struct io_entity, rb_node);
> > > +			if (!bfq_gt(entry->min_start, st->vtime)) {
> > > +				node = node->rb_left;
> > > +				goto left;
> > > +			}
> > > +		}
> > > +		if (first != NULL)
> > > +			break;
> > > +		node = node->rb_right;
> > 
> > Please help me understand this, we sort the tree by finish time, but
> > search by vtime, start_time. The worst case could easily be O(N),
> > right?
> > 
> 
> no, (again, the full answer is in the paper); the nice property of
> min_start is that it partitions the tree in two regions, one with
> eligible entities and one without any of them.  once we know that
> there is one eligible entity (checking the min_start at the root)
> we can find the node i with min(F_i) subject to S_i < V walking down
> a single path from the root to the leftmost eligible entity.  (we
> need to go to the right only if the subtree on the left contains 
> no eligible entities at all.)  since the RB tree is balanced this
> can be done in O(log N).
> 

Hi Fabio,

When I go through the paper you mentioned above, they seem to have sorted
the tree based on eligible time (which looks like the equivalent of the
start time) and then keep track of the minimum deadline on each node (the
equivalent of the finish time).

We seem to be doing the reverse in BFQ, where we sort the tree on finish
time and keep track of the minimum start time on each node. Is there any
specific reason behind that?

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-23  2:43         ` Vivek Goyal
@ 2009-06-23  4:10             ` Fabio Checconi
  -1 siblings, 0 replies; 176+ messages in thread
From: Fabio Checconi @ 2009-06-23  4:10 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	Balbir Singh, paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

> From: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Date: Mon, Jun 22, 2009 10:43:37PM -0400
>
> On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:
> 
...
> > > Please help me understand this, we sort the tree by finish time, but
> > > search by vtime, start_time. The worst case could easily be O(N),
> > > right?
> > > 
> > 
> > no, (again, the full answer is in the paper); the nice property of
> > min_start is that it partitions the tree in two regions, one with
> > eligible entities and one without any of them.  once we know that
> > there is one eligible entity (checking the min_start at the root)
> > we can find the node i with min(F_i) subject to S_i < V walking down
> > a single path from the root to the leftmost eligible entity.  (we
> > need to go to the right only if the subtree on the left contains 
> > no eligible entities at all.)  since the RB tree is balanced this
> > can be done in O(log N).
> > 
> 
> Hi Fabio,
> 
> When I go thorough the paper you mentioned above, they seem to have
> sorted the tree based on eligible time (looks like equivalent of start
> time) and then keep track of minimum deadline on each node (equivalnet of
> finish time).
> 
> We seem to be doing reverse in BFQ where we sort tree on finish time
> and keep track of minimum start time on each node. Is there any specific
> reason behind that?
> 

Well... no specific reasons...  I think that our implementation is easier
to understand than the one in the paper, because it actually uses finish
times as the ordering key, and min_start to quickly locate eligible
subtrees, following the definition of the algorithm.

Moreover, if you look at the get_req() code in the paper, it needs a
couple of loops to get to the result, while with our implementation
we save the second loop.

Our version is still correct, because it always moves to the left
(towards smaller finish times), except when moving to the left would
mean entering a non-feasible subtree, in which case it moves to the
right.
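
To make that concrete, a small made-up example (not from the patch):

	/*
	 * The active tree is keyed on finish time F; min_start is the
	 * smallest start time S in each subtree.  With vtime V = 10:
	 *
	 *	      (F=30, S=12, min_start=3)
	 *	      /                       \
	 *	(F=20, S=3, min_start=3)   (F=40, S=25, min_start=25)
	 *
	 * The right subtree has min_start = 25 > V, so it cannot contain
	 * an eligible entity and is never visited.  The walk goes left,
	 * finds the eligible (S = 3 <= V) entity and returns it as the
	 * one with the smallest finish time among eligible entities
	 * (F = 20), in a single root-to-leaf pass.
	 */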

Unfortunately I'm not aware of any paper describing a version of the
algorithm more similar to the one we've implemented.  Sorry for not
having mentioned that difference in the comments nor anywhere else,
it has been a long long time since I read the paper, and I must have
forgotten about that.

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
@ 2009-06-23  4:10             ` Fabio Checconi
  0 siblings, 0 replies; 176+ messages in thread
From: Fabio Checconi @ 2009-06-23  4:10 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Balbir Singh, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, paolo.valente, ryov, fernando,
	s-uchida, taka, guijianfeng, jmoyer, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

> From: Vivek Goyal <vgoyal@redhat.com>
> Date: Mon, Jun 22, 2009 10:43:37PM -0400
>
> On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:
> 
...
> > > Please help me understand this, we sort the tree by finish time, but
> > > search by vtime, start_time. The worst case could easily be O(N),
> > > right?
> > > 
> > 
> > no, (again, the full answer is in the paper); the nice property of
> > min_start is that it partitions the tree in two regions, one with
> > eligible entities and one without any of them.  once we know that
> > there is one eligible entity (checking the min_start at the root)
> > we can find the node i with min(F_i) subject to S_i < V walking down
> > a single path from the root to the leftmost eligible entity.  (we
> > need to go to the right only if the subtree on the left contains 
> > no eligible entities at all.)  since the RB tree is balanced this
> > can be done in O(log N).
> > 
> 
> Hi Fabio,
> 
> When I go thorough the paper you mentioned above, they seem to have
> sorted the tree based on eligible time (looks like equivalent of start
> time) and then keep track of minimum deadline on each node (equivalnet of
> finish time).
> 
> We seem to be doing reverse in BFQ where we sort tree on finish time
> and keep track of minimum start time on each node. Is there any specific
> reason behind that?
> 

Well... no specific reasons...  I think that our implementation is easier
to understand than the one in the paper, because it actually uses finish
times as the ordering key, and min_start to quickly locate eligible
subtrees, following the definition of the algorithm.

Moreover, if you look at the get_req() code in the paper, it needs a
couple of loops to get to the result, while with our implementation
we save the second loop.

Our version is still correct, because it always moves to the left
(towards smaller finish times), except when moving to the left would
mean entering a non feasible subtree, in which case it moves to the
right.

Unfortunately I'm not aware of any paper describing a version of the
algorithm more similar to the one we've implemented.  Sorry for not
having mentioned that difference in the comments nor anywhere else,
it has been a long long time since I read the paper, and I must have
forgotten about that.

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups
  2009-06-22 17:21       ` Vivek Goyal
  (?)
@ 2009-06-23  6:44       ` Gui Jianfeng
  2009-06-23 14:02           ` Vivek Goyal
       [not found]         ` <4A4079B8.4020402-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
  -1 siblings, 2 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-23  6:44 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

Vivek Goyal wrote:
> On Mon, Jun 22, 2009 at 03:44:08PM +0800, Gui Jianfeng wrote:
>> Preempt the ongoing non-rt ioq if there are rt ioqs waiting to be
>> dispatched in ancestor or sibling groups. It will give other groups'
>> rt ioqs a chance to dispatch ASAP.
>>
> 
> Hi Gui,
> 
> Will the new preemption logic of traversing up the hierarchy, so that
> both the new queue and the old queue are at the same level when taking a
> preemption decision, not take care of the above scenario?

Hi Vivek,

Would you explain a bit what you mean by "both the new queue and the old
queue are at the same level to take a preemption decision"? I don't
understand it well.

> 
> Please have a look at bfq_find_matching_entity().
> 
> At the same time we probably don't want to preempt the non-rt queue
> with an RT queue in a sibling group until and unless the sibling group
> is an RT group.
> 
> 		root
> 		/  \
> 	   BEgrpA  BEgrpB
> 	      |     |	
> 	  BEioq1   RTioq2
> 
> Above we have two BE groups, A and B. Assume the ioq in group A is being
> served and then an RT request in group B comes. Because group B is a
> BE class group, we should not preempt the queue in group A.

  Yes, I also have this concern. So it does not allow a non-rt group to
  preempt another group. I'll check whether there is a way to address this
  issue.

> 
> Thanks
> Vivek
> 
> 
>> Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
>> ---
>>  block/elevator-fq.c |   44 +++++++++++++++++++++++++++++++++++++++-----
>>  block/elevator-fq.h |    1 +
>>  2 files changed, 40 insertions(+), 5 deletions(-)
>>
>> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
>> index 2ad40eb..80526fd 100644
>> --- a/block/elevator-fq.c
>> +++ b/block/elevator-fq.c
>> @@ -3245,8 +3245,16 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
>>  	elv_mark_ioq_busy(ioq);
>>  	efqd->busy_queues++;
>>  	if (elv_ioq_class_rt(ioq)) {
>> +		struct io_entity *entity;
>>  		struct io_group *iog = ioq_to_io_group(ioq);
>> +
>>  		iog->busy_rt_queues++;
>> +		entity = iog->entity.parent;
>> +
>> +		for_each_entity(entity) {
>> +			iog = io_entity_to_iog(entity);
>> +			iog->sub_busy_rt_queues++;
>> +		}
>>  	}
>>  
>>  #ifdef CONFIG_DEBUG_GROUP_IOSCHED
>> @@ -3290,9 +3298,18 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
>>  	elv_clear_ioq_busy(ioq);
>>  	BUG_ON(efqd->busy_queues == 0);
>>  	efqd->busy_queues--;
>> +
>>  	if (elv_ioq_class_rt(ioq)) {
>> +		struct io_entity *entity;
>>  		struct io_group *iog = ioq_to_io_group(ioq);
>> +
>>  		iog->busy_rt_queues--;
>> +		entity = iog->entity.parent;
>> +
>> +		for_each_entity(entity) {
>> +			iog = io_entity_to_iog(entity);
>> +			iog->sub_busy_rt_queues--;
>> +		}
>>  	}
>>  
>>  	elv_deactivate_ioq(efqd, ioq, requeue);
>> @@ -3735,12 +3752,32 @@ int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
>>  	return ret;
>>  }
>>  
>> +static int check_rt_queue(struct io_queue *ioq)
>> +{
>> +	struct io_group *iog;
>> +	struct io_entity *entity;
>> +
>> +	iog = ioq_to_io_group(ioq);
>> +
>> +	if (iog->busy_rt_queues)
>> +		return 1;
>> +
>> +	entity = iog->entity.parent;
>> +
>> +	for_each_entity(entity) {
>> +		iog = io_entity_to_iog(entity);
>> +		if (iog->sub_busy_rt_queues)
>> +			return 1;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>  /* Common layer function to select the next queue to dispatch from */
>>  void *elv_fq_select_ioq(struct request_queue *q, int force)
>>  {
>>  	struct elv_fq_data *efqd = &q->elevator->efqd;
>>  	struct io_queue *new_ioq = NULL, *ioq = elv_active_ioq(q->elevator);
>> -	struct io_group *iog;
>>  	int slice_expired = 1;
>>  
>>  	if (!elv_nr_busy_ioq(q->elevator))
>> @@ -3811,12 +3848,9 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
>>  	/*
>>  	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
>>  	 * cfqq.
>> -	 *
>> -	 * TODO: This does not seem right across the io groups. Fix it.
>>  	 */
>> -	iog = ioq_to_io_group(ioq);
>>  
>> -	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
>> +	if (!elv_ioq_class_rt(ioq) && check_rt_queue(ioq)) {
>>  		/*
>>  		 * We simulate this as cfqq timed out so that it gets to bank
>>  		 * the remaining of its time slice.
>> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
>> index b3193f8..be6c1af 100644
>> --- a/block/elevator-fq.h
>> +++ b/block/elevator-fq.h
>> @@ -248,6 +248,7 @@ struct io_group {
>>  	 * non-RT cfqq in service when this value is non-zero.
>>  	 */
>>  	unsigned int busy_rt_queues;
>> +	unsigned int sub_busy_rt_queues;
>>  
>>  	int deleting;
>>  	unsigned short iocg_id;
>> -- 
>> 1.5.4.rc3
>>
> 
> 
> 

-- 
Regards
Gui Jianfeng


^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
  2009-06-22 17:08               ` Vivek Goyal
  (?)
  (?)
@ 2009-06-23  6:52               ` Balbir Singh
  -1 siblings, 0 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-23  6:52 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Jeff Moyer, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, fchecconi, paolo.valente, ryov,
	fernando, s-uchida, taka, guijianfeng, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

* Vivek Goyal <vgoyal@redhat.com> [2009-06-22 13:08:12]:

> On Mon, Jun 22, 2009 at 12:06:42PM -0400, Jeff Moyer wrote:
> > Vivek Goyal <vgoyal@redhat.com> writes:
> > 
> > > On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
> > >> Vivek Goyal <vgoyal@redhat.com> writes:
> > >> 
> > >> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> > >> >> * Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:18]:
> > >> >> 
> > >> >> > 
> > >> >> > Hi All,
> > >> >> > 
> > >> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> > >> >> [snip]
> > >> >> 
> > >> >> > Testing
> > >> >> > =======
> > >> >> >
> > >> >> 
> > >> >> [snip]
> > >> >> 
> > >> >> I've not been reading through the discussions in complete detail, but
> > >> >> I see no reference to async reads or aio. In the case of aio, aio
> > >> >> presumes the context of the user space process. Could you elaborate on
> > >> >> any testing you've done with these cases? 
> > >> >> 
> > >> >
> > >> > Hi Balbir,
> > >> >
> > >> > So far I had not done any testing with AIO. I have done some just now.
> > >> > Here are the results.
> > >> >
> > >> > Test1 (AIO reads)
> > >> > ================
> > >> > Set up two fio AIO read jobs in two cgroups with weights 1000 and 500
> > >> > respectively. I am using the cfq scheduler. Following are some lines from my test
> > >> > script.
> > >> >
> > >> > ===================================================================
> > >> > fio_args="--ioengine=libaio --rw=read --size=512M"
> > >> 
> > >> AIO doesn't make sense without O_DIRECT.
> > >> 
> > >
> > > Ok, here are the read results with --direct=1 for reads. In previous posting,
> > > writes were already direct.
> > >
> > > test1 statistics: time=8 16 20796   sectors=8 16 1049648
> > > test2 statistics: time=8 16 10551   sectors=8 16 581160
> > >
> > >
> > > Not sure why reads are so slow with --direct=1? In the previous test
> > > (no direct IO), I had cleared the caches using
> > > (echo 3 > /proc/sys/vm/drop_caches) so reads could not have come from page
> > > cache?
> > 
> > O_DIRECT bypasses the page cache, and hence the readahead code.  Try
> > driving deeper queue depths and/or using larger I/O sizes.
> 
> Ok. Thanks. I tried increasing iodepth to 20 and it helped a lot.
> 
> test1 statistics: time=8 16 6672   sectors=8 16 1049656
> test2 statistics: time=8 16 3508   sectors=8 16 583432
>

Good to see.. Thanks! 
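
(As a quick check of the proportions: 6672/3508 is roughly 1.9, close to
the configured 1000:500 weight ratio, assuming the reported disk-time
figures are directly comparable between the two cgroups.)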

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-23  4:10             ` Fabio Checconi
@ 2009-06-23  7:32               ` Balbir Singh
  -1 siblings, 0 replies; 176+ messages in thread
From: Balbir Singh @ 2009-06-23  7:32 UTC (permalink / raw)
  To: Fabio Checconi
  Cc: Vivek Goyal, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, paolo.valente, ryov, fernando,
	s-uchida, taka, guijianfeng, jmoyer, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

* Fabio Checconi <fchecconi@gmail.com> [2009-06-23 06:10:52]:

> > From: Vivek Goyal <vgoyal@redhat.com>
> > Date: Mon, Jun 22, 2009 10:43:37PM -0400
> >
> > On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:
> > 
> ...
> > > > Please help me understand this, we sort the tree by finish time, but
> > > > search by vtime, start_time. The worst case could easily be O(N),
> > > > right?
> > > > 
> > > 
> > > no, (again, the full answer is in the paper); the nice property of
> > > min_start is that it partitions the tree in two regions, one with
> > > eligible entities and one without any of them.  once we know that
> > > there is one eligible entity (checking the min_start at the root)
> > > we can find the node i with min(F_i) subject to S_i < V walking down
> > > a single path from the root to the leftmost eligible entity.  (we
> > > need to go to the right only if the subtree on the left contains 
> > > no eligible entities at all.)  since the RB tree is balanced this
> > > can be done in O(log N).
> > > 
> > 
> > Hi Fabio,
> > 
> > When I go through the paper you mentioned above, they seem to have
> > sorted the tree based on eligible time (looks like the equivalent of
> > start time) and then kept track of the minimum deadline on each node
> > (equivalent of finish time).
> > 
> > We seem to be doing the reverse in BFQ, where we sort the tree on finish
> > time and keep track of the minimum start time on each node. Is there any
> > specific reason behind that?
> > 
> 
> Well... no specific reasons...  I think that our implementation is easier
> to understand than the one in the paper, because it actually uses finish
> times as the ordering key, and min_start to quickly locate eligible
> subtrees, following the definition of the algorithm.
> 

Is it still O(log N)?

> Moreover, if you look at the get_req() code in the paper, it needs a
> couple of loops to get to the result, while with our implementation
> we save the second loop.
> 
> Our version is still correct, because it always moves to the left
> (towards smaller finish times), except when moving to the left would
> mean entering a non feasible subtree, in which case it moves to the
> right.
> 
> Unfortunately I'm not aware of any paper describing a version of the
> algorithm more similar to the one we've implemented.  Sorry for not
> having mentioned that difference in the comments or anywhere else;
> it has been a long, long time since I read the paper, and I must have
> forgotten about that.

/me needs to go read the paper in full.

-- 
	Balbir

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 07/20] io-controller: Export disk time used and nr sectors dipatched through cgroups
  2009-06-19 20:37   ` Vivek Goyal
@ 2009-06-23 12:10     ` Gui Jianfeng
  -1 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-23 12:10 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

Vivek Goyal wrote:
...
> +
> +static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
> +				struct cftype *cftype, struct seq_file *m)
> +{
> +	struct io_cgroup *iocg;
> +	struct io_group *iog;
> +	struct hlist_node *n;
> +
> +	if (!cgroup_lock_live_group(cgroup))
> +		return -ENODEV;
> +
> +	iocg = cgroup_to_io_cgroup(cgroup);
> +
> +	spin_lock_irq(&iocg->lock);

It's better to make use of rcu_read_lock instead since it's
a read action.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
---
 block/elevator-fq.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 2ad40eb..d779282 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1418,7 +1418,7 @@ static int io_cgroup_disk_time_read(struct cgroup *cgroup,
 
 	iocg = cgroup_to_io_cgroup(cgroup);
 
-	spin_lock_irq(&iocg->lock);
+	rcu_read_lock();
 	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
 		/*
 		 * There might be groups which are not functional and
@@ -1430,7 +1430,7 @@ static int io_cgroup_disk_time_read(struct cgroup *cgroup,
 					iog->entity.total_service);
 		}
 	}
-	spin_unlock_irq(&iocg->lock);
+	rcu_read_unlock();
 	cgroup_unlock();
 
 	return 0;
@@ -1448,7 +1448,7 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 
 	iocg = cgroup_to_io_cgroup(cgroup);
 
-	spin_lock_irq(&iocg->lock);
+	rcu_read_lock();
 	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
 		/*
 		 * There might be groups which are not functional and
@@ -1460,7 +1460,7 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 					iog->entity.total_sector_service);
 		}
 	}
-	spin_unlock_irq(&iocg->lock);
+	rcu_read_unlock();
 	cgroup_unlock();
 
 	return 0;
@@ -1478,7 +1478,7 @@ static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
 		return -ENODEV;
 
 	iocg = cgroup_to_io_cgroup(cgroup);
-	spin_lock_irq(&iocg->lock);
+	rcu_read_lock();
 	/* Loop through all the io groups and print statistics */
 	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
 		/*
@@ -1491,7 +1491,7 @@ static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
 					iog->queue_duration);
 		}
 	}
-	spin_unlock_irq(&iocg->lock);
+	rcu_read_unlock();
 	cgroup_unlock();
 
 	return 0;
-- 
1.5.4.rc3



> +	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
> +		/*
> +		 * There might be groups which are not functional and
> +		 * waiting to be reclaimed upon cgoup deletion.
> +		 */
> +		if (iog->key) {
> +			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
> +					MINOR(iog->dev),
> +					iog->entity.total_sector_service);
> +		}
> +	}
> +	spin_unlock_irq(&iocg->lock);
> +	cgroup_unlock();
> +
> +	return 0;
> +}
> +
>


^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
  2009-06-23  7:32               ` Balbir Singh
  (?)
  (?)
@ 2009-06-23 13:42               ` Fabio Checconi
  -1 siblings, 0 replies; 176+ messages in thread
From: Fabio Checconi @ 2009-06-23 13:42 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Vivek Goyal, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, paolo.valente, ryov, fernando,
	s-uchida, taka, guijianfeng, jmoyer, dhaval, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

> From: Balbir Singh <balbir@linux.vnet.ibm.com>
> Date: Tue, Jun 23, 2009 01:02:52PM +0530
>
> * Fabio Checconi <fchecconi@gmail.com> [2009-06-23 06:10:52]:
> 
> > > From: Vivek Goyal <vgoyal@redhat.com>
> > > Date: Mon, Jun 22, 2009 10:43:37PM -0400
> > >
> > > On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:
> > > 
> > ...
> > > > > Please help me understand this, we sort the tree by finish time, but
> > > > > search by vtime, start_time. The worst case could easily be O(N),
> > > > > right?
> > > > > 
> > > > 
> > > > no, (again, the full answer is in the paper); the nice property of
> > > > min_start is that it partitions the tree in two regions, one with
> > > > eligible entities and one without any of them.  once we know that
> > > > there is one eligible entity (checking the min_start at the root)
> > > > we can find the node i with min(F_i) subject to S_i < V walking down
> > > > a single path from the root to the leftmost eligible entity.  (we
> > > > need to go to the right only if the subtree on the left contains 
> > > > no eligible entities at all.)  since the RB tree is balanced this
> > > > can be done in O(log N).
> > > > 
> > > 
> > > Hi Fabio,
> > > 
> > > When I go through the paper you mentioned above, they seem to have
> > > sorted the tree based on eligible time (looks like the equivalent of
> > > start time) and then kept track of the minimum deadline on each node
> > > (equivalent of finish time).
> > > 
> > > We seem to be doing the reverse in BFQ, where we sort the tree on finish
> > > time and keep track of the minimum start time on each node. Is there any
> > > specific reason behind that?
> > > 
> > 
> > Well... no specific reasons...  I think that our implementation is easier
> > to understand than the one in the paper, because it actually uses finish
> > times as the ordering key, and min_start to quickly locate eligible
> > subtrees, following the definition of the algorithm.
> > 
> 
> Is it still O(log N)?
> 

Yes, it goes along a single path from the root to a leaf of a balanced
tree (i.e., it starts from the root, and at the end of each iteration
it selects the left or the right child of the current node), thus it is
O(log N).
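
(A worked bound, for reference: the service tree is a red-black tree, so
its height for N active entities is at most 2*log2(N+1); the lookup visits
one node per level along a single root-to-leaf path, which gives the
O(log N) cost stated above.)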

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups
  2009-06-23  6:44       ` Gui Jianfeng
@ 2009-06-23 14:02           ` Vivek Goyal
       [not found]         ` <4A4079B8.4020402-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
  1 sibling, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23 14:02 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

On Tue, Jun 23, 2009 at 02:44:08PM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> > On Mon, Jun 22, 2009 at 03:44:08PM +0800, Gui Jianfeng wrote:
> >> Preempt the ongoing non-rt ioq if there are rt ioqs waiting to be
> >> dispatched in ancestor or sibling groups. It will give other groups'
> >> rt ioqs a chance to dispatch ASAP.
> >>
> > 
> > Hi Gui,
> > 
> > Will the new preemption logic of traversing up the hierarchy, so that
> > both the new queue and the old queue are at the same level when taking a
> > preemption decision, not take care of the above scenario?
> 
> Hi Vivek,
> 
> Would you explain a bit what you mean by "both the new queue and the old
> queue are at the same level to take a preemption decision"? I don't
> understand it well.
> 

Consider following hierarchy.

			root
			/ | 
		       A  1   
		       | 
		       2 
In the above diagram, A is the group and "1" and "2" are two io queues 
associated with tasks.

Now assume that queue "1" is being served and queue "2" gets backlogged.
Should queue 2 preempt queue 1 now?

To take that decision, we need to do the comparison between the type of the
entity of group A and that of queue 1 (that is, at the same level, or IOW,
the entities in question have the same parent). If group A is of class RT
and queue 1 is of class BE, then queue 2 should preempt queue 1; otherwise
not.

Hence, in hierarchical setups, the comparison for a preemption decision
should be done at the same level.

> > 
> > Please have a look at bfq_find_matching_entity().
> > 
> > At the same time we probably don't want to preempt the non-rt queue
> > with an RT queue in a sibling group until and unless the sibling group
> > is an RT group.
> > 
> > 		root
> > 		/  \
> > 	   BEgrpA  BEgrpB
> > 	      |     |	
> > 	  BEioq1   RTioq2
> > 
> > Above we have two BE groups, A and B. Assume the ioq in group A is being
> > served and then an RT request in group B comes. Because group B is a
> > BE class group, we should not preempt the queue in group A.
> 
>   Yes, I also have this concern. So it does not allow a non-rt group to
>   preempt another group. I'll check whether there is a way to address this
>   issue.
> 

So here also assume ioq1 is being served and ioq2 gets backlogged. To
decide whether ioq2 should preempt ioq1 or not, one needs to go up the
hierarchy until the two paths share a parent. That means going up to the
BEgrpA and BEgrpB level, where they have the common parent "root". Now
both groups are of class BE, hence ioq2 should not preempt ioq1.
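
To illustrate the rule, a minimal C sketch of that level-matching walk
(hypothetical function names, not the actual bfq_find_matching_entity()
or preemption code; it only assumes that struct io_entity carries
->parent and ->ioprio_class, as in this patch series, and uses
IOPRIO_CLASS_RT from <linux/ioprio.h>):

static int entity_depth(struct io_entity *entity)
{
	int depth = 0;

	while (entity->parent) {
		entity = entity->parent;
		depth++;
	}
	return depth;
}

/*
 * Return 1 if new_e should preempt cur_e, comparing the two entities at
 * the first level where both paths meet under a common parent.
 */
static int preempt_at_common_level(struct io_entity *new_e,
				   struct io_entity *cur_e)
{
	/* First bring both entities to the same depth. */
	while (entity_depth(new_e) > entity_depth(cur_e))
		new_e = new_e->parent;
	while (entity_depth(cur_e) > entity_depth(new_e))
		cur_e = cur_e->parent;

	/* Then climb in lockstep until they share a parent. */
	while (new_e->parent != cur_e->parent) {
		new_e = new_e->parent;
		cur_e = cur_e->parent;
	}

	/*
	 * Only an RT entity may preempt a non-RT one at this level; two
	 * BE groups (BEgrpA vs BEgrpB above) never trigger preemption.
	 */
	return new_e->ioprio_class == IOPRIO_CLASS_RT &&
	       cur_e->ioprio_class != IOPRIO_CLASS_RT;
}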

Hope it helps.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 07/20] io-controller: Export disk time used and nr sectors dipatched through cgroups
  2009-06-23 12:10     ` Gui Jianfeng
@ 2009-06-23 14:38       ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-23 14:38 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

On Tue, Jun 23, 2009 at 08:10:54PM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> ...
> > +
> > +static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
> > +				struct cftype *cftype, struct seq_file *m)
> > +{
> > +	struct io_cgroup *iocg;
> > +	struct io_group *iog;
> > +	struct hlist_node *n;
> > +
> > +	if (!cgroup_lock_live_group(cgroup))
> > +		return -ENODEV;
> > +
> > +	iocg = cgroup_to_io_cgroup(cgroup);
> > +
> > +	spin_lock_irq(&iocg->lock);
> 
> It's better to make use of rcu_read_lock instead since it's
> a read action.
> 

Thanks Gui. Queued for next posting.

Vivek

> Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
> ---
>  block/elevator-fq.c |   12 ++++++------
>  1 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index 2ad40eb..d779282 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -1418,7 +1418,7 @@ static int io_cgroup_disk_time_read(struct cgroup *cgroup,
>  
>  	iocg = cgroup_to_io_cgroup(cgroup);
>  
> -	spin_lock_irq(&iocg->lock);
> +	rcu_read_lock();
>  	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
>  		/*
>  		 * There might be groups which are not functional and
> @@ -1430,7 +1430,7 @@ static int io_cgroup_disk_time_read(struct cgroup *cgroup,
>  					iog->entity.total_service);
>  		}
>  	}
> -	spin_unlock_irq(&iocg->lock);
> +	rcu_read_unlock();
>  	cgroup_unlock();
>  
>  	return 0;
> @@ -1448,7 +1448,7 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
>  
>  	iocg = cgroup_to_io_cgroup(cgroup);
>  
> -	spin_lock_irq(&iocg->lock);
> +	rcu_read_lock();
>  	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
>  		/*
>  		 * There might be groups which are not functional and
> @@ -1460,7 +1460,7 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
>  					iog->entity.total_sector_service);
>  		}
>  	}
> -	spin_unlock_irq(&iocg->lock);
> +	rcu_read_unlock();
>  	cgroup_unlock();
>  
>  	return 0;
> @@ -1478,7 +1478,7 @@ static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
>  		return -ENODEV;
>  
>  	iocg = cgroup_to_io_cgroup(cgroup);
> -	spin_lock_irq(&iocg->lock);
> +	rcu_read_lock();
>  	/* Loop through all the io groups and print statistics */
>  	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
>  		/*
> @@ -1491,7 +1491,7 @@ static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
>  					iog->queue_duration);
>  		}
>  	}
> -	spin_unlock_irq(&iocg->lock);
> +	rcu_read_unlock();
>  	cgroup_unlock();
>  
>  	return 0;
> -- 
> 1.5.4.rc3
> 
> 
> 
> > +	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
> > +		/*
> > +		 * There might be groups which are not functional and
> > +		 * waiting to be reclaimed upon cgoup deletion.
> > +		 */
> > +		if (iog->key) {
> > +			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
> > +					MINOR(iog->dev),
> > +					iog->entity.total_sector_service);
> > +		}
> > +	}
> > +	spin_unlock_irq(&iocg->lock);
> > +	cgroup_unlock();
> > +
> > +	return 0;
> > +}
> > +
> >

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups
  2009-06-23 14:02           ` Vivek Goyal
  (?)
@ 2009-06-24  9:20           ` Gui Jianfeng
  2009-06-26  8:13               ` Gui Jianfeng
                               ` (2 more replies)
  -1 siblings, 3 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-24  9:20 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

Vivek Goyal wrote:
> On Tue, Jun 23, 2009 at 02:44:08PM +0800, Gui Jianfeng wrote:
>> Vivek Goyal wrote:
>>> On Mon, Jun 22, 2009 at 03:44:08PM +0800, Gui Jianfeng wrote:
>>>> Preempt the ongoing non-rt ioq if there are rt ioqs waiting to be dispatched
>>>> in ancestor or sibling groups. It will give other groups' rt ioqs a chance
>>>> to dispatch ASAP.
>>>>
>>> Hi Gui,
>>>
>>> Will the new preemption logic of traversing up the hierarchy, so that both
>>> the new queue and the old queue are at the same level when taking a
>>> preemption decision, not take care of the above scenario?
>> Hi Vivek,
>>
>> Would you explain a bit what you mean by "both new queue and old queue
>> are at the same level to take a preemption decision"? I don't understand it well.
>>
> 
> Consider following hierarchy.
> 
> 			root
> 			/ | 
> 		       A  1   
> 		       | 
> 		       2 
> In the above diagram, A is the group and "1" and "2" are two io queues 
> associated with tasks.
> 
> Now assume that queue "1" is being served and queue "2" gets backlogged.
> Should queue 2 preempt queue 1 now?
> 
> To take that decision, we need to do the comparison between the type of the
> entity of group A and that of queue 1 (that is, at the same level or, IOW, the
> entities in question have the same parent). If group A is of class RT and
> queue 1 is of type BE, then queue 2 should preempt queue 1, otherwise not.
> 
> Hence, in hierarchical setups, to take a preemption decision the comparison
> should be done at the same level.

  So what bfq_find_matching_entity() does is figure out the entities at the
  same level and, in turn, take the decision there.

> 
>>> Please have a look at bfq_find_matching_entity().
>>>
>>> At the same time we probably don't want to preempt the non-rt queue
>>> with an RT queue in sibling group until and unless sibling group is an
>>> RT group.
>>>
>>> 		root
>>> 		/  \
>>> 	   BEgrpA  BEgrpB
>>> 	      |     |	
>>> 	  BEioq1   RTioq2
>>>
>>> Above we have two BE groups, A and B. Assume the ioq in group A is being
>>> served and then an RT request in group B comes. Because group B is a
>>> BE class group, we should not preempt the queue in group A.
>>   Yes, I also have this concern. So it does not allow a non-rt group to
>>   preempt another group. I'll check whether there is a way to address this issue.
>>
> 
> So here also assume ioq1 is being served and ioq2 gets backlogged. To
> decide whether ioq2 should preempt ioq1 or not, one needs to go up the
> hierarchy until the two paths share a parent. That means one needs to go
> up to the BEgrpA and BEgrpB level, where they have the common parent
> "root". Now both groups are of class BE, hence ioq2 should not preempt ioq1.
> 
> Hope it helps.

  Thanks, it's very helpful.

  I have a thought now: we could maintain an rt ioq list in efqd, and have
  elv_fq_select_ioq() traverse this list to take a preemption decision for each
  available rt ioq at the same level (by using bfq_find_matching_entity()).
  I'd like to try it out.
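
  Very roughly, the check might look something like this (assuming an entity
  type with a parent pointer and an ioprio_class, plus a
  find_matching_entities() helper as sketched earlier in the thread; the
  rt_ioq list and all names here are invented only to illustrate the idea):

        struct rt_ioq {
                struct rt_ioq *next;            /* next busy RT queue */
                struct entity *entity;          /* that queue's sched entity */
        };

        /* Does any pending RT queue justify preempting the active queue? */
        static int rt_preempt_pending(struct entity *active,
                                      struct rt_ioq *rt_list)
        {
                struct rt_ioq *pos;

                for (pos = rt_list; pos; pos = pos->next) {
                        struct entity *a = active, *r = pos->entity;

                        find_matching_entities(&a, &r);
                        if (r->ioprio_class == CLASS_RT &&
                            a->ioprio_class == CLASS_BE)
                                return 1;       /* RT beats BE at this level */
                }
                return 0;
        }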

> 
> Thanks
> Vivek
> 
> 
> 

-- 
Regards
Gui Jianfeng


^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 18/20] io-controller: Support per cgroup per device  weights and io class
  2009-06-19 20:37   ` Vivek Goyal
@ 2009-06-24 21:52     ` Paul Menage
  -1 siblings, 0 replies; 176+ messages in thread
From: Paul Menage @ 2009-06-24 21:52 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, guijianfeng, jmoyer, dhaval, balbir, righi.andrea, m-ikeda,
	jbaron, agk, snitzer, akpm, peterz

On Fri, Jun 19, 2009 at 1:37 PM, Vivek Goyal<vgoyal@redhat.com> wrote:
>
> You can use the following format to play with the new interface.
> #echo DEV:weight:ioprio_class > /patch/to/cgroup/policy
> weight=0 means removing the policy for DEV.
>
> Examples:
> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> # echo /dev/hdb:300:2 > io.policy
> # cat io.policy
> dev weight class
> /dev/hdb 300 2

I think that the read and write should be consistent. Can you just use
white-space separation for both, rather than colon-separation for
writes and white-space separation for reads?

Also, storing device inode paths statically as strings into the
io_policy structure seems wrong, since it's quite possible for the
device node that was used originally to be gone by the time that
someone reads the io.policy file, or renamed, or even replaced with an
inode that refers to a different block device.

My preferred alternatives would be:

- read/write the value as a device number rather than a name
- read/write the block device's actual name (e.g. hda or sda) rather
than a path to the inode

Paul

^ permalink raw reply	[flat|nested] 176+ messages in thread

* [PATCH] io-controller: do some changes of io.policy interface
  2009-06-24 21:52     ` Paul Menage
@ 2009-06-25 10:23       ` Gui Jianfeng
  -1 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-25 10:23 UTC (permalink / raw)
  To: Paul Menage, Vivek Goyal
  Cc: linux-kernel, containers, dm-devel, jens.axboe, nauman, dpshah,
	lizf, mikew, fchecconi, paolo.valente, ryov, fernando, s-uchida,
	taka, jmoyer, dhaval, balbir, righi.andrea, m-ikeda, jbaron, agk,
	snitzer, akpm, peterz

Paul Menage wrote:
> On Fri, Jun 19, 2009 at 1:37 PM, Vivek Goyal<vgoyal@redhat.com> wrote:
>> You can use the following format to play with the new interface.
>> #echo DEV:weight:ioprio_class > /patch/to/cgroup/policy
>> weight=0 means removing the policy for DEV.
>>
>> Examples:
>> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
>> # echo /dev/hdb:300:2 > io.policy
>> # cat io.policy
>> dev weight class
>> /dev/hdb 300 2
> 
> I think that the read and write should be consistent. Can you just use
> white-space separation for both, rather than colon-separation for
> writes and white-space separation for reads?
> 
> Also, storing device inode paths statically as strings into the
> io_policy structure seems wrong, since it's quite possible for the
> device node that was used originally to be gone by the time that
> someone reads the io.policy file, or renamed, or even replaced with an
> inode that refers to a different block device.
> 
> My preferred alternatives would be:
> 
> - read/write the value as a device number rather than a name
> - read/write the block device's actual name (e.g. hda or sda) rather
> than a path to the inode
> 

Hi Paul, Vivek

Here is a patch to fix the issue Paul raised.

This patch achieves the following goals:
1. According to Paul's comment, modify the io.policy interface to
   use device numbers for read/write directly.
2. Use white-space separation for both reads and writes, rather than
   colon-separation for writes and white-space separation for reads.
3. Do stricter checking of the input.

old interface:
Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
# echo "/dev/hdb:300:2" > io.policy
# cat io.policy
dev weight class
/dev/hdb 300 2

new interface:
Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
# echo "3:64 300 2" > io.policy
# cat io.policy
dev     weight  class
3:64    300     2
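
(The "3:64" above is simply the major:minor number of /dev/hdb, as shown
by, e.g., "ls -l /dev/hdb".)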

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
---
 block/elevator-fq.c |   59 ++++++++++++++++++++++++++++++++++----------------
 block/elevator-fq.h |    1 -
 2 files changed, 40 insertions(+), 20 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index d779282..83c831b 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1895,12 +1895,12 @@ static int io_cgroup_policy_read(struct cgroup *cgrp, struct cftype *cft,
 	if (list_empty(&iocg->policy_list))
 		goto out;
 
-	seq_printf(m, "dev weight class\n");
+	seq_printf(m, "dev\tweight\tclass\n");
 
 	spin_lock_irq(&iocg->lock);
 	list_for_each_entry(pn, &iocg->policy_list, node) {
-		seq_printf(m, "%s %lu %lu\n", pn->dev_name,
-			   pn->weight, pn->ioprio_class);
+		seq_printf(m, "%u:%u\t%lu\t%lu\n", MAJOR(pn->dev),
+			   MINOR(pn->dev), pn->weight, pn->ioprio_class);
 	}
 	spin_unlock_irq(&iocg->lock);
 out:
@@ -1936,44 +1936,65 @@ static struct io_policy_node *policy_search_node(const struct io_cgroup *iocg,
 	return NULL;
 }
 
-static int devname_to_devnum(const char *buf, dev_t *dev)
+static int check_dev_num(dev_t dev)
 {
-	struct block_device *bdev;
+	int part = 0;
 	struct gendisk *disk;
-	int part;
 
-	bdev = lookup_bdev(buf);
-	if (IS_ERR(bdev))
+	disk = get_gendisk(dev, &part);
+	if (!disk || part)
 		return -ENODEV;
 
-	disk = get_gendisk(bdev->bd_dev, &part);
-	if (part)
-		return -EINVAL;
-
-	*dev = MKDEV(disk->major, disk->first_minor);
-	bdput(bdev);
-
 	return 0;
 }
 
 static int policy_parse_and_set(char *buf, struct io_policy_node *newpn)
 {
-	char *s[3], *p;
+	char *s[4], *p, *major_s = NULL, *minor_s = NULL;
 	int ret;
+	unsigned long major, minor;
 	int i = 0;
+	dev_t dev;
 
 	memset(s, 0, sizeof(s));
-	while ((p = strsep(&buf, ":")) != NULL) {
+	while ((p = strsep(&buf, " ")) != NULL) {
 		if (!*p)
 			continue;
 		s[i++] = p;
+
+		/* Prevent from inputing too many things */
+		if (i == 4)
+			break;
 	}
 
-	ret = devname_to_devnum(s[0], &newpn->dev);
+	if (i != 3)
+		return -EINVAL;
+
+	p = strsep(&s[0], ":");
+	if (p != NULL)
+		major_s = p;
+	else
+		return -EINVAL;
+
+	minor_s = s[0];
+	if (!minor_s)
+		return -EINVAL;
+
+	ret = strict_strtoul(major_s, 10, &major);
+	if (ret)
+		return -EINVAL;
+
+	ret = strict_strtoul(minor_s, 10, &minor);
+	if (ret)
+		return -EINVAL;
+
+	dev = MKDEV(major, minor);
+
+	ret = check_dev_num(dev);
 	if (ret)
 		return ret;
 
-	strcpy(newpn->dev_name, s[0]);
+	newpn->dev = dev;
 
 	if (s[1] == NULL)
 		return -EINVAL;
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index b3193f8..7722ebe 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -286,7 +286,6 @@ struct io_group {
 
 struct io_policy_node {
 	struct list_head node;
-	char dev_name[32];
 	dev_t dev;
 	unsigned long weight;
 	unsigned long ioprio_class;
-- 
1.5.4.rc3



^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: do some changes of io.policy interface
  2009-06-25 10:23       ` Gui Jianfeng
@ 2009-06-25 12:55         ` Vivek Goyal
  -1 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-25 12:55 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: Paul Menage, linux-kernel, containers, dm-devel, jens.axboe,
	nauman, dpshah, lizf, mikew, fchecconi, paolo.valente, ryov,
	fernando, s-uchida, taka, jmoyer, dhaval, balbir, righi.andrea,
	m-ikeda, jbaron, agk, snitzer, akpm, peterz

On Thu, Jun 25, 2009 at 06:23:52PM +0800, Gui Jianfeng wrote:
> Paul Menage wrote:
> > On Fri, Jun 19, 2009 at 1:37 PM, Vivek Goyal<vgoyal@redhat.com> wrote:
> >> You can use the following format to play with the new interface.
> >> #echo DEV:weight:ioprio_class > /patch/to/cgroup/policy
> >> weight=0 means removing the policy for DEV.
> >>
> >> Examples:
> >> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> >> # echo /dev/hdb:300:2 > io.policy
> >> # cat io.policy
> >> dev weight class
> >> /dev/hdb 300 2
> > 
> > I think that the read and write should be consistent. Can you just use
> > white-space separation for both, rather than colon-separation for
> > writes and white-space separation for reads?
> > 
> > Also, storing device inode paths statically as strings into the
> > io_policy structure seems wrong, since it's quite possible for the
> > device node that was used originally to be gone by the time that
> > someone reads the io.policy file, or renamed, or even replaced with an
> > inode that refers to a different block device.
> > 
> > My preferred alternatives would be:
> > 
> > - read/write the value as a device number rather than a name
> > - read/write the block device's actual name (e.g. hda or sda) rather
> > than a path to the inode
> > 
> 
> Hi Paul, Vivek
> 
> Here is a patch to fix the issue Paul raised.
> 
> This patch achieves the following goals:
> 1. According to Paul's comment, modify the io.policy interface to
>    use device numbers for read/write directly.
> 2. Use white-space separation for both reads and writes, rather than
>    colon-separation for writes and white-space separation for reads.
> 3. Do stricter checking of the input.
> 
> old interface:
> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> # echo "/dev/hdb:300:2" > io.policy
> # cat io.policy
> dev weight class
> /dev/hdb 300 2
> 
> new interface:
> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> # echo "3:64 300 2" > io.policy
> # cat io.policy
> dev     weight  class
> 3:64    300     2
> 
> Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
> ---
>  block/elevator-fq.c |   59 ++++++++++++++++++++++++++++++++++----------------
>  block/elevator-fq.h |    1 -
>  2 files changed, 40 insertions(+), 20 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index d779282..83c831b 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -1895,12 +1895,12 @@ static int io_cgroup_policy_read(struct cgroup *cgrp, struct cftype *cft,
>  	if (list_empty(&iocg->policy_list))
>  		goto out;
>  
> -	seq_printf(m, "dev weight class\n");
> +	seq_printf(m, "dev\tweight\tclass\n");
>  
>  	spin_lock_irq(&iocg->lock);
>  	list_for_each_entry(pn, &iocg->policy_list, node) {
> -		seq_printf(m, "%s %lu %lu\n", pn->dev_name,
> -			   pn->weight, pn->ioprio_class);
> +		seq_printf(m, "%u:%u\t%lu\t%lu\n", MAJOR(pn->dev),
> +			   MINOR(pn->dev), pn->weight, pn->ioprio_class);
>  	}
>  	spin_unlock_irq(&iocg->lock);
>  out:
> @@ -1936,44 +1936,65 @@ static struct io_policy_node *policy_search_node(const struct io_cgroup *iocg,
>  	return NULL;
>  }
>  
> -static int devname_to_devnum(const char *buf, dev_t *dev)
> +static int check_dev_num(dev_t dev)
>  {
> -	struct block_device *bdev;
> +	int part = 0;
>  	struct gendisk *disk;
> -	int part;
>  
> -	bdev = lookup_bdev(buf);
> -	if (IS_ERR(bdev))
> +	disk = get_gendisk(dev, &part);
> +	if (!disk || part)
>  		return -ENODEV;
>  
> -	disk = get_gendisk(bdev->bd_dev, &part);
> -	if (part)
> -		return -EINVAL;
> -
> -	*dev = MKDEV(disk->major, disk->first_minor);
> -	bdput(bdev);
> -
>  	return 0;
>  }
>  
>  static int policy_parse_and_set(char *buf, struct io_policy_node *newpn)
>  {
> -	char *s[3], *p;
> +	char *s[4], *p, *major_s = NULL, *minor_s = NULL;
>  	int ret;
> +	unsigned long major, minor;
>  	int i = 0;
> +	dev_t dev;
>  
>  	memset(s, 0, sizeof(s));
> -	while ((p = strsep(&buf, ":")) != NULL) {
> +	while ((p = strsep(&buf, " ")) != NULL) {
>  		if (!*p)
>  			continue;
>  		s[i++] = p;
> +
> +		/* Prevent from inputing too many things */
> +		if (i == 4)
> +			break;
>  	}
>  
> -	ret = devname_to_devnum(s[0], &newpn->dev);
> +	if (i != 3)
> +		return -EINVAL;
> +
> +	p = strsep(&s[0], ":");
> +	if (p != NULL)
> +		major_s = p;
> +	else
> +		return -EINVAL;
> +
> +	minor_s = s[0];
> +	if (!minor_s)
> +		return -EINVAL;
> +
> +	ret = strict_strtoul(major_s, 10, &major);
> +	if (ret)
> +		return -EINVAL;
> +
> +	ret = strict_strtoul(minor_s, 10, &minor);
> +	if (ret)
> +		return -EINVAL;
> +
> +	dev = MKDEV(major, minor);
> +
> +	ret = check_dev_num(dev);
>  	if (ret)
>  		return ret;
>  
> -	strcpy(newpn->dev_name, s[0]);
> +	newpn->dev = dev;
>  
>  	if (s[1] == NULL)
>  		return -EINVAL;
> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> index b3193f8..7722ebe 100644
> --- a/block/elevator-fq.h
> +++ b/block/elevator-fq.h
> @@ -286,7 +286,6 @@ struct io_group {
>  
>  struct io_policy_node {
>  	struct list_head node;
> -	char dev_name[32];
>  	dev_t dev;
>  	unsigned long weight;
>  	unsigned long ioprio_class;

Hi Gui,

Thanks for the patch. "unsigned long" for ioprio_class is too big. How
about using "unsigned short"? I noticed that in io_cgroup also we are
using "unsigned long". I will fix that.

For storing weight now we are planning to use "unsigned int". Can you
please switch to that.
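
For reference, with those changes io_policy_node would presumably end up
looking something like this (just a sketch of the suggested types, not a
tested change):

        struct io_policy_node {
                struct list_head node;
                dev_t dev;                      /* device this policy applies to */
                unsigned int weight;            /* per-device weight */
                unsigned short ioprio_class;    /* RT/BE/IDLE service class */
        };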

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: do some changes of io.policy interface
@ 2009-06-25 12:55         ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-25 12:55 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval, snitzer, peterz, dm-devel, dpshah, jens.axboe, agk,
	balbir, paolo.valente, fernando, mikew, jmoyer, nauman, m-ikeda,
	lizf, fchecconi, Paul Menage, akpm, jbaron, linux-kernel,
	s-uchida, righi.andrea, containers

On Thu, Jun 25, 2009 at 06:23:52PM +0800, Gui Jianfeng wrote:
> Paul Menage wrote:
> > On Fri, Jun 19, 2009 at 1:37 PM, Vivek Goyal<vgoyal@redhat.com> wrote:
> >> You can use the following format to play with the new interface.
> >> #echo DEV:weight:ioprio_class > /patch/to/cgroup/policy
> >> weight=0 means removing the policy for DEV.
> >>
> >> Examples:
> >> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> >> # echo /dev/hdb:300:2 > io.policy
> >> # cat io.policy
> >> dev weight class
> >> /dev/hdb 300 2
> > 
> > I think that the read and write should be consistent. Can you just use
> > white-space separation for both, rather than colon-separation for
> > writes and white-space separation for reads?
> > 
> > Also, storing device inode paths statically as strings into the
> > io_policy structure seems wrong, since it's quite possible for the
> > device node that was used originally to be gone by the time that
> > someone reads the io.policy file, or renamed, or even replaced with an
> > inode that refers to to a different block device
> > 
> > My preferred alternatives would be:
> > 
> > - read/write the value as a device number rather than a name
> > - read/write the block device's actual name (e.g. hda or sda) rather
> > than a path to the inode
> > 
> 
> Hi Paul, Vivek
> 
> Here is a patch to fix the issue Paul raised.
> 
> This patch achives the following goals
> 1 According to Paul's comment, Modifing io.policy interface to
>   use device number for read/write directly. 
> 2 Just use white-space separation for both, rather than colon-
>   separation for writes and white-space separation for reads.
> 3 Do more strict checking for inputting.
> 
> old interface:
> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> # echo "/dev/hdb:300:2" > io.policy
> # cat io.policy
> dev weight class
> /dev/hdb 300 2
> 
> new interface:
> Configure weight=300 ioprio_class=2 on /dev/hdb in this cgroup
> # echo "3:64 300 2" > io.policy
> # cat io.policy
> dev     weight  class
> 3:64    300     2
> 
> Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
> ---
>  block/elevator-fq.c |   59 ++++++++++++++++++++++++++++++++++----------------
>  block/elevator-fq.h |    1 -
>  2 files changed, 40 insertions(+), 20 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index d779282..83c831b 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -1895,12 +1895,12 @@ static int io_cgroup_policy_read(struct cgroup *cgrp, struct cftype *cft,
>  	if (list_empty(&iocg->policy_list))
>  		goto out;
>  
> -	seq_printf(m, "dev weight class\n");
> +	seq_printf(m, "dev\tweight\tclass\n");
>  
>  	spin_lock_irq(&iocg->lock);
>  	list_for_each_entry(pn, &iocg->policy_list, node) {
> -		seq_printf(m, "%s %lu %lu\n", pn->dev_name,
> -			   pn->weight, pn->ioprio_class);
> +		seq_printf(m, "%u:%u\t%lu\t%lu\n", MAJOR(pn->dev),
> +			   MINOR(pn->dev), pn->weight, pn->ioprio_class);
>  	}
>  	spin_unlock_irq(&iocg->lock);
>  out:
> @@ -1936,44 +1936,65 @@ static struct io_policy_node *policy_search_node(const struct io_cgroup *iocg,
>  	return NULL;
>  }
>  
> -static int devname_to_devnum(const char *buf, dev_t *dev)
> +static int check_dev_num(dev_t dev)
>  {
> -	struct block_device *bdev;
> +	int part = 0;
>  	struct gendisk *disk;
> -	int part;
>  
> -	bdev = lookup_bdev(buf);
> -	if (IS_ERR(bdev))
> +	disk = get_gendisk(dev, &part);
> +	if (!disk || part)
>  		return -ENODEV;
>  
> -	disk = get_gendisk(bdev->bd_dev, &part);
> -	if (part)
> -		return -EINVAL;
> -
> -	*dev = MKDEV(disk->major, disk->first_minor);
> -	bdput(bdev);
> -
>  	return 0;
>  }
>  
>  static int policy_parse_and_set(char *buf, struct io_policy_node *newpn)
>  {
> -	char *s[3], *p;
> +	char *s[4], *p, *major_s = NULL, *minor_s = NULL;
>  	int ret;
> +	unsigned long major, minor;
>  	int i = 0;
> +	dev_t dev;
>  
>  	memset(s, 0, sizeof(s));
> -	while ((p = strsep(&buf, ":")) != NULL) {
> +	while ((p = strsep(&buf, " ")) != NULL) {
>  		if (!*p)
>  			continue;
>  		s[i++] = p;
> +
> +		/* Prevent from inputing too many things */
> +		if (i == 4)
> +			break;
>  	}
>  
> -	ret = devname_to_devnum(s[0], &newpn->dev);
> +	if (i != 3)
> +		return -EINVAL;
> +
> +	p = strsep(&s[0], ":");
> +	if (p != NULL)
> +		major_s = p;
> +	else
> +		return -EINVAL;
> +
> +	minor_s = s[0];
> +	if (!minor_s)
> +		return -EINVAL;
> +
> +	ret = strict_strtoul(major_s, 10, &major);
> +	if (ret)
> +		return -EINVAL;
> +
> +	ret = strict_strtoul(minor_s, 10, &minor);
> +	if (ret)
> +		return -EINVAL;
> +
> +	dev = MKDEV(major, minor);
> +
> +	ret = check_dev_num(dev);
>  	if (ret)
>  		return ret;
>  
> -	strcpy(newpn->dev_name, s[0]);
> +	newpn->dev = dev;
>  
>  	if (s[1] == NULL)
>  		return -EINVAL;
> diff --git a/block/elevator-fq.h b/block/elevator-fq.h
> index b3193f8..7722ebe 100644
> --- a/block/elevator-fq.h
> +++ b/block/elevator-fq.h
> @@ -286,7 +286,6 @@ struct io_group {
>  
>  struct io_policy_node {
>  	struct list_head node;
> -	char dev_name[32];
>  	dev_t dev;
>  	unsigned long weight;
>  	unsigned long ioprio_class;

Hi Gui,

Thanks for the patch. "unsigned long" for ioprio_class is too big. How
about using "unsigned short"? I noticed that in io_cgroup also we are
using "unsigned long". I will fix that.

For storing weight now we are planning to use "unsigned int". Can you
please switch to that.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: do some changes of io.policy interface
       [not found]         ` <20090625125513.GA25439-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-26  0:27           ` Gui Jianfeng
  2009-06-26  0:59           ` Gui Jianfeng
  1 sibling, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-26  0:27 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w, Paul Menage,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Vivek Goyal wrote:
...
> 
> Hi Gui,
> 
> Thanks for the patch. "unsigned long" for ioprio_class is too big. How
> about using "unsigned short"? I noticed that in io_cgroup also we are
> using "unsigned long". I will fix that.
> 
> For storing weight now we are planning to use "unsigned int". Can you
> please switch to that.

  Sure, I'll post another patch to switch to that.

> 
> Thanks
> Vivek
> 
> 
> 

-- 
Regards
Gui Jianfeng

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: do some changes of io.policy interface
       [not found]         ` <20090625125513.GA25439-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2009-06-26  0:27           ` Gui Jianfeng
@ 2009-06-26  0:59           ` Gui Jianfeng
  1 sibling, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-26  0:59 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w, Paul Menage,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Vivek Goyal wrote:
...
> 
> Hi Gui,
> 
> Thanks for the patch. "unsigned long" for ioprio_class is too big. How
> about using "unsigned short"? I noticed that in io_cgroup also we are
> using "unsigned long". I will fix that.

  Ah, I see. If you already have the patch, would you share it?

> 
> For storing weight now we are planning to use "unsigned int". Can you
> please switch to that.
> 
> Thanks
> Vivek
> 
> 
> 

-- 
Regards
Gui Jianfeng

^ permalink raw reply	[flat|nested] 176+ messages in thread

* [PATCH 1/2] io-controller: Prepare a rt ioq list in efqd to keep track of busy rt ioqs
       [not found]             ` <4A41EFE1.5050101-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-06-26  8:13               ` Gui Jianfeng
  2009-06-26  8:13               ` [PATCH 2/2] io-controller: make rt preemption happen in the whole hierarchy Gui Jianfeng
  1 sibling, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-26  8:13 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Maintain a busy rt ioq list in efqd so that we can easily
keep track of all busy rt ioqs in the system.

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
 block/elevator-fq.c |    8 ++++++++
 block/elevator-fq.h |    6 ++++++
 2 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index d779282..1d4ec1f 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -3247,6 +3247,10 @@ void elv_add_ioq_busy(struct elv_fq_data *efqd, struct io_queue *ioq)
 	if (elv_ioq_class_rt(ioq)) {
 		struct io_group *iog = ioq_to_io_group(ioq);
 		iog->busy_rt_queues++;
+
+		/* queue lock has been already held by caller */
+		hlist_add_head_rcu(&ioq->rt_node,
+				   &ioq->efqd->rt_ioq_list);
 	}
 
 #ifdef CONFIG_DEBUG_GROUP_IOSCHED
@@ -3293,6 +3297,9 @@ void elv_del_ioq_busy(struct elevator_queue *e, struct io_queue *ioq,
 	if (elv_ioq_class_rt(ioq)) {
 		struct io_group *iog = ioq_to_io_group(ioq);
 		iog->busy_rt_queues--;
+
+		/* queue lock has been already held by caller */
+		hlist_del_rcu(&ioq->rt_node);
 	}
 
 	elv_deactivate_ioq(efqd, ioq, requeue);
@@ -4196,6 +4203,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 
 	INIT_WORK(&efqd->unplug_work, elv_kick_queue);
 	INIT_HLIST_HEAD(&efqd->group_list);
+	INIT_HLIST_HEAD(&efqd->rt_ioq_list);
 
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index b3193f8..53a64b6 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -169,6 +169,9 @@ struct io_queue {
 	atomic_t ref;
 	unsigned int flags;
 
+	/* node to insert into efqd->rt_ioq_list */
+	struct hlist_node rt_node;
+
 	/* Pointer to generic elevator data structure */
 	struct elv_fq_data *efqd;
 	pid_t pid;
@@ -336,6 +339,9 @@ struct elv_fq_data {
 	/* List of io groups hanging on this elevator */
 	struct hlist_head group_list;
 
+	/* List of rt ioqs in hierarchy*/
+	struct hlist_head rt_ioq_list;
+
 	struct request_queue *queue;
 	unsigned int busy_queues;
 
-- 
1.5.4.rc3

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* [PATCH 2/2] io-controller: make rt preemption happen in the whole hierarchy
       [not found]             ` <4A41EFE1.5050101-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
  2009-06-26  8:13               ` Gui Jianfeng
@ 2009-06-26  8:13               ` Gui Jianfeng
  1 sibling, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-26  8:13 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Let an rt queue preempt a non-rt queue if needed.
Make sure the comparison happens at the same level.

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
 block/elevator-fq.c |   28 +++++++++++++++++++++++++++-
 1 files changed, 27 insertions(+), 1 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 1d4ec1f..21d38f5 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -3742,6 +3742,31 @@ int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
 	return ret;
 }
 
+static int check_rt_preemption(struct io_queue *ioq)
+{
+	struct hlist_node *node;
+	struct hlist_head *hhead = &ioq->efqd->rt_ioq_list;
+	struct io_queue *rt_ioq;
+	struct io_entity *entity = &ioq->entity;
+	struct io_entity *new_entity;
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(rt_ioq, node, hhead, rt_node) {
+		new_entity = &rt_ioq->entity;
+
+		bfq_find_matching_entity(&entity, &new_entity);
+
+		if (new_entity->ioprio_class == IOPRIO_CLASS_RT &&
+		    entity->ioprio_class != IOPRIO_CLASS_RT) {
+			rcu_read_unlock();
+			return 1;
+		}
+	}
+	rcu_read_unlock();
+
+	return 0;
+}
+
 /* Common layer function to select the next queue to dispatch from */
 void *elv_fq_select_ioq(struct request_queue *q, int force)
 {
@@ -3823,7 +3848,8 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 	 */
 	iog = ioq_to_io_group(ioq);
 
-	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
+	if (!elv_ioq_class_rt(ioq) &&
+	    (iog->busy_rt_queues || check_rt_preemption(ioq))) {
 		/*
 		 * We simulate this as cfqq timed out so that it gets to bank
 		 * the remaining of its time slice.
-- 
1.5.4.rc3

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [PATCH 2/2] io-controller: make rt preemption happen in the whole hierarchy
       [not found]               ` <4A44833F.8040308-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-06-26 12:39                 ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-26 12:39 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Fri, Jun 26, 2009 at 04:13:51PM +0800, Gui Jianfeng wrote:
> Let an rt queue preempt a non-rt queue if needed.
> Make sure the comparison happens at the same level.
> 
> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> ---
>  block/elevator-fq.c |   28 +++++++++++++++++++++++++++-
>  1 files changed, 27 insertions(+), 1 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index 1d4ec1f..21d38f5 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -3742,6 +3742,31 @@ int elv_iosched_expire_ioq(struct request_queue *q, int slice_expired,
>  	return ret;
>  }
>  
> +static int check_rt_preemption(struct io_queue *ioq)
> +{
> +	struct hlist_node *node;
> +	struct hlist_head *hhead = &ioq->efqd->rt_ioq_list;
> +	struct io_queue *rt_ioq;
> +	struct io_entity *entity = &ioq->entity;
> +	struct io_entity *new_entity;
> +
> +	rcu_read_lock();
> +	hlist_for_each_entry_rcu(rt_ioq, node, hhead, rt_node) {
> +		new_entity = &rt_ioq->entity;
> +
> +		bfq_find_matching_entity(&entity, &new_entity);
> +
> +		if (new_entity->ioprio_class == IOPRIO_CLASS_RT &&
> +		    entity->ioprio_class != IOPRIO_CLASS_RT) {
> +			rcu_read_unlock();
> +			return 1;
> +		}
> +	}
> +	rcu_read_unlock();
> +
> +	return 0;
> +}
> +
>  /* Common layer function to select the next queue to dispatch from */
>  void *elv_fq_select_ioq(struct request_queue *q, int force)
>  {
> @@ -3823,7 +3848,8 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
>  	 */
>  	iog = ioq_to_io_group(ioq);
>  
> -	if (!elv_ioq_class_rt(ioq) && iog->busy_rt_queues) {
> +	if (!elv_ioq_class_rt(ioq) &&
> +	    (iog->busy_rt_queues || check_rt_preemption(ioq))) {
>  		/*

Hi Gui,

I am not able to understand why we need the above changes.

The BFQ scheduler already takes care of selecting an RT queue for dispatch
(if the queue is entitled to it).

In case a new RT queue gets backlogged while a BE queue is being served, we
do a preemption check to make sure the RT queue gets to run as soon as
possible.
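
To put that in code (illustrative only -- the function name here is made
up; the types and bfq_find_matching_entity() are the ones visible in the
patch above): the check at queue-arrival time can do the same level-matching
walk that check_rt_preemption() does, without scanning an rt list at
dispatch time.

	/* sketch: called when a queue becomes busy, not on every dispatch */
	static int ioq_arrival_should_preempt(struct io_queue *active,
					      struct io_queue *new_ioq)
	{
		struct io_entity *entity = &active->entity;
		struct io_entity *new_entity = &new_ioq->entity;

		/* walk both entities up to a common level of the hierarchy */
		bfq_find_matching_entity(&entity, &new_entity);

		/* an RT entity preempts a non-RT entity at the same level */
		return new_entity->ioprio_class == IOPRIO_CLASS_RT &&
		       entity->ioprio_class != IOPRIO_CLASS_RT;
	}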

In fact I think that busy_rt_queues infrastructure is also redundant and
I plan to get rid of it. 

Can you please help me understand what use case you are addressing with the
above patch?

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* [PATCH] io-controller: optimization for iog deletion when elevator exiting
       [not found]   ` <1245443858-8487-6-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-29  5:27     ` Gui Jianfeng
  0 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-29  5:27 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Hi Vivek,

There's no need to traverse iocg->group_data for each iog
when exiting an elevator; that costs too much. An alternative
solution is to reset iocg_id as soon as an io group is unlinked
from its iocg, and then decide whether the deletion still needs
to be carried out by checking iocg_id.

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
 block/elevator-fq.c |   29 ++++++++++-------------------
 1 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index d779282..b26fe0f 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -2218,8 +2218,6 @@ void io_group_cleanup(struct io_group *iog)
 	BUG_ON(iog->sched_data.active_entity != NULL);
 	BUG_ON(entity != NULL && entity->tree != NULL);
 
-	iog->iocg_id = 0;
-
 	/*
 	 * Wait for any rcu readers to exit before freeing up the group.
 	 * Primarily useful when io_get_io_group() is called without queue
@@ -2376,6 +2374,7 @@ remove_entry:
 			  group_node);
 	efqd = rcu_dereference(iog->key);
 	hlist_del_rcu(&iog->group_node);
+	iog->iocg_id = 0;
 	spin_unlock_irqrestore(&iocg->lock, flags);
 
 	spin_lock_irqsave(efqd->queue->queue_lock, flags);
@@ -2403,35 +2402,27 @@ done:
 void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
 {
 	struct io_cgroup *iocg;
-	unsigned short id = iog->iocg_id;
-	struct hlist_node *n;
-	struct io_group *__iog;
 	unsigned long flags;
 	struct cgroup_subsys_state *css;
 
 	rcu_read_lock();
 
-	BUG_ON(!id);
-	css = css_lookup(&io_subsys, id);
+	css = css_lookup(&io_subsys, iog->iocg_id);
 
-	/* css can't go away as associated io group is still around */
-	BUG_ON(!css);
+	if (!css)
+		goto out;
 
 	iocg = container_of(css, struct io_cgroup, css);
 
 	spin_lock_irqsave(&iocg->lock, flags);
-	hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
-		/*
-		 * Remove iog only if it is still in iocg list. Cgroup
-		 * deletion could have deleted it already.
-		 */
-		if (__iog == iog) {
-			hlist_del_rcu(&iog->group_node);
-			__io_destroy_group(efqd, iog);
-			break;
-		}
+
+	if (iog->iocg_id) {
+		hlist_del_rcu(&iog->group_node);
+		__io_destroy_group(efqd, iog);
 	}
+
 	spin_unlock_irqrestore(&iocg->lock, flags);
+out:
 	rcu_read_unlock();
 }
 
-- 
1.5.4.rc3

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: optimization for iog deletion when elevator exiting
       [not found]     ` <4A4850D3.3000700-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-06-29 14:06       ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-29 14:06 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Mon, Jun 29, 2009 at 01:27:47PM +0800, Gui Jianfeng wrote:
> Hi Vivek,
> 
> There's no need to traverse iocg->group_data for each iog
> when exiting an elevator; that costs too much. An alternative
> solution is to reset iocg_id as soon as an io group is unlinked
> from its iocg, and then decide whether the deletion still needs
> to be carried out by checking iocg_id.
> 

Thanks Gui. This makes sense to me. We can check iog->iocg_id to determine
whether the group is still on the iocg list instead of traversing the list.

Nauman, do you see any issues with the patch?

Thanks
Vivek

> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> ---
>  block/elevator-fq.c |   29 ++++++++++-------------------
>  1 files changed, 10 insertions(+), 19 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index d779282..b26fe0f 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -2218,8 +2218,6 @@ void io_group_cleanup(struct io_group *iog)
>  	BUG_ON(iog->sched_data.active_entity != NULL);
>  	BUG_ON(entity != NULL && entity->tree != NULL);
>  
> -	iog->iocg_id = 0;
> -
>  	/*
>  	 * Wait for any rcu readers to exit before freeing up the group.
>  	 * Primarily useful when io_get_io_group() is called without queue
> @@ -2376,6 +2374,7 @@ remove_entry:
>  			  group_node);
>  	efqd = rcu_dereference(iog->key);
>  	hlist_del_rcu(&iog->group_node);
> +	iog->iocg_id = 0;
>  	spin_unlock_irqrestore(&iocg->lock, flags);
>  
>  	spin_lock_irqsave(efqd->queue->queue_lock, flags);
> @@ -2403,35 +2402,27 @@ done:
>  void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
>  {
>  	struct io_cgroup *iocg;
> -	unsigned short id = iog->iocg_id;
> -	struct hlist_node *n;
> -	struct io_group *__iog;
>  	unsigned long flags;
>  	struct cgroup_subsys_state *css;
>  
>  	rcu_read_lock();
>  
> -	BUG_ON(!id);
> -	css = css_lookup(&io_subsys, id);
> +	css = css_lookup(&io_subsys, iog->iocg_id);
>  
> -	/* css can't go away as associated io group is still around */
> -	BUG_ON(!css);
> +	if (!css)
> +		goto out;
>  
>  	iocg = container_of(css, struct io_cgroup, css);
>  
>  	spin_lock_irqsave(&iocg->lock, flags);
> -	hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
> -		/*
> -		 * Remove iog only if it is still in iocg list. Cgroup
> -		 * deletion could have deleted it already.
> -		 */
> -		if (__iog == iog) {
> -			hlist_del_rcu(&iog->group_node);
> -			__io_destroy_group(efqd, iog);
> -			break;
> -		}
> +
> +	if (iog->iocg_id) {
> +		hlist_del_rcu(&iog->group_node);
> +		__io_destroy_group(efqd, iog);
>  	}
> +
>  	spin_unlock_irqrestore(&iocg->lock, flags);
> +out:
>  	rcu_read_unlock();
>  }
>  
> -- 1.5.4.rc3 

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
                     ` (20 preceding siblings ...)
  2009-06-21 15:21   ` [RFC] IO scheduler based io controller (V5) Balbir Singh
@ 2009-06-29 16:04   ` Vladislav Bolkhovitin
  21 siblings, 0 replies; 176+ messages in thread
From: Vladislav Bolkhovitin @ 2009-06-29 16:04 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

Hi,

Vivek Goyal, on 06/20/2009 12:37 AM wrote:
> Hi All,
> 
> Here is the V5 of the IO controller patches generated on top of 2.6.30.
> 
> Previous versions of the patches was posted here.
> 
> (V1) http://lkml.org/lkml/2009/3/11/486
> (V2) http://lkml.org/lkml/2009/5/5/275
> (V3) http://lkml.org/lkml/2009/5/26/472
> (V4) http://lkml.org/lkml/2009/6/8/580
> 
> This patchset is still work in progress but I want to keep on getting the
> snapshot of my tree out at regular intervals to get the feedback hence V5.

[..]

> Testing
> =======
> 
> I have been able to do only very basic testing of reads and writes.
> 
> Test1 (Fairness for synchronous reads)
> ======================================
> - Two dd in two cgroups with cgrop weights 1000 and 500. Ran two "dd" in those
>   cgroups (With CFQ scheduler and /sys/block/<device>/queue/fairness = 1)
> 
> dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null &
> dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null &
> 
> 234179072 bytes (234 MB) copied, 3.9065 s, 59.9 MB/s
> 234179072 bytes (234 MB) copied, 5.19232 s, 45.1 MB/s

Sorry, but the above isn't a correct way to test proportional fairness
for synchronous reads. You need the throughput measured only while *both*
dd's are running, don't you?

Assuming both transfers started simultaneously (which isn't obvious
either), only the throughput value of the first dd to finish is meaningful
in the way you test: after it finished, the second dd kept transferring
data *alone*, so its reported throughput mixes the simultaneous phase with
the standalone phase, i.e. it is skewed.

I'd suggest you instead test as 2 runs of:

1. while true; do dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null; done
    dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null

2. while true; do dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null; done
    dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null

and take results from the standalone dd's.

Vlad

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [RFC] IO scheduler based io controller (V5)
       [not found]   ` <4A48E601.2050203-d+Crzxg7Rs0@public.gmane.org>
@ 2009-06-29 17:23     ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-29 17:23 UTC (permalink / raw)
  To: Vladislav Bolkhovitin
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Mon, Jun 29, 2009 at 08:04:17PM +0400, Vladislav Bolkhovitin wrote:
> Hi,
>
> Vivek Goyal, on 06/20/2009 12:37 AM wrote:
>> Hi All,
>>
>> Here is the V5 of the IO controller patches generated on top of 2.6.30.
>>
>> Previous versions of the patches was posted here.
>>
>> (V1) http://lkml.org/lkml/2009/3/11/486
>> (V2) http://lkml.org/lkml/2009/5/5/275
>> (V3) http://lkml.org/lkml/2009/5/26/472
>> (V4) http://lkml.org/lkml/2009/6/8/580
>>
>> This patchset is still work in progress but I want to keep on getting the
>> snapshot of my tree out at regular intervals to get the feedback hence V5.
>
> [..]
>
>> Testing
>> =======
>>
>> I have been able to do only very basic testing of reads and writes.
>>
>> Test1 (Fairness for synchronous reads)
>> ======================================
>> - Two dd in two cgroups with cgrop weights 1000 and 500. Ran two "dd" in those
>>   cgroups (With CFQ scheduler and /sys/block/<device>/queue/fairness = 1)
>>
>> dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null &
>> dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null &
>>
>> 234179072 bytes (234 MB) copied, 3.9065 s, 59.9 MB/s
>> 234179072 bytes (234 MB) copied, 5.19232 s, 45.1 MB/s
>
> Sorry, but the above isn't a correct way to test proportional fairness  
> for synchronous reads. You need the throughput measured only while *both*  
> dd's are running, don't you?
>

Hi Vladislav,

Actually the focus here is on the following two lines.

group1 time=8 16 2471 group1 sectors=8 16 457840
group2 time=8 16 1220 group2 sectors=8 16 225736

I have pasted the dd completion output, but as you pointed out it does not
mean much, as the higher-weight dd finishes first and after that the second
dd gets 100% of the disk.

What I have done is launch two dd jobs. The moment the first dd finishes,
my scripts go and read the "io.disk_time" and "io.disk_sectors" files in
the two cgroups.

disk_time keeps track of how much disk time a cgroup has got on a
particular disk and disk_sectors keeps track of how many sectors of IO a
cgroup has done on a particular disk. (The leading "8 16" in the lines
above is the major and minor number of the disk under test.)

Please notice above that once the first dd (the higher-weight dd) finished,
group1 had got 2471 ms of disk time and group2 had got 1220 ms of disk
time.

Similarly, by the time the first dd finished, group1 had transferred 457840
sectors and group2 had transferred 225736 sectors.

Here the disk time of group1 is almost double the disk time received by
group2 (2471 ms vs 1220 ms, in line with the group weights of 1000 and
500). Currently, like CFQ, we provide fairness in terms of disk time.

So I think the test should be fine. It is just that the output of "dd" is
confusing, and probably I did not explain the testing procedure well. In
the next posting I will make the procedure clearer.

> Assuming both transfers started simultaneously (which isn't obvious  
> either), only the throughput value of the first dd to finish is meaningful  
> in the way you test: after it finished, the second dd kept transferring  
> data *alone*, so its reported throughput mixes the simultaneous phase with  
> the standalone phase, i.e. it is skewed.
>
> I'd suggest you instead test as 2 runs of:
>
> 1. while true; do dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null; done
>    dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null
>
> 2. while true; do dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null; done
>    dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null
>
> and take results from the standalone dd's.
>
> Vlad

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]   ` <1245443858-8487-3-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2009-06-22  8:46     ` Balbir Singh
@ 2009-06-30  6:40     ` Gui Jianfeng
  2009-07-01  9:24     ` Gui Jianfeng
  2 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-30  6:40 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Vivek Goyal wrote:
...
> +
> +/*
> + * Do the accounting. Determine how much service (in terms of time slices)
> + * current queue used and adjust the start, finish time of queue and vtime
> + * of the tree accordingly.
> + *
> + * Determining the service used in terms of time is tricky in certain
> + * situations. Especially when underlying device supports command queuing
> + * and requests from multiple queues can be there at same time, then it
> + * is not clear which queue consumed how much of disk time.
> + *
> + * To mitigate this problem, cfq starts the time slice of the queue only
> + * after first request from the queue has completed. This does not work
> + * very well if we expire the queue before we wait for first and more
> + * request to finish from the queue. For seeky queues, we will expire the
> + * queue after dispatching few requests without waiting and start dispatching
> + * from next queue.
> + *
> + * Not sure how to determine the time consumed by queue in such scenarios.
> + * Currently as a crude approximation, we are charging 25% of time slice
> + * for such cases. A better mechanism is needed for accurate accounting.
> + */

  Hi Vivek,

  The comment is out of date; would you update it accordingly?

> +void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
> +{
> +	struct elv_fq_data *efqd = &q->elevator->efqd;
> +	struct io_entity *entity = &ioq->entity;
> +	long slice_unused = 0, slice_used = 0, slice_overshoot = 0;
> +
> +	assert_spin_locked(q->queue_lock);
> +	elv_log_ioq(efqd, ioq, "slice expired");
> +
> +	if (elv_ioq_wait_request(ioq))
> +		del_timer(&efqd->idle_slice_timer);
> +
> +	elv_clear_ioq_wait_request(ioq);
> +
> +	/*

-- 
Regards
Gui Jianfeng

^ permalink raw reply	[flat|nested] 176+ messages in thread

* [PATCH] io-controller: Don't expire an idle ioq if it's the only ioq in hierarchy
       [not found]   ` <1245443858-8487-9-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-30  7:49     ` Gui Jianfeng
  0 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-06-30  7:49 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Hi Vivek,

We do not expect to expire an idle ioq if it is the only ioq
in the hierarchy.

Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
 block/elevator-fq.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 4270cfd..0b65e16 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -4058,12 +4058,6 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			elv_clear_ioq_slice_new(ioq);
 		}
 
-		if (elv_ioq_class_idle(ioq)) {
-			if (elv_iosched_expire_ioq(q, 1, 0))
-				elv_ioq_slice_expired(q);
-			goto done;
-		}
-
 		/*
 		 * If there is only root group present, don't expire the queue
 		 * for single queue ioschedulers (noop, deadline, AS). It is
@@ -4077,6 +4071,12 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			goto done;
 		}
 
+		if (elv_ioq_class_idle(ioq)) {
+			if (elv_iosched_expire_ioq(q, 1, 0))
+				elv_ioq_slice_expired(q);
+			goto done;
+		}
+
 		/* For async queue try to do wait busy */
 		if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
 		    && (elv_iog_nr_active(iog) <= 1)) {
-- 
1.5.4.rc3

^ permalink raw reply related	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: optimization for iog deletion when elevator exiting
       [not found]       ` <20090629140631.GA4622-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-06-30 17:14         ` Nauman Rafique
  0 siblings, 0 replies; 176+ messages in thread
From: Nauman Rafique @ 2009-06-30 17:14 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Mon, Jun 29, 2009 at 7:06 AM, Vivek Goyal<vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> On Mon, Jun 29, 2009 at 01:27:47PM +0800, Gui Jianfeng wrote:
>> Hi Vivek,
>>
>> There's no need to travel the iocg->group_data for each iog
>> when exiting a elevator, that costs too much. An alternative
>> solution is reseting iocg_id as soon as an io group unlinking
>> from a iocg. Make a decision that whether it's need to carry
>> out deleting action by checking iocg_id.
>>
>
> Thanks Gui. This makes sense to me. We can check iog->iocg_id to determine
> wheter group is still on iocg list or not instead of traversing the list.
>
> Nauman, do you see any issues with the patch?

Looks like this should work. The only iog with a zero id is associated
with the root group, which gets deleted outside of this function anyway.

>
> Thanks
> Vivek
>
>> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
>> ---
>>  block/elevator-fq.c |   29 ++++++++++-------------------
>>  1 files changed, 10 insertions(+), 19 deletions(-)
>>
>> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
>> index d779282..b26fe0f 100644
>> --- a/block/elevator-fq.c
>> +++ b/block/elevator-fq.c
>> @@ -2218,8 +2218,6 @@ void io_group_cleanup(struct io_group *iog)
>>       BUG_ON(iog->sched_data.active_entity != NULL);
>>       BUG_ON(entity != NULL && entity->tree != NULL);
>>
>> -     iog->iocg_id = 0;
>> -
>>       /*
>>        * Wait for any rcu readers to exit before freeing up the group.
>>        * Primarily useful when io_get_io_group() is called without queue
>> @@ -2376,6 +2374,7 @@ remove_entry:
>>                         group_node);
>>       efqd = rcu_dereference(iog->key);
>>       hlist_del_rcu(&iog->group_node);
>> +     iog->iocg_id = 0;
>>       spin_unlock_irqrestore(&iocg->lock, flags);
>>
>>       spin_lock_irqsave(efqd->queue->queue_lock, flags);
>> @@ -2403,35 +2402,27 @@ done:
>>  void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
>>  {
>>       struct io_cgroup *iocg;
>> -     unsigned short id = iog->iocg_id;
>> -     struct hlist_node *n;
>> -     struct io_group *__iog;
>>       unsigned long flags;
>>       struct cgroup_subsys_state *css;
>>
>>       rcu_read_lock();
>>
>> -     BUG_ON(!id);
>> -     css = css_lookup(&io_subsys, id);
>> +     css = css_lookup(&io_subsys, iog->iocg_id);
>>
>> -     /* css can't go away as associated io group is still around */
>> -     BUG_ON(!css);
>> +     if (!css)
>> +             goto out;
>>
>>       iocg = container_of(css, struct io_cgroup, css);
>>
>>       spin_lock_irqsave(&iocg->lock, flags);
>> -     hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
>> -             /*
>> -              * Remove iog only if it is still in iocg list. Cgroup
>> -              * deletion could have deleted it already.
>> -              */
>> -             if (__iog == iog) {
>> -                     hlist_del_rcu(&iog->group_node);
>> -                     __io_destroy_group(efqd, iog);
>> -                     break;
>> -             }
>> +
>> +     if (iog->iocg_id) {
>> +             hlist_del_rcu(&iog->group_node);
>> +             __io_destroy_group(efqd, iog);
>>       }
>> +
>>       spin_unlock_irqrestore(&iocg->lock, flags);
>> +out:
>>       rcu_read_unlock();
>>  }
>>
>> -- 1.5.4.rc3
>

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]     ` <4A49B364.5000508-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-07-01  1:28       ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-07-01  1:28 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Tue, Jun 30, 2009 at 02:40:36PM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> ...
> > +
> > +/*
> > + * Do the accounting. Determine how much service (in terms of time slices)
> > + * current queue used and adjust the start, finish time of queue and vtime
> > + * of the tree accordingly.
> > + *
> > + * Determining the service used in terms of time is tricky in certain
> > + * situations. Especially when underlying device supports command queuing
> > + * and requests from multiple queues can be there at same time, then it
> > + * is not clear which queue consumed how much of disk time.
> > + *
> > + * To mitigate this problem, cfq starts the time slice of the queue only
> > + * after first request from the queue has completed. This does not work
> > + * very well if we expire the queue before we wait for first and more
> > + * request to finish from the queue. For seeky queues, we will expire the
> > + * queue after dispatching few requests without waiting and start dispatching
> > + * from next queue.
> > + *
> > + * Not sure how to determine the time consumed by queue in such scenarios.
> > + * Currently as a crude approximation, we are charging 25% of time slice
> > + * for such cases. A better mechanism is needed for accurate accounting.
> > + */
> 
>   Hi Vivek,
> 
>   The comment is out of date, would you update it accordingly?
> 

Thanks Gui. Yes, I will update it in the next posting.

Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: Don't expire an idle ioq if it's the only ioq in hierarchy
       [not found]     ` <4A49C381.3040302-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2009-07-01  1:32       ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-07-01  1:32 UTC (permalink / raw)
  To: Gui Jianfeng
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Tue, Jun 30, 2009 at 03:49:21PM +0800, Gui Jianfeng wrote:
> Hi Vivek,
> 
> We don't expect expiring an idle ioq if it's the only ioq 
> in the hierarchy.
> 

Hi Gui,

This patch will avoid idle-queue expiry for single-ioq schedulers. But
that is not an issue anyway, as single-ioq schedulers do not have the notion
of an idle queue. It is only CFQ which allows creation of an idle ioq.
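
For illustration, an idle ioq is what CFQ creates when a task submits
idle-class IO, e.g. (a sketch, reusing the test files from earlier in the
thread):

  # idle-class (class 3) reader; only CFQ creates an idle ioq for it
  ionice -c 3 dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null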

Thanks
Vivek

> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> ---
>  block/elevator-fq.c |   12 ++++++------
>  1 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index 4270cfd..0b65e16 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -4058,12 +4058,6 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
>  			elv_clear_ioq_slice_new(ioq);
>  		}
>  
> -		if (elv_ioq_class_idle(ioq)) {
> -			if (elv_iosched_expire_ioq(q, 1, 0))
> -				elv_ioq_slice_expired(q);
> -			goto done;
> -		}
> -
>  		/*
>  		 * If there is only root group present, don't expire the queue
>  		 * for single queue ioschedulers (noop, deadline, AS). It is
> @@ -4077,6 +4071,12 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
>  			goto done;
>  		}
>  
> +		if (elv_ioq_class_idle(ioq)) {
> +			if (elv_iosched_expire_ioq(q, 1, 0))
> +				elv_ioq_slice_expired(q);
> +			goto done;
> +		}
> +
>  		/* For async queue try to do wait busy */
>  		if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
>  		    && (elv_iog_nr_active(iog) <= 1)) {
> -- 
> 1.5.4.rc3
> 

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: optimization for iog deletion when elevator exiting
       [not found]         ` <e98e18940906301014n146e7146vb5a73c2f33c9e819-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2009-07-01  1:34           ` Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-07-01  1:34 UTC (permalink / raw)
  To: Nauman Rafique
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w

On Tue, Jun 30, 2009 at 10:14:48AM -0700, Nauman Rafique wrote:
> On Mon, Jun 29, 2009 at 7:06 AM, Vivek Goyal<vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > On Mon, Jun 29, 2009 at 01:27:47PM +0800, Gui Jianfeng wrote:
> >> Hi Vivek,
> >>
> >> There's no need to travel the iocg->group_data for each iog
> >> when exiting a elevator, that costs too much. An alternative
> >> solution is reseting iocg_id as soon as an io group unlinking
> >> from a iocg. Make a decision that whether it's need to carry
> >> out deleting action by checking iocg_id.
> >>
> >
> > Thanks Gui. This makes sense to me. We can check iog->iocg_id to determine
> > wheter group is still on iocg list or not instead of traversing the list.
> >
> > Nauman, do you see any issues with the patch?
> 
> Looks like this should work. The only iog with zero id is associated
> with root group, which gets deleted outside of this function anyways.
> 

Minor correction: even the root group has id "1", not zero. An id of 0
indicates the error case where the cgroup is not present.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH] io-controller: Don't expire an idle ioq if it's the only ioq in hierarchy
       [not found]       ` <20090701013239.GB13958-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2009-07-01  1:40         ` Gui Jianfeng
  0 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-07-01  1:40 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Vivek Goyal wrote:
> On Tue, Jun 30, 2009 at 03:49:21PM +0800, Gui Jianfeng wrote:
>> Hi Vivek,
>>
>> We don't expect expiring an idle ioq if it's the only ioq 
>> in the hierarchy.
>>
> 
> Hi Gui,
> 
> This patch will avoid idle queue expiry for single ioq schedulers. But
> that's not an issue anyway as single ioq schedulers don't have the notion
> of idle queue. It is only CFQ which allows creation of idle ioq.

  Oh, yes, please ignore this mindless patch.

> 
> Thanks
> Vivek
> 
>> Signed-off-by: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
>> ---
>>  block/elevator-fq.c |   12 ++++++------
>>  1 files changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
>> index 4270cfd..0b65e16 100644
>> --- a/block/elevator-fq.c
>> +++ b/block/elevator-fq.c
>> @@ -4058,12 +4058,6 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
>>  			elv_clear_ioq_slice_new(ioq);
>>  		}
>>  
>> -		if (elv_ioq_class_idle(ioq)) {
>> -			if (elv_iosched_expire_ioq(q, 1, 0))
>> -				elv_ioq_slice_expired(q);
>> -			goto done;
>> -		}
>> -
>>  		/*
>>  		 * If there is only root group present, don't expire the queue
>>  		 * for single queue ioschedulers (noop, deadline, AS). It is
>> @@ -4077,6 +4071,12 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
>>  			goto done;
>>  		}
>>  
>> +		if (elv_ioq_class_idle(ioq)) {
>> +			if (elv_iosched_expire_ioq(q, 1, 0))
>> +				elv_ioq_slice_expired(q);
>> +			goto done;
>> +		}
>> +
>>  		/* For async queue try to do wait busy */
>>  		if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
>>  		    && (elv_iog_nr_active(iog) <= 1)) {
>> -- 
>> 1.5.4.rc3
>>
> 
> 
> 

-- 
Regards
Gui Jianfeng

^ permalink raw reply	[flat|nested] 176+ messages in thread

* Re: [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer
       [not found]   ` <1245443858-8487-3-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2009-06-22  8:46     ` Balbir Singh
  2009-06-30  6:40     ` Gui Jianfeng
@ 2009-07-01  9:24     ` Gui Jianfeng
  2 siblings, 0 replies; 176+ messages in thread
From: Gui Jianfeng @ 2009-07-01  9:24 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	paolo.valente-rcYM44yAMweonA0d6jMUrA,
	fernando-gVGce1chcLdL9jVzuh4AOg, jmoyer-H+wXaHxf7aLQT0dZR+AlfA,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Vivek Goyal wrote:
...
> + * If ioscheduler has functionality of keeping track of close cooperator, check
> + * with it if it has got a closely co-operating queue.
> + */
> +static inline struct io_queue *elv_close_cooperator(struct request_queue *q,
> +					struct io_queue *ioq, int probe)
> +{
> +	struct elevator_queue *e = q->elevator;
> +	struct io_queue *new_ioq = NULL;
> +
> +	/*
> +	 * Currently this feature is supported only for flat hierarchy or
> +	 * root group queues so that default cfq behavior is not changed.
> +	 */
> +	if (!is_root_group_ioq(q, ioq))
> +		return NULL;
> +
> +	if (q->elevator->ops->elevator_close_cooperator_fn)
> +		new_ioq = e->ops->elevator_close_cooperator_fn(q,
> +						ioq->sched_queue, probe);
> +
> +	/* Only select co-operating queue if it belongs to root group */
> +	if (new_ioq && !is_root_group_ioq(q, new_ioq))
> +		return NULL;
> +
> +	return new_ioq;
> +}

  Hi Vivek,

  Currently, could we let this feature be supported when the current queue
  and the found queue are in the same group? I guess this change would not
  break the default cfq behavior. Am I missing something?

-- 
Regards
Gui Jianfeng

^ permalink raw reply	[flat|nested] 176+ messages in thread

* [RFC] IO scheduler based io controller (V5)
@ 2009-06-19 20:37 Vivek Goyal
  0 siblings, 0 replies; 176+ messages in thread
From: Vivek Goyal @ 2009-06-19 20:37 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA, nauman-hpIqsD4AKlfQT0dZR+AlfA,
	dpshah-hpIqsD4AKlfQT0dZR+AlfA, lizf-BthXqXjhjHXQFUHtdCDX3A
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA, agk-H+wXaHxf7aLQT0dZR+AlfA


Hi All,

Here is the V5 of the IO controller patches generated on top of 2.6.30.

Previous versions of the patches were posted here:

(V1) http://lkml.org/lkml/2009/3/11/486
(V2) http://lkml.org/lkml/2009/5/5/275
(V3) http://lkml.org/lkml/2009/5/26/472
(V4) http://lkml.org/lkml/2009/6/8/580

This patchset is still a work in progress, but I want to keep sending out
snapshots of my tree at regular intervals to gather feedback, hence V5.

Changes from V4
===============
- Implemented bdi_*_congested_group() functions to also determine whether a
  particular io group on a bdi is congested or not. So far we only determined
  whether the bdi as a whole was congested. But now there is one request
  list per group, so one also needs to check whether the particular io group
  the io is going into is congested or not (a usage sketch follows this
  list of changes).

- Fixed preemption logic in hierarchical mode. In hierarchical mode, one
  needs to traverse up the hierarchy so that the current queue and the new
  queue are at the same level before deciding whether preemption should be
  done or not. Took the idea and code from the CFS cpu scheduler.

- There were some tunables appearing under the /sys/block/<device>/queue
  dir which actually belonged to the ioschedulers in hierarchical mode.
  Fixed it.
 
- Fixed another preemption issue where, if any RT queue was pending
  (busy_rt_queues), the current queue was being expired. Now this preemption
  is done only if there are busy_rt_queues in the same group.

  (Though I think that busy_rt_queues is redundant code, as the moment an RT
   request comes we preempt the BE queue, so we should never run into the
   issue of an RT request pending while BE is running. Keeping the code for
   the time being.)
 
- Applied the patch from Gui which gets rid of the only_root_group code and
  instead uses the cgroup's children list to determine whether the root group
  is the only group or there are children too.

- Applied a few cleanup patches from Gui.

- We store the device id (major, minor) in the io group. Previously I was
  retrieving that info from the bio. Switched to getting that info from the
  backing device.
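
A rough usage sketch of the per-group congestion check mentioned above. The
bdi_write_congested_group() signature shown here is an assumption made for
illustration only, not necessarily what the patches export:

#include <linux/backing-dev.h>
#include <linux/mm_types.h>

/*
 * Illustrative only: back off if either the bdi as a whole or the io
 * group this write will go into is congested.
 */
static bool can_submit_write(struct backing_dev_info *bdi, struct page *page)
{
	if (bdi_write_congested(bdi))
		return false;

	/* there is now one request list per group, so check the group too */
	if (bdi_write_congested_group(bdi, page))
		return false;

	return true;
}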

Limitations
===========

- This IO controller provides bandwidth control at the IO scheduler
  level (leaf node in a stacked hierarchy of logical devices). So there can
  be cases (depending on configuration) where an application does not see
  proportional BW division at a higher level logical device.

  LWN has written an article about the issue here.

	http://lwn.net/Articles/332839/

How to solve the issue of fairness at higher level logical devices
==================================================================
A couple of suggestions have come forward.

- Implement IO control at the IO scheduler layer and then, with the help of
  some daemon, adjust the weights on the underlying devices dynamically,
  depending on what kind of BW guarantees are to be achieved at the higher
  level logical block devices (a toy sketch of such a daemon follows this
  list).

- Also implement a higher level IO controller along with the IO scheduler
  based controller and let the user choose one depending on his needs.

  A higher level controller does not know about the assumptions/policies
  of the underlying IO scheduler, hence it has the potential to break the
  IO scheduler's policy within a cgroup. A lower level controller can work
  with the IO scheduler much more closely and efficiently.
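
As a strawman, the "weight adjusting daemon" from the first suggestion could
be as simple as the userspace sketch below. The /cgroup/bfqio mount point and
the io.weight file name are assumptions for the sketch; the actual interface
may well differ:

#include <stdio.h>
#include <unistd.h>

/* Write a weight value into a group's (assumed) io.weight cgroup file. */
static int set_group_weight(const char *group, unsigned int weight)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/cgroup/bfqio/%s/io.weight", group);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%u\n", weight);
	return fclose(f);
}

int main(void)
{
	for (;;) {
		/* A real daemon would compute these from the BW goals of the
		 * higher level logical device; fixed values as a placeholder. */
		set_group_weight("test1", 1000);
		set_group_weight("test2", 500);
		sleep(5);
	}
	return 0;
}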
 
Other active IO controller developments
=======================================

IO throttling
-------------

  This is a max bandwidth controller and not a proportional one. Secondly,
  it is a second level controller which can break the IO scheduler's
  policy/assumptions within a cgroup.

dm-ioband
---------

 This is a proportional bandwidth controller implemented as a device mapper
 driver. It is also a second level controller which can break the
 IO scheduler's policy/assumptions within a cgroup.

Testing
=======

I have been able to do only very basic testing of reads and writes.

Test1 (Fairness for synchronous reads)
======================================
- Ran two dd readers in two cgroups with cgroup weights 1000 and 500
  (with the CFQ scheduler and /sys/block/<device>/queue/fairness = 1).

dd if=/mnt/$BLOCKDEV/zerofile1 of=/dev/null &
dd if=/mnt/$BLOCKDEV/zerofile2 of=/dev/null &

234179072 bytes (234 MB) copied, 3.9065 s, 59.9 MB/s
234179072 bytes (234 MB) copied, 5.19232 s, 45.1 MB/s

group1 time=8 16 2471 group1 sectors=8 16 457840
group2 time=8 16 1220 group2 sectors=8 16 225736

The first two fields in the time and sectors statistics represent the major
and minor number of the device. The third field represents the disk time in
milliseconds and the number of sectors transferred, respectively.

This patchset tries to provide fairness in terms of disk time received. group1
got almost double the disk time of group2 (at the time the first dd finished).
These time and sectors statistics can be read using the io.disk_time and
io.disk_sector files in the cgroup. More about it in the documentation file.
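
For illustration, here is a tiny userspace snippet that parses one such
"major minor value" triple; this is purely an example and not part of the
patches:

#include <stdio.h>

int main(void)
{
	unsigned int major, minor;
	unsigned long long value;
	const char *sample = "8 16 2471";	/* one line from io.disk_time */

	if (sscanf(sample, "%u %u %llu", &major, &minor, &value) == 3)
		printf("dev %u:%u -> %llu\n", major, minor, value);
	return 0;
}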

Test2 (Fairness for async writes)
=================================
Fairness for async writes is tricky, and the biggest reason is that async
writes are cached in higher layers (page cache), and possibly in the file
system layer as well (btrfs, xfs etc.), and are dispatched to the lower
layers not necessarily in a proportional manner.

For example, consider two dd threads reading /dev/zero as the input file and
writing out huge files. Very soon we will cross vm_dirty_ratio, and a dd
thread will be forced to write out some pages to disk before more pages can
be dirtied. But the dirty pages picked are not necessarily those of the same
thread; writeback can very well pick the inode of the lower priority dd
thread and do some writeout. So effectively the higher weight dd ends up
doing writeouts of the lower weight dd's pages, and we don't see service
differentiation.

IOW, the core problem with async write fairness is that the higher weight
thread does not throw enough IO traffic at the IO controller to keep its
queue continuously backlogged. In my testing, there are many 0.2 to 0.8
second intervals where the higher weight queue is empty, and in that duration
the lower weight queue gets lots of work done, giving the impression that
there was no service differentiation.

In summary, from the IO controller point of view, async write support is
there. But because the page cache has not been designed so that a higher
prio/weight writer can do more writeout than a lower prio/weight writer,
getting service differentiation is hard; it is visible in some cases and not
in others.

To get fairness for async writes in all cases, the higher layers need to be
fixed, and that is probably a lot of work. Do we really care that much about
fairness between two writer cgroups? One can choose to do direct IO if
fairness for buffered writes really matters. I think we care more about
fairness in the following cases, and with this patchset we should be able to
achieve that.

- Read Vs Read
- Read Vs Writes (Buffered writes or direct IO writes)
	- Making sure that isolation is achieved between reader and writer
	  cgroup.  
- All form of direct IO.

The following is the only case where it is hard to ensure fairness between
cgroups because of the higher layer design.

- Buffered writes Vs Buffered Writes.

So to test async writes I generated lots of write traffic in two cgroups (50
fio threads in each) and watched the disk time statistics of the respective
cgroups at 2 second intervals. Thanks to Ryo Tsuruta for the test case.

*****************************************************************
sync
echo 3 > /proc/sys/vm/drop_caches

fio_args="--size=64m --rw=write --numjobs=50 --group_reporting"

echo $$ > /cgroup/bfqio/test1/tasks
fio $fio_args --name=test1 --directory=/mnt/sdd1/fio/ --output=/mnt/sdd1/fio/test1.log &

echo $$ > /cgroup/bfqio/test2/tasks
fio $fio_args --name=test2 --directory=/mnt/sdd2/fio/ --output=/mnt/sdd2/fio/test2.log &
*********************************************************************** 

And watched the disk time and sector statistics for both cgroups every
2 seconds using a script. Here is a snippet from the output.

test1 statistics: time=8 48 1315   sectors=8 48 55776 dq=8 48 1
test2 statistics: time=8 48 633   sectors=8 48 14720 dq=8 48 2

test1 statistics: time=8 48 5586   sectors=8 48 339064 dq=8 48 2
test2 statistics: time=8 48 2985   sectors=8 48 146656 dq=8 48 3

test1 statistics: time=8 48 9935   sectors=8 48 628728 dq=8 48 3
test2 statistics: time=8 48 5265   sectors=8 48 278688 dq=8 48 4

test1 statistics: time=8 48 14156   sectors=8 48 932488 dq=8 48 6
test2 statistics: time=8 48 7646   sectors=8 48 412704 dq=8 48 7

test1 statistics: time=8 48 18141   sectors=8 48 1231488 dq=8 48 10
test2 statistics: time=8 48 9820   sectors=8 48 548400 dq=8 48 8

test1 statistics: time=8 48 21953   sectors=8 48 1485632 dq=8 48 13
test2 statistics: time=8 48 12394   sectors=8 48 698288 dq=8 48 10

test1 statistics: time=8 48 25167   sectors=8 48 1705264 dq=8 48 13
test2 statistics: time=8 48 14042   sectors=8 48 817808 dq=8 48 10

The first two fields in the time and sectors statistics represent the major
and minor number of the device. The third field represents the disk time in
milliseconds and the number of sectors transferred, respectively.

So the disk time consumed by group1 is almost double that of group2.

TODO
====
- Lots of code cleanups, testing, bug fixing, optimizations, benchmarking
  etc...

- Work on a better interface (possibly cgroup based) for configuring per
  group request descriptor limits.

- Debug and fix some of the areas, like the page cache, where higher weight
  cgroup async writes get stuck behind lower weight cgroup async writes.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 176+ messages in thread

end of thread, other threads:[~2009-07-01  9:25 UTC | newest]

Thread overview: 176+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-06-19 20:37 [RFC] IO scheduler based io controller (V5) Vivek Goyal
2009-06-19 20:37 ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 01/20] io-controller: Documentation Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-22  8:46   ` Balbir Singh
2009-06-22  8:46     ` Balbir Singh
2009-06-22 12:43     ` Fabio Checconi
     [not found]       ` <20090622124313.GF28770-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
2009-06-23  2:43         ` Vivek Goyal
2009-06-23  2:43       ` Vivek Goyal
2009-06-23  2:43         ` Vivek Goyal
     [not found]         ` <20090623024337.GC3620-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-23  4:10           ` Fabio Checconi
2009-06-23  4:10             ` Fabio Checconi
2009-06-23  7:32             ` Balbir Singh
2009-06-23  7:32               ` Balbir Singh
     [not found]               ` <20090623073252.GJ8642-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
2009-06-23 13:42                 ` Fabio Checconi
2009-06-23 13:42               ` Fabio Checconi
     [not found]             ` <20090623041052.GS28770-f9ZlEuEWxVeACYmtYXMKmw@public.gmane.org>
2009-06-23  7:32               ` Balbir Singh
2009-06-23  2:05     ` Vivek Goyal
2009-06-23  2:05       ` Vivek Goyal
     [not found]       ` <20090623020515.GA3620-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-23  2:20         ` Jeff Moyer
2009-06-23  2:20           ` Jeff Moyer
     [not found]     ` <20090622084612.GD3728-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
2009-06-22 12:43       ` Fabio Checconi
2009-06-23  2:05       ` Vivek Goyal
2009-06-30  6:40   ` Gui Jianfeng
2009-07-01  1:28     ` Vivek Goyal
2009-07-01  1:28       ` Vivek Goyal
     [not found]     ` <4A49B364.5000508-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-07-01  1:28       ` Vivek Goyal
2009-07-01  9:24   ` Gui Jianfeng
     [not found]   ` <1245443858-8487-3-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-22  8:46     ` Balbir Singh
2009-06-30  6:40     ` Gui Jianfeng
2009-07-01  9:24     ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 03/20] io-controller: Charge for time slice based on average disk rate Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevaotor layer Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-29  5:27   ` [PATCH] io-controller: optimization for iog deletion when elevator exiting Gui Jianfeng
     [not found]     ` <4A4850D3.3000700-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-29 14:06       ` Vivek Goyal
2009-06-29 14:06     ` Vivek Goyal
2009-06-29 14:06       ` Vivek Goyal
2009-06-30 17:14       ` Nauman Rafique
2009-06-30 17:14         ` Nauman Rafique
2009-07-01  1:34         ` Vivek Goyal
2009-07-01  1:34           ` Vivek Goyal
     [not found]         ` <e98e18940906301014n146e7146vb5a73c2f33c9e819-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2009-07-01  1:34           ` Vivek Goyal
     [not found]       ` <20090629140631.GA4622-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-30 17:14         ` Nauman Rafique
     [not found]   ` <1245443858-8487-6-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-29  5:27     ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 06/20] io-controller: cfq changes to use hierarchical fair queuing code in elevaotor layer Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 07/20] io-controller: Export disk time used and nr sectors dipatched through cgroups Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-23 12:10   ` Gui Jianfeng
2009-06-23 12:10     ` Gui Jianfeng
     [not found]     ` <4A40C64E.8040305-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-23 14:38       ` Vivek Goyal
2009-06-23 14:38     ` Vivek Goyal
2009-06-23 14:38       ` Vivek Goyal
     [not found]   ` <1245443858-8487-8-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-23 12:10     ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 08/20] io-controller: idle for sometime on sync queue before expiring it Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-30  7:49   ` [PATCH] io-controller: Don't expire an idle ioq if it's the only ioq in hierarchy Gui Jianfeng
     [not found]     ` <4A49C381.3040302-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-07-01  1:32       ` Vivek Goyal
2009-07-01  1:32     ` Vivek Goyal
2009-07-01  1:32       ` Vivek Goyal
     [not found]       ` <20090701013239.GB13958-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-07-01  1:40         ` Gui Jianfeng
2009-07-01  1:40       ` Gui Jianfeng
2009-07-01  1:40         ` Gui Jianfeng
     [not found]   ` <1245443858-8487-9-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-30  7:49     ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 09/20] io-controller: Separate out queue and data Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 10/20] io-conroller: Prepare elevator layer for single queue schedulers Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 12/20] io-controller: deadline " Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 13/20] io-controller: anticipatory " Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
     [not found] ` <1245443858-8487-1-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-19 20:37   ` [PATCH 01/20] io-controller: Documentation Vivek Goyal
2009-06-19 20:37   ` [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer Vivek Goyal
2009-06-19 20:37   ` [PATCH 03/20] io-controller: Charge for time slice based on average disk rate Vivek Goyal
2009-06-19 20:37   ` [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing Vivek Goyal
2009-06-19 20:37   ` [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevaotor layer Vivek Goyal
2009-06-19 20:37   ` [PATCH 06/20] io-controller: cfq changes to use " Vivek Goyal
2009-06-19 20:37   ` [PATCH 07/20] io-controller: Export disk time used and nr sectors dipatched through cgroups Vivek Goyal
2009-06-19 20:37   ` [PATCH 08/20] io-controller: idle for sometime on sync queue before expiring it Vivek Goyal
2009-06-19 20:37   ` [PATCH 09/20] io-controller: Separate out queue and data Vivek Goyal
2009-06-19 20:37   ` [PATCH 10/20] io-conroller: Prepare elevator layer for single queue schedulers Vivek Goyal
2009-06-19 20:37   ` [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing Vivek Goyal
2009-06-19 20:37   ` [PATCH 12/20] io-controller: deadline " Vivek Goyal
2009-06-19 20:37   ` [PATCH 13/20] io-controller: anticipatory " Vivek Goyal
2009-06-19 20:37   ` [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios Vivek Goyal
2009-06-19 20:37   ` [PATCH 15/20] io-controller: map async requests to appropriate cgroup Vivek Goyal
2009-06-19 20:37   ` [PATCH 16/20] io-controller: Per cgroup request descriptor support Vivek Goyal
2009-06-19 20:37   ` [PATCH 17/20] io-controller: Per io group bdi congestion interface Vivek Goyal
2009-06-19 20:37   ` [PATCH 18/20] io-controller: Support per cgroup per device weights and io class Vivek Goyal
2009-06-19 20:37   ` [PATCH 19/20] io-controller: Debug hierarchical IO scheduling Vivek Goyal
2009-06-19 20:37   ` [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry Vivek Goyal
2009-06-21 15:21   ` [RFC] IO scheduler based io controller (V5) Balbir Singh
2009-06-29 16:04   ` Vladislav Bolkhovitin
2009-06-19 20:37 ` [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 15/20] io-controller: map async requests to appropriate cgroup Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
     [not found]   ` <1245443858-8487-16-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-22  1:45     ` Gui Jianfeng
2009-06-22  1:45   ` Gui Jianfeng
     [not found]     ` <4A3EE245.7030409-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-22 15:39       ` Vivek Goyal
2009-06-22 15:39     ` Vivek Goyal
2009-06-22 15:39       ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 16/20] io-controller: Per cgroup request descriptor support Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 17/20] io-controller: Per io group bdi congestion interface Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 18/20] io-controller: Support per cgroup per device weights and io class Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-24 21:52   ` Paul Menage
2009-06-24 21:52     ` Paul Menage
2009-06-25 10:23     ` [PATCH] io-controller: do some changes of io.policy interface Gui Jianfeng
2009-06-25 10:23       ` Gui Jianfeng
     [not found]       ` <4A435038.60406-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-25 12:55         ` Vivek Goyal
2009-06-25 12:55       ` Vivek Goyal
2009-06-25 12:55         ` Vivek Goyal
2009-06-26  0:27         ` Gui Jianfeng
2009-06-26  0:27           ` Gui Jianfeng
     [not found]         ` <20090625125513.GA25439-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-26  0:27           ` Gui Jianfeng
2009-06-26  0:59           ` Gui Jianfeng
2009-06-26  0:59         ` Gui Jianfeng
     [not found]     ` <6599ad830906241452t76e64815s7d68a22a6e746a59-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2009-06-25 10:23       ` Gui Jianfeng
     [not found]   ` <1245443858-8487-19-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-24 21:52     ` [PATCH 18/20] io-controller: Support per cgroup per device weights and io class Paul Menage
2009-06-19 20:37 ` [PATCH 19/20] io-controller: Debug hierarchical IO scheduling Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry Vivek Goyal
2009-06-19 20:37   ` Vivek Goyal
     [not found]   ` <1245443858-8487-21-git-send-email-vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-22  7:44     ` [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups Gui Jianfeng
2009-06-22  7:44   ` Gui Jianfeng
2009-06-22 17:21     ` Vivek Goyal
2009-06-22 17:21       ` Vivek Goyal
2009-06-23  6:44       ` Gui Jianfeng
2009-06-23 14:02         ` Vivek Goyal
2009-06-23 14:02           ` Vivek Goyal
2009-06-24  9:20           ` Gui Jianfeng
2009-06-26  8:13             ` [PATCH 1/2] io-controller: Prepare a rt ioq list in efqd to keep track of busy rt ioqs Gui Jianfeng
2009-06-26  8:13               ` Gui Jianfeng
     [not found]             ` <4A41EFE1.5050101-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-26  8:13               ` Gui Jianfeng
2009-06-26  8:13               ` [PATCH 2/2] io-controller: make rt preemption happen in the whole hierarchy Gui Jianfeng
2009-06-26  8:13             ` Gui Jianfeng
     [not found]               ` <4A44833F.8040308-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-26 12:39                 ` Vivek Goyal
2009-06-26 12:39               ` Vivek Goyal
2009-06-26 12:39                 ` Vivek Goyal
     [not found]           ` <20090623140250.GA4262-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-24  9:20             ` [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups Gui Jianfeng
     [not found]         ` <4A4079B8.4020402-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-23 14:02           ` Vivek Goyal
     [not found]       ` <20090622172123.GE15600-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-23  6:44         ` Gui Jianfeng
     [not found]     ` <4A3F3648.7080007-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-06-22 17:21       ` Vivek Goyal
2009-06-21 15:21 ` [RFC] IO scheduler based io controller (V5) Balbir Singh
2009-06-22 15:30   ` Vivek Goyal
2009-06-22 15:30     ` Vivek Goyal
2009-06-22 15:40     ` Jeff Moyer
2009-06-22 16:02       ` Vivek Goyal
2009-06-22 16:02         ` Vivek Goyal
     [not found]         ` <20090622160207.GC15600-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-22 16:06           ` Jeff Moyer
2009-06-22 16:06             ` Jeff Moyer
     [not found]             ` <x493a9sl0bx.fsf-RRHT56Q3PSP4kTEheFKJxxDDeQx5vsVwAInAS/Ez/D0@public.gmane.org>
2009-06-22 17:08               ` Vivek Goyal
2009-06-22 17:08             ` Vivek Goyal
2009-06-22 17:08               ` Vivek Goyal
     [not found]               ` <20090622170812.GD15600-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-23  6:52                 ` Balbir Singh
2009-06-23  6:52               ` Balbir Singh
     [not found]       ` <x497hz4l1j9.fsf-RRHT56Q3PSP4kTEheFKJxxDDeQx5vsVwAInAS/Ez/D0@public.gmane.org>
2009-06-22 16:02         ` Vivek Goyal
     [not found]     ` <20090622153030.GA15600-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-06-22 15:40       ` Jeff Moyer
     [not found]   ` <20090621152116.GC3728-SINUvgVNF2CyUtPGxGje5AC/G2K4zDHf@public.gmane.org>
2009-06-22 15:30     ` Vivek Goyal
2009-06-29 16:04 ` Vladislav Bolkhovitin
2009-06-29 17:23   ` Vivek Goyal
2009-06-29 17:23     ` Vivek Goyal
     [not found]   ` <4A48E601.2050203-d+Crzxg7Rs0@public.gmane.org>
2009-06-29 17:23     ` Vivek Goyal
2009-06-19 20:37 Vivek Goyal
